Squishy Sound Sessions for Project Blob


Project Plan Link: http://bit.ly/Studio2Blob

On March 29th, 2017, I was approached by Kirsty, a games student, who wanted to know if I could help her team with the audio for a game project.

On April 6th, I filled out my production plan for the project – game audio & music for Project Blob.

On April 7th, 2017, I had my first recording session for Project Blob.

On April 9th, I was working on some of the menu sounds from home.

On April 10th, 2017, I had the second foley recording session.

On April 13th, through to April 15th, I spent time working on the music.

On April 16th, I did some last minute processing to the last of the samples.

On April 17th, I gave Blob Squad everything they asked for, and on time.

On April 26th, I was invited out to an event where Project Blob was showcased.


Sounds & Synths for Trap Music – Production Techniques


Trap music is a form of hip-hop which typically utilises thin 808 hi-hat and snare samples. It first originated in the 90s, but didn’t really break into the mainstream until T.I.’s second studio album “Trap Muzik”. Both the album’s name and the style’s come from being ‘trapped’ in the ghetto – the trap being where drugs were sold and gang violence was most prevalent.

The aesthetics of trap today are usually quite dark and edgy, with deep, droning Moog and sine basses, and melodies crafted from chilling, percussive synths such as bells, plucks, and eerie vocal cuts. The synths are often very ‘echoey’ due to time-domain effects such as reverbs and delays. A lot of modern trap could be classified as ‘dark ambient’ if it weren’t for the bright drum samples that fuel the beat.

Starting off as a form of hip hop, trap is now widely considered a huge EDM phenomenon, influencing a lot of today’s mainstream pop music. Modern trap also often borrows the sampling aspects of its hip hop roots, whether it be vocals, instruments, or basically any random sound tuned to a pitch (an EDM influence). The mainstream attention that dubstep attracted around 2010 also gave trap another source of inspiration to borrow from (high-pitched, screechy synths). Polish producer ak9’s track “GØД” manages to encompass elements of EDM, dubstep and hip hop, making it what we would describe today as trap music:

An important production technique in trap music is sidechain compression, which I’ve touched on in earlier blogs. Since kick drum samples and 808 bass hits are both pretty subby, they will clash for frequency space in the mix if you just blend the kick and 808 together without it.

Normally, instead of using Fruity Limiter to side-chain my basses to my kicks like the tutorial does, I use Gross Beat – not a compressor but a time and volume manipulation plugin that ships with side-chain-style ducking presets. Now that I’ve researched it a little, the limiter method seems easier and quicker in the long run, since it avoids automating different instances of Gross Beat running different volume envelopes for specific kick patterns.
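Either way, the end result is the same gain-ducking behaviour. Here’s a toy Python sketch of it – not either plugin’s actual algorithm, and the depth and release values are invented:

```python
# Toy model of sidechain-style ducking: every kick hit pulls the 808's
# gain down, and the gain ramps back up over the release time.
# This mimics a Gross Beat-style volume envelope, not a real compressor.

def duck_gain(t, kick_times, depth=0.8, release=0.25):
    """Gain multiplier for the 808 at time t (seconds): dips to
    (1 - depth) right on a kick hit, then recovers over `release`."""
    gain = 1.0
    for k in kick_times:
        dt = t - k
        if 0.0 <= dt < release:
            # linear recovery from (1 - depth) back up to 1.0
            gain = min(gain, (1.0 - depth) + depth * (dt / release))
    return gain

kicks = [0.0, 0.5, 1.0, 1.5]        # a four-on-the-floor kick pattern
print(duck_gain(0.0, kicks))        # right on the kick: heavily ducked
print(duck_gain(0.3, kicks))        # between kicks: back to full level
```

Multiplying the 808’s samples by this gain over time carves out room for the kick without touching the kick itself.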

In an attempt to make my own trap track (entitled ‘Underpass‘ below), I tried creating some sounds of my own.

The main sound of this track is a ‘clangy’ one that came from a class session where we formed groups and took the location recorders and microphones outside to record a bunch of foley samples – one of which was this clanging sound I made by hitting a metal pole. In FL Studio 11, after some top-and-tail processing, I assigned the sound to a pitch (F#) and created a simple melody, which to me sounded dark, like you’re walking down some unfamiliar alleyway. The melody consisted of syncopated notes and tight rolls, which added to the mysteriousness of the atmosphere I was creating. Then I added an EQ that emphasised the tones of the melody, some reverb to give it that echoey trap feel, heavy compression to accentuate the reverb as if it were part of the ‘synth’ itself, and to top it off, an instance of Gross Beat on the channel to side-chain it to the kick. After watching the video on sidechain compression earlier, I realise I probably could’ve just used the same instance of Fruity Limiter I used for compression to side-chain it to the kick, but I can keep gradually introducing that technique into my future projects.
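Assigning a found sound to a pitch the way a sampler does comes down to a resampling ratio. A hypothetical Python sketch – the 200 Hz detected pitch is invented for illustration, and FL Studio does all this internally when you assign the sample to a note:

```python
# Sketch: find the playback-speed ratio that retunes a one-shot sample
# (e.g. a metal-pole clang) from its detected pitch to a target note.

A4 = 440.0  # tuning reference

def note_freq(midi):
    """Frequency of a MIDI note number (69 = A4 = 440 Hz)."""
    return A4 * 2 ** ((midi - 69) / 12)

def retune_ratio(detected_hz, target_midi):
    """Playback-speed ratio that shifts detected_hz onto the target note."""
    return note_freq(target_midi) / detected_hz

# Say the clang was detected at roughly 200 Hz and we want F#3 (MIDI 54):
print(round(note_freq(54), 1))            # F#3 is about 185.0 Hz
print(round(retune_ratio(200.0, 54), 3))  # ratio < 1: play back slightly slower
```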

This song was the main influence for Underpass:

I pitched the track to a couple of classmates and the feedback was mostly positive; the only improvement they could suggest was to give it more variety in terms of the stereo field, which is something I can keep working on later. For now, I’m happy with what I achieved, and I feel it represents today’s iteration of the genre well.


References:

Bein, K. (2012, July 3) It’s a Trap! An 11-Part History of Trap Music [Web Article] Retrieved from http://www.miaminewtimes.com/music/its-a-trap-an-11-part-history-of-trap-music-from-dj-screw-to-gucci-mane-to-flosstradamus-6475986

DJMag.com (2013, February 28) Trap Music: Under Lock & Key [Web Article] Retrieved from http://djmag.com/content/trap-music-under-lock-key

Haithcoat, R. (2012, October 4) What the Hell Is Trap Music (and Why Is Dubstep Involved)? [Web Article] Retrieved from http://www.laweekly.com/music/what-the-hell-is-trap-music-and-why-is-dubstep-involved-2408170

I’mAMusicMogul.com (2016, September) This Is the Most Used Synth/Sound in Rap, Hip Hop and Trap (Tutorial) [Web Article] Retrieved from http://blog.imamusicmogul.com/2016/09/this-is-the-most-used-synthsound-in-rap-hip-hop-and-trap-tutorial/

Pepin, C. (2016, June 12) Genre Breakdown: What is Trap Music? [Web Article] Retrieved from http://thesixthirty.com/ravefaced/genre-breakdown-trap-music/


The Febs – When It’s Morning (After Feedback)

So in class we showed the lecturers our mixes and we all got some general feedback. The main thing my lecturer was concerned about with a few of us was the panning on the drums – it sounded like we might have panned our overheads wrong in some cases. As he worded it, “it feels like the toms are all over the place when the fills come in”.

More specific feedback I received was that the kick felt a little too boxy, there was a little too much sibilance in the vocals, and that I should probably ease off 0.00dB a bit on the Maximus, because it felt like it was about to clip. Otherwise everything was pretty damn good in my mix.

In that same class our lecturer showed us something we could do to replace a kick or snare track with ‘signal generator triggers’. The way it works: create an aux track with a signal generator on it (whatever kind of tone you need – a low sine tone for the kick, a noise tone for a snare bottom) and put a gate on it, with the kick or snare feeding into the gate as the key side-chain input, so the tone triggers every time the kick or snare does.
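A crude digital sketch of that trigger trick in Python – real gates have attack/hold/release stages, and the threshold and sample rate here are invented:

```python
# Rough sketch of a "signal generator trigger": a gate keyed by the
# kick opens a steady sine tone, so a clean low tone fires on every
# kick hit. Values are made up for illustration.

import math

SR = 8000                      # low sample rate, just for the sketch
THRESH = 0.5                   # gate opens when |key| exceeds this

def gated_tone(key_signal, freq=82.0):
    """Return the sine tone, passed only while the key (kick) is hot."""
    out = []
    for n, key in enumerate(key_signal):
        tone = math.sin(2 * math.pi * freq * n / SR)
        out.append(tone if abs(key) > THRESH else 0.0)
    return out

# key: silence, then a short "kick" burst, then silence again
key = [0.0] * 4 + [0.9] * 4 + [0.0] * 4
tone = gated_tone(key)
print([round(x, 2) for x in tone[:4]])   # gate closed: all zeros
```

Swap the sine for white noise and you have the snare-bottom version.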

I feel the changes I made to the mix fixed these problems and then some, so I’ll walk through some of them.

  • Shaped the kick EQ to focus less on the mids and more on the low-mid frequencies, and turned up the high mids at 3kHz and 6kHz a bit too.
  • Used side-chain triggers to generate a fake kick and snare using low sine (82Hz) and white noise generators. Had to EQ the generated snare a little because it was a bit too obvious that it was white noise.
  • Put a Mod Delay III on the egg mic double bass to make it a little bit wider.
  • Changed the overhead EQ a bit to focus a lot more on high mids, because the cymbals weren’t sounding bright enough.
  • Put a De-Esser on the lead vocals.
  • Duplicated the guitar in the last chorus onwards and dropped it down an octave in pitch just to give it a bit more variation & colour in the mix.
  • Panned the guitars a bit more apart.


Overall it was some really good feedback, and it was thanks to it that I could improve my mix and make it brighter overall. If you compare the last mix to this one, it’s like this one’s the one where you’re sober and the last one is after you’ve had a few drinks!


Royal Artillery – The Brakes (Mixing)

I messed up big time with this mix, because I ended up mixing the wrong track – our lecturers wanted everyone to mix the second track we recorded by these guys, but I missed the memo and mixed the first track instead. So I’m going to have to explain what I did on the other one, up to the point I realised my mistake.

  • Edited the intro – instead of leaving silence, I thought it’d be fun to copy-paste the drum hit from the start to halfway through the intro to hype the listener up a bit more.
  • It was virtually impossible to get rid of the bleed on the vocal tracks, so I took the most audible part of the vocals and copy-pasted it to where the other vocals were supposed to be.
  • With the vocals that I could actually use, I used a gate to try and remove as much bleed as I could, then a compressor to bring up the levels, EQ notch cut at 230Hz to get rid of the noise a bit further, with some boosts either side of that (Channel Strip), and used Mod Delay III to give them a slapback delay effect.
  • In some parts I duplicated the top guitar cab and transposed it up an octave, just for a bit of variety.
  • At the end I edited in another guitar chord by copy pasting what was already there and transposing it using elastic audio, because it felt like it needed to go to that chord right there at the end.
  • Gated the Kicks, Snares, Toms and also removed silence on tom tracks to remove bleed and gain more control over what I was mixing.
  • EQ – low mid boosts and high mid cuts on toms. Same on the kick channels but smoother (boosted the kick-out highs a bit more to get more of the impact); snare cut at 550Hz because I didn’t like that tone, plus a slight high shelf boost. Boosted lows and low mids on the mid guitar cab to make up for the lack of a sub cab, and boosted high mids on the high cab.
  • Used parallel compression – normal drum submix + compressed submix that had a Lo Fi distortion plugin on it + EQ’d to take out most of the lows and mids. Just to give the drums a bit more fuzziness and presence in the mix.
  • Panned the drums as if I was sitting at the kit. Mod Delay III used on high guitar cab and vocals.
  • Kick Out, Snare Top & Bottom, Overheads, Guitars, Drum Para. Comp. submix, Ribbon room mics, and vocals all sent to a reverb aux track (D-Verb) – decay time of 1.2 secs.

Now this is the part where I realised I fucked up – after printing the mix for track one, I found out we were supposed to do track two. So I “cheated” in a way by obliterating everything in the edit window and importing track two’s audio files, and hey, it kinda worked – the only thing I had to do then was implement the sub guitar cab into the mix. There was a problem with the recording though, as it sounded kind of rattly in some parts, but I liked it; it added to the overall grungy vibe of the track, so I didn’t bother editing it out. There was also a recording of one of the guys saying “fuck yeah”, so I threw that in at the end for shits and gigs.

After all this, I printed the track to a stereo audio track and saved the session. As I was printing it though I was riding the reverb fader, so I was practically automating it in certain parts to give the mix some variety. I then closed Pro Tools and opened up FL Studio 11 to master the track using Maximus. I just don’t trust Pro Tools with mastering yet for some reason, I’d rather stick to the devil I know.

  • Pre-gain on Low Band: +7dB
  • Pre-gain on Mid Band: +10dB
  • Pre-gain on High Band: +9dB
  • Post-gain on Master: +3dB

The printed file from Pro Tools was very quiet hence the dramatic pre-gain levels. Also, used an automated volume control to top and tail the mix – it took a few times to get right… After that, another mix was in the can.
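For reference, a pre-gain in dB is just a per-band multiplier. Here’s a quick Python sketch of the conversion using the settings above (the actual band-splitting is left to Maximus):

```python
# Convert the Maximus pre/post-gain settings from dB to linear gain.
# Each band's samples would be multiplied by 10^(dB / 20).

def db_to_gain(db):
    return 10 ** (db / 20)

for band, db in [("low", 7), ("mid", 10), ("high", 9), ("master", 3)]:
    print(f"{band}: +{db} dB -> x{db_to_gain(db):.2f}")
```

+10 dB works out to a bit over a 3x boost, which shows just how quiet the printed file really was.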

Overall these guys were super fun to record and even more interesting to mix, so props to Royal Artillery!


The Febs – When It’s Morning (Before Feedback)

Here’s my walk-through on how I made the mix for “When It’s Morning” by The Febs.

The song was intended to have a very loose ‘pub rock’ feel so I kept that in mind while working through editing and mixing the song.

Setting Up

The first thing I did was gain stage everything to make sure it was all hitting the light green on the meters before touching the faders.

Editing

Before I started on editing I already knew what I wanted to do. The first thing I did was edit out the bleed on the tom drums tracks because I wanted more control over what I wanted to mix – bleed in a song like this isn’t usually a big issue but I still like to reduce it wherever I can.

There was a part in the first verse, where he sings the line “the line at the taxi’s gonna send me insane – hey there’s that guy from before, he’s still trying to tune that girl Jane”, between “insane” and “hey there’s that guy”, where the vocalist kind of sounded like he was talking over himself (just the way it was punched in during overdubs). The “me” and “in” syllables were too long, so I stretched them and synced them up with the beat so the line flowed naturally.

[Screenshot: stretching the ‘send me insane’ vocals with Elastic Audio]

Another thing I did to give the mix a bit of personality was place a quiet cough at the start of the song. The cough was recorded during overdubs when we were recording vocals, and it was mainly just a happy little accident. But I like it there.

Spectral

Next I put a whole heap of EQs on the tracks. Mainly just cleanups because I wanted to retain the ‘live’ feeling of it all, but there were some things that needed to be done.

  • Boosted low mids and high mids on kick. Cut sub frequencies and the really high ones.
  • Snare channels – boosted low mids and used low and high passes to cut any extreme frequencies – more boosting on snare top channel.
  • Hi-hats – completely cut everything under 150Hz and slight boost of high mid frequencies from 3kHz-15kHz.
  • Toms – same as kick but the low mids in question were a bit higher.
  • Did not touch overheads with EQ – reverb submix was EQ’d though, boosting high mids and slightly low mids, cutting lows with a high pass.
  • Double bass – cuts to high & high mid frequencies. The double bass channel had all frequencies above 2kHz completely cut, and the egg mic channel’s EQ looks like a dolphin facing left.
  • Acoustic L/R boosted highs and mids, cut lows.
  • “guide” acoustic guitar track had a low and high pass on it, retaining just the mid frequencies from 100Hz to 5kHz.
  • Vocals – mainly boosted low mids and high mids on all channels. Room channels I boosted the lows a lot more than the highs.
  • Electric guitar – boosted the mids about 5dB and the highs 1dB.

Overall I feel like the spectral balance of the track is pretty even – we have the drums, bass and vocals hanging in the lows to low mids, and the guitars and cymbals in the mid to high range, which helps them “shine” in the mix a bit.

Dynamics

I used some compressors and expander gates here and there to control gain levels on things such as vocals and the overheads, and also to make a few things a little clearer by removing bleed and noise on channels such as the tom drums & double bass.

Panning

On the drums I basically panned everything as if I were sitting at the kit, except leaving the kick, snare and hi-hats in the middle. Coming from an electronic music background, I like to leave those in mono – panning low-frequency-heavy stuff doesn’t feel right to me.

Acoustic guitar L and R were panned both ways 80% each, because I still wanted to have a tiny bit of the left on the right and vice versa. I don’t think we were supposed to, but I ended up using the guide acoustic guitar as a middle channel just to thicken up the acoustic guitars a little bit and also ensure there was a strong enough mono guitar signal for listeners with mono speakers.

I decided to use both electric guitar mic recordings, but panned the 414 to the left and the 57 to the right. I think I might have misread them as left and right channels at first, but I liked the result so I kept it.

The lead vocals I duplicated, moving the copy slightly out of sync using Mod Delay III to give it a stereo ‘slapback’ effect, with both signals panned 40% each way. The backup and harmony vocals were panned slightly to the right, because that’s just where I imagined them coming from. The backup room channel, though, was panned to the left, as if the backup singer was standing on the right and facing left.
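The duplicated-and-delayed vocal trick can be sketched like so – a stand-in for Mod Delay III, where the delay time, pan amounts and sample rate are all invented:

```python
# Slapback sketch: pan the dry vocal mostly one way and a short-delayed
# copy mostly the other, giving a wide, doubled feel.

SR = 8000                 # sample rate for the sketch
DELAY_MS = 90             # slapback delays sit roughly in the 60-120 ms range

def slapback(mono, delay_ms=DELAY_MS, pan=0.4):
    """Return (left, right): dry panned one way, delayed copy the other."""
    d = int(SR * delay_ms / 1000)
    delayed = [0.0] * d + mono
    dry = mono + [0.0] * d            # pad so both lists line up
    left = [(1 - pan) * a + pan * b for a, b in zip(dry, delayed)]
    right = [pan * a + (1 - pan) * b for a, b in zip(dry, delayed)]
    return left, right

click = [1.0] + [0.0] * 10            # a single impulse as the "vocal"
left, right = slapback(click)
```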

The drums, bass, acoustic guitar, electric guitar and vocals were all sent to stereo submix aux tracks, where I then panned the acoustic guitar and electric guitar further left and right respectively.

Other Effects

I used some reverb on the overheads by sending the two channels to a stereo aux track with a D-Verb plugin on it, set to a slight decay time of about 500ms. That signal was then compressed and EQ’d. I also used D-Verbs on the vocal room mics – you can hear it best in the breakdown section, where it’s just the singing and the kick drum.

I also used the Lo-Fi plugin for some fuzzy distortion on the double bass egg mic channel – saturating the sub frequencies a little in case listeners on small speakers can’t hear them.

Mastering

Once I was happy with the levels of everything, I printed the mix to a new stereo track and saved the session. I then took the printed audio into FL Studio 11, where I mastered the track using Maximus – a mastering plugin that can also be used as a multi-band compressor, limiter, gate and de-esser. I automated master volume fades at the start and end of the mix to top and tail it, then used Maximus to boost the high band +3dB and the mid band +2dB, and cut the low band -1dB, because I felt the bass was a little too muddy, so I backed it off in the mastering stage.

[Screenshots: mastering in FL Studio with Maximus]

Overall I felt like I did a really nice job with this mix, and any feedback I receive for it would only make it better!


ADR Project Completion & Exhibition

This week we officially concluded our audio replacement project with an exhibition to the other members of the class. Above is the final version of our proposed deliverable – a re-dubbed animation originally created by Michael Cusack/Gillsberry [http://www.gillsberry.com/]. I uploaded the video to my channel, which has about 3.5k subscribers, so I knew there’d be at least a few people out there who would see it and like what they saw. I also shared the link on Facebook, where it got a fair bit of attention from my friends and family (oh god, that’s right, I did a voice for that video).

The feedback we received from our lecturer was positive overall, with some things that could’ve been improved here and there – such as more emphasis on the foley sounds implemented into the recording, since the music somewhat drowns them out, just so we could demonstrate our skills a little better. That, and there probably wasn’t all that much to do for a team of three people.

I managed to show the project to Cusack and he loved it, basically saying he wouldn’t change anything about it and that overall he thinks it’s great. Not as constructive as I would’ve hoped, but at least we got some validation.

I’m still pretty happy with how it all turned out. The music adds a really nice touch to the ‘poshness’ of the new dialogue, and it wouldn’t feel the same without it. The amount of foley wasn’t over the top or complicated, and it was true to the original in that respect – the original only really had the train station ambience and the dialogue – so if we’d tried to add too many sounds that weren’t originally there, it might have become chaotic or taken away from the rewritten dialogue. The audio was recorded and processed with no quality problems or massive noise floors, and didn’t sound over-compressed, so we chose the right microphone for the job (AKG 414) and the right space to record in (the C24’s foley booth).

When it came to being a voice actor, it was definitely a new experience and probably better left to somebody more talented (haha!) but I had fun with it, and tried a lot of different things on the fly which in some cases replaced some of the lines we had already scripted. Improvisation – never underestimate it.

The team was great to work with: we all had a clear idea of our roles & responsibilities, and we all pulled our weight. Sometimes we got sidetracked, but we were always able to pull ourselves back together and get to work. It’s good to have fun and enjoy your work because it creates a positive environment, but you’ve still gotta knuckle down, because too much mucking around can be detrimental to productivity. Luckily it wasn’t here, because this project was something we were all really interested in, and once we got the ball rolling, it didn’t stop rolling – excuse the metaphor. We still quote our lines from this project from time to time when chatting, just to have a bit of a laugh. There was never a dull moment working on this project.


Vocoder Effects – Production Technique


A vocoder is an electronic speech synthesiser which takes a modulator and a carrier and blends them together, resulting in a heavily effected hybrid synth/vocal sound – hence the name, from ‘vocal’ and ‘coder’. The modulator is a signal which controls the filters applied to the carrier, and is usually sung or spoken vocals. The carrier can be anything depending on the plugin or hardware, but is typically a synth patch played via a MIDI keyboard. The principle is somewhat similar to side-chain compression, where a reference signal or ‘key input’ controls the level of compression applied to the affected signal.
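The core idea can be shown with a toy, single-band Python sketch: the modulator’s amplitude envelope shapes the carrier’s level. (A real vocoder does this per filter band with a bank of band-pass filters; the smoothing constant here is invented.)

```python
# Toy single-band "vocoder": rectify and smooth the modulator to get
# its envelope, then use that envelope to control the carrier's level.

import math

def envelope(signal, smooth=0.9):
    """Crude envelope follower: rectify, then one-pole smoothing."""
    env, out = 0.0, []
    for x in signal:
        env = smooth * env + (1 - smooth) * abs(x)
        out.append(env)
    return out

def vocode(modulator, carrier):
    """Impose the modulator's envelope on the carrier, sample by sample."""
    return [c * e for c, e in zip(carrier, envelope(modulator))]

n = 64
carrier = [math.sin(2 * math.pi * 110 * i / 8000) for i in range(n)]  # synth tone
modulator = [1.0] * 32 + [0.0] * 32        # a short burst of "speech"
out = vocode(modulator, carrier)           # tone fades in, then dies away
```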

Vocoders are used in both studio productions and live shows, and have been prominent in various styles of electronic music for many years. The very first electronic speech synthesisers can be traced back to the innovations of Homer Dudley in the late 1930s, but vocoders didn’t appear in music productions until the 60s, the first of which was developed by Wendy Carlos and Robert Moog. Some of the first artists to use such effects include Afrika Bambaataa, Herbie Hancock & Kraftwerk. Vocoder effects were also used in the score of Stanley Kubrick’s controversial classic “A Clockwork Orange”.

An alternative and highly creative use of vocoders is to run non-vocal sounds through them to create something new. Recently, Mike Shinoda of Linkin Park created a new sound from a cowbell loop using a vocoder and some guitar pedal effects:

Happy, little, musical, accidents.

Inspired by this, I had a go at experimenting with a vocoder myself. Similar to Shinoda, I also ran some percussion loops through a vocoder, except the results were different because I used a different carrier source signal:

Fun fact: it was believed for a long time that Cher’s vocals on her song “Believe” were vocoded, but it turns out it was just extremely dramatic settings on Antares’ equally famous (and infamous) Auto-Tune, a pitch-correction plugin – NOT to be confused with electronic speech synthesisers.

I really enjoyed experimenting with this vocoder plugin, and I’d be interested in looking at some more advanced plugins in the future.


References:

Apple Inc. (2010, January) Logic Studio: Instruments – A Brief Vocoder History [Online Documentation] Retrieved from http://documentation.apple.com/en/logicstudio/instruments/index.html#chapter=10%26section=10%26tasks=true

FutureMusic (2015, September 30) A Brief History of Vocal Effects [Magazine Article] Retrieved from http://www.innovativesynthesis.com/introduction-to-vocoders/

Wikipedia (2017, April 23) Believe (Cher Song) [Wiki Article] Retrieved from http://en.wikipedia.org/wiki/Believe_(Cher_song)

Wikipedia (2017, April 26) Vocoder [Wiki Article] Retrieved from http://en.wikipedia.org/wiki/Vocoder


Studio Time w/The Febs + ADR Wrap Up


On Monday everyone in the class was individually allocated four half-hour time slots to go into the Neve studio and assist in the recording of a band. While recording, we were also doing a live mix of the audio to get a feel for the songs before going off to do our individual mixes. The Febs were easy to work with – there was clear communication between the engineers and the band members, and we knew exactly what they felt like trying performance-wise and which points in the song they wanted to be dropped in on for takes. Most tracks were recorded to a click, but one was not, and that one also had a different live room setup. On the click tracks, there were a few instances where we tried nudging the click faster or slower until we found a tempo where they could deliver the most comfortable performance.


Friday came around and we were back in the C24 for our final ADR session. I ended up editing both my lines and Fraser’s, just in case he liked my edit better – and to my surprise, he didn’t end up finding time to do an edit of his own. The only real hiccup was that, for some reason, the script file for Elastic Audio was missing from my Pro Tools application files, so I couldn’t use Elastic Audio to stretch some lines to sit a bit more in time with the animation. We tried doing a bit of that in the studio, which Blake later redid himself because a couple of lines were noticeably garbled from the time stretching; he stretched the files as far as he could without the lines sounding too garbled. Fraser and I also had a bit more time to edit our audio individually, and I noticed a few takes in there that could have been better, so I swapped them out before we ended the session. We also had Rachel with us, who was booked to use the studio after us but came in to give us feedback as we went, because it’s nice sometimes, having a fourth set of ears.


With all that being said, we’re now done with recording, and it’s over to Blake to do the final mix. I posted a draft of the animation with our dialogue to Slack for feedback, but it was low quality due to the file type I rendered it as, so we got a few comments about that, and there wasn’t much I could do. Fortunately Blake uploaded a second draft in better quality, which got some more helpful feedback for us.


Side Project Ideas & Pheasant Hunting


“Ah, splendid. Those pheasants won’t hunt themselves.”

This week, the rest of the script was written for our audio replacement project, and some ideas for a side project spawned out of nowhere.

A classmate took to Facebook to rant about and dispute a video which claimed that men should always pay for dates because women are always spending money on themselves to “look as good as they do”. I jokingly commented something along the lines of “let’s do a diss track on her as a side project”, and he actually took it into consideration, with another classmate jumping in to show interest too. So I worked on a beat for about 45 minutes – it didn’t take long to produce with all of our combined disdain for the video fuelling it.

Friday came around and we had our second studio session in the C24. We had scheduled to start the dialogue today, but because recording the foley didn’t take as long as we’d thought it would, we’d already recorded half of the lines. Fraser and I still hadn’t finished writing the script, because we weren’t sure whether we would be improvising the second half (there’s a lot of random nonsense at the end of the original) or actually writing it (so if people do pay attention, there are diamonds in the rough, so to speak). We decided on the latter, so I assisted Fraser in writing the second half of the script before we did any more recording. Then we recorded the lines, changing a few of them as we went, because some just didn’t work at all with the motion of the mouths in the animation.


Fraser and I decided to work on separate vocal edits for next week’s session – each editing our own vocals and deciding which of our own takes are the best ones.


Parallel Compression – Production Technique


Compression occurs when a signal exceeds a specified threshold (in dB) and is reduced in level by a set ratio. More can be learned about basic compression here.

The term parallel compression comes from the idea of two or more copies of the same signal playing at the exact same time, with one of them processed by a compressor. Blended carefully, this technique can be used to add more presence to a certain instrument or sound.

Parallel compression is commonly used on drum kits, especially in rock and metal sub-genres, to highlight the driving force of the performance. There are several different techniques you can use in parallel compression for different purposes in order to achieve different results.

Since I mainly use FL Studio, here’s a video by EDMProd that clearly explains the basic routing (signal flow) for parallel compression in FL Studio 11:

Specifically with drums, you can reinforce the whole kit by using a brick-wall limiter (essentially a compressor at an extreme ratio, e.g. 55:1 or higher) and dialling in as much of that compressed signal as you need. It doesn’t stop there, though – you can adjust the attack and release parameters to target transients that may be quieter than the rest of the kit.
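A minimal Python sketch of the idea – the “limiter” here is just a hard clip standing in for an extreme-ratio compressor, and the threshold and blend amount are invented:

```python
# Parallel (New York) compression sketch: blend the dry signal with a
# heavily limited copy of itself, so quiet hits gain presence while
# loud hits keep their dynamics.

def brickwall(signal, threshold=0.2):
    """Crude brick-wall limit: clamp every sample to +/- threshold."""
    return [max(-threshold, min(threshold, x)) for x in signal]

def parallel_mix(dry, wet_amount=0.5, threshold=0.2):
    """Dry signal plus a dialled-in amount of the limited copy."""
    wet = brickwall(dry, threshold)
    return [d + wet_amount * w for d, w in zip(dry, wet)]

drums = [0.9, -0.1, 0.05, -0.7, 0.6, -0.02]   # made-up drum samples
print([round(x, 2) for x in parallel_mix(drums)])
```

The loud hits pass through the dry path almost untouched, while the quiet ghost notes get a proportionally bigger lift from the squashed copy.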

For example, the snare may not feel ‘strong’ or ‘punchy’ enough compared to the rest of the kit. With the Waves API-2500 compressor, you can set the compressor’s ‘tone’ to “Loud”, “Medium”, or “Normal”. Both “Loud” and “Medium” compress the low and high frequencies, with the former favouring the low frequencies and the latter the mids. Combined with slow attack and release times, this can give the snare more presence in the mix.


In electronic music, parallel compression isn’t just used to reinforce a sound, but also as a creative tool. Last year I worked with a classmate on a pop track entitled “Move On”, which used parallel compression in a way that was experimental (by my standards!). I routed the output of my drum sub-mix to a new track, put some effects on it – distortion, delay, mono-summing, reverb and a high-pass filter – then essentially squashed the audio. I then put a volume envelope on the compressed signal to give it a sweeping sound. Blended with the original audio, it made for a pretty cool backing effect that I was fairly proud of. Here’s the drum beat before and after:

It’s very important to note that you may run into phasing issues in your productions if you aren’t careful with your routing, but there are ways around it. One workaround I use is putting a compressor on the dry track that you don’t actually want to compress (sacrilege!) without adjusting any of its parameters: if the attack time on the compressed channel is also left at its default, the two channels pick up the same ‘latency’ and the signals stay in phase. This won’t work if you’re feeding the dry channel into the wet channel, though, as that just gives the wet signal even more latency, leaving it out of phase.

Parallel compression can be used for both corrective and creative purposes, which makes it a must-know technique for anyone keen to work in audio production.


References:

Kärkkäinen, I. (2011, October 22) Smashed Up – A Parallel Compression Tutorial [Web Article] Retrieved from http://www.resoundsound.com/smashed-up-a-parallel-compression-tutorial/

Robjohns, H. (2013, February) Parallel Compression – The Real Benefits [Web Article] Retrieved from http://www.soundonsound.com/techniques/parallel-compression

Weiss, M. (2011, October 17) 2 Effective Ways to Use Parallel Compression [Web Article] Retrieved from http://theproaudiofiles.com/two-ways-to-use-parallel-compression/
