When analog mastering was more common, this was used a lot more often, but it’s still valuable.
In short, if a mastering engineer had a really great sounding compressor, but the track didn’t need compression, they’d leave the peaks alone but still use the unit for subtle timbre shaping and gain staging.
Most digital compressors lack the nuance for this, but two good options I’ve found are Eventide’s Omnipressor and Tokyo Dawn Labs’ Molotok - the latter being free if you’re interested.
The Omnipressor, with a 1:1 ratio and no measured compression or expansion, still introduces a strong third-order harmonic.
More noticeably, it maximizes the signal pretty significantly. By adjusting the input and output, I can control the level of maximization and prepare the signal to hit the next processor, whatever that might be.
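That third-harmonic coloration is easy to reproduce with any symmetric waveshaper. This tanh-based sketch is my own stand-in, not Eventide’s actual algorithm, but it shows why a unit set to 1:1 can still shape the timbre without any threshold-dependent gain reduction:

```python
import numpy as np

def soft_saturate(x, drive=2.0):
    """Symmetric tanh waveshaper: generates odd harmonics (3rd, 5th, ...)
    while leaving even harmonics untouched -- a rough stand-in for running
    a colorful analog-style unit at 1:1 purely for tone."""
    return np.tanh(drive * x) / np.tanh(drive)

fs = 48000
t = np.arange(fs) / fs
sine = 0.8 * np.sin(2 * np.pi * 1000 * t)          # 1kHz test tone
spectrum = np.abs(np.fft.rfft(soft_saturate(sine)))
# With 1Hz bin spacing over one second, index 3000 is the third harmonic;
# it dwarfs the (absent) second harmonic at index 2000.
```

Because the transfer curve is symmetric, only odd harmonics appear - the same general character as the strong third-order harmonic described above.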
Molotok, again with no measured attenuation, introduces something similar to a 6dB/Octave HP filter, which, although it looks aggressive, is subtle enough for mastering with only a 0.5dB dip at 50Hz.
Additionally, everything above 200Hz is boosted by 0.2dB while the air is boosted 0.3dB. Very subtle, but when mastering, little changes like this make a difference.
The harmonics are much more aggressive, though, with both even and odd overtones at high amplitudes.
Hopefully, in the future, more digital compressors will impart unique timbres when peaks aren’t attenuated, but for now, these are two good options.
Let’s check out the difference Omnipressor makes without the compressor or expander engaged.
Watch the video to learn more >
This is another compression technique that has been lost. In short, parallel compression, in particular aggressive parallel compression, isn’t a set-it-and-forget-it situation.
Depending on the instrumentation, pumping will become more or less noticeable at various points in the song.
With that in mind, it’s a good idea to find the settings you like for parallel compression and then listen to the track from start to finish; meanwhile, take notes of any sections in which you notice audible pumping.
Then using automation, reduce the output gain of the compressor during those sections.
Alternatively, if the compressor is on an auxiliary track, automate the channel fader during these passages.
Personally, I enjoy using the latch function and manually adjusting the aux track’s level as the song plays. Once I get a good performance, so to speak, I’ll switch the automation to read.
The minor inconsistencies give it a bit more character.
Speaking of parallel compression, let’s combine a few concepts to create a technique that’s both creative and practical.
One of the best parts of parallel compression is the additional overtones - typically, the more aggressive the compression, the more pronounced the waveshaping, resulting in higher amplitude harmonics.
The only issue is that the compressor waveshapes based on the waveform’s peak, which may or may not be musically related to the song - for example, the peak could be vocal sibilance or something along those lines.
To create more musical parallel compression, or parallel compression in which the overtones are more often tied to the song’s key, let’s create our auxiliary track and first insert a linear phase EQ.
Then, we’ll find the fundamental frequency, which should be the root note of the song’s key. Let’s say the song is in the key of A Minor, meaning the fundamental is A.
Or, if I didn’t know this, I could observe the analyzer and pinpoint the highest-amplitude low frequency, which will likely be the root note.
With a bell filter, I’ll amplify the fundamental a fair amount. We’ll compensate for this later so we can make the boost more aggressive.
I also like to find the perfect fifth and boost that. If you lean toward music theory, you’ll know this is the note E, but if you’re like me and more geared toward numbers, you can use the ratio of 3:2, which will always give you a perfect fifth.
For example, if the fundamental is 55Hz, we could multiply 55 by 3, resulting in 165. Then, divide 165 by 2 to find the frequency of the perfect fifth.
In this example, the frequency is 82.5Hz, which again is E - just a different way of finding the same thing.
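The 3:2 arithmetic above can be wrapped in a tiny helper (the function name is my own, just for illustration):

```python
def perfect_fifth(fundamental_hz: float) -> float:
    """Return the frequency a perfect fifth (3:2 ratio) above the fundamental."""
    return fundamental_hz * 3 / 2

print(perfect_fifth(55.0))   # A1 -> E2: 82.5 Hz
print(perfect_fifth(110.0))  # A2 -> E3: 165.0 Hz
```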
Once that’s boosted, insert your compressor of choice for parallel compression and compress heavily.
With in-key elements boosted, it’s much more likely that the waveshaping occurs on in-key elements, resulting in harmonics that are musically related.
After the compression, insert another linear phase EQ, and attenuate the fundamental and perfect fifth until the signal sounds balanced.
So basically, this is an emphasis, de-emphasis technique used to create more harmonious parallel compression.
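Here’s a minimal sketch of that emphasis/compress/de-emphasis chain. A minimum-phase peaking biquad (RBJ-cookbook formulas) stands in for the linear-phase EQ, and a crude static waveshaping stage stands in for a real compressor - the function names, gains, and blend amount are all my own assumptions:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs, f0, gain_db, q=2.0):
    """RBJ-cookbook peaking (bell) EQ coefficients."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def crude_compress(x, threshold=0.1, ratio=8.0):
    """Static sample-by-sample gain reduction -- a stand-in for an
    aggressive compressor, enough to generate waveshaping harmonics."""
    mag = np.abs(x)
    over = mag > threshold
    out = x.copy()
    out[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
    return out

fs = 48000
t = np.arange(fs) / fs
dry = 0.5 * np.sin(2 * np.pi * 55.0 * t)            # stand-in "mix" at A1

b1, a1 = peaking_biquad(fs, 55.0, +9.0)             # emphasize the fundamental
b2, a2 = peaking_biquad(fs, 82.5, +6.0)             # ...and the perfect fifth
emphasized = lfilter(b2, a2, lfilter(b1, a1, dry))

squashed = crude_compress(emphasized)               # heavy bus compression

b3, a3 = peaking_biquad(fs, 55.0, -9.0)             # de-emphasis: mirror cuts
b4, a4 = peaking_biquad(fs, 82.5, -6.0)
wet = lfilter(b4, a4, lfilter(b3, a3, squashed))

blend = dry + 0.3 * wet                             # parallel blend
```

In a session you’d do all of this with plugins, of course - the point of the sketch is the ordering: boost in-key frequencies, compress hard, then mirror the boosts with cuts before blending in parallel.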
Let’s take a listen.
Watch the video to learn more >
So, we’re talking about a different type of compression here - not peak compression but the compression used to encode a track from WAV to MP3, AAC, etc.
This is an issue I notice with a lot of mixes and masters, but it’s rarely talked about.
Basically, the de-correlation of high frequencies causes more artifacts when the track is converted to a lossy format than if the high frequencies were correlated.
By de-correlated, I mean the left and right channels have differing information. By correlated, I mean the left and right channels have identical information.
This is becoming more of an issue with stereo expander plugins - for example, if I used iZotope’s Imager plugin and expanded the highs significantly, then when the track is converted for Spotify, YouTube, etc., it’s a lot more likely I’ll hear those weird artifacts associated with lossy formats.
A big part of reducing file size when encoding is attenuating high frequencies - if we compare the same section of a song, one that’s a 320kbps MP3, the other a 24-bit 48kHz WAV, notice how much high-frequency information is lost - and that’s a high-quality MP3 bounced out from a DAW.
When artifacts are created for the left and right channels identically, they’re much more likely to be masked by other high-energy, centered information.
But if the artifacts occur in the side image, there’s a lot less there to mask them - additionally, they now provide spatial and directional cues that help them stand out.
Long story short, stereo expansion is fine within reason, but know that the more you de-correlate high frequencies, the more aggressive the artifacts will be after encoding.
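If you want to gauge how de-correlated your top end actually is, a simple check is the correlation between the high-passed left and right channels. This sketch uses a 4th-order Butterworth high-pass; the function name and 8kHz cutoff are my own choices, not a standard metric:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def high_band_correlation(left, right, fs, cutoff=8000.0):
    """Pearson correlation of the left/right high bands:
    +1 = identical (fully correlated), near 0 = fully de-correlated."""
    sos = butter(4, cutoff, btype="highpass", fs=fs, output="sos")
    hl, hr = sosfilt(sos, left), sosfilt(sos, right)
    return float(np.corrcoef(hl, hr)[0, 1])

fs = 48000
rng = np.random.default_rng(0)
mono_hiss = rng.normal(size=fs)

# Correlated: the same high-frequency content in both channels.
print(high_band_correlation(mono_hiss, mono_hiss, fs))   # ~1.0

# De-correlated: independent noise per channel (a "widened" top end).
other = np.random.default_rng(1).normal(size=fs)
print(high_band_correlation(mono_hiss, other, fs))       # ~0.0
```

The closer that number sits to zero before encoding, the more exposed any lossy-format artifacts in the highs will be.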
This next idea, though, depends on whether someone can figure it out.
So, the Fast-Fourier Transform has been an absolute game-changer for audio processing.
If you’re unfamiliar, it takes time-domain audio data and converts it into phase, amplitude, and frequency data that can then be assigned to bins or chunks of information.
For example, iZotope RX is an FFT editor. It maps out the bins, with the x-axis representing time, the y-axis representing frequency, and variable colors representing changes in amplitude.
I’ve made some more in-depth videos on this, but in short, allocating the information this way allows for much more accurate changes to the amplitude of these bins when compared to traditional EQ.
So, my question, which I haven’t been able to find any info on: is it possible to rearrange these bins so that the x-axis still represents time, but the y-axis becomes the amplitude?
If the amplitude information is already known and can be mapped, then if the bins were arranged in the way I described, sections of the dynamic range could be easily selected and adjusted.
Currently, this is already possible with a traditional FFT editor; it’s just not optimized for it.
By re-arranging the bins, compression, expansion, maximization, gating, and any other manipulation of the dynamic range could be done incredibly accurately and without the need for a threshold function.
For example, say I have a noisy track. Traditionally, I’d use a gate to attenuate or downward expand the noise whenever the performance wasn’t present.
This works, but the noise is still present when the performance occurs.
Additionally, gates can be finicky with the attack and release, ratio, lookahead, etc., and noticeable artifacts definitely pose a problem.
With a rearrangement of the bins that organizes them based on time and amplitude, I could select all info between, say, -140dB and -80dB and significantly reduce the gain. This reduction would occur whether the performance is present or not, just about completely eliminating the issue instead of attenuating it only when the performance isn’t present.
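Even without a purpose-built editor, the "select by amplitude, not by threshold" idea can be approximated today by masking STFT bins whose magnitudes fall inside a dB range. This is only a sketch of the concept - the function name, window size, and dB figures are my own assumptions, and reconstructing audio from modified bins introduces its own small errors:

```python
import numpy as np
from scipy.signal import stft, istft

def attenuate_amplitude_range(x, fs, lo_db=-140.0, hi_db=-80.0, cut_db=-24.0):
    """Cut the gain of every STFT bin whose magnitude falls inside
    [lo_db, hi_db], whenever it occurs -- selecting by amplitude
    rather than by a time-domain gate threshold."""
    f, seg_t, Z = stft(x, fs=fs, nperseg=2048)
    mag_db = 20 * np.log10(np.abs(Z) + 1e-12)
    in_range = (mag_db >= lo_db) & (mag_db <= hi_db)
    Z = np.where(in_range, Z * 10 ** (cut_db / 20), Z)
    _, y = istft(Z, fs=fs, nperseg=2048)
    return y

fs = 48000
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 440 * t)        # stand-in "performance"
rng = np.random.default_rng(0)
noise = 1e-5 * rng.normal(size=fs)              # constant low-level hiss
cleaned = attenuate_amplitude_range(tone + noise, fs)
```

Because the selection is by bin amplitude, the hiss is reduced under the tone as well as between notes - exactly the advantage over a conventional gate described above.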
This is just one example of many in which FFT editing could completely change dynamics processing, but again, it depends on whether it’s possible at all.