Mid-side processing has become increasingly popular in audio production, especially in plugin form - but the same idea can be applied at the routing level. Mid-side routing offers more flexibility and greater control over your signal’s timbre, stereo width, dynamic range, and more - making it a truly helpful technique.
Doing this takes some unique routing. I’ll use 2 busses, both of which I’ll make pre-fader so that they send the signal regardless of the fader’s level - which I’ll pull all the way down.
I’ll set both sends to unity and name my busses Mid and Side respectively. Using Voxengo’s Mid-Side encoder on both channels, I’ll mute the side channel on my Mid bus and the mid channel on my Side bus.
I was concerned about phase cancellation when first trying this, but a null test showed that the summed mid and side busses are identical to my original stereo signal.
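The math behind this bus trick can be sketched in a few lines of Python (numpy assumed; this illustrates the encode/decode arithmetic, not the DAW routing itself):

```python
import numpy as np

def ms_encode(left, right):
    """Encode a stereo pair into mid (sum) and side (difference) signals."""
    mid = (left + right) / 2.0
    side = (left - right) / 2.0
    return mid, side

def ms_decode(mid, side, width=1.0):
    """Decode mid/side back to left/right; width scales the side signal."""
    out_l = mid + width * side
    out_r = mid - width * side
    return out_l, out_r

# A toy stereo signal: uncorrelated noise in each channel
rng = np.random.default_rng(0)
left = rng.standard_normal(48000)
right = rng.standard_normal(48000)

mid, side = ms_encode(left, right)
out_l, out_r = ms_decode(mid, side)

# The round trip nulls against the original - the null test described above
print(np.max(np.abs(out_l - left)))
```

The `width` parameter is the same move as raising or lowering the Side bus fader: values above 1.0 widen the image, values below 1.0 narrow it.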
Now I can process my mid and side signals separately by inserting plugins on the busses. Additionally, by changing the bus fader levels, I can adjust my stereo image: raising the Side bus widens it, while raising the Mid bus narrows it.
If I want a processor to affect the entire signal in a collective way, I’ll insert it on my master output.
Phase is truly important - especially during a mastering session, where any phase change to a signal can affect the entirety of the signal. By using a natural or linear phase setting, we can create analog-sounding phase changes, or avoid phase changes altogether.
Natural phase settings emulate the effect that analog components have on the phase of a signal. By looking at a frequency and phase analyzer, we notice that the phase behaves in a unique way when changes to the frequency spectrum are made.
When it comes to linear phase, we can avoid phase changes completely. Since the signal is delayed, the processor can make all of its changes without altering the phase relationships within the signal.
I like to use a low linear phase setting since this reduces the effect of pre-ringing - the faint smearing that occurs just before a transient, caused by the symmetric impulse response of a linear phase filter.
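The pre-ringing itself is easy to see in code. A linear phase filter’s impulse response is symmetric, so it has energy before its main tap - meaning it starts “ringing” ahead of each transient. A minimal sketch (numpy assumed; the cutoff and tap count are arbitrary illustration values):

```python
import numpy as np

# A linear-phase low-pass FIR: a windowed sinc, symmetric about its center tap
fs = 48000       # sample rate (Hz)
cutoff = 1000    # cutoff frequency (Hz) - illustration value
taps = 401       # filter length; longer = steeper filter, more pre-ringing
n = np.arange(taps) - (taps - 1) / 2
h = np.sinc(2 * cutoff / fs * n) * np.hamming(taps)
h /= h.sum()     # normalize to unity gain at DC

center = (taps - 1) // 2
pre_energy = np.sum(h[:center] ** 2)   # energy BEFORE the main tap
print(pre_energy > 0)                  # the filter rings before the impulse
```

A steeper filter (more taps) spreads more of that energy ahead of the transient, which is why gentler or lower-latency linear phase modes ring less audibly.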
Even if the mix you’re working on was made at a lower sampling rate, it still makes a lot of sense to use a sampling rate of 96kHz or higher when mastering. This will reduce aliasing distortion, as well as lessen the effect of pre-ringing when using linear phase processing.
If we use a higher sampling rate, we give our processors a higher frequency up to which they can process the signal. At 96kHz, our processors can work on content up to 48kHz - the Nyquist frequency. This greatly reduces fold-back or aliasing distortion, since fewer generated harmonics land above the cutoff and fold back into the audible range.
When it comes to linear phase, these processors work by delaying the signal by a set number of samples. If the sampling rate is higher, those samples pass more quickly, reducing the amount of time for which your DAW needs to compensate.
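Both effects fall out of simple arithmetic. Assuming a hypothetical fixed-length linear phase filter (the 8192-tap figure below is an illustration, not any specific plugin’s value):

```python
# A linear-phase processor delays by a roughly fixed number of samples,
# so a higher sample rate means a shorter delay in real time.
TAPS = 8192  # assumed FIR length of a hypothetical linear-phase EQ

for fs in (44100, 96000, 192000):
    nyquist_khz = fs / 2 / 1000          # highest processable frequency
    latency_ms = TAPS / fs * 1000        # real-time delay of the filter
    print(f"{fs} Hz: Nyquist {nyquist_khz:.1f} kHz, latency {latency_ms:.1f} ms")
```

At 96kHz the same 8192-sample delay lasts less than half as long as it does at 44.1kHz, which is the latency (and pre-ringing) benefit described above.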
When mastering, we need to keep in mind how our masters will sound over multiple popular playback systems - including headphones and earbuds. We can use various sources to find the frequency response of these playback systems, and then emulate them using an EQ in our DAW.
One great source for this is RTings.com, which has tested the frequency response of multiple headphones and earbuds. Let’s use the new Apple AirPods as an example since this is a popular playback system.
Using the peaks and dips chart, we can see how the signal is shaped by the AirPods.
As the last insert in our mastering chain, we can emulate this response to better understand how this product will be heard by consumers. It goes without saying, but be sure to take this EQ off before exporting your master.
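One way to build that emulation is with a handful of peaking filters matched to the chart. The sketch below uses the standard RBJ audio-EQ-cookbook peaking biquad; the band values are hypothetical placeholders, not actual AirPods measurements:

```python
import numpy as np

def peaking_biquad(fs, f0, gain_db, q=1.0):
    """RBJ cookbook peaking filter; returns normalized (b, a) coefficients."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

def biquad_filter(b, a, x):
    """Apply a biquad (direct-form II transposed)."""
    y = np.zeros_like(x)
    z1 = z2 = 0.0
    for i, xi in enumerate(x):
        y[i] = b[0] * xi + z1
        z1 = b[1] * xi - a[1] * y[i] + z2
        z2 = b[2] * xi - a[2] * y[i]
    return y

fs = 48000
# Hypothetical peaks and dips read off a headphone measurement chart:
bands = [(120, 3.0, 0.8), (2500, -2.0, 1.5), (9000, 4.0, 2.0)]  # (Hz, dB, Q)

signal = np.random.default_rng(1).standard_normal(fs)
for f0, gain_db, q in bands:
    b, a = peaking_biquad(fs, f0, gain_db, q)
    signal = biquad_filter(b, a, signal)
```

Each band boosts or cuts by exactly its `gain_db` at its center frequency, so transcribing a response chart into `bands` gives a rough playback-system preview on this last insert.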
A lot of genres have been shaped by particular processors in more ways than we realize - sometimes a processor type is so heavily tied to a genre that it defines the expected sound of a master. It’s entirely possible to pick a processor that simply does not work for a genre.
For example, dance music is heavily associated with sharp transients; if you were to imagine the genre without those transients, it probably wouldn’t sound like dance music.
With that in mind, an optical compressor - which is great at smoothing out a sound - really has no place on a dance track. Although an optical compressor may sound fantastic for R&B or smoother genres, it’ll take away a lot of what makes dance music unique.
When emulating analog equipment, it is a good idea to use a processor that takes into account the complexities of analog equipment. This is just as true when mastering since we want our masters and processing to sound as complex, detailed, and nuanced as possible.
For example, Satin by u-he is a really complex tape plugin that takes into account how the multiple functions of a tape machine coexist and affect one another.
This means that altering the input will affect all aspects of the plugin’s performance since each part is program-dependent. Additionally, these types of plugins give you the opportunity to create techniques that otherwise wouldn’t be possible with a more basic analog emulator.
Although it’s becoming more popular to master music to “target loudnesses” due to loudness normalization, loudness and how it’s achieved is more involved than a simple metric. Loudness is genre-specific and should be adjusted based on the genre being mastered.
For example, rap music should rarely be mastered to -14 LUFS - the reason being, it simply doesn’t produce the timbre associated with rap music. Even after normalization, other rap songs will sound more indicative of the genre than the -14 LUFS master will.
So in short: although there are certainly reasons to target a specific loudness, keep normalization in mind and find a compromise between the genre you’re working in and the streaming service your master will be played from.
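For reference, integrated loudness is defined by ITU-R BS.1770: the signal is K-weighted (a high shelf plus a high-pass), then its mean-square energy is measured over gated blocks. A simplified mono sketch without gating (the coefficients are the spec’s published 48kHz values; for real work use a dedicated meter or a library such as pyloudnorm):

```python
import numpy as np

# BS.1770 K-weighting filter coefficients for fs = 48 kHz (from the spec)
SHELF_B = [1.53512485958697, -2.69169618940638, 1.19839281085285]
SHELF_A = [1.0, -1.69065929318241, 0.73248077421585]
HIPASS_B = [1.0, -2.0, 1.0]
HIPASS_A = [1.0, -1.99004745483398, 0.99007225036621]

def biquad(b, a, x):
    """Apply a biquad (direct-form II transposed)."""
    y = np.zeros_like(x)
    z1 = z2 = 0.0
    for i, xi in enumerate(x):
        y[i] = b[0] * xi + z1
        z1 = b[1] * xi - a[1] * y[i] + z2
        z2 = b[2] * xi - a[2] * y[i]
    return y

def loudness_lufs(mono, fs=48000):
    """Ungated integrated loudness of a mono signal (simplified BS.1770)."""
    kw = biquad(HIPASS_B, HIPASS_A, biquad(SHELF_B, SHELF_A, mono))
    return -0.691 + 10 * np.log10(np.mean(kw ** 2))

# Sanity check from the spec: a 0 dBFS 997 Hz sine reads about -3.01 LUFS
t = np.arange(48000 * 5) / 48000
print(round(loudness_lufs(np.sin(2 * np.pi * 997 * t)), 2))
```

Knowing what the meter is actually computing makes it easier to treat a number like -14 LUFS as one input to a genre-appropriate decision rather than a hard target.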