A common question received here at Audio Masterclass is, "In what order should I connect my processors and effects?". Of course, this question applies equally to hardware processors, analog or digital, and to DAW plug-ins.
Firstly, let's consider why you need to use plug-ins or hardware processors and effects at all. Yes, why?
One reason is to correct, ameliorate or compensate for some defect in the original signal. Suppose for instance you had recorded an acoustic guitar and you found there was an annoying resonance in the lower midrange, then you would use an EQ plug-in to correct this problem.
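To make that kind of correction concrete, here is a minimal sketch in Python of a peaking-EQ cut, using the well-known Robert Bristow-Johnson "Audio EQ Cookbook" biquad formulas. The specific centre frequency, Q and gain values are illustrative assumptions, not a recipe:

```python
import cmath
import math

def peaking_eq_coeffs(f0, fs, gain_db, q):
    """RBJ cookbook peaking EQ: returns biquad coefficients (b, a)."""
    a_lin = 10 ** (gain_db / 40)              # square root of the linear gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return b, a

def magnitude_db(b, a, f, fs):
    """Magnitude response of the biquad at frequency f, in dB."""
    z = cmath.exp(2j * math.pi * f / fs)
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return 20 * math.log10(abs(num / den))

# A 6 dB cut centred on a hypothetical 250 Hz lower-midrange resonance
b, a = peaking_eq_coeffs(f0=250, fs=48000, gain_db=-6.0, q=2.0)
```

A useful property of this design is that the response at the centre frequency is exactly the requested gain, while frequencies well outside the bandwidth pass almost untouched.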
Another reason is to make a signal sound better, either individually or in the context of the whole mix. So you might find a recording of a violin or other stringed instrument too dry. You would add reverb to make the sound richer and more lush.
In my opinion, it is better to correct faults first, then think about improvements. Indeed, how can you improve something while there are faults clouding your judgment? Would you not first tidy your room, then clean it, then arrange the pot plants and ornaments?
If a vocal is excessively sibilant, then de-essing is easiest on the cleanest, purest version of the signal you can obtain, which of course is the signal before any other processing. Because de-essing is quite tricky to get right, in my view it should be tackled first. In particular, de-essing should take place before compression, which would otherwise make the task very much more difficult.
If a signal has a fault in terms of frequency balance, then it makes sense to correct this fault before further processing. Why would you process a signal that had a defect? If the result turned out well, it could only possibly be by chance.
When the signal is as clean, clear and crisp as you would ideally have liked the recording to have been in the first place, then you can compress. You can do this either because you want to reduce the dynamic range quickly and easily, or because you simply like the sound of compression.
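What a compressor does to levels can be sketched as a static gain curve. This toy function (the threshold and ratio values are assumptions for illustration) scales anything above the threshold by 1/ratio and leaves everything below it alone:

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static curve of a downward compressor: levels above the
    threshold are scaled by 1/ratio; levels below pass unchanged."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A peak 12 dB over the threshold comes out only 3 dB over it:
# compress_db(-8.0) -> -17.0, i.e. 9 dB of gain reduction
```

A real compressor adds attack and release time constants on top of this static curve, but the curve alone shows why the dynamic range shrinks.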
If there was some noise or other unwanted sounds in your original recording, then with the power of your DAW you can simply edit out sections where the instrument isn't playing. When it is playing, it will almost certainly mask the noise, unless there was a serious problem that you really should have attended to during the session.
I have placed the expander/gate at this point in my list because this is where it usually comes in my own personal thought process. However, you might want to place it before the compressor, on the grounds that compression always increases the noise level, but if you have expanded or gated the signal then there is less noise to compress. If you edit out the non-playing but noisy parts of the track, then you are effectively doing this.
Another point of view is to place the expander/gate after the compressor if the noise level wasn't objectionable before compression, but it is after compression.
As you can see, the positioning of the expander/gate is flexible. The differences are subtle, but worth experimenting with.
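The gating idea itself is simple enough to sketch in a few lines. This toy hard gate works on a list of frame levels in dB; the threshold and floor values are assumptions chosen for the example:

```python
def gate_db(levels_db, threshold_db=-50.0, floor_db=-90.0):
    """Hard gate on a list of frame levels (dB): frames below the
    threshold are pushed down to the floor; the rest pass untouched."""
    return [lvl if lvl >= threshold_db else floor_db for lvl in levels_db]

# Amplifier hiss at -60 dB between notes is pushed down to the floor,
# while the played notes at -12 dB pass through unchanged.
```

A real expander/gate would apply a ratio rather than a hard cut, plus attack, hold and release times, but the principle is the same: quiet passages get quieter still.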
There is often a case for EQing a compressed (and perhaps expanded or gated) signal. This might be because you find you can get a more pleasing sound with EQ, or to fit the instrument or vocal into the context of the mix as a whole. Whereas previously you used EQ to correct a fault, now you can use it to make improvements. In other words, if your original recording was perfect in every way, you would now be making it even better. The same logic applies if you have had to de-ess, EQ, compress and expand or gate to make it as perfect as possible.
In normal circumstances, reverb is used to emulate the ambience and reverberation of real-life acoustic spaces, so it is applied to signals that have been processed to perfection.
You can place your reverb plug-in at a different point in the plug-in sequence, but this would be for a special effect. This would not be the way to obtain a natural sound quality, but if you are looking for something unusual, then experimenting with the placement of the reverb plug-in is one way to do it.
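The whole sequence discussed above can be summarised as an ordered chain of stages applied in turn. In this sketch the stage functions are hypothetical stand-ins that simply record when they run; the only point being illustrated is the order:

```python
def make_stage(name, log):
    """A placeholder processor that records its name when applied."""
    def stage(signal):
        log.append(name)
        return signal        # a real stage would transform the audio here
    return stage

applied = []
chain = [make_stage(name, applied) for name in
         ["de-esser", "corrective EQ", "compressor",
          "expander/gate", "sweetening EQ", "reverb"]]

signal = [0.0] * 1024        # stand-in for a buffer of audio samples
for stage in chain:
    signal = stage(signal)
```

Swapping the expander/gate ahead of the compressor, as discussed earlier, is just a matter of reordering the list.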
One last point
Remember that there are no rules in recording. Whatever sounds good to you, and your client or the market you sell into, is most definitely the right way to do things. But the above sequence of processes and effects is in most cases the best way to work.