In the previous part, we introduced the concept of the three dimensions of mixing and discussed the Y-axis (equalization) in depth.
In this entry, we will cover the other two dimensions: panning and depth.
The 3 dimensions of mixing (continued)
The X-axis: Panning
Why do we need to pan?
If we recorded a band or orchestra performance with a pair of ambience mics, each instrument would naturally sit in its original place in the stereo image. But in studios (and especially home studios), each instrument tends to be recorded separately, using one or more mics that capture a mono image of each instrument.
When all the tracks are played back, since they all have monophonic content, every instrument is heard as if it were stacked in the middle of the stereo image.
We have seen that corrective equalization (the Y-axis) lets us clearly distinguish each instrument; with panning, we will try to reposition each instrument in the right horizontal spot of the stereo image, as if we were standing in front of the band or orchestra.
As in the whole music production process, there are recommended practices rather than rules, but everybody seems to agree that the center of the stereo image should be reserved exclusively for the kick drum, the snare, the bass guitar and the lead vocals. The position of the rest of the instruments can be chosen with more freedom.
If we want to position each piece of the drum kit, we should picture ourselves standing in front of the drums. If the drummer is right-handed, the hi-hat sits slightly to the left, one tom slightly to the left and the other slightly to the right, the floor tom far to the right, and so on.
If the drummer is left-handed, some of the individual elements will be panned the opposite way.
Usually we tend to exaggerate the positions in the panorama a bit, leaving some gap between instruments, which increases the intelligibility of the overall sound.
The issue with a multi-part instrument like the drum kit is that each mic picks up not only the drum it is aimed at (the snare bottom, for example) but also some bleed from the rest of the kit.
Additionally, the ambience and overhead mics (which mainly capture the cymbals) also pick up the sound of the whole kit.
So you should listen carefully to the ambience tracks while panning: it makes no sense to pan the hi-hat to the left if it sounds to the right in the overheads and ambience tracks.
You will easily notice, when putting the whole kit together, that something seems wrong; this is probably because you placed some of the individual drums in the wrong spot, conflicting with the positional information you are (perhaps unconsciously) hearing in the ambience and overhead tracks.
Usually, the lead guitar is panned to the right, roughly a quarter of the way into the right side of the panorama. Keyboards and filling pads are usually placed to the left, and so on.
As a rule, moving a track away from the center makes it sound apparently quieter, and you will need to raise the track's volume a bit to compensate for the loss of loudness.
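That loudness drop is exactly what pan laws compensate for. The sketch below is a minimal illustration of the common -3 dB constant-power law (not any particular DAW's implementation): the total acoustic power, left² + right², stays the same at every pan position, so the track does not seem to fade as it moves off-center.

```python
import math

def constant_power_gains(pan: float) -> tuple[float, float]:
    """Left/right gains for pan in [-1.0 (hard left), +1.0 (hard right)].

    Constant-power (-3 dB) law: left**2 + right**2 == 1 at every
    position, so perceived loudness stays roughly constant as the
    source moves across the stereo image.
    """
    theta = (pan + 1.0) * math.pi / 4.0  # maps [-1, 1] onto [0, pi/2]
    return math.cos(theta), math.sin(theta)

# Center: both channels at about -3 dB (gain ~0.707), power sums to 1.0
l, r = constant_power_gains(0.0)
print(round(l, 3), round(r, 3))      # 0.707 0.707
print(round(l * l + r * r, 3))       # 1.0

# Hard right: all the signal in the right channel, same total power
l, r = constant_power_gains(1.0)
print(round(l, 3), round(r, 3))      # 0.0 1.0
```

A simple linear pan (left = 1 - x, right = x) does not preserve power, which is why a center-panned track on such a law sounds louder than the same track panned to one side.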
Usually, after placing each instrument in the right stereo position, we should aim for balanced loudness between the two channels. Differences over 1 dB between channels are often difficult to manage with dynamics processors, perhaps giving too much strength to one side while weakening the other.
The key in mixing is: balance in everything you do.
Below is a picture with an example of panning (click on the image to see it full-sized):
The Z-axis: mix depth
Alright: with the help of equalization we could distinguish each instrument, and with panning we could place it horizontally in the stereo panorama, as in a two-dimensional picture. But something really important is still missing: the relative depth (distance) of the instrument with respect to the listener.
If we watch any orchestra or band, some instruments are more in front than others.
In the typical rock band, the singer is always in the first row. In the second row you see the guitars; in a third row, the bass guitar; and in the last row, the drums.
But even within the drum kit, each element sits at a different depth. The cymbals come first, then the toms, then the hi-hat, the snare and finally the kick drum.
So the idea is to create a sensation of depth and place each instrument at the right distance. But how do we push an instrument back in the mix?
The way our ears identify the relative distance to a sound source is by analyzing the reverberation that arrives combined with the original sound. Depending on the proportion of early versus late reflections, we can estimate the distance (depth) between us and the source.
The sources nearest to us are richer in early reflections; the farthest sources are richer in late reflections. The closest sound richer in treble, the farthest in low frequencies, and so on.
Reverberation times are very important, too. Short times give a sensation of proximity, while longer times give a sensation of remoteness.
I usually set up three auxiliary buses, each with a different type of reverberation:
- A Room reverb, with only early reflections and a very short time, emulating a small room where you would record vocals or guitars; it helps push the instrument to the front, thanks to the Haas effect.
- A Hall reverb, with a mix of about 40% early reflections and 60% late reflections, and a medium time, which gives a sensation of space.
- A Plate reverb, with only late reflections and a long time, which helps push the instrument back and open up the stereo image.
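As a quick summary, the three-bus setup above could be written down like this. The decay times in seconds are illustrative assumptions added for the sketch (the text only says "very short", "medium" and "long"); only the 40/60 early/late split for the Hall comes from the description.

```python
# Illustrative settings for the three reverb aux buses described above.
# decay_s values are assumptions, not presets from any plugin.
REVERB_BUSES = {
    "room":  {"early_reflections": 1.00, "late_reflections": 0.00,
              "decay_s": 0.4},   # very short: pulls sources forward (Haas)
    "hall":  {"early_reflections": 0.40, "late_reflections": 0.60,
              "decay_s": 1.2},   # medium: gives a sensation of space
    "plate": {"early_reflections": 0.00, "late_reflections": 1.00,
              "decay_s": 2.5},   # long: pushes back, opens the stereo image
}

for name, bus in REVERB_BUSES.items():
    print(name, bus["decay_s"], "s")
```

The useful invariant is the trend, not the exact numbers: early-reflection content goes down and decay time goes up as the bus is meant to push sources further back.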
I usually don't send the bass to any reverb, since it tends to sound undefined and get lost in the mix. Sometimes, though, a Room reverb works, provided we can equalize the sound of the reverb itself (as in the Sonnox Reverb plugin, which allows full equalization of the reflections).
The vocals are usually sent to all three reverb types, dialing in the exact amount of each.
The guitar is sent to the Room type (to restore the sense of the room where it was recorded) and to the Plate type, to push it back behind the singer.
Sometimes we can also use some kind of short delay, slap-back style, with just one or two echoes and a very short time, which pushes the instrument to the front (Haas effect). For this kind of effect, I use the delays from the EchoBoy plugin by SoundToys.
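The slap-back idea can be sketched as a single short echo mixed back into the dry signal. This is a minimal illustration, not EchoBoy's algorithm; the delay time and mix level are assumed defaults. Delays under roughly 30 ms are fused with the dry sound by the ear (the Haas, or precedence, effect) instead of being heard as a separate echo, which thickens the source and pulls it forward.

```python
def slapback(samples, sr=44100, delay_ms=20.0, mix=0.5):
    """Mix one short echo back into a dry mono signal.

    samples  : list of float samples (the dry signal)
    sr       : sample rate in Hz
    delay_ms : echo delay; under ~30 ms the echo fuses with the
               dry sound (Haas effect) rather than reading as an echo
    mix      : echo level relative to the dry signal
    """
    d = int(sr * delay_ms / 1000.0)            # delay in samples
    out = list(samples) + [0.0] * d            # room for the echo tail
    for i, s in enumerate(samples):
        out[i + d] += s * mix                  # add the delayed copy
    return out

# A unit impulse at sr=1000 Hz with a 2 ms delay grows a second
# spike two samples later, at half amplitude.
print(slapback([1.0, 0.0, 0.0], sr=1000, delay_ms=2.0, mix=0.5))
# → [1.0, 0.0, 0.5, 0.0, 0.0]
```

A real slap-back unit would also offer feedback for a second repeat and filtering on the echo, but the depth cue comes from this one short, fused repetition.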
The choice of reverb type is highly important; it is closely tied to personal taste and to the kind of sound we want to achieve for the final mix.
There are convolution reverbs: essentially software that reads a file containing the impulse response (the recorded sequence of echoes) of a real environment, captured and prepared for use by such programs. The good thing about these programs is that you can always add more impulse files to reproduce many other real environments with a certain accuracy.
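The core operation such software performs can be sketched in a few lines: every sample of the dry signal triggers a scaled copy of the impulse response. This is a deliberately naive direct convolution in pure Python for illustration; real plugins use FFT-based (often partitioned) convolution, because direct convolution is far too slow for impulse responses lasting seconds, which is also why these plugins are heavy on CPU.

```python
def convolve(dry, impulse_response):
    """Direct convolution of a dry signal with an impulse response.

    Each dry sample launches a copy of the impulse response, scaled
    by that sample's amplitude; the copies overlap and sum. The
    output is longer than the input by the length of the IR tail.
    """
    out = [0.0] * (len(dry) + len(impulse_response) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(impulse_response):
            out[i + j] += x * h
    return out

# A single unit impulse played through the "room" reproduces the
# room's impulse response exactly.
print(convolve([1.0], [0.3, 0.2, 0.1]))
# → [0.3, 0.2, 0.1]
```

Swapping in a different impulse file changes the simulated space without touching the algorithm, which is why adding new "impulse files" extends these reverbs so easily.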
This kind of reverb has a lot of fans but, even though I've tried several (Waves, some free ones, Perfect Space, Wizooverb W2, etc.), none of them convinced me. I don't mean they are bad; I mean I wasn't able to achieve good results with any of them. Also, convolution software is really heavy on your PC's resources.
There are algorithmic digital reverbs, some better than others. Absolutely all the ones I've tested show some kind of "digititis" (digital artifacts), and I liked none except the Sonnox Reverb plugin (the one I'm currently using). I find the Sonnox Reverb easy and intuitive, and it's the one that lets me get the results I want the fastest.
There are also hybrid reverbs, like the Wizooverb W2 by Wizoo, which lets you choose between several digital and convolution models. This software is very useful for checking that a convolution reverb is not always the solution to your needs. Even though I liked it far more than other convolution and digital software, I prefer the Sonnox, because I find it more intuitive and faster for getting my results.
In the following screen capture, we can see the overall look of the three reverb types that I use (click on the picture for full size):
On the left, the short reverb, with early reflections and a short time.
In the middle, the medium reverb, with a medium time and a mix of early and late reflections.
On the right, the long reverb, with only late reflections and a long time.
Notice that this plugin lets you fully control every parameter of the reverb from a single screen.
By understanding the concepts behind the three dimensions of mixing, we can make every instrument distinguishable from the rest, place each one in the right spot in the space, and restore the reverberant ambience that was removed by recording in dry environments.
To be continued...
We have now seen practically all the basic operations of the mixing stage, but we still need to review the tools that sculpt the dynamics of our mix and help project it to a commercial loudness.
In my next blog entry, I will discuss what dynamic range means, what transients, attack, release and the rest of the dynamics concepts are, as well as the dynamics tools, such as compressors.