26 March 2013

Home Studio: Mixing - Part 6

Introduction

Now that we have discussed the basic tasks of the mixing phase, I would like to add some additional notes that can help you (they helped me) achieve better mixes.

How to enhance your mixes?

Paradoxically, some of the basic things that help you achieve better mixes have nothing to do with plugins or outboard gear.


1. Acoustically treat your room

There is no way to avoid it, believe me. You can think about this all you want but, without a minimally treated room to mix in, where you can control the reverberation, echoes, resonances and everything else the room itself adds to or subtracts from the sound you are mixing, there is no way to achieve a good mix that translates to other systems or environments outside your mixing room.

The shape and dimensions of your room, the furniture, decorative elements, walls, building materials, windows, doors, etc., all summed together, generate certain anomalies that alter the way your mix is reproduced through your monitors.

Flat surfaces are highly reflective, and the sound bounces from side to side, creating anomalies related to echo and reverberation. These anomalies create an effect called comb filtering.
In square or rectangular rooms, the corners trap the bass frequencies, which resonate and dramatically affect the reproduction of your mix.
These resonant frequencies, determined by the dimensions of the room, are better known as room modes.
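As a rough sketch of where these room modes fall, the axial modes of a rectangular room can be estimated from its dimensions (frequency = n * c / 2L per dimension; the room size below is a hypothetical example, not one from this article):

```python
# Minimal sketch: axial room-mode frequencies for a rectangular room.
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees Celsius

def axial_modes(dimension_m, count=3):
    """First `count` axial room-mode frequencies (Hz) along one dimension."""
    return [n * SPEED_OF_SOUND / (2.0 * dimension_m) for n in range(1, count + 1)]

# Hypothetical 4.0 x 3.0 x 2.5 m room:
for name, dim in (("length", 4.0), ("width", 3.0), ("height", 2.5)):
    print(name, [round(f, 1) for f in axial_modes(dim)])
```

For this example room, all three dimensions pile up resonances below roughly 200 Hz, which is exactly the range where bass traps, not broadband panels, are needed.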

There are some software solutions that address part of these issues, like IK Multimedia's ARC System, which analyzes how the reproduced sound deviates from the source sound and creates a correction curve that equalizes the output in a way that "removes" most of the room's issues.

But, even though this kind of software helps a lot (even just to visualize at which frequencies you are having issues), it isn't a definitive solution, just a compromise.
Only bass traps can really get rid of the issues with low frequencies.
Some other issues, such as flutter echo, among others, aren't solved with a simple corrective equalization.

There is also free software (such as http://www.hometheatershack.com/roomeq/) that allows you to analyze the issues in your room, even using your PC's sound card. It's a good starting point for trying to enhance your mixing environment.

Acoustically isolating a room is an operation that consists of coating the room with materials that absorb a large amount of the sound, preventing sound from leaving the room and any external sound from entering it.

To isolate, what you need most is mass; that is, you need dense, thick materials. Since not all materials absorb the same range of frequencies, a common solution consists of "sandwiches" of several insulating materials, with air gaps between them.

So the idea that egg-crate cardboard can isolate a room is completely wrong. Without mass, there is no way to isolate a room.

Acoustically treating a room is, however, a different thing. It consists of correcting the sonic deviations that the room introduces into the reproduction of our mix.
In this case, we use absorbent elements, whose function is to absorb or reduce (not completely remove) the content in a certain range of frequencies. Their mass, the shape and pattern of their surface, and their molecular structure, among many other variables, filter different frequencies by different amounts.

Standard absorbent panels handle a broad range of frequencies but, to handle low frequencies, we need bass traps, which require different materials, shapes and placement in the room.

Apart from absorbers, we also use diffusers, whose goal is to avoid a continuous build-up of reflections against the walls. They are often used to correct issues derived from reverberation and echoes, helping to "dry" the room.
The typical egg-crate cardboard can be used as a diffuser (not as an isolator).

There are a few things we can easily do, such as putting a massive (dense, heavy) carpet on the room's floor to absorb floor reflections.
We can put some high-density foam in the corners, or some kind of high-density board, to try to reduce the issues with low frequencies.
We can add shelves with irregularly sized books, CDs and all sorts of irregular objects to help scatter reflections, etc.

But short of going for an expensive professional audio solution, an acoustic treatment kit such as Auralex's Project 2 could be a good way to enhance our mixing room, attacking all kinds of issues simultaneously.


2. Correctly place your monitors


Nearfield monitors should be placed at 30 degrees from your listening spot. The distance between the two monitors, and the distance from each monitor to the listening spot, should be the same, so that we form an equilateral triangle whose three vertices are the two monitors and the listening spot.

You could use one of those cheap laser levels that emit two beams at a variable angle to check the position of your monitors.
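A small sketch of the geometry (my own illustration; the 1.2 m listening distance is a hypothetical example):

```python
import math

# Listener at the origin; monitors at +/-30 degrees off the center axis.
def monitor_positions(distance_m):
    """Return (x, y) coordinates of the left and right monitors,
    each `distance_m` away from the listening spot."""
    angle = math.radians(30.0)
    x = distance_m * math.sin(angle)  # lateral offset from the center line
    y = distance_m * math.cos(angle)  # forward distance from the listener
    return (-x, y), (x, y)

left, right = monitor_positions(1.2)  # hypothetical 1.2 m listening distance
spacing = right[0] - left[0]
print(round(spacing, 2))  # 1.2: the monitors end up as far apart as they are from you
```

Because sin(30 degrees) is exactly 0.5, the spacing between the monitors always equals the listening distance, which is what makes the triangle equilateral.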


3. Calibrate your monitors

The calibration standard means that, with your volume knob at 0 dB, you should get a loudness of 83 dB SPL at your listening spot while reproducing pink noise band-limited between 500 Hz and 2 kHz, played at -20 dB RMS.

To measure the sound level (SPL) you will need an RTA or an SPL measurement unit (such as the T.Meter MPAA1). The meter will give you the level in dB SPL of the sound projected by your monitors at your listening spot. Move your monitors' volume control until you read 83 dB SPL.
When you read 83 dB, mark that volume control position with a zero (0) value.



To emit a pink noise signal with a frequency range between 500 Hz and 2 kHz you will need either a plugin able to generate such noise, or a downloaded file containing the proper material for this calibration.
In Pro Tools, you can use the plugin called Signal Generator, on an instrument track, to generate the pink noise at -20 dB, at full bandwidth. After that plugin, you can insert an EQ plugin to cut frequencies below 500 Hz and above 2 kHz.




Leave all faders of all buses at zero, including the output fader. If you have an RMS meter (such as the TT Dynamic Range meter or IXL Inspector), you can clearly verify that the RMS of the generated pink noise is -20 dB.
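As a sketch of what such an RMS meter computes, and of how a test signal can be scaled to exactly -20 dB RMS (white noise stands in for the band-limited pink noise, purely to keep the example short):

```python
import math
import random

random.seed(0)
signal = [random.uniform(-1.0, 1.0) for _ in range(48000)]  # 1 s at 48 kHz

def rms_dbfs(samples):
    """Root-mean-square level of the samples, expressed in dB full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms)

# Scale the signal so the meter reads exactly -20 dBFS RMS:
gain = 10.0 ** ((-20.0 - rms_dbfs(signal)) / 20.0)
calibrated = [s * gain for s in signal]
print(round(rms_dbfs(calibrated), 1))  # -20.0
```

The same dB arithmetic explains the calibration: playing this -20 dB RMS signal and trimming the monitor volume to 83 dB SPL ties the digital reference level to an acoustic one.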

But... what is all this for?
Good question.
83 dB SPL is the standard monitoring level used in cinema.

If your volume control has the zero mark, and that zero mark corresponds to 83 dB SPL (with the pink noise described above), then anything you mix at this position of your volume control, and whose sound you are satisfied with, will be achieved without clipping, overs or the like.
If you achieve a nice mix at that control position, you can master the mix later without issues, and it will translate to any other kind of device or environment without the issues introduced by some A/D or D/A converters.

This calibration level corresponds to the use of the K-20 meter designed by Bob Katz.
When mastering, or when doing a demo, we will usually compress the program between 6 and 8 decibels.

Repeat the operation described above but, this time, move your volume control down until your meter reads 77 dB SPL. This corresponds to reducing the volume by 6 dB. Mark this position as -6.
You will usually master at this level pop material, or any other kind of music that is not so energetic.
This corresponds to the K-14 meter designed by Bob Katz.

Repeat the operation once more and lower the volume a bit more, until your meter reads 75 dB SPL.
This corresponds to reducing the volume by 8 dB. Mark this position as -8.
You will master at this level material that requires a high sonic impact, such as rock.
It corresponds to Bob Katz's K-12 meter.
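A tiny sketch (my own summary, using the numbers above) tabulating the three marks:

```python
# The three K-System monitor marks described above: each mark lowers
# monitoring SPL from the 83 dB reference by a fixed offset.
REFERENCE_SPL = 83.0  # dB SPL at the 0 mark, with -20 dB RMS pink noise

K_SYSTEM = {"K-20": 0, "K-14": -6, "K-12": -8}  # mark -> offset in dB

for name, offset in K_SYSTEM.items():
    print(f"{name}: mark {offset:+d} dB -> {REFERENCE_SPL + offset:.0f} dB SPL")
```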
So far, so good. Monitors calibrated... what else?

To mix, always use the 0 mark.
You can check the mix at different volume levels, for sure, but if the mix sounds good at your 0 mark, you should achieve a dynamic mix, without excessive compression and perfect for the mastering phase. Such a mix could be the soundtrack of any film without needing to adjust anything else.

To master, you will move the control to the -6 or -8 marks, depending on the type of material and its destination (CD, broadcast, MP3, etc.).

In theory, the use of these control marks avoids the need to constantly watch meters to check whether the resulting sound stays within the proper levels, and you shouldn't be afraid of clipping or overs.

4. Check the mix in mono and with 16-bit dithering

If your monitor control system, your audio interface or any meter plugin has a Mono button, switch it on to check the quality of your mix in mono.
Mono mode is very useful for checking the relative volume of all the instruments that make up the mix.
It's also a good tool for checking whether some instruments have overlapping frequencies, and whether the corrective equalization we are applying to each instrument is effectively helping us perceive each instrument individually.

One more thing that drove me crazy for a long time was the need to convert a good 24-bit mix to 16 bits (to test a master on a CD or to convert it to MP3).
Most of the material that sounded good at 24 bits started to sound confused, with excessive low-end and a clear loss of spatial information.

The Sonnox Limiter plugin, which I use as a brickwall limiter at the end of the mix bus, has a useful button that lets you hear the mix with 24-bit dithering (the usual while mixing) or 16-bit dithering.
I've often discovered that a mix that sounds nice at 24 bits can sound really bad at 16 bits. This button is very useful for adjusting the compression level, the amount of reverberation and the rest of the processing you are applying to the whole mix.
When I think I am done, I always check the mix with 16-bit dithering and fix any issues before bouncing the final material.
Practically every DAW comes with some dithering plugin that we can use at the end of the mastering chain.
Activate 16-bit dithering to check whether you are overdoing your processing.


When you convert from a higher resolution (24 bits) to a lower one (16 bits), you lose 8 bits of information. Those trailing bits are responsible for the finest nuances of the sound, and they often carry spatial information (reverberation, echoes, etc.).

This loss of information makes the sound "quantized" or stepped, in a way that sounds artificial and lifeless.
Dithering tries to get rid of this issue. Dithering consists of a mathematical algorithm that generates noise at a very low level, with different curves (types). The sum of that noise and the information truncated when reducing the bit depth helps to "reestablish the lost bits", so that the converted sound is closer to the original, with more resolution and detail than without dithering.
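A minimal sketch of the idea (my own illustration, working on float samples; one "16-bit step" is 1/32768):

```python
import math
import random

random.seed(1)
STEP = 1.0 / 32768.0  # one 16-bit quantization step

def truncate_16(sample):
    """Plain truncation: the error is correlated with the signal (steppy)."""
    return math.floor(sample / STEP) * STEP

def dither_16(sample):
    """Add low-level triangular (TPDF) noise before truncating, so the
    quantization error becomes benign noise instead of distortion."""
    tpdf = random.random() - random.random()  # triangular PDF in (-1, 1)
    return truncate_16(sample + tpdf * STEP)

# A very quiet tone near the 16-bit floor collapses onto two hard steps
# when truncated, but keeps varying (signal plus noise) when dithered:
tone = [0.4 * STEP * math.sin(2 * math.pi * 440 * n / 48000) for n in range(480)]
print(len({truncate_16(s) for s in tone}))    # 2 distinct output values
print(len({dither_16(s) for s in tone}) > 2)  # True: more than 2 values
```

This is exactly the "quantized or stepped" artifact described above, and why a dither plugin belongs at the very end of the chain, after all other processing.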

Every time we bounce our mix to 16 bits (for CD, MP3, ...), we should apply dithering. Results are easier to foresee if, during mastering or mixing, we check the quality of the mix at our project resolution (32 or 24 bits) as well as at the final bounce resolution (16 bits).

21 March 2013

Home Studio: Mixing - Part 5

Introduction

In the previous blog entry we introduced the concepts of sound dynamics and dynamic range. Now we can start to talk about the tools that modify the dynamics of the sound, which are broadly used during mixing and mastering tasks.

Compressor

Imagine that we did our mix (following the three-dimensions technique mentioned before) and achieved a final volume similar to the picture below (click on the picture for full size).



We already discussed how the TT Dynamic Range meter works so, by now, we should be able to understand this picture.
We can see that the average RMS is around -27 dB, the RMS peaks are around -14 dB, and the dynamic range is 12.5 and 10.5 dB for the left and right channels, respectively.
Definitely not a very commercial loudness. Our mix will sound very quiet compared to any commercial mix.

We could simply move the fader of the mix, raising the gain about 13.5 dB, to place the average RMS around -13.5 dB, which would leave the mix with RMS peaks around -0.5 dB, more or less.

But, since we are talking about RMS peaks, some peaks will "overflow" that target ceiling of -0.5 dB, producing clipping. This can be solved with a limiter but, once again, we will clip the peaks anyway (because the compression ratio of a limiter is infinite).

So the natural solution would be to lower the channel fader a bit until there is no more clipping (that red light that lights up from time to time), but this lowers the average RMS of the program again and, therefore, the loudness of our mix. We end up with a nice-sounding mix (with enough dynamic range) but one that sounds clearly quieter than the commercial mixes of our competitors.

The peaks of the attack phase of a track are called transients. In a track, or in the whole mix, not all peaks have the same height (volume). We could say that only a very low percentage of them "overflow", and those are the ones that would force us to lower the fader, costing the mix some strength.

The compressor is a tool that "compresses" the peaks that are over a certain loudness level, to make them fit a given dynamic range. Even though this is its main function, it has several other uses, since it is able to alter the whole dynamic curve of the material being processed, accenting the parts we are most interested in.

Let's look at a compressor plugin, for example:

Not all compressors use the same names for their controls, but they all (some with a few more knobs than others) have more or less the same basic controls.

The gain or make-up control allows us to raise the overall loudness of every bit of the material before the compression task starts.
Continuing with the previous example:



Imagine that we would like to achieve a dynamic range of about 12 dB. We want to put our average RMS more or less around -12 dB. Before starting to compress, we saw that the average RMS was around -27 dB; therefore, we should raise everything by 15 dB (27 - 12 = 15), so we would set the gain control to +15 dB.

If our ceiling was set at -0.5 dB, to avoid clipping and overs when translating the mix to lower resolutions (16 bits, MP3, etc.), then by raising the average RMS to -12 dB we will have a dynamic range reduced to 11.5 dB, and the average peaks will "overflow".

The control named threshold determines the minimum loudness level the sound must reach for the compressor to start compressing the peaks. Everything that falls below the threshold level remains uncompressed, with a higher gain (determined by the gain or make-up control), but everything over the threshold will be compressed. How it is compressed depends on the rest of the controls we are going to describe.

To determine the threshold loudness, the best approach is trial and error. If the threshold is very low, we will compress practically everything. If the threshold is very high, we will compress just the highest peaks.
The correct threshold is the loudness where the interesting things are happening in our mix. If we raise or lower the threshold, we will notice that the result, the accent of the song, changes radically. Therefore, it is very important to determine the threshold value exactly, to give our mix the exact accent we want to achieve.

The control named attack determines how long the compressor should wait, from the exact instant a signal crosses the threshold level, until it starts to compress that signal. If we lower the attack, the compressor kicks in earlier, lowering every single peak. Overdone, we can lose the percussive character of our mix.

With high attack settings, we allow most of the transients to cross the threshold. We retain the punch but we go back to the initial issue: some peaks will overflow.

Normally, we use short attacks to tame the peaks (for example, excessively percussive sounds such as a bass guitar, kick drum or snare), and we use longer attacks to maintain the punch in tracks with weak punch.

The release control takes care of the last phase of the ADSR curve of the compression. When the threshold level is reached, and the time established by the attack is over, the compression takes place, and it lasts for the time determined by the release control. Therefore, long releases add sustain to the material, while short releases work just on the starting part of the sound (the attack phase). To give more sustain to a weak bass guitar, or to give it some density, we usually raise the release time.

The combination of the attack and release controls defines how the peaks are tamed, and together they are able to add punch or to smooth out the material.

The Sonnox Dynamics compressor has one additional control, named hold, that works before the release time, in a way analogous to what we discussed when we introduced ADSHR curves. This control isn't so common in other compressors.

Finally, we have the compression ratio, which determines the proportion by which the loudness over the threshold is reduced. In this image we see a ratio of 3.02:1, which means that any peak over the threshold level will have its excess reduced 3.02 times. So an excess of 3.02 dB will be reduced to just 1.0 dB, 6.04 dB will be reduced to 2 dB, and so on.




Some compressors include algorithms that handle the transition area around the threshold in different ways; this area is often called the "knee" of the compression curve. You can see in the picture above that, for this example, we are using a soft 5 dB knee, which nicely smooths the compression effect in this transition area.
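As a sketch of the static curve described above (threshold, ratio and soft knee; the parameter values are my own examples, not the exact ones from the screenshots):

```python
def compressed_db(in_db, threshold_db=-20.0, ratio=3.0, knee_db=5.0):
    """Static compression curve: input level in dB -> output level in dB.
    Standard soft-knee formulation; parameter values are just examples."""
    over = in_db - threshold_db
    if over <= -knee_db / 2.0:
        return in_db                        # below the knee: untouched
    if over >= knee_db / 2.0:
        return threshold_db + over / ratio  # above the knee: full ratio applied
    # inside the knee: quadratic interpolation between the two straight lines
    x = over + knee_db / 2.0
    return in_db + (1.0 / ratio - 1.0) * x * x / (2.0 * knee_db)

print(compressed_db(-30.0))  # -30.0: below threshold, unchanged
print(compressed_db(-14.0))  # -18.0: 6 dB over the threshold becomes 2 dB over, at 3:1
```

The quadratic segment is what a "soft knee" is: instead of switching abruptly from no compression to full ratio at the threshold, the curve bends gradually over the knee width.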

Even though the concept of compression seems unique, the way it is implemented by each designer is always different. Not all compressors react at the same speed, and some introduce their own fingerprint to the sound (very especially tube compressors).

There are some mythical compressors, like the UA 1176 or the LA-2A (used on drum kits and bass guitars, for example), that clearly add some color to the signal. The Fairchild 670 (used on electric guitar tracks, for example) also colors the sound. There are many other mythical compressors, usually forming part of a famous channel strip or mixing board.

Opto-compressors (with optical cells) are often used for vocals, because of their smooth, program-dependent response.

In the end, there are several types of compressors, and they react very differently. Some seem to work better for certain instrument tracks, while others are more universal.

For every track, try as many compressors as you have and choose the one that brings you the most satisfactory result for your needs.

In this last picture, we can see the results of the compressor in the TT Dynamic Range meter.
We managed to raise our average RMS to around -12 dB (as we wanted).
We balanced both channels (-12.7 and -12.5, against the initial -12.5 and -10.5).
We lowered the peaks to -0.2 dB.
And we achieved a dynamic range of 11.7 and 12.1 dB (very close to our 12 dB target).

For sure, those numbers say nothing if we don't listen to, and like, the resulting compression effect, but they help us understand how we are using the compressor, and whether we are overdoing or underdoing the effect, while achieving a reasonable dynamic range and removing the excessive overflowing peaks.


The Limiter

The limiter is a particular case of a compressor, one whose compression ratio takes an infinite value. As we saw above, a compressor can allow some instantaneous peaks to escape (to maintain the punch) but, in our final mix, we need no peaks to produce digital clipping, to avoid introducing distortion into our final mix.
The limiter fixes a ceiling, the maximum loudness that every single bit of sound can reach, and any signal exceeding that ceiling will be lowered to the ceiling value.
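As a minimal sketch (my own, reusing the -0.5 dB ceiling mentioned earlier as an example), the static curve of a limiter is simply a hard ceiling, i.e. a compressor whose ratio is infinite above the threshold:

```python
def limit_db(in_db, ceiling_db=-0.5):
    """Any level above the ceiling comes out at the ceiling; the rest passes."""
    return min(in_db, ceiling_db)

print(limit_db(-6.0))  # -6.0: below the ceiling, untouched
print(limit_db(2.3))   # -0.5: pulled down to the ceiling
```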

As it is a specialized compressor, it shares a great part of a compressor's functions (control knobs).

The input gain is the same gain or make-up control already discussed. We already saw the attack and release controls and the knee.
The most characteristic value of a limiter is the output level, or ceiling. Any signal over that value will be lowered down to the ceiling value.
Simplifying things a lot, when there are two or more consecutive peaks at the same volume, overs will occur when we transport the digital mix to other equipment or audio processors.
Many limiters aren't able to recognize such overs and react accordingly (for example, by lowering one of three same-volume peaks in a row a bit more).

Brickwall limiters often have special algorithms that analyze the input material in advance (look-ahead) to introduce the right corrections and avoid such issues.
The limiter in the picture below has some other functions that we aren't going to discuss here, because they are uncommon in most limiters, with the exception of the auto-gain control, which can be found in more cases.

Decompressors or expanders

Expanders are dynamics tools that try to do the inverse of a compressor's work.
If the function of a compressor is to raise the average loudness, compressing the dynamic range, the expander tries to increase the dynamic range, lowering the average loudness and increasing the difference between the RMS peaks and the average RMS.

They are often used as an attempt to give back some dynamics to material that was previously compressed in excess and, in that case, they are used just before applying a softer compression to the material.

Expanders are probably more common during mastering, for example to decompress a recording that passed through the channel strip of a mixing board and ended up excessively compressed.

Their use can seem a tad arcane. First you want to master compressors; only then can you understand how to correctly decompress.


Gates

If the limiter establishes the maximum ceiling for loudness, the gate establishes the minimum floor.
The gate ensures that only signals with a loudness over its threshold are accepted (they cross the gate).

This concept seems very good for, say, keeping the floor noise of the amp from crossing the gate, but the truth is that perfect control of a gate is a real headache.

It shares with a compressor the threshold, the attack (the time the signal must be over the threshold for the gate to open) and the release (the time the gate stays open once the sound falls below the threshold).

Tweaking these controls is a delicate task, and you can even ruin the track if the softer passages are very close to the noise floor.
But gates are also used for more creative things. For example, to control the amount of snare "pap" you send to the reverberation effect: we can make sure that just the loudest hits go to the reverb, preventing the quietest parts from creating unnecessary echoes. This can also help to better define the beat.

Transient modifiers

They take nice names such as transient modelers or transient designers, among other similar names.
Their main goal is to alter just the transients (attack) of the material, without affecting the rest of the ADSR curve. They can reduce (without clipping) or enhance the peaks of the attack phase, increasing or reducing the punch of the sound.

As we have seen, a compressor can let some peaks escape to maintain the punch. Transient modifiers are able to modify their volume and their duration in time.

We will see again some controls, such as threshold, gain and ratio but, this time, the compression effect is aimed at just the transients, not the whole signal.



De-Esser

A de-esser is an ultra-specialized compressor. Instead of working over the whole material, it works over a very narrow band of frequencies. By default, they are set to work over the range of frequencies where the human voice produces sibilant sounds.
The goal is to detect those "s" sounds and compress them, reducing their loudness and making them sound smoother.



Many of the channel strips in professional studio mixing boards, or professional preamps dedicated to vocals, include a de-esser module.

Even though they were designed to get rid of sibilant sounds, their particular ability to compress just a certain range of frequencies makes them very useful for acting accurately on certain ugly noises (fret noise on bass guitars, for example), reducing their impact without heavily affecting the rest of the material.


Other dynamics tools

They will always be very specialized versions of a compressor.
There are multi-band compressors, which split the material into 3 or more frequency ranges, allowing individual compression control for each range.

Other compressors work on the stereo image, dividing the signal into phantom center and side signals, allowing individual compression for each part.

RNDigital has even more exotic compressors.
The Dynamizer is a kind of multi-band compressor but, instead of dividing the material into several frequency bands, it divides the material into several loudness bands. So we can compress the quieter and the louder parts very differently. It's a very interesting concept.

In the end, any tool able to alter the sound's dynamics, or its dynamic range, is a dynamics tool.


To be continued...

Well, I think I've covered mostly everything.
One of the darkest areas of mixing is the use of dynamics tools, and I hope I was of help in this area.
Since most of the dynamics tools are just specialized compressors, clearly understanding how a typical compressor works will help you understand how those specialized versions work.

I think I will end this series of entries with some tricks to enhance your mixes, in the next entry. Stay tuned.

19 March 2013

Pedals: Dry Bell's Vibe Machine - First contact

Introduction

Yes, I love vibe pedals but, I hate every issue that this kind of pedal brings along.
First, the size. A proper univibe pedal, with optical cells and all the candy, is big and takes up a lot of room on your pedalboard.
Second, the tone sucking. Since vibe pedals were designed more for keyboards than for guitars, they had input and output impedances not so appropriate for electric guitars.

This is my fifth vibe pedal, and I've owned very good ones. Previously, I had a Voodoo Lab Vibe, a Roger Mayer Voodoo Vibe+, a Fulltone MDV2 and a Lovepedal Picklevibe.

The Voodoo Lab has a very close-to-vintage sound (just the chorus side of the Univibe) but, it tends to get lost in the mix.

The Roger Mayer is a sophisticated vibe, with a more studio-like sound and a lot of tweakability but, it's as big as hell and has more controls than a plane.

The Fulltone MDV2 includes both the vintage and modern sounds but, to me, it has the same impedance issues as most Fulltone pedals. Also, the speed (or depth) depends on the rocker position and, since you have to push the on/off button by moving the rocker, you lose your setting every time you switch it on or off.

The Lovepedal Picklevibe isn't based on the real Univibe effect. It gets close and is very easy to use, since it has a single knob, but this is also a limitation, since you cannot easily modify the depth of the effect. Anyway, it takes up the least pedalboard room of any vibe, and gives you a usable sound.

I wasn't expecting to buy a new pedal for a long, long time, now that my pedalboard has been wamplerized but, when I saw Brett Kingman's demo (as interesting as always), I fell in love with this pedal.
It had everything I'd dreamed of in a vibe pedal.

So, here we are. I'd had the vibe for a while, but the instability of my mains made me wait until I had enough power to perform my tests. Otherwise, neither the amp nor the pedals sound reasonably good.

This pedal is the only pedal made by Dry Bell, in Croatia and, to my understanding, it's the best vibe pedal available nowadays. But it has a price tag that will keep a lot of people from enjoying it. It's a pity.


Presentation

The pedal comes inside a white cardboard box, which contains the actual pedal box.
The pedal comes wrapped in bubble plastic and, the user's manual is just a single sheet with all the information you need to run the pedal.
Together with the pedal, they give you a pick holder and a hand-written note thanking you for buying their pedal.

I would say this is one of the most professional pedals I've ever seen. It has everything you can imagine a vibe should have and, everything is packed into the size of a Boss pedal (even smaller than my Wampler pedals).

When you open the pedal case, all the room is filled by the circuit board so, the drawback is that there is no room for a battery. You need to run this pedal with an AC adaptor; it can work between 9V and 16V.

This pedal draws a lot of current (85 mA maximum) so, be sure to feed it from a proper output of your power brick. I had to balance all my pedals between two Voodoo Lab Pedal Power 2 units and, the vibe is running on output 5 (outputs 5 and 6 can deliver a maximum of 250 mA, while outputs 1 to 4 and 7-8 deliver a maximum of 100 mA).


Testing it

The pedal has two internal jumpers.
The first one corresponds to an output buffer, which comes activated from the factory.
The second one corresponds to the control of an external expression pedal, which comes deactivated from the factory.

I first deactivated the output buffer, just to check how it worked. Results weren't so satisfactory, since the pedal became a real tone sucker. With the output buffer and input buffer off, this pedal mimics the impedance issues of any vintage vibe and, depending on the rest of your pedalboard, it can be a real mess.
I didn't like it when stacked with my overdrives, distortion or fuzz.

So, I opened the unit again and set the jumper back to its default (output buffer on).
This doesn't seem to alter the sound of the pedal itself and, it helps the rest of the pedal chain a lot.

The pedal has a switch (on its front) with two modes: original and bright.
The original mode has an impedance level similar to the original Univibe, while the bright mode switches on an input buffer.
As I was using the Wampler Decibel+ buffer/booster, the effect of the bright mode is very subtle but, I guess, for people who don't have a buffer on their pedalboard, this mode (together with the output buffer) will be of great help to place the vibe in their pedal chain without negatively impacting the rest of the pedals.

The pedal comes with the two modes of the Univibe pedal: chorus and vibrato.
I've only checked the chorus mode, which corresponds to the mythical Univibe sound. I will probably check the vibrato mode some other day; I wasn't so interested in that mode.

The Voodoo Lab vibe implemented just that mode, as did the Picklevibe.
The Roger Mayer Voodoo Vibe+ has both (even more than two!), as does the Fulltone unit.

Anyway, with the Decibel+ buffer running at the beginning of the chain and, the output buffer of the Vibe Machine on, the sound was absolutely awesome.

This unit has some trim pots on its sides that can be tweaked to dial in exactly the right volume and wave symmetry, among other settings for an external expression pedal.
I didn't feel the need to tweak those trim pots, since the sound I was achieving with the factory settings was good enough.
This symmetry control is one of the characteristics I'd already seen in Roger Mayer's Voodoo Vibe+.

For better results, the booster section of the Decibel+ should be set low. Not because of the vibe itself but because, when it stacks into gain pedals, the sound can get a bit confused and undefined.
If you run a booster at the beginning, be sure to tweak it while the vibe is stacked into a hard-clipping pedal (such as a fuzz).

Anyway, any potential impedance issue can be solved with the help of the output and input buffers, while maintaining a good vibe sound, and that's simply awesome. Good job!


The Video

I've recorded a video of my first contact with this pedal.
I know the pedal never sounds completely alone in it; at least the buffer/booster (Decibel+) and the delay (Tape Echo) are always on.

But a pedal that doesn't integrate into my pedalboard doesn't interest me at all.
I like to test pedals in their real environment, not just alone, where they can sound awesome but produce impedance issues when stacked with the rest of the pedalboard.
Just take this into account.



My impressions

This is one of the best Vibe clones, in the same ballpark as the Roger Mayer Voodoo Vibe+ or the Fulltone MDV2, but the only one sized like a Boss pedal. Only the Lovepedal Pickle Vibe is smaller, but it's very limited.

It has everything you could find by summing up all the vibe pedals out there, including: switchable input and output buffers, an input for an expression pedal, an input for an external control pedal, and trim pots for effect volume and symmetry. And the two original Univibe modes: chorus and vibrato.

The switchable output and input buffers allow this pedal to be stacked in your pedalboard with ease. The output buffer (on by default) barely affects the original vibe sound and avoids sucking the tone from the rest of the pedals. The input buffer (bright mode) affects the original sound more, and has practically no effect if you are already running a buffer or buffered pedal before the vibe.

Without any doubt, this is the most interesting pedal I've seen in a long time; it's really pedalboard friendly.

On the negative side, you cannot run this pedal on batteries, since there is no physical room inside the box for one, and it draws a lot of current (85 mA maximum). So be sure to check the maximum current your power brick can deliver and try to balance pedals across its outputs, or just run a dedicated AC adapter for this unit.
But, honestly, I prefer to sacrifice the battery and have more room on my pedalboard, so to me this is not a real issue.

The real issue with this pedal is its price tag, which is really high. Since it's produced in Croatia (outside the European Community), you have to pay extra Customs fees along with the shipping costs, which puts the final price of this pedal in the stratosphere.

Dear folks at Bry Bell, you've made an impressive pedal, just what the doctor ordered for any vibe lover but, with that price, you are keeping your pedal out of practically every pedalboard.
Can you please reconsider your price tag?
If you do, I bet your pedal will rule the vibe world!



Home Studio: Mixing - Part 4

Introduction

We have already reviewed the typical tasks of the mixing stage and discussed the 3 dimensions of mixing.

There are some techniques and tools intended to modify the sound dynamics that are broadly used, both during the mixing phase and during the mastering phase (but in different ways).
Before talking about compressors, expanders and limiters, we should first understand what the term "sound dynamics" means and, most importantly, what "dynamic range" means.

Sound dynamics

Even if two different instruments (for example, a guitar and a trumpet) play the same note, there is a clear difference in the sound produced by each one. The fingerprint of each instrument is totally different.
Apart from the fundamental note, each instrument generates other frequencies of lesser intensity (harmonics, overtones...), whose content and intensity vary greatly from instrument to instrument.

This characteristic fingerprint of each instrument is what we call timbre.

Apart from this relationship between the fundamental note, harmonics and overtones, we can analyze how the sound wave evolves over time. The acronym ADSR (Attack, Decay, Sustain and Release) is probably familiar to you. We can even talk about ADSHR, if we include the Hold phase between Sustain and Release.

The next picture shows the different phases of a note C played by a clarinet.



Ataque = Attack
Caída = Decay
Sostenimiento = Sustain
Extinción = Release
The axes are Volume (Amplitud) and Time (Tiempo).

This particular relationship between volume and time is called the Volume Envelope.

Apart from the timbre, the way the same note varies over time also differs greatly from instrument to instrument. Even the same instrument, depending on the interpretative intention, can have different curves for the same note (for example, if we play the guitar with fingers or with a pick, or if we play harder or softer...). Even though the timbre is the same, the ADSR curve is different.

The Attack phase is the ramp from the beginning of the sound until it reaches its highest volume. It's a very short time, where the volume goes from zero to maximum.

Once the maximum volume has been reached, the volume drops (Decay) until it reaches a certain level, which is maintained for a while (Sustain). When we stop acting on the instrument, the sound starts to fade out (Release). Later, a new phase was introduced (Hold), between Sustain and Release, because the sound drops quickly after the sustain but is maintained for a while at the same low volume before definitively fading out (Release).

How long each note played by each instrument spends in each of the ADSHR phases is what we call the sound dynamics (the way the sound moves).

Compressors, expanders, limiters, gates and similar dynamics tools all affect one or more parameters of that envelope, re-shaping the final "look" of the curve. The timbre remains the same, but the sound dynamics are modified.
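To make the envelope idea concrete, here is a rough sketch of how it can be modeled in code. This is a piecewise-linear toy with hypothetical phase times, not any instrument's real curve (real envelopes have curved segments, and I'm omitting the Hold phase for brevity):

```python
def adsr(t, attack=0.01, decay=0.05, sustain_level=0.6, note_off=0.5, release=0.1):
    """Piecewise-linear ADSR amplitude (0.0-1.0) at time t, in seconds.
    note_off marks the moment we stop acting on the instrument."""
    if t < attack:                               # Attack: ramp 0 -> 1
        return t / attack
    if t < attack + decay:                       # Decay: ramp 1 -> sustain_level
        return 1.0 - (t - attack) / decay * (1.0 - sustain_level)
    if t < note_off:                             # Sustain: held level
        return sustain_level
    if t < note_off + release:                   # Release: fade to silence
        return sustain_level * (1.0 - (t - note_off) / release)
    return 0.0
```

A compressor with a slow attack time, for example, lets the Attack portion of this curve through untouched and only tames the Sustain, which is exactly the kind of envelope re-shaping described above.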


Dynamic Range

This is a very important concept as soon as we start to alter the sound dynamics during our musical production.

The maximum volume of a track is known as the peak volume. During the playback of a track, there are certain instants where the volume reaches peaks way over the average volume level of the track.
Usually, the Attack phase of each note is responsible for those peaks, while the average volume level is associated with the Sustain phase.
The loudness of the track is the perceived volume. Don't confuse loudness with volume. Volume is a measurable value (for example, decibels of sound pressure), while the perceived volume (or loudness) is a subjective value that depends on the listener and isn't directly measurable.
Contrary to what you might think, what really determines the loudness is the average volume of the track, not the peak volume.
The average volume is measured with the RMS (Root Mean Square) method, a mathematical formula that calculates the average level of an audio signal.

If, for a certain fragment (which can be the whole track or song), the maximum peak is at -6 dB and, during the same fragment, we have an RMS of -18 dB, we say that we have a Dynamic Range of 12 dB (-6 - (-18) = 12 dB).
So the Dynamic Range is the margin, in decibels, between the RMS and the peak value for a certain sound fragment.
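These measurements are easy to sketch in code. A minimal version, assuming samples normalized to full scale = 1.0 (dedicated meters like the TT use windowed RMS and channel weighting; this just averages the whole fragment):

```python
import math

def rms_db(samples):
    """Average level: 10*log10 of the mean of the squared samples, in dBFS."""
    return 10 * math.log10(sum(s * s for s in samples) / len(samples))

def peak_db(samples):
    """Maximum instantaneous level, in dBFS."""
    return 20 * math.log10(max(abs(s) for s in samples))

def dynamic_range_db(samples):
    """Margin between the peak and the RMS of the fragment."""
    return peak_db(samples) - rms_db(samples)

# A full-scale sine: peak at 0 dBFS, RMS at about -3 dBFS,
# so a dynamic range of roughly 3 dB.
sine = [math.sin(2 * math.pi * k / 1000) for k in range(1000)]
```

A heavily compressed track pushes `rms_db` up toward `peak_db`, shrinking the result of `dynamic_range_db` — exactly the effect we will measure with the meters below.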

To check the dynamic range of any part, track or complete mix, it's very useful to have a meter. I personally use the TT Dynamic Range Meter (see picture below).


If we look at the meter in the picture above, each channel is measured separately (the side bars of the meter). The narrower bar corresponds to the average value of the peaks (Peak RMS), representing the maximum peak value with a horizontal red line; this value is shown in the Peak boxes above (-0.5 Left / -0.9 Right).
The wider bar, next to the narrower one, represents the average RMS, and its actual value is shown at the bottom, in the RMS boxes (-12.0 Left / -12.3 Right).
The two center bars show the current dynamic range of each channel (the difference between RMS peaks and RMS averages).
OK, OK... what a nice meter but... what is all this for?
Be patient, here we go.

The people currently ruling the Mastering world claim that there is a loudness war in musical productions, a race to achieve the mix with the maximum loudness. The more we raise the RMS, the less difference there is between the average volume and the peak volume. The sound becomes compressed and lacks expression and accents, forming a sonic wall of sound.

So, what is an acceptable dynamic range? It all depends on the destination of the musical production.

Without using compression, the sound generated by a philharmonic orchestra has a dynamic range between 20 and 24 dB.
For Classical Music and Cinema, Katz recommends a dynamic range of at least 20 dB.
For Pop and Radio, a range around 14 dB (radio broadcasters process every piece of material, maximizing it to homogenize the volume of everything they broadcast; this value ensures a good result after their processing).
For a harder musical style that still maintains expressivity and dynamics, a range around 12 dB.
For example, recordings by Led Zeppelin, Jimi Hendrix and the rest of the rock icons of that era have dynamic ranges between 10 and 11 dB (some songs less than 8 dB, some songs greater than 12 dB). Those I have personally measured with the TT Dynamic Range tool.
Katz, among many others, considers that the best recordings were made during that epoch, pushing the loudness to the right level while maintaining the quality and nuances of the sound.
For sure, the ear is the final judge, but it is true that I always try to achieve a dynamic range between 8 and 12 dB for hard rock, and I always keep an eye on the meter while working with compressors, to clearly see if I am overdoing the effect.

Alright, alright... but... what does this translate to?

When we are mixing, and especially while mastering, we fix a Ceiling value close to 0 dB (for example, -0.5 dB) as the maximum peak value that our mix can reach. Note that values higher than -0.3 dB can produce clipping and overs during further digital processing in other equipment.

If our maximum peak (the limiter's Ceiling) is -0.5 dB and we want to maintain a dynamic range of 11 dB to properly represent our mix, our RMS should be around -11.5 dB. This is the maximum loudness level we can offer.

We will see the importance of all this when we describe how audio compressors and limiters work.

To easily control this aspect, Bob Katz designed the meters of his K-System (K-20, K-14 and K-12).
Depending on the destination of your mix (as mentioned above), your goal is a dynamic range of 20 dB (Cinema, Classical), 14 dB (Radio, Pop) or 12 dB (harder music) and, therefore, you choose the corresponding meter.
As long as your RMS values hover around the 0 mark and your RMS peaks aren't constantly in the red zone of the meter, you will achieve a powerful mix with a good dynamic range.
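The arithmetic behind these meters is simple. In the K-System convention, the meter's 0 mark sits K dB below digital full scale, so the reading is just the RMS plus K — a sketch:

```python
def k_meter(rms_dbfs, k=14):
    """Reading on a K-k meter: 0 means the RMS sits exactly k dB
    below full scale, leaving k dB of headroom for the peaks."""
    return rms_dbfs + k

# The same material looks 'hotter' on the meter meant for quieter genres:
# an RMS of -14 dBFS reads 0 on a K-14 meter but +6 on a K-20 meter.
```

This is why the screenshots below show the same track sitting in different zones depending on which of the three meters is chosen.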

For this purpose (and many others), I personally use the IXL Inspector by RNDigital, which implements the 3 K-System meters (among others).

In the picture above, a track measured with a K-20 meter. You can see that the RMS values are in the red zone, and they are there constantly. This track would be very compressed if it were a Classical piece.

Let's see the same track analyzed through a K-14 meter.


Now we can see that the music isn't in the red zone, but it is constantly in the yellow zone.
This track would be adequate for pop, but it could be heard as excessively compressed if we sent it to a radio broadcaster.

Let's see the same with a K-12 meter.


Even though the RMS values are still in the yellow zone, they are way below the +4 dB limit (fortissimo) and reach that zone only sporadically. The red zone is reached briefly at very specific moments during the song. We are in a dynamic range very acceptable for a rock mix.

If I raise the compressor gain in the mix bus (exaggeratedly), we will see that the RMS values sit permanently in the red zone.


This mix will sound really loud (because the RMS values are very close to the peak values), but it will sound very compressed and probably tasteless (even though this is hard to do with Sonnox plugins, it is more evident with less sophisticated plugins).


To be continued ...

Before discussing the dynamics tools, such as compressors, limiters, expanders and gates, I considered it necessary to introduce the concepts of sound dynamics and dynamic range, because those are the aspects modified by those tools.
Once those concepts are clear, we will see in a later entry how those tools affect the sound dynamics and dynamic range.

17 March 2013

Home Studio: Mixing - Part 3

Introduction

In the previous part, we introduced the concept of the 3 dimensions of mixing and discussed the Y-axis (equalization) in depth.
In this entry, we are going to cover the other two dimensions (panning and depth).


The 3 dimensions of mixing (continuation)

The X-axis: Panning

Why do we need to pan?

If we recorded the performance of a band or orchestra with a pair of ambience mics, each instrument would naturally sit in its original place in the stereo image. But in studios (and especially home studios), each instrument tends to be recorded separately, using one or more mics that capture a mono image of each instrument.

When playing back all the tracks, since they all have monophonic content, all the instruments will be heard as if they were stacked in the middle of the stereo image.

We have seen that, thanks to corrective equalization (Y-axis), we manage to clearly distinguish every instrument; with panning, we will try to re-position each instrument in the right horizontal spot of the stereo image, as if we were standing in front of the band or orchestra.

As in the entire music production process, rather than rules there are recommended practices, but it seems that everybody agrees that the center of the stereo image should be occupied exclusively by the kick drum, the snare, the bass guitar and the main vocals. The position of the rest of the instruments can be determined with more freedom.

If we want to position each piece of the drum kit, we should visualize ourselves in front of the drums. If the drummer is right-handed, we will have the hi-hat slightly to the left, one tom slightly to the left and the other tom slightly to the right, the floor tom far to the right, etc.
If the drummer is left-handed, some of the individual elements will be panned the opposite way.

Usually, we tend to exaggerate the position in the panorama a bit, to leave some gap between instruments, in a way that increases the intelligibility of the overall sound.
The issue with a multi-instrument like the drum kit is that each mic picks up the sound of the instrument being miked (the snare bottom, for example) but also some part of the rest of the kit.
Additionally, the ambience and overhead mics (which mainly capture the cymbals) also pick up the sound of the whole kit.

So you should listen carefully to the ambience tracks while panning, because it makes no sense to pan the hi-hat to the left if it sounds to the right in the overhead and ambience tracks.

You will easily notice that, when putting the whole drum kit together, something seems wrong, and this is probably because you placed some of the individual instruments in the wrong spot, which conflicts with the information you are (maybe unconsciously) hearing in the ambience and overhead tracks.

Usually, the lead guitar is panned to the right, more or less a quarter into the right panorama. Keyboards and filling pads are usually placed to the left, etc.
Moving a track away from the center usually makes it sound apparently quieter, and you will need to raise the track volume a bit to compensate for the loss of loudness.
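This apparent loss is exactly what pan laws compensate for. An equal-power law (one common choice, sketched below as a toy, not any particular DAW's implementation) keeps the combined energy constant as you move a mono track across the image:

```python
import math

def equal_power_pan(sample, pan):
    """pan in [-1.0, 1.0]: -1 = hard left, 0 = center, +1 = hard right.
    The gains follow a quarter circle, so left^2 + right^2 stays constant."""
    angle = (pan + 1.0) * math.pi / 4.0      # maps pan to 0 .. pi/2
    return sample * math.cos(angle), sample * math.sin(angle)
```

At center, each side gets about 0.707 of the signal (roughly -3 dB per channel), which is why a hard-panned track can seem louder in its channel than a centered one.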

Usually, after placing each instrument in the right stereo position, we should achieve a balanced loudness between channels. Differences over 1 dB between channels are often difficult to manage with dynamics processors, perhaps giving too much strength to one side while weakening the other.
The key in mixing is: balance in everything you do.

Below, a picture with an example of panning (click on the image to see it full sized):



Z-axis: mix depth

Alright, we can distinguish each instrument with the help of equalization, and we were able to place each one horizontally in the stereo panorama, as in a two-dimensional picture. But something really important is still missing: the relative depth of each instrument (its distance) with respect to the listener.

If we watch any orchestra or band, some instruments are more up front than others.
In the typical rock band, the singer is always in the first row. In the second row, the guitars. In a third row, the bass guitar and, in the last row, the drums.
But even within the drums, each piece sits at a different depth. The cymbals come first, then the toms, then the hi-hat, the snare and finally the kick drum.
So the idea is to create a sensation of depth and position each instrument at its right depth. But how do we push an instrument back in the mix?

The way our ears identify the relative distance to a sound source is by analyzing the reverberation that comes combined with the original sound. Depending on the proportion of early and late reflections, we are able to determine the distance (depth) between us and the source.

The sources nearest to us are richer in early reflections; the farthest sources are richer in late reflections. The closest are richer in treble, the farthest in low frequencies, etc.
Reverberation times are very important too. Short times give a sensation of proximity, while longer times give a sensation of remoteness.
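Under the hood, a digital reverb builds those reflections from delayed, decaying copies of the signal. A single feedback comb filter, the building block of classic Schroeder-style reverbs, is enough to see the idea — a hypothetical sketch, not the inner workings of any plugin mentioned here:

```python
def feedback_comb(samples, sr=44100, delay_ms=40.0, feedback=0.6):
    """Each input sample re-emerges every delay_ms, scaled by `feedback`
    on each pass: a train of decaying reflections. A longer delay and a
    higher feedback read to the ear as a bigger, more distant space."""
    d = int(sr * delay_ms / 1000)
    buf = [0.0] * d                      # circular delay line
    out = []
    for i, s in enumerate(samples):
        y = s + feedback * buf[i % d]    # input plus decayed reflection
        buf[i % d] = y
        out.append(y)
    return out
```

Real reverbs run several of these in parallel, plus allpass filters to diffuse the echoes, but the depth cue is already here: feed an instrument more of this and it steps back in the mix.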

I usually prepare three auxiliary buses, each one with a different type of reverberation:
  • A Room reverb, with only early reflections and a very short time, emulating a small room where you would record the vocals or guitars, which helps push the instrument to the front thanks to the Haas effect.
  • A Hall reverb, with a mix of 40% early reflections and 60% late reflections, and a medium time, which gives a sensation of space.
  • A Plate reverb, with just late reflections and a long time, which helps push the instrument back and open the stereo image.
I usually send the snare and cymbals to the Plate reverb, and the whole drum kit to the Room reverb, adjusting the send level for each instrument to place it at the desired depth, while avoiding excessive reflections (which could make the kick drum sound doubled, for example).

I usually don't send the bass to any reverb, since it tends to sound undefined and get lost in the mix; but it sometimes works with a Room reverb, as long as we can equalize the sound of the reverb effect (as in the Sonnox Reverb plugin, which allows full equalization of the reflections).

The vocals are usually sent to all three reverb types, adding the exact degree of each one.

The guitar is sent to the Room type (to restore the sensation of the room where it was recorded) and to the Plate type, to push it back behind the singer.

Sometimes we can also use some kind of short delay, Slap Back style, with just one or two echoes and a very short time, which pushes the instrument to the front (Haas effect). For this kind of effect, I use the delays from the EchoBoy plugin by SoundToys.
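A slapback is about the simplest delay there is: one delayed copy, some tens of milliseconds behind the dry signal. A sketch on raw sample values (a toy, not how EchoBoy is implemented):

```python
def slapback(samples, sr=44100, delay_ms=90.0, mix=0.5):
    """Single-echo slapback: the dry signal plus one copy
    delayed by ~delay_ms and attenuated by `mix`."""
    d = int(sr * delay_ms / 1000)
    out = list(samples) + [0.0] * d      # extra room for the echo tail
    for i, s in enumerate(samples):
        out[i + d] += s * mix
    return out
```

With the echo this close behind the direct sound, the ear fuses the two into one "nearer" event instead of hearing a distinct repeat — the Haas effect mentioned above.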

The choice of reverb type is highly important and very much related to personal taste and to the type of sound we want to achieve for the final mix.

There are convolution reverbs, which are basically software that reads a file containing the impulse response of the echoes produced in a real environment, recorded and processed to be used by such programs. The good thing about these programs is that you can always add more impulse files to reproduce many other real environments with a certain accuracy.

This kind of reverb has a lot of fans but, even though I've tried several (Waves, some free ones, Perfect Space, Wizooverb W2, etc.), none of them convinced me. I don't mean they are bad; I mean I wasn't able to achieve good results with any of them. Also, convolution software is really heavy on your PC's resources.

There are digital reverbs, some better than others. Absolutely all those I've tested have some kind of "digititis" (digital artifacts), and none pleased me except the Sonnox Reverb plugin (the one I currently use). I find the Sonnox Reverb easy and intuitive, and it's the one that lets me reach the wanted results fastest.

There are also some mixed reverbs, like the Wizooverb W2 by Wizoo, which allows you to choose between several digital and convolution models. This software is very interesting for verifying that a convolution reverb isn't always the solution to your needs. Even though I liked it far more than other convolution and digital software, I prefer the Sonnox, because I find it more intuitive and faster for getting my results.

In the following screen capture, we can see the overall look of the three reverb types I use (click on the picture for full size):


On the left, the short reverb, with early reflections and a short time.
In the middle, the medium reverb, with a medium time and a mix of early and late reflections.
On the right, the long reverb, with just late reflections and a long time.
Notice that this plugin allows you to fully control every parameter of the reverb from a single screen.


Summing it up...

By understanding the concepts behind the 3 dimensions of mixing, we will be able to make every instrument distinguishable from the rest, to put each one in the right place in the space, and to bring back the reverberant ambience that was removed by recording in dry environments.


To be continued...

We have seen practically all the basic operations of the mixing stage, but we still need to review the tools that sculpt the dynamics of our mix and help project it to a commercial loudness.
In my next blog entry, I will discuss what Dynamic Range means, what transients, attack, release and the rest of the dynamics concepts are, along with the dynamics tools, like compressors.

Home Studio: Mixing - Part 2

Introduction

In Part 1 we talked about the basic ideas around Digital Audio, explaining what sampling frequency and bit resolution are and how they affect our work.

In this part, we will try to describe the tasks that are part of the mixing phase, trying to distinguish the boundaries with the mastering phase.

Mixing tasks

The final goal of the mixing phase is to obtain a mix where every instrument is clearly distinguishable from the rest. The personal touch adds some spice to the mix, to make it appealing to other people.

Let's break down the different tasks that take place in the mixing process.

Track cleansing

The first task of mixing is to clean the recorded takes. It's impressive the amount of noise that mics pick up. We aren't aware of all that noise until we listen carefully to the track. Small clicks (pickup selection), background noises (electrical hums or buzzes), distant sounds (door slams, children playing, ...).

If we have some good cleansing software, such as Sonnox Restore, it can be of great help for removing unwanted noises from our tracks and, very especially, clicks and pops.
If we don't have such tools, we will need to remove each click by hand, editing the track.
We will first delete the portions of the track where there is no musical information, just "silence" or "noise". We want to keep only the parts needed to create the music, nothing else.
At the very least, we should clean the start and end of our song, where we go from silence to music and from music to silence, because that's where such noises are easiest to detect.

The bass guitar is an instrument that generates high energy in the bass frequencies. For a speaker to reproduce a low frequency, it needs to consume a lot of energy; therefore, if it must handle both low and high frequencies at the same time, the low frequencies will dominate, wasting practically all the available energy on them, and the range of frequencies that most interests the human ear will probably come out weak.

Therefore, some engineers tend to cut the bass guitar track to the exact length of the note being played at every instant. This means a ton of chunks in the bass track but, in that way, they free up unnecessary energy that can be used by the rest of the instruments.

When we have no other solution, we can try a Noise Gate on that track, to help control the amount of signal we allow through the gate, filtering out the noise floor. The issue with noise gates is that they can ruin the musicality of the track. If you are a guitarist with very dynamic playing (big differences between the quietest and loudest notes), it's possible that part of the quietest sounds will be lost or their tails will be cut. Trying to recover those parts by lowering the threshold of the gate will bring back the unwanted noise, possibly in an ugly, intermittent way that is worse than leaving the noise there.
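In its crudest form, a gate is just a per-sample threshold test. A toy sketch (real gates track a smoothed envelope and apply attack/release ramps precisely to avoid the intermittent chatter described above):

```python
def hard_gate(samples, threshold=0.02):
    """Mute every sample whose absolute value falls below the threshold.
    This naive version shows the trade-off: it chops quiet note tails
    abruptly instead of fading them."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]
```

Raising `threshold` removes more noise floor but eats more of the quiet playing, which is exactly the balancing act the paragraph above warns about.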
Why is it important to clean tracks?

During the mixing process, you will need to raise the average perceived volume of the mix. Sound processors, such as compressors, will boost the sound, but the drawback is that they will also raise the noise floor present in the track. So, the less noise in the track, the cleaner the final mix will sound.

Also, cleaning the parts of the track without useful information leaves more room for the rest of the instruments to be represented with a clearer image.

Equalizing

One of the main tasks of the mixing phase is the equalization of the various tracks, to distinguish each instrument and make it more present in its natural frequency range. There are two kinds of equalization: corrective and cosmetic. We will discuss each one later.

Panning

One more basic task of mixing is panning. It consists of placing each instrument in a certain position inside the stereo image, where we would naturally find that instrument in a real situation (concert, gig, ...).
We will go into details later.


Modify dynamics

We modify the dynamics of the sound by using dynamics processors, such as gates, compressors, expanders and limiters.
Those tools change the perceived average volume, tame peaks and can modify some basic characteristics of the sound, such as the attack or release.
We will develop this aspect in depth in another blog entry.


Restoring the spatial information

Most studio recordings are made in what is called a dry ambience. Those rooms are designed to minimize sound reflections, removing reverberation and echoes. In an ordinary room, the geometry makes some frequencies resonate (excessive reinforcement of certain frequencies). These resonating frequencies are known as Room Modes. Room modes distort the original sound, exaggeratedly reinforcing some frequencies, usually in the bass and sub-bass range. In such a room, we would end up recording something different from the original source.

The drawback is that those dry takes aren't musical. So, after capturing a good image of the original sound source, we need to re-create the ambience where we would like it to sound.

In the real world, each sound bounces off the objects around it and, therefore, the sonic information that reaches us is the mix of the original source with the sum of the different reflections off those objects. For this reason, in the studio we use controlled ways to reproduce such a reflective ambience, with the help of reverbs and echoes, whose parameters are controlled and don't depend on the room modes.
This also allows a take recorded in a very small room to sound as if it had been recorded live in a concert.

We will discuss this subject later.


Apply effects

This is where the art begins, where the sound engineer adds his or her own tricks and experience, to enhance the nuances of the mix and make it distinct. From effect pedals or effect racks up to external manipulation with equalizers, dynamics processors, filters, etc.


The 3 Dimensions of Mixing

This concept was introduced by Tischmeyer, in his books and videos, and I find it really easy to understand and very illustrative. So I am glad to share it with you.

Imagine your mix as a three-dimensional cube. The X-axis is the front face of the cube and represents the panning of tracks. The Y-axis is vertical, the height, and represents the equalization of tracks. The Z-axis represents the depth of each instrument in the mix.


Y-Axis: Equalization

After cleaning the tracks, the most important task is equalization.
With all the tracks unpanned and without any kind of additional processing, we should be able to clearly distinguish every single instrument.

In the worst case, our song will be played back in a mono environment, where the stereo panorama will not help. A correct equalization will keep every instrument distinguishable even in the worst cases.

Imagine that we have a picture of each instrument and we throw them all onto a table. If there is not a certain transparency in some parts of each picture, we will only be able to distinguish the instrument in the top picture.

Or imagine a building full of windows. What we want to achieve is to see just one instrument through each single window.

With the help of corrective equalization, we will try to boost or cut certain frequency ranges, in a way that leaves each instrument clearly represented in its natural frequencies, removing any possible conflict with other instruments.

For corrective equalization, we need a "surgical tool" that doesn't introduce any kind of digital artifact. A good example is the Sonnox EQ.
Usually, for each instrument, we can choose two regions in which to enhance its own nature: the range of its fundamental frequencies (the set of frequencies corresponding to its scale) and the range of its harmonics.

To make good equalization corrections, we need some knowledge about which frequencies are natural to each instrument on the one hand and, on the other, how each frequency range (sub-lows, lows, low-mids, high-mids, highs, super-highs) affects the instrument's timbre.

Boosting or cutting the perceived volume in each of those areas, for a specific instrument, can enhance the intelligibility of the instrument or totally ruin its sound.

One of the first equalization tasks consists of cutting the content in the sub-low frequencies (below 50 Hz and, very especially, below 30 Hz). Why? We already explained that speakers waste a lot of energy reproducing low frequencies, and we will need that energy to reproduce the rest of the frequencies.

If your instrument isn't rich in low frequencies (unlike the electric bass, kick drum, contrabass, etc.), its low-frequency content adds nothing to the mix and, therefore, you can cut the frequencies below 90 Hz (in some cases below a higher frequency, in some cases below that one).
This gives some air to the low-frequency instruments and benefits their representation.
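That low cut is just a high-pass filter. A first-order (6 dB/octave) sketch is below; mixing EQs usually offer steeper slopes, but the idea is the same:

```python
import math

def highpass(samples, sr=44100, cutoff=90.0):
    """First-order high-pass: attenuates content below `cutoff` Hz,
    leaving the speaker's energy for the instruments that actually
    live down there."""
    rc = 1.0 / (2.0 * math.pi * cutoff)
    alpha = rc / (rc + 1.0 / sr)             # close to 1 for low cutoffs
    out = [samples[0]]
    for i in range(1, len(samples)):
        # output follows the *changes* of the input; steady content decays
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out
```

Feed it a constant (0 Hz) signal and the output decays to silence, while fast variations pass through almost untouched — exactly the "cut below 90 Hz" behavior described above, only with a gentler slope.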
There is no magic recipe. Each instrument has its own representative frequencies, and even identical instruments can differ slightly in timbre. But, alright, I will share some notes with you (taken from other places) that I find truly helpful.
Kick Drum

Any apparent muddiness can be removed by cutting around 300 Hz.
You can try a slight boost between 5 and 7 kHz, to add some high end.

50-100Hz ~ adds bottom
100-250Hz ~ adds roundness
250-800Hz ~ muddiness area
5-8kHz ~ adds presence
8-12kHz ~ adds air

Snare Drum

You can try a slight boost between 60 and 120 Hz if the sound is too snappy.
Try a boost around 6 kHz to get that "pap" sound.
100-250Hz ~ Adds fullness
6-8kHz ~ Adds presence

Hi-Hat and Cymbals

Any muddiness can be fixed by cutting around 300 Hz.
To add some brightness, try a slight boost around 3 kHz.
250-800Hz ~ Muddiness area
1-6kHz ~ Adds presence
6-8kHz ~ Adds clarity
8-12kHz ~ Adds brightness

Bass Guitar

You can try boosting around 60 Hz, to add some body.
Any muddiness can be removed by cutting around 300 Hz.
If you need more presence, try boosting around 6 kHz.
50-100Hz ~ Adds bottom
100-250Hz ~ Adds roundness
250-800Hz ~ Muddiness area
800Hz-1kHz ~ Fattens small speakers
1-6kHz ~ Adds presence
6-8kHz ~ Adds presence in highs
8-12kHz ~ Adds air

Vocals

It's the most complex equalization; it depends a lot on the mic used to record the voice.
Anyway, try a boost or a cut around 300 Hz, depending on the mic and the song.
Apply a slight boost around 6 kHz, to add some clarity.

100-250Hz ~ Adds "in your face" presence
250-800Hz ~ Muddiness area
1-6kHz ~ Adds presence
6-8kHz ~ Adds clarity but also sibilance
8-12kHz ~ Adds brightness

Piano

You can remove any muddiness by cutting around 300 Hz.
Try applying a slight boost around 6 kHz, to add clarity.
50-100Hz ~ Adds bottom
100-250Hz ~ Adds roundness
250-1kHz ~ Muddiness area
1-6kHz ~ Adds presence
6-8kHz ~ Adds clarity
8-12kHz ~ Adds air

Electric Guitar

Also a complex equalization; it depends again on the mic used and on the song itself.
Try cutting or boosting around 300 Hz, depending on the song and the target sound.
Try a boost around 3 kHz to add some edge to the sound, or a cut there to add some transparency.
Try a boost around 6 kHz to add some presence.
Boost around 10 kHz to add brightness.
100-250Hz ~ Adds body
250-800Hz ~ Muddiness area
1-6kHz ~ Cuts through the mix
6-8kHz ~ Adds clarity
8-12kHz ~ Adds air

Acoustic Guitar

Any apparent muddiness can be made to disappear by cutting between 100 and 300 Hz.
Apply a gentle cut around 1 to 3 kHz, to raise the image.
Apply a gentle boost around 5 kHz, to add some presence.
100-250Hz ~ Adds body
6-8kHz ~ Adds clarity
8-12kHz ~ Adds brightness

Strings

Completely depends on the song and the target sound.

50-100Hz ~ Adds bottom
100-250Hz ~ Adds body
250-800Hz ~ Muddiness area
1-6kHz ~ Adds crunchiness
6-8kHz ~ Adds clarity
8-12kHz ~ Adds brightness

This was just a small example of how the different frequency ranges can affect each instrument.
When we listen to our mix, we need to ask ourselves what is missing and what is excessive in each instrument, to perform the right corrections in its equalization.

There are several diagrams available on the Internet, like the interactive frequency chart you can find at this link: (http://www.independentrecording.net/irn/resources/freqchart/main_display.htm), which helps a lot to understand which frequency ranges each instrument moves in, as well as what happens with an excess or a lack of representation in each range.

The quickest way to understand how each region affects an instrument is to try exaggerated boosts or cuts in each area. By exaggerating the effect, we will easily understand how that particular range helps define the timbre of the instrument.

You will be able to find many other articles and tricks about corrective equalization of the various instruments but, in the end, your ear is schooled with practice. In the beginning, it isn't easy to evaluate the impact of each correction. Be patient; it's worth every minute you spend training your ears.

To finish this introduction to corrective equalization, just note that there are certain well-known competitions between instruments that share some representative frequencies.
In the low end, the kick drum competes with the bass guitar.
In the mids, vocals compete with the snare drum.

It's a good idea to draw a map of frequency corrections, in a way that we avoid reinforcing the same range for two different instruments that compete for it. What we boost in one instrument should be cut in the other, for that particular range.
Most engineers prefer to cut instead of boost. By cutting a certain range, you automatically free that range for other instruments, so you don't need to boost that range in the competing instrument to make it distinguishable from its competitor.
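That map of corrections can be as simple as a small table in code. The instrument names and ranges below are just illustrative picks from the notes above, not a recipe; the helper flags any two instruments that both boost an overlapping range:

```python
# Hypothetical EQ map: instrument -> list of (low_hz, high_hz, action).
eq_map = {
    "kick":   [(50, 100, "boost"), (250, 800, "cut")],
    "bass":   [(50, 100, "boost"), (250, 800, "cut")],
    "vocals": [(1000, 6000, "boost")],
    "snare":  [(1000, 6000, "cut")],
}

def boost_conflicts(eq_map):
    """Return (a, b, overlap) for instruments that both boost the same range."""
    conflicts = []
    items = sorted(eq_map.items())
    for i, (name_a, moves_a) in enumerate(items):
        for name_b, moves_b in items[i + 1:]:
            for lo_a, hi_a, act_a in moves_a:
                for lo_b, hi_b, act_b in moves_b:
                    if act_a == act_b == "boost" and lo_a < hi_b and lo_b < hi_a:
                        conflicts.append(
                            (name_a, name_b, (max(lo_a, lo_b), min(hi_a, hi_b)))
                        )
    return conflicts

print(boost_conflicts(eq_map))  # [('bass', 'kick', (50, 100))]
```

With this hypothetical map, the only collision reported is kick against bass in the 50-100 Hz region, exactly the classic low-end competition mentioned above; resolving it usually means cutting that range in one of the two.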

So, on the Y-axis, we stack the instruments "vertically" in the frequency spectrum, in a way that each one can poke "its head" out of one of the "windows".

Here you have an example of a corrective equalization I did for one of my songs. Click on the image to see it full sized.



Left to right and top to bottom: Kick drum, Snare drum, Hi-Hat, Timbales, Overheads, Bass Guitar, Electric Guitar and Vocals.

It makes no sense to copy it, because it won't necessarily work in your mix.
Just try the indications above and find what works best in your mix.


To be continued...

Well, I wrote about the Y-axis more than I expected so, therefore, I will explain the rest of the dimensions in the next entries. But I honestly think this part is a very good starting point for anyone who feels lost in mixing tasks.

16 March 2013

Home Studio: Mixing - Part 1

Introduction

Indeed, what I really love is to play guitar. I enjoy guitars, amps and pedal effects.
My problems started the very day I decided to "record one song".

If you have enough money, you can go straight to the best studio your money can afford; you can even hire some session musicians there and have your work done, doing just what you really like: playing guitar.

But people like me have already spent the little money they had on guitars, amps and pedal effects (because that's what we really love) and, therefore, we don't have funds enough to support long studio sessions.

Things get even worse if, like me, you aren't a pro guitarist. You want to finish things really quickly but that simple idea puts a lot of pressure on you, and you end up watching things take way more time than they would have taken at home, relaxed.
Recording is the most important phase of musical production. We all know the saying: "Garbage In, Garbage Out". The higher the quality of our takes, the better the results during the following phases: mixing and mastering.

Perhaps we had the money to go to a studio and we succeeded in recording good tracks.
Or maybe we invested in our own home studio, purchasing some mics, a preamp and an audio interface, and we have a proper room to record our amp's sound.
Or maybe we went the silent way, using speaker simulators, ISO boxes, DI boxes, preamps, rack amps, amp simulators or any other trick that allowed us to get a good track to start our mixing work.

So, well, imagine that we already have a good set of well-recorded tracks and that we want to do the mixing ourselves, because we wanted to save money, because we want to learn, or for whatever other valid reason.
We have a nice DAW, with lots of tools but... how do we start (OMG!)?

Every phase of musical production is, simultaneously, art and science.
Science, because the corrections and transformations we apply to sound originate in the body of technical knowledge about sound and about the audio gear used to represent and transform it.
Art, because the ways to process sound are infinite, and everyone tries as many paths as they can imagine to reach exactly that sound they have in mind and that seems so difficult to achieve.

Across several entries with the same title, I will try to share some basic knowledge and some tricks I've been collecting while fighting with my own home studio.


Digital Audio

If you are still reading, you are probably a guitarist trying to mix on his/her own PC and, due to the high cost of outboard gear, you are probably mixing ITB (In The Box).

The mix inside the PC is a digital mix, while the original sound is analog.

Analog sound is characterized by its continuity: the signal varies constantly but without gaps.
The guitar's sound (and, in general, any other instrument's) is usually recorded using mics, which also work analogically.
The weak mic signal is later amplified by a quality preamp (which allows the best representation of the audible frequencies) and that amplified signal is routed to an analog mixing console (in studios) or to the input of an audio interface.

Even in recording studios, the analog technique based on old multitrack recorders gave way to digital mixing, because it is infinitely easier to work with a DAW (such as Pro Tools, Logic, Nuendo, Sonar...).

Most of the issues when processing sound occur precisely when we cross the border between those two very distinct worlds: from analog to digital and from digital to analog.

When analog sound is recorded into our PC, it's converted to a digital format.
When we hear our digital mix through our studio monitors, the sound is converted back to analog.
If we send a track to an external device (a compressor or an equalizer, for example) and record the processed sound back, we are double-converting: from digital to analog and from analog to digital.
Every time we cross that frontier, there is some kind of degradation of the sound.

At this point, you will realize that some of the most important elements of your studio gear are the audio converters (A/D and D/A). Each element of your studio should count on high-quality converters.
What quality level? The best you can afford.

In big studios, you will find sophisticated outboard gear dedicated exclusively to the A/D and D/A conversion tasks. It is really expensive and, every time the signal needs to cross between both worlds, it is routed through those sophisticated devices.

In our home studio, it is enough to get a good audio interface with proven-quality converters.


How do we represent analog audio in a digital format?

The task of A/D converters is to take a sample of the electrical value of the input signal many times per second. You should already be familiar with some typical sampling frequencies: 11 kHz, 22 kHz, 44.1 kHz or 48 kHz.
Why those values and not others? Are they arbitrary?

Research in the audio world showed that, to capture the whole range of frequencies we can hear (up to about 20 kHz), we need to take a bit more than twice that many samples per second, around 44,000 samples per second (44 kHz); this is the well-known Nyquist sampling theorem. Played back at that rate, the human ear hears the discontinuous stream of samples as a continuous sound.
This is quite similar to what happens with a cinema film. We "build" a continuous visual flow from pictures that are projected 24 times per second.

In the case of audio, converters are the devices that "build" the continuous analog flow from the discontinuous bites of the digital samples. And converters are also responsible for splitting the analog audio into the small bites that are stored in digital format.

The music industry fixed the value of 44.1 kHz for Audio CDs.
But, if 44 kHz is enough, why 44.1 kHz?
It seems that the conversion tasks introduce some issues related to the filters used in converters. Those extra 0.1 kHz give enough margin to solve the issues with those filters.

With a sampling frequency of 44 kHz, we can represent the whole range of frequencies humans can hear (from 20 Hz to 20 kHz). But you will see that many audio devices use much lower sampling frequencies, like 22 kHz or even 11 kHz.

With 22 kHz, we can represent just half of the frequencies that we can represent with 44 kHz.
With 11 kHz, just half of what we can represent with 22 kHz (a quarter of the 44 kHz range).
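Put in numbers: the highest representable frequency is always half the sampling frequency (the Nyquist limit), and content above that limit doesn't disappear quietly, it folds back ("aliases") into the representable range. A tiny sketch of that arithmetic:

```python
def nyquist(fs):
    """Highest frequency representable at sampling rate fs (in Hz)."""
    return fs / 2

def alias(f, fs):
    """Frequency at which a pure tone f reappears after sampling at fs."""
    return abs(f - fs * round(f / fs))

print(nyquist(44100))      # 22050.0
print(alias(1000, 44100))  # 1000 -- reproduced faithfully
print(alias(30000, 44100)) # 14100 -- folds back into the audible range
```

This folding is the reason converters filter out everything above the Nyquist limit before sampling.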

Which ranges of frequencies get "deleted" depends exclusively on the design of the particular device.

Devices can use some psychoacoustic tricks to force your brain to "fill" the gaps in the sound information.
The range of frequencies that is fundamental for us is the middle frequencies and, in fact, that's a trick many compression algorithms use to shrink the size of the information: preserving the "important" frequencies and removing the less "relevant" ones.

Usually, the first thing I miss when working at lower sampling frequencies is the spatial information, the dimensionality of the sound (echoes, delays, reverbs...).
Professional audio works at 48 kHz and, in higher-definition environments, at 96 kHz.

Alright!
We know that the sampling frequency has two missions: the more samples I take, the closer the digital representation will be to the original analog signal and, also, the greater the range of frequencies I will be able to represent and reproduce.

But we still have to clarify one more aspect of the digital representation of sound: the bit depth (or resolution). What's that?

As you probably know, in the digital world any value can be expressed using just a string of 0s and 1s. The length of that string of zeros and ones limits the maximum value we can represent with it.

For example, with 16 bits we can represent 2^16 values (65,536), half positive, half negative. Don't forget that analog audio follows a sinusoidal function that takes both positive and negative values.
With 24 bits, we can represent 2^24 values (16,777,216), half positive and half negative, which starts to be an important amount of values.
With 32 bits, we can represent 2^32 values (4,294,967,296), an impressive amount of more than 4 billion different values.
But, how important is this?

The sound captured by the mic and later amplified by the preamp is a continuous electrical signal with a varying voltage. The difference between the lowest and the highest voltage determines the dynamic range of the sound. The measured value of that voltage, at a given instant of time, is what gets stored in a digital sample, whose bit depth determines how many distinct voltage levels we can tell apart.

With 32 bits, we can divide the whole range of infinite voltages into more than 4 billion different steps.
With 24 bits, we can divide it into about 16 million steps.
With 16 bits, we can divide it into just 65,536 different steps.
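Those step counts translate directly into theoretical dynamic range: each extra bit adds roughly 6 dB. A quick check of the usual back-of-the-envelope figures:

```python
import math

def quantization_levels(bits):
    """Number of distinct values an n-bit sample can take."""
    return 2 ** bits

def dynamic_range_db(bits):
    """Theoretical dynamic range of an ideal n-bit quantizer (~6.02 dB/bit)."""
    return 20 * math.log10(2 ** bits)

print(quantization_levels(16))         # 65536
print(round(dynamic_range_db(16), 1))  # 96.3 -- the Audio CD figure
print(round(dynamic_range_db(24), 1))  # 144.5
```

The ~96 dB result for 16 bits is the dynamic range usually quoted for the Audio CD format.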

Since the number of steps (or values) in digital audio is finite, we need to assign one digital value to a whole range of analog values: we group together the infinite voltages between two given values and assign a single digital value to that range. Imagine that all analog values between +0.040 and +0.045 are assigned the digital value +0.040.

So, you can imagine that the more "boxes" we have to pack ranges of analog values into, the more accurate the representation of the analog signal can be.
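The "boxes" idea fits in a few lines of code: quantize a voltage in the -1 to +1 range to n bits and look at the error. This is a toy rounding quantizer for illustration, not how any specific converter works:

```python
def quantize(x, bits):
    """Round x in [-1.0, 1.0] to the nearest of 2**bits uniform steps."""
    steps = 2 ** (bits - 1)  # steps per polarity, as samples are signed
    return round(x * steps) / steps

v = 0.123456789  # an arbitrary "analog" voltage
err16 = abs(v - quantize(v, 16))
err24 = abs(v - quantize(v, 24))
print(err16, err24)  # the 24-bit error is far smaller
```

The maximum error is always half a step, so every extra bit halves it; at 24 bits the same voltage lands roughly 256 times closer to its true value than at 16 bits.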

The D/A converters create a continuous sound from discrete digital values. Between two digital values, the converter interpolates a series of values to fill the time gap. The closer each pair of samples is in time (higher sampling frequency), and the more finely differentiated they are (higher bit depth), the fewer wrong results will come from that interpolation.


How much does digital audio weigh?

Alright! Now we know that, to digitally represent audio, we need two important variables: the sampling frequency and the bit depth. So, how much room does it take?
ENOUGH!

1 minute of mono audio, at 16 bits and 22.05 kHz, needs 21,168,000 bits (2.52 MB!).
1 minute of mono audio, at 24 bits and 44.1 kHz, needs 63,504,000 bits (7.57 MB!).
1 minute of mono audio, at 16 bits and 44.1 kHz, needs 42,336,000 bits (5.04 MB!).

And multiply those figures by two if we work in stereo (two channels), or by five or seven if we work in some multichannel, multi-speaker environment.

The Audio CD format uses 16 bits of depth and a 44.1 kHz sampling frequency. The capacity of an Audio CD is around 650 MB so, doing simple calculations, it allows storing around 64 minutes of stereo music (650 / (2 × 5.04)).
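All these figures come from multiplying the same three variables. A small helper for the arithmetic (here MB means MiB, 1024 × 1024 bytes, which is what the numbers above match):

```python
def audio_size_mb(bits, fs, seconds, channels=1):
    """Uncompressed PCM size: bits/sample x samples/second x time x channels."""
    size_bytes = bits * fs * seconds * channels / 8
    return size_bytes / (1024 * 1024)

print(round(audio_size_mb(16, 22050, 60), 2))  # 2.52
print(round(audio_size_mb(24, 44100, 60), 2))  # 7.57
print(round(audio_size_mb(16, 44100, 60), 2))  # 5.05 (the 5.04 above, rounded)
minutes_per_cd = 650 / audio_size_mb(16, 44100, 60, channels=2)
print(round(minutes_per_cd))  # ~64 minutes of stereo audio on a 650 MB disc
```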

Now, imagine that you record several takes for a guitar track (to choose the best one, or to comp the best parts of each take), together with similar takes for the bass guitar and a great amount of takes for drums and vocals... yes... audio eats disk space really fast.

At this point, you will realize that, to keep quality sound stored in your PC, you will need a huge amount of disk space, in the PC itself or in some external storage device that lets you save your work across the different stages.


Good, but then, which digital format do we need?

A long time ago, when I started my own learning about the audio world, I read an interesting paper from Helsinki University discussing the impact of sampling frequency and bit depth. That work stated that sound quality benefits more from a higher bit depth than from a higher sampling frequency.
So, you achieve better results working at 24 bits and 44.1 kHz than doing it at 16 bits and 96 kHz.

Big studios with great resources can work at up to 48 bits and 96 kHz but, for more modest studios and home studios, 24 bits and 44.1 kHz is an excellent compromise.
Once again, it is better to work at 32 bits and 48 kHz than to work at 16 bits and 96 kHz.

According to the people who really know something about all this (not me), like Bob Katz and similar, all digital audio transformations performed at 32 bits (or higher), floating point, show no significant loss of information, nor do they introduce "digital artifacts". Moreover, it is the only way to correctly represent the spatial information (reverberation, echo, etc.).

When we go down from 24 bits to 16 bits, we lose 8 bits of information. To say it in some way, we are cutting off the tail (the fine nuances) of the represented audio value.

There is a technique named dithering, which basically consists of introducing a noise floor into the converted signal, in a way that this noise floor mathematically masks most of the quantization errors that come with the loss of accuracy.
If you have digital audio outboard gear, it's very important that it works internally at 32 bits or a higher bit depth. Dithering is only necessary when going down from a higher resolution to a lower one, as an artificial trick to approximate the original signal (part of whose information has been lost).
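As an illustration of the idea, here is a toy TPDF (triangular) dither, the common textbook flavour: before truncating to fewer bits we add low-level random noise, so the quantization error stops being correlated with the signal and becomes a benign noise floor instead of distortion. This is not any specific converter's algorithm:

```python
import random

rng = random.Random(0)  # seeded for reproducibility

def dither_and_quantize(x, bits):
    """Quantize x in [-1, 1] to 'bits' bits with +/-1 LSB TPDF dither."""
    steps = 2 ** (bits - 1)
    lsb = 1.0 / steps
    # Triangular PDF: sum of two independent uniform +/-0.5 LSB sources.
    noise = (rng.random() - 0.5) * lsb + (rng.random() - 0.5) * lsb
    return round((x + noise) * steps) / steps

# Truncate a quiet ramp signal down to 16 bits with dither.
dithered = [dither_and_quantize(0.0001 * i, 16) for i in range(100)]
```

Note that the output never strays more than 1.5 LSB from the input: dither doesn't restore the lost bits, it only randomizes the error they leave behind.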

If you work at 24 bits in your DAW, and any digital outboard gear runs at the same resolution and frequency, you will only need to dither once, at the very end of your mixing, when you render the final song to CD format (16 bits / 44.1 kHz). The fewer times you go from a higher to a lower resolution, the higher the quality you will achieve in your mixes.

If you have outboard gear (or even software plugins) working at lower resolutions than your project's, you will lose information every time you send a track to any of those devices or plugins and, additionally, you will introduce the noise floor of the dithering operation. The more of these operations happen in your mix, the higher the noise floor, the farther your signal drifts from the original analog signal and the more digital artifacts are introduced.


To be continued ...

I wanted to start with these very basic digital audio concepts because there are even people running small studios who don't seem to be aware of these "little digital audio details": they work at 16 bits and 44.1 kHz in their mixes just because it takes less disk space, and they know nothing about dithering or about what happens when you mix material at different resolutions and frequencies.
If you mix yourself, or you go to a small studio, now you have some valuable information to gauge the technical level of your "Audio Engineer" and to guess the quality of the results.

I think that knowing the basic ideas of digital audio will let you better choose your tools (DAW, plugins, outboard gear, etc.).

There are many other critical concepts, such as jitter, that I will not discuss here, since they matter in more sophisticated environments, with a lot of outboard gear and a mixture of analog and digital electronic devices.

In the next blog entry on this matter, I will try to talk about the different phases of mixing, as well as about the 3 dimensions of a mix.