Signal/CreatingContent


Things to avoid

Extreme stereo

Extreme stereo is when the audio sits on the extreme left or extreme right of the stereo field, with no audio in the other channel. Because only one ear (or speaker) receives the sound, the overall effect is a reduced volume, and extreme stereo is not pleasant to listen to.

When mixing your audio, avoid balancing it at the far ends of the stereo field. It is fine to balance tracks slightly off-center. For example, with two participants you can create an illusion of space by placing one slightly to the left of center and the other slightly to the right.
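As a rough command-line sketch of this (assuming a sox install and two hypothetical mono recordings, interviewer.wav and interviewee.wav), the following mixes them into one stereo file with each voice placed slightly off-center instead of hard left and right:

# hypothetical filenames; the 0.7/0.3 weights keep each voice just off-center rather than hard-panned
sox -M interviewer.wav interviewee.wav mixed.wav remix 1v0.7,2v0.3 1v0.3,2v0.7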

Large differences in dynamic range

The dynamic range is the variance in amplitude. When digitising audio at 16 bits, each sample ranges between -32768 and 32767; this is the full dynamic range. What often happens when recording audio is that the different participants only use a portion of the available range, because their voices differ in loudness or they sit at varying distances from the microphone. Even for a single speaker the dynamic range varies, for example when the speaker moves back and forth from the microphone during the recording.
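A quick way to see how much of that range a recording actually uses is sox's stats effect (the filename here is hypothetical); compare the peak and RMS levels it reports for each participant's clip:

# prints peak level, RMS level and other statistics; -n means no output file is written
sox interview.wav -n stats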

To correct for those differences, we employ 'dynamic range compression'. A more advanced mixing desk will have compression on each individual microphone input, and some portable audio recorders have a form of compression available.
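If all you have is the raw recording, a rough software equivalent is sox's compand effect. This is only a sketch: the filenames are hypothetical and the parameters, taken from the general-purpose compression example in the sox manual, will need tuning by ear:

# general-purpose compression settings from the sox(1) manual; adjust to taste on a short test clip
sox raw.wav compressed.wav compand 0.3,1 6:-70,-60,-20 -5 -90 0.2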

Audacity is a popular free/libre open-source software package for editing audio, and it can apply dynamic range compression (or just 'compression') to your audio clips. Here is an example before compression:

[Image: Before compressor.png]

In the first three-fifths of the waveform the interviewer asks a question; in the remaining two-fifths the interviewee answers. Notice how the answer uses much less of the dynamic range.

And this is after:

[Image: After compressor.png]

The difference is obvious. Not only are the interviewer and interviewee now using the same range of amplitudes, but both have also been amplified to use as much of the dynamic range as possible. Good compression is a bit of an art, but by playing around on a short clip you can get a feel for it and improve your recordings. The compressed audio is much more pleasant to listen to, because the volume is not constantly changing, and it is easier to understand what is being said.
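If you prefer to do that last amplification step outside Audacity, one possible sketch (hypothetical filenames again) is to normalise the compressed clip with sox so the loudest peak sits just below full scale:

# gain -n normalises to 0 dBFS; the -1 leaves 1 dB of headroom
sox compressed.wav normalised.wav gain -n -1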

Plopping and hissing sounds

When recording speech, plosive sounds uttered by the participants can result in really sharp, short bursts of audio that sound unpleasant. A simple way to prevent most of this is to fit a pop filter or wind-screen to the microphone.

Professional studios usually employ screens mounted between the person speaking and the microphone. Another option is to use foam caps covering the microphone. These are dirt-cheap at your local pro-audio store. At Signal, these are known as 'microphone condoms'.

Some tips and tricks

Splitting a .mov file recorded Astera-style

(What software does Astera actually use to produce these .mov files?) The result is a .mov file with two separate audio tracks (as opposed to one stereo track), one for each side of the Skype conversation.

On Linux, one could do:

mplayer RoomIOI-ChrisJohnRiley_2010-08-19.mov -aid 0 -ao pcm:waveheader:file=roomioi-0.wav:fast
mplayer RoomIOI-ChrisJohnRiley_2010-08-19.mov -aid 1 -ao pcm:waveheader:file=roomioi-1.wav:fast

This gives you each participant in a separate wav file. Load these into Audacity for post-processing: apply compression to both channels, make sure the levels are about right, and then combine both into one stereo track. Avoid extreme stereo (see above).
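If mplayer is not available, a similar split can probably be done with ffmpeg by mapping the two audio streams to separate wav files (this assumes the two sides of the call really are stored as the first and second audio streams):

# -map 0:a:0 and -map 0:a:1 select the first and second audio stream of the input
ffmpeg -i RoomIOI-ChrisJohnRiley_2010-08-19.mov -map 0:a:0 roomioi-0.wav -map 0:a:1 roomioi-1.wav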