Basics of Sound Engineering

Sound engineering involves controlling an audio signal for capture by a recording device such as a tape recorder, or amplifying it for use in a live performance. A sound from an audio source such as a drum kit, guitar or flute is converted to an electronic signal, which the sound engineer then manipulates using various devices.
    The Recording Process

    • The recording process involves setting up a signal path so that the sound from an audio source is captured on some type of medium. Audio can be recorded onto many different formats of tape, from consumer cassettes to professional multitrack tape. It can also be stored digitally on a computer or stand-alone digital recorder. Often audio will pass through a mixing console and other equipment, such as effects processors, before being recorded. This enables the sound engineer to maintain a high degree of control over the recording process.
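
      In digital recording the captured signal is just a stream of numbered samples written to a file. As a minimal sketch, the Python example below generates one second of a 440 Hz test tone and stores it as a 16-bit WAV file using the standard-library wave module; the file name and tone are illustrative stand-ins for audio that would normally arrive from a recording interface.

        import math
        import struct
        import wave

        SAMPLE_RATE = 44100   # CD-quality rate: samples per second
        FREQ = 440.0          # test-tone frequency in Hz

        # One second of a sine wave, scaled to the 16-bit sample range.
        samples = [
            int(32767 * 0.5 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
            for n in range(SAMPLE_RATE)
        ]

        with wave.open("test_tone.wav", "wb") as wav:
            wav.setnchannels(1)              # mono
            wav.setsampwidth(2)              # 2 bytes = 16-bit samples
            wav.setframerate(SAMPLE_RATE)
            wav.writeframes(struct.pack("<%dh" % len(samples), *samples))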

    Microphone Technique

    • In order for a sound engineer's equipment to manipulate an audio signal, the sound must first be converted to an electronic signal. Microphones convert acoustic sounds (vocals or an acoustic guitar, for example) into a signal that the engineer can amplify, mix with other signals or record. Knowing where to place microphones in order to capture the best sound from an instrument is a vital part of sound engineering, and there are many types of microphones for recording different sources and instruments.

    Live Sound

    • Live sound engineering requires the performers' sound to be amplified at a volume suitable for the particular venue. The goal of live sound engineering is to make sure the sound from the performers is mixed correctly, amplified to an appropriate level and heard as evenly as possible throughout the venue. The live sound engineer controls the audio from a mixing console at the front-of-house position, facing the stage from out in the audience.
      The performers also have on-stage speakers directed at them. These are called fold-back or monitor speakers, and they allow each performer to hear their individual instruments at a specific level within the overall mix.
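
      As a rough illustration of how monitor mixes differ from the main mix, the sketch below (Python with NumPy, using invented source names and send levels) builds an independent blend of the same source signals for each performer, so the singer hears mostly vocals while the guitarist hears mostly guitar.

        import numpy as np

        n = 44100  # one second of audio at 44.1 kHz, silent placeholders
        sources = {"vocals": np.zeros(n), "guitar": np.zeros(n),
                   "drums": np.zeros(n)}

        # Each performer's fold-back mix is its own weighted sum of the
        # same sources; the weights below are invented for illustration.
        monitor_sends = {
            "singer":    {"vocals": 1.0, "guitar": 0.4, "drums": 0.3},
            "guitarist": {"vocals": 0.5, "guitar": 1.0, "drums": 0.4},
        }

        monitors = {
            performer: sum(level * sources[name]
                           for name, level in sends.items())
            for performer, sends in monitor_sends.items()
        }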

    Mixing

    • Mixing takes place in both live and recorded sound. In a live setting the sound engineer must balance the overall sound of the individual performers while they are playing. Each instrument has its own volume fader on the mixing console, allowing the engineer to adjust the levels individually and ensure that they stay balanced. The process is similar in a recording environment, except that the audio source is a previous recording rather than a live performance. For example, you might mix together kick drum, snare drum, hi-hat, bass, guitar, keyboard and vocals; in this instance you have seven faders to balance against each other in order to obtain the required sound.
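
      A fader position is usually thought of in decibels, which map to a linear gain of 10^(dB/20). As a minimal sketch (Python with NumPy; the track names and fader settings are invented), mixing amounts to scaling each track by its fader gain and summing the results:

        import numpy as np

        def db_to_gain(db):
            """Convert a fader level in dB to a linear amplitude multiplier."""
            return 10.0 ** (db / 20.0)

        def mix(tracks, faders_db):
            """Scale each track by its fader gain and sum them into one bus."""
            bus = np.zeros_like(next(iter(tracks.values())))
            for name, audio in tracks.items():
                bus += audio * db_to_gain(faders_db[name])
            return bus

        # Seven hypothetical tracks, silent placeholders for real audio.
        n = 44100
        names = ["kick", "snare", "hihat", "bass",
                 "guitar", "keyboard", "vocals"]
        tracks = {name: np.zeros(n) for name in names}
        faders_db = {"kick": -6.0, "snare": -8.0, "hihat": -14.0,
                     "bass": -7.0, "guitar": -10.0, "keyboard": -12.0,
                     "vocals": -4.0}
        mixed = mix(tracks, faders_db)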

    Effects and Processors

    • Sound engineers use various effects and processors to enhance live or recorded audio, particularly during the mixing stage. All or part of an audio signal is routed through an effects unit, which can be a piece of hardware or software, and altered in some way. Two of the most common processes are equalization and compression. An equalizer (EQ) boosts or attenuates frequencies, accentuating certain instruments or eliminating unwanted frequencies. Compression reduces the dynamic range of a piece of audio by turning down its loudest parts, which allows the overall loudness to be raised.
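
      The compression half of that description can be captured in a few lines. The sketch below (Python with NumPy) applies a simple static compression curve, assuming samples normalized to the range -1.0 to 1.0: levels above the threshold grow only 1/ratio as fast as the input, shrinking the dynamic range, and makeup gain then raises the overall level. The parameter values are illustrative, and a real compressor would also smooth the gain with attack and release times.

        import numpy as np

        def compress(audio, threshold_db=-20.0, ratio=4.0, makeup_db=6.0):
            threshold = 10.0 ** (threshold_db / 20.0)
            level = np.abs(audio)
            gain = np.ones_like(audio)
            over = level > threshold
            # Above the threshold the output level rises only 1/ratio as
            # fast as the input level, which reduces the dynamic range.
            gain[over] = (threshold
                          * (level[over] / threshold) ** (1.0 / ratio)
                          / level[over])
            makeup = 10.0 ** (makeup_db / 20.0)  # raise the overall level
            return audio * gain * makeup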

      Commonly used effects include reverbs, delays and phasers. A reverb is a dense wash of very short echoes, often used on vocals, that simulates a physical environment such as a bathroom, tunnel or concert hall. A delay gives a distinct echo effect, while a phaser creates sweeping effects by splitting the audio signal and offsetting the two copies against each other by tiny amounts.
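
      A delay is the easiest of these effects to sketch. The example below (Python with NumPy, with invented parameter values) feeds a delayed copy of a mono signal back into itself, so each echo is quieter than the last; a reverb builds on the same idea by layering many very short, dense delays.

        import numpy as np

        def delay_effect(audio, sample_rate=44100, delay_seconds=0.35,
                         feedback=0.4, wet=0.5):
            d = int(delay_seconds * sample_rate)  # delay length in samples
            buf = np.zeros(len(audio) + d)
            buf[:len(audio)] = audio
            # Feed each delayed sample back into the line so the echoes
            # repeat and gradually die away (feedback < 1).
            for i in range(d, len(buf)):
                buf[i] += feedback * buf[i - d]
            # Blend the dry (original) and wet (echoed) signals.
            return (1 - wet) * audio + wet * buf[d:d + len(audio)]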
