A very simplified explanation would be that any sound source creates vibrations that propagate through the air. That's not completely true; closer to reality would be saying it causes compressions and rarefactions (thinnings) of the air. But let's stick with vibrations for now, or even better, waves (as in sound waves).

These waves move like a chain reaction through the air.

When the waves reach our ear, they set the tympanic membrane (the eardrum) in motion, which causes a chain reaction in the mechanical parts of the ear, eventually creating tiny electrical signals that our brain interprets as sound.

If we look at ourselves, our primary source of sound is the glottis in the throat. As we push air through the throat, we control the glottis (or vocal cords) to vibrate, and those vibrations propagate up through the throat and into the skull, getting acoustic resonance support from different chambers inside the skull and throat, and are finally shaped by the movements of our mouth and lips, emerging as spoken or sung words, or whatever else we are aiming to mimic.

Should we look at technology, the equivalent of our glottis would be the loudspeaker. Its purpose is to get the air moving and recreate the waves (the sound) we want to listen to again. On the other side, the equivalent of our ear would be the microphone, which captures (records) the waves (the sound) we want to recreate and listen to later.

Now that we know that sound is waves transported through the air (or any other matter, but let's not get into that now), there are two more things we need to understand about these waves: amplitude and frequency.

Let's look at amplitude first. This is a way of describing the energy, or force, of the wave: how hard it will hit the membrane (or eardrum). The harder the impact, the louder we experience the sound.

The scale amplitude is measured in is the decibel, or dB for short.

Now, there are a million (well, not a million, but many) different dB scales, depending on whether you measure the pressure of the sound, whether you are into digital or analog recording, or whether you are looking at the actual electrical signal the sound waves generate. There is a lot of material to read about the decibel, and you can start with the wiki article if you are interested in knowing more. It may look like Greek, which is exactly why I won't go into depth on it here.

The less energy in the sound, the lower the dB value it gets (no matter which scale you use) and the quieter we experience the sound (as does the microphone).

The more energy in the sound, the higher the dB value it gets (no matter which scale you use) and the louder we experience the sound (as does the microphone).
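As a rough sketch of how the numbers behave (assuming an amplitude-based dB scale, where the level is 20 times the base-10 logarithm of the amplitude ratio; the reference value you pick is exactly what distinguishes the many dB scales from one another):

```python
import math

def amplitude_to_db(amplitude, reference=1.0):
    """Convert an amplitude ratio to decibels: 20 * log10(amplitude / reference)."""
    return 20 * math.log10(amplitude / reference)

# Halving the amplitude lowers the level by about 6 dB:
print(round(amplitude_to_db(0.5), 1))  # -6.0
# Doubling the amplitude raises the level by about 6 dB:
print(round(amplitude_to_db(2.0), 1))  # 6.0
```

Note how the scale is logarithmic: equal *ratios* of energy give equal *steps* in dB, which matches how our ears judge loudness.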

There are two aspects of amplitude you need to be aware of: one at the quiet end of the scale and one at the loud end.

Let's start with the loud end of the scale.

Should the sound contain a great deal of energy, it is very loud and may hurt our ears. Too much exposure may cause permanent hearing damage.

As for microphones: even though the membrane rarely breaks from too much energy in the sound, excessive energy still causes unwanted distortion in the signal, since the membrane cannot move past its maximum range, even as more energy keeps pushing on it.

This results in a clipped signal, which you can see (if you use some kind of audio recording software) as flattened tops and bottoms on the waveform.
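A minimal sketch of what that hard clipping does to the numbers (the sample values here are made up, normalized so the membrane's maximum range is ±1.0):

```python
def clip(samples, limit=1.0):
    """Hard-clip each sample: values beyond the maximum range are flattened."""
    return [max(-limit, min(limit, s)) for s in samples]

wave = [0.2, 0.8, 1.5, 0.9, -1.3, -0.4]  # 1.5 and -1.3 exceed the range
print(clip(wave))  # [0.2, 0.8, 1.0, 0.9, -1.0, -0.4]
```

The peaks that exceeded the range come out flat, which is exactly the straight line you see on the waveform in recording software.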

Exactly how much energy is too much for the membrane varies from microphone to microphone. You'll need to test your microphone and find its boundaries.

A general rule of thumb: a mic close to the sound source carries a greater risk of unwanted distortion, while a mic too far from the source picks up too much of the room's reflections rather than the sound source itself (which may not always be unwanted).

At the other end of the scale is when the sound is too quiet. Every electronic component (analog or digital) has its own self-noise. The lower, the better. And of course, the lower a component's self-noise, the more expensive it gets (in general). If you want to read more about this, look up signal-to-noise ratio. The basic principle is that you want the sound from your sound source to be louder than the noise the components emit. Much louder. A low ratio between the signal and the noise means that the unwanted noise from the equipment makes up a large part of your wanted sound (the noise pollutes your signal).
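The principle can be sketched in a few lines (assuming RMS signal and noise levels, and the same 20·log10 convention used for other amplitude-based dB scales):

```python
import math

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in dB: how far the signal sits above the noise floor."""
    return 20 * math.log10(signal_rms / noise_rms)

# A signal 100 times stronger than the equipment's self-noise: a comfortable ratio.
print(round(snr_db(1.0, 0.01)))   # 40
# A signal only twice as strong as the noise: the noise pollutes the signal.
print(round(snr_db(0.02, 0.01)))  # 6
```

The exact numbers are illustrative; the point is that the same noise floor is harmless under a loud signal and ruinous under a quiet one.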

Frequency. This is the last aspect of what sound is (that I will go into here).

Frequency is measured in hertz (Hz), which counts the number of wave cycles per second. Should you compare it to waves on the ocean, the frequency corresponds to the distance from the top of one wave to the top of the next: the longer the distance between the tops, the lower the frequency; the shorter the distance, the higher the frequency. (In the same example, amplitude would be the height of the waves: the higher a wave, the more energy, giving a high dB; the lower the wave, the less energy, giving a low dB.)
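To put numbers on the ocean-wave picture: the distance between two wave tops (the wavelength) follows from the frequency and the speed of the wave. A small sketch, assuming the speed of sound in air at room temperature (about 343 m/s):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (an assumption)

def wavelength_m(frequency_hz):
    """Distance between two wave tops, in metres, for a given frequency."""
    return SPEED_OF_SOUND / frequency_hz

print(round(wavelength_m(20), 2))     # 17.15 -> a very low tone: tops ~17 m apart
print(round(wavelength_m(20000), 3))  # 0.017 -> a very high tone: tops ~1.7 cm apart
```

So "longer distance between the tops" and "lower frequency" really are two ways of describing the same wave.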

A low frequency gives a low (dark) sound and a high frequency gives a high (light) sound. If you sit in front of a piano, the darker (lower) tones are to your left; they have lower frequencies. To your right are the lighter (higher) tones; they have higher frequencies.
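For a concrete sense of those numbers, the piano layout can be sketched using the common modern tuning convention (key 49, the A above middle C, tuned to 440 Hz, with each key to the right multiplying the frequency by the twelfth root of two; this convention is an assumption, not something from the text above):

```python
def piano_key_frequency(key_number):
    """Frequency in Hz of piano key n (1-88), with key 49 = A4 = 440 Hz."""
    return 440.0 * 2 ** ((key_number - 49) / 12)

print(piano_key_frequency(1))             # 27.5 -> the lowest A, far left
print(round(piano_key_frequency(88), 1))  # 4186.0 -> the highest C, far right
```

Moving left halves the frequency every twelve keys; moving right doubles it.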

The human ear can catch frequencies from about 20 Hz (the lowest) up to 20 000 Hz (often written as 20 kHz, as in 20 kilohertz, kilo meaning 1000). As we age, most of us lose the highest frequencies.

Different sound sources emit sound in different parts of the frequency range. For example, the human voice spans roughly 200 Hz to 3000 Hz (3 kHz), and a violin covers about the same range, while a cello goes from about 65 Hz to 1050 Hz and a piccolo flute from around 550 Hz to 4000 Hz (4 kHz).

A strange thing about most sound sources is that besides the pitch of the sound (the precise fundamental vibration it causes), it is not only that one frequency you hear: the sound is built up of overtones and subharmonics in different combinations, and together they add up to the whole character of the sound.
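A hypothetical sketch of that idea: one sample of a tone built from a fundamental frequency plus a couple of overtones. The amplitude values here are made up; changing their mix is what changes the character (timbre) while the pitch stays the same.

```python
import math

def timbre_sample(t, fundamental=220.0, overtone_amps=(1.0, 0.5, 0.25)):
    """One sample at time t (seconds) of a tone: the fundamental plus overtones.

    overtone_amps[n] is the amplitude of the (n+1)-th harmonic,
    i.e. the partial vibrating at (n+1) * fundamental Hz.
    """
    return sum(
        amp * math.sin(2 * math.pi * fundamental * (n + 1) * t)
        for n, amp in enumerate(overtone_amps)
    )

# At t = 0 every sine component starts at zero, so the sum is zero:
print(timbre_sample(0.0))  # 0.0
```

Two instruments playing the same 220 Hz note differ precisely in those amplitude lists, which is why a cello and a voice on the same pitch still sound nothing alike.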
