
Session 12
Block 5
Entertaining and explaining
Arab Open University
T215B
Communication and information technologies (II)
Session Outline
• Part 2: Sound capture and processing (Cont.)
  • Sound transducers
    • Microphones
    • Loudspeakers/headphones
  • Recording sound
  • Processing sound
  • Listening to sound
3. Sound transducers
• A transducer is a device that converts energy from one form into another.
• In the case of sound transducers:
  • A microphone converts the energy of a sound wave to electrical energy.
  • A loudspeaker or a pair of headphones carries out the reverse process.
• Microphones, loudspeakers and headphones are all sound transducers!
3.1 Microphones [1/14]
• A microphone converts the variations in air pressure that form sound waves into equivalent variations in electrical voltage.
• There are three main ways of doing this:
  • by using electromagnetic induction,
  • by using electrostatic induction,
  • by using the piezoelectric effect.
3.1 Microphones [2/14]
• Electromagnetic induction based microphones:
  • Electromagnetic induction was discovered by Michael Faraday in 1831.
  • Electromagnetic induction is a physical effect whereby, if an electrical conductor is moved in a magnetic field, it has an electrical voltage induced in it.
  • Electromagnetic induction is used in microphones (yet not all microphones).
  • A microphone that uses electromagnetic induction is called a moving-coil or dynamic microphone.
• Electric motor effect: the reverse process!
  • Placing a conductor in a magnetic field and applying a voltage across it such that a current flows will cause the conductor to move (or at least try to move).
  • The electric motor effect is used in most loudspeakers and headphones.
3.1 Microphones [3/14]
• Moving-coil microphones (electromagnetic induction):
• For an animation, visit: http://www.edumedia-sciences.com/en/a562-microphones
3.1 Microphones [4/14]
How does a moving-coil microphone work?
• The diaphragm is a lightweight and flexibly suspended membrane.
• When sound waves reach the diaphragm, they cause it to vibrate in sympathy with the pressure variations.
• A coil is attached to the diaphragm and is suspended in a strong magnetic field.
• The coil moves in sympathy with the diaphragm movement.
• Due to electromagnetic induction, the movement of the coil in the strong magnetic field induces a similar voltage variation across the ends of the coil.
• The small induced voltage can then be amplified to produce a more usable electrical signal.
3.1 Microphones [5/14]
• Moving-coil microphones (electromagnetic induction):
• Advantages:
  • Moving-coil microphones are typically quite rugged.
  • Moving-coil microphones are able to convert sounds more or less over the full range of audible frequencies.
• Disadvantage: Moving-coil microphones tend not to be as sensitive as electrostatic microphone types.
• Usage: Moving-coil microphones are most often used as handheld microphones for singers and speakers, where ruggedness is more important than sensitivity.
3.1 Microphones [6/14]
• Electrostatic induction based microphones:
  • Electrostatic induction is another physical effect, whereby the electrical charges in an object are redistributed because of the presence of nearby charges.
• There are two types of microphone that use this effect:
  • The condenser microphone
  • The electret microphone
3.1 Microphones [7/14]
• Condenser microphone (electrostatic induction):
• For an animation, visit: http://www.edumedia-sciences.com/en/a562-microphones
3.1 Microphones [8/14]
How does a condenser microphone work?
• There are two conducting plates, separated by a very thin air space, which are charged by placing a voltage between them.
• One of the plates is fixed.
• The other plate forms a lightweight diaphragm:
  • It moves towards and away from the fixed plate in response to the air pressure variations caused by sound waves.
• When the diaphragm moves, the electrostatic induction effect causes the charge to change, resulting in a current flow between the plates.
• The induced current variations (which follow the air pressure variations) cause similar voltage variations across a resistor that is placed in the connection to the power supply providing the charging voltage.
• These small voltage variations can then be amplified to produce a more usable electrical signal.
3.1 Microphones [9/14]
• Condenser microphone (electrostatic induction):
• Advantages:
  • Condenser microphones have a much better frequency response than all of the other types of microphone.
  • Their performance all round is better than the other types.
• Disadvantage:
  • Condenser microphones can be quite fragile.
• Usage:
  • Condenser microphones are usually used in recording studios and in fixed locations that are at a distance from the sound source.
  • Being relatively fragile, they are not very suitable for handheld applications.
3.1 Microphones [10/14]
• Electret microphone (electrostatic induction):
3.1 Microphones [11/14]
How does an electret microphone work?
• In the gap between the plates there is an insulating film that is electrically polarised during manufacture so that it has a built-in electrostatic charge:
  • It is the electrostatic equivalent of a permanent magnet.
  • An external charging voltage is not required.
• When the diaphragm plate moves in response to sound waves, a similarly varying voltage is induced between the plates.
• The induced voltage can be amplified to produce a more usable electrical signal.
• Because the signal source is very small and is easily affected by interference, it is usual to integrate a small amplifier into the microphone housing.
• This means that a power supply is usually required for electret microphones as well as for the condenser types.
• This supply is known as phantom power.
3.1 Microphones [12/14]
• Electret microphone (electrostatic induction):
• Advantages:
  • Electret microphones are very cheap to produce.
  • Electret microphones are quite rugged.
  • Electret microphones have a good frequency response.
• Usage:
  • Electret microphones are commonly used in computers and digital cameras.
3.1 Microphones [13/14]
• Piezoelectric effect based microphones:
  • Piezoelectric transducers exploit the phenomenon that certain types of insulating materials, known as piezoelectric materials, develop an electric charge when they are mechanically deformed.
  • Once again, the basic construction of a practical device is a sandwich, with the piezoelectric material in the middle and conducting surfaces on either side.
• For an animation, visit: http://www.edumedia-sciences.com/en/a562-microphones
3.1 Microphones [14/14]
How does a piezoelectric microphone work?
• One of the conducting plates is a flexible diaphragm which flexes in response to sound waves.
• Flexing of the piezoelectric material generates a charge on its surface, which induces a voltage on the conducting plates adjoining it.
• The greater the flexing of the diaphragm, the greater the distortion of the piezoelectric material and the greater the voltage.
• Usage:
  • Piezoelectric microphones used to be very common for low-cost applications such as domestic tape recorders, but have largely been supplanted by electret microphones.
  • Piezoelectric transducers are still used as contact microphones for amplifying acoustic instruments (check the guitar picture in slide 16).
Session Outline
• Part 2: Sound capture and processing (Cont.)
  • Sound transducers
    • Microphones
    • Loudspeakers/headphones
  • Recording sound
  • Processing sound
  • Listening to sound
3.2 Loudspeakers/headphones [1/6]
• A loudspeaker or a pair of headphones carries out the reverse process to that of a microphone:
  • It converts variations in electrical voltage to equivalent variations in air pressure that we then perceive as sound.
• There are fewer types of loudspeaker than there are types of microphone.
• The vast majority of loudspeakers use the electromagnetic effect.
3.2 Loudspeakers/headphones [2/6]
• A loudspeaker (electromagnetic induction):
  • A lightweight cone is flexibly suspended at its outer rim.
  • The suspension allows the cone to move backwards and forwards over a range of about a centimetre.
  • A relatively large signal current (a signal that depicts the sound to be created) is supplied to the voice coil, which is attached to the cone.
  • The voice coil is surrounded by a strong magnetic field.
  • The interaction of the electric current and the magnetic field generates a force that propels the cone: the electric motor effect is used to drive the cone.
  • The loudspeaker cone is driven rapidly backwards and forwards in alternation, acting like a piston.
  • So…?
3.2 Loudspeakers/headphones [3/6]
• A loudspeaker (electromagnetic induction):
  • The loudspeaker cone is driven rapidly backwards and forwards in alternation, acting like a piston:
    • It alternately compresses and expands the air in front of and behind the cone and hence generates sound.
  • So the speaker generates sound on both sides of itself, and these sounds are completely out of phase with each other!
  • This is potentially a source of imperfection in the sound:
    • Cancellation can occur if the pressure variations from one side of the cone reach and mix with those on the other side!
  • One way to stop the pressure waves from the back and front of the cone cancelling is to keep them separate:
    • For example, by mounting the loudspeaker in one side of an airtight rigid box, with the cone forming part of the airtight enclosure.
    • Now the sound from the rear of the cone is prevented from mixing with that from the front of the cone.
    • The listener hears only the sound from the front.
3.2 Loudspeakers/headphones [4/6]
Why is more than one loudspeaker unit generally used?
• A single loudspeaker generally cannot satisfactorily cover the whole of the audio frequency spectrum.
• Different loudspeaker units are designed to handle different parts of the audio spectrum.
• Typically there will be a low-frequency drive unit and a high-frequency drive unit.
• Sometimes there may also be a drive unit for middle frequencies.
• Sub-bass units for extremely low frequencies are sometimes used, but these tend to be housed as separate units.
3.2 Loudspeakers/headphones [5/6]
• Headphones (electromagnetic induction):
  • Headphones usually work on the same principle as loudspeakers, but the scale is much smaller.
  • Headphones need less power, and there is no need for an enclosure as the vibrations from one side of the ‘cone’ go straight into the ear.
  • There is little potential for them to mix with the vibrations from the other side.
3.2 Loudspeakers/headphones [6/6]
• Other loudspeaker types:
  • Loudspeakers based on the electrostatic effect do exist and can provide good results, BUT they tend to be rather fragile and require high polarising voltages.
  • Piezoelectric loudspeakers also exist, BUT they now tend only to be used in buzzers and sounders in computers, mobile phones, etc.
Session Outline
• Part 2: Sound capture and processing (Cont.)
  • Sound transducers
  • Recording sound
    • Making a recording
    • Analogue or Digital?
    • Digital Sound
    • Storing Sound
    • Digital Audio Compression
  • Processing sound
  • Listening to sound
4. Recording sound
• What can we do with the electrical representations of sound produced by a microphone?
• One of the main things we need to do with sound signals is to store them!
• How do we do so?
• In the end we will use a loudspeaker to convert them back to sound again…
4.1 Making a recording
• The electrical waveform produced by a microphone is nothing like the smooth and constant waveforms of pure tones!
• The exact form of the waveform in each cycle is constantly changing.
• ‘Real’ sound signal waveforms are neither sinusoidal nor, in the long term, repetitive.
• A ‘real’ sound signal is usually analysed in terms of the frequencies it contains:
  • It is generally composed of a mixture of many sinusoids of differing and varying frequencies and amplitudes.
4.2 Analogue or digital? [1/2]
• The electrical signals representing sound that come from a microphone are in the form of an analogue signal.
• In the past, sound signals from microphones and other sound sources were mixed, stored and generally processed using only analogue methods.
• Nowadays, all these processes are almost always done using digital techniques.
• Most sound processing and storage these days is done with sound in its digital form.
What are the advantages of working with sound in its digital form?
4.2 Analogue or digital? [2/2]
• Advantages of using digital techniques:
  • Immunity from signal corruption brought about by extensive processing or through transmission or storage.
  • Mixing and processing of sound comes down to a simple process of computation (‘number crunching’) rather than involving complicated analogue electronic circuits and devices.
  • Computer storage techniques can easily be used for storing sound in its digital form.
4.3 Digital sound [1/13]
How do we get the analogue signal given out by a microphone into a digital signal (a number form)?
• This is the process of analogue-to-digital conversion, which is based on two basic stages:
  1. Sampling
  2. Quantisation
4.3 Digital sound [2/13]
• Sampling:
  • “Sampling” an analogue signal is to measure the instantaneous amplitude of the analogue sound signal at regular intervals.
  • The result is a set of voltage levels which represent the sound signal’s level at the instants the samples were taken.
  [Figure: a voltage waveform S(t) sampled at regular intervals T, giving the samples {S0 = S(0T), S1 = S(1T), S2 = S(2T), …}. Taken from: http://en.wikipedia.org/wiki/Sampling_(signal_processing)]
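To make this concrete, here is a minimal Python sketch of sampling (illustrative only; the 8 kHz sample rate, the 1 kHz tone and all names are assumptions, not from the module text):

```python
import math

# Illustrative sketch: sample a 1 kHz pure tone at 8 kHz, i.e. measure its
# instantaneous amplitude every T = 1/8000 s (values assumed for illustration).
SAMPLE_RATE = 8000            # samples per second
TONE_FREQ = 1000              # Hz
T = 1.0 / SAMPLE_RATE         # the sampling interval

def s(t):
    """The analogue signal S(t): a pure tone of amplitude 1."""
    return math.sin(2 * math.pi * TONE_FREQ * t)

# The first few samples {S0 = S(0T), S1 = S(1T), S2 = S(2T), ...}
samples = [s(n * T) for n in range(8)]
print([round(v, 3) for v in samples])   # 0.0, 0.707, 1.0, 0.707, 0.0, ...
```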
4.3 Digital sound [3/13]
• Quantisation:
  • Quantisation is to divide the maximum voltage range of the analogue sound signal into a number of discrete voltage bands and assimilate each sound sample into a voltage band.
  • Uniform quantisation: each band has the same size.
    • Note that not all quantisation is uniform.
    • In T215B we will focus on uniform quantisation.
  • Each voltage band is represented by a number.
  • The result of “Sampling + Quantisation” is a string of numbers at regular intervals:
    • Each number represents a particular voltage level of the sound signal at one instant.
  [Figure: a voltage waveform with its range divided into Bands 0–8; the samples {S0 = S(0T), S1 = S(1T), S2 = S(2T), …} quantise to {Q(0T) = 4, Q(1T) = 6, Q(2T) = 8, Q(3T) = 7, …}.]
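A minimal Python sketch of uniform quantisation, assuming a ±1 V range and the nine bands (0–8) of the figure above; the names and example sample values are my own:

```python
# Illustrative sketch (names and values assumed): uniform quantisation of
# voltage samples into equal-sized bands, each identified by a number.
V_MIN, V_MAX = -1.0, 1.0    # assumed voltage range of the signal
NUM_BANDS = 9               # Bands 0..8, as in the figure above

BAND_SIZE = (V_MAX - V_MIN) / NUM_BANDS   # the size of each band

def quantise(sample_v):
    """Return the number of the voltage band the sample falls into."""
    band = int((sample_v - V_MIN) / BAND_SIZE)
    return min(max(band, 0), NUM_BANDS - 1)  # clamp to the valid range

samples = [0.02, 0.45, 0.88, 0.63]        # example sample voltages
print([quantise(v) for v in samples])     # -> [4, 6, 8, 7], as in the figure
```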
4.3 Digital sound [4/13]
• Digital-to-analogue conversion:
  • Loudspeakers only accept analogue signals too! Converting the signal back into its analogue form is done by a process called digital-to-analogue conversion.
  • The sample numbers are taken at the same rate that they were originally generated.
  • For each number, a voltage is created which is the centre value of the voltage band that the number represents (this centre value is called the “quantisation level” – to be discussed later).
  • The transitions between the sample voltages are then smoothed out to give an analogue signal.
How often does the sound signal need to be sampled?
How many voltage bands does the analogue voltage range need to be split into?
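A minimal sketch of the first step of digital-to-analogue conversion, assuming the same ±1 V range and nine bands as above (names and values are illustrative):

```python
# Illustrative sketch (names and values assumed): turn each stored band
# number back into the centre voltage of its band (the "quantisation level").
V_MIN, V_MAX = -1.0, 1.0                     # assumed voltage range
NUM_BANDS = 9                                # Bands 0..8, as before
BAND_SIZE = (V_MAX - V_MIN) / NUM_BANDS

def band_to_voltage(band):
    """Return the centre value of the given voltage band."""
    return V_MIN + (band + 0.5) * BAND_SIZE

numbers = [4, 6, 8, 7]                       # the stored string of numbers
voltages = [band_to_voltage(b) for b in numbers]
print(voltages)  # these stepped voltages are then smoothed into an analogue signal
```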
4.3 Digital sound [5/13]
How often does the sound signal need to be sampled?
What is the minimum value of the “sampling rate”?
• Sampling rate:
  • The sampling rate is the rate at which the analogue sound signal is sampled.
  • The sample rate needs to be at least twice the highest frequency contained in the sound.
  • Why? → In order to be able to convert the full range of frequencies that an analogue sound signal contains.
  • In other words, the signal must be sampled at least twice within each cycle of the highest frequency contained in the sound.
4.3 Digital sound [6/13]
What is the minimum value of the sound signal’s sampling rate?
• A sound’s minimum sampling rate:
  • The frequency range of human hearing is about 20 Hz to 20 kHz, so the highest frequency is 20 kHz.
  • The sampling frequency should be at least twice the highest frequency contained in the sound signal:
    • Sampling frequency ≥ 2 × 20 kHz = 40 kHz.
    • The signal should be sampled at least 40 000 times each second.
  • A sample rate of 44.1 kHz is used in many digital systems (including audio CDs).
  • With the advent of faster and faster computers and digital processing devices, even higher sample rates are often used.
4.3 Digital sound [7/13]
What happens if the minimum sampling rate is NOT respected?
• Aliasing:
  • The figure below shows a sinewave (solid line) that is being sampled less than twice each cycle.
  • The samples are represented by blobs.
  • When the minimum sampling rate is not respected, another sinewave with a lower frequency can be drawn through these samples → shown as a dashed sinewave.
  [Figure: the original waveform (solid) and its alias (dashed) – a lower-frequency wave obtained after digital-to-analogue conversion.]
4.3 Digital sound [8/13]
• Aliasing:
  • Aliases occur not only in digital sound, but in other sampled systems.
  • Aliases in digital sound are potentially a problem because they can be audible.
  • To avoid aliases, the minimum sampling rate has to be respected!
  [Figure, left panel: less than 2 samples/cycle (minimum sampling rate NOT RESPECTED) – an alias exists: a waveform with a lower frequency that passes through all the samples can be found. Right panel: more than 2 samples/cycle (minimum sampling rate RESPECTED) – no aliases: a waveform with a lower frequency that passes through all the samples CANNOT be found!]
4.3 Digital sound [9/13]
How many voltage bands does the analogue voltage range need to be split into?
• Quantisation levels – going back to quantisation:
  • Recall that the samples (from the sampling process) are allocated to a voltage band and are then assimilated to the centre value of that voltage band.
  • The centre value of each voltage band represents a quantisation level.
  • In other words, the sample is approximated to the nearest quantisation value (quantisation level).
  • The space between each quantisation level is called the quantisation interval.
  • There is almost always a degree of approximation involved in the quantisation process.
4.3 Digital sound [10/13]
[Figure: a voltage waveform quantised to levels −4 … 4, showing a real sample, the corresponding quantised sample and the quantisation error, together with the “analogue signal” after digital-to-analogue conversion (an illustrative drawing).]
4.3 Digital sound [11/13]
• The group of errors that occur on the “real” voltage samples is called “quantisation error”.
• Quantisation error can be interpreted as noise superimposed on the desired signal.
• Quantisation error is treated as a kind of added noise.
How is the quantisation error minimised?
• To reduce the quantisation error, the elementary errors should be reduced!
• Errors can be reduced by:
  • Increasing the number of quantisation levels (i.e. reducing the quantisation interval) – see the sketch below
  • Making the signal large
  • Increasing the number of quantisation levels AND making the signal large.
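A small sketch of the first option, assuming a ±1 V range: more quantisation levels means a smaller quantisation interval, and with it a smaller worst-case error:

```python
# Illustrative sketch (values assumed): the more quantisation levels, the
# smaller the worst-case quantisation error (half the quantisation interval).
V_MIN, V_MAX = -1.0, 1.0            # assumed voltage range

for num_levels in (8, 16, 256):
    interval = (V_MAX - V_MIN) / num_levels
    worst_error = interval / 2      # a sample is at most half an interval
                                    # away from the nearest level
    print(f"{num_levels:4d} levels -> worst-case error {worst_error:.4f} V")
```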
4.3 Digital sound [12/13]
• The larger the number of quantisation levels, the better the quality, but the greater the amount of digital data that is generated.
• There is a trade-off here between quality and the number of quantisation levels:
  • In some applications, such as telephony, it is acceptable to use fewer levels and thus reduce the amount of digital data.
  • In other situations, quality is vital, so many more levels need to be used.
• Important note: if there is stereo sound, then there are two sound signals and so twice the number of samples to deal with in the same time.
4.3 Digital sound [13/13]
• Activity 2.19: In an audio CD, each of the sound signals in the stereo sound is sampled at a rate of 44.1 kHz. When an audio CD is being played, at what rate do the sound samples appear from the CD?
• Sol.:
  • If the sampling rate is 44.1 kHz, then for each channel there will be 44 100 samples per second.
  • For the full stereo sound there will be twice this number of samples:
    • Sound signal rate = 2 × 44 100 = 88 200 samples per second.
Session Outline
• Part 2: Sound capture and processing (Cont.)
  • Sound transducers
  • Recording sound
    • Making a recording
    • Analogue or Digital?
    • Digital Sound
    • Storing Sound
    • Digital Audio Compression
  • Processing sound
  • Listening to sound
4.4 Storing sound [1/3]
• Once a sound has been digitised:
  • It can simply be sent along a suitable transmission channel to a digital-to-analogue converter and loudspeaker at the other end.
  • Or we can STORE the string of numbers (the output of the analogue-to-digital conversion).
• Any device that can store large quantities of numbers can usually be used to store sound.
• Examples:
  • An audio CD, which works on the same light-based principle as a CD-ROM.
  • Flash memories, as used in modern portable sound players (iPod, etc.).
  • Devices that contain a standard computer hard disk.
• Digital sound samples in their raw form do occupy a substantial amount of memory.
4.4 Storing sound [2/3]
• Activity 2.21: Consider an audio CD that contains exactly one hour of stereo sound. Ignoring any additional requirements for format information and other data to ensure the integrity of the sound samples, how many bytes of storage does the CD need to contain? Assume the sample rate is 44 100 samples per second and each sample requires two bytes of storage.
• Sol.:
  • One hour is 60 minutes or 3600 seconds.
  • If the sample rate is 44 100 samples per second, then for each of the stereo channels the storage requirement is 3600 × 44 100 × 2 = 317 520 000 bytes (each sample requires two bytes of storage).
  • So, for both of the stereo channels we need: 317 520 000 × 2 = 635 040 000 bytes.
• Note: Original versions of audio CDs could store 650 MB, which is close to this value; more modern CDs can store more than this.
4.4 Storing sound [3/3]
• Activity 2.22: In Activity 2.19, you found that a stereo sound signal using the sampling rate used in audio CDs generated 88 200 samples per second. I’m sure you are aware that broadband speeds are quoted in bits per second, a bit being a single binary digit which can be a 1 or a 0. Assuming 16 bits need to be used to represent each sound sample from each sound channel, what would be the minimum broadband bit rate needed to carry CD-quality stereo sound in real time?
• Sol.:
  • Bit rate = sampling rate × number of bits per sample.
  • If there are 88 200 samples per second and each sample requires 16 bits, then the bit rate must be 88 200 × 16 = 1 411 200 bits per second.
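The arithmetic of Activities 2.21 and 2.22 as a small Python sketch (the constants come from the activities; the variable names are my own):

```python
# Illustrative sketch of the arithmetic in Activities 2.21 and 2.22.
SAMPLE_RATE = 44_100        # samples per second, per channel
BYTES_PER_SAMPLE = 2        # 16 bits per sample
CHANNELS = 2                # stereo
SECONDS = 3600              # one hour

storage_bytes = SECONDS * SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS
bit_rate = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE * 8

print(f"CD storage needed: {storage_bytes} bytes")        # 635 040 000
print(f"Real-time bit rate: {bit_rate} bits per second")  # 1 411 200
```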
4.5 Digital audio compression [1/7]
• Digital compression involves minimising the amount of digital data that a sound signal requires.
• It is a way to reduce the amount of digital data without affecting the sound (or without causing an effect that can be heard).
• Advantages of data compression for digital audio:
  • More data can be stored on a given storage medium.
  • The playing time on a given storage medium is extended.
  • Faster transfers of the sound can be obtained for a given transmission channel data rate.
• There are two fundamentally different ways of compressing digital sound data:
  • Lossless compression
  • Lossy compression
4.5 Digital audio compression [2/7]
• Lossless compression:
  • In lossless compression, the digital data is stored in a compressed form such that it can be recovered, sample for sample, with nothing altered.
  • Lossless data compression is commonly used by the computer industry to reduce the amount of storage.
  • A computer ZIP file is an example of lossless compression of digital data.
  • Reminder (T215A):
    Compression ratio = uncompressed file size / compressed file size
• Compressing sounds with lossless compression:
  • In general, lossless compression is not very efficient with digital sound.
  • A stream of sound samples does not lend itself well to the deterministic characteristics required by lossless compression methods.
  • BUT lossless compression is used for high-quality digital audio transmission and storage.
    • For example, compression in an audio CD achieves a lossless compression factor of around 2.5.
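A minimal sketch of lossless compression using Python’s zlib module (the data here is invented; real sound samples would compress far less well, as noted above):

```python
import zlib

# Illustrative sketch (data invented): lossless compression round-trips the
# data exactly, and the compression ratio is uncompressed / compressed size.
original = b"sound sample data " * 500       # stand-in for digital audio bytes

compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original                  # recovered with nothing altered
ratio = len(original) / len(compressed)
print(f"Compression ratio: {ratio:.1f}")
```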
4.5 Digital audio compression [3/7]
• Compressing sounds with lossy compression:
  • With lossy compression, some information in the digital audio signal is removed before the remaining data is compressed using lossless compression.
  • This information cannot be recovered, for once it is removed it is lost forever.
  • Lossy compression should not produce any noticeable effects on the sound quality.
    • This depends on what is taken away and how the final sound is reproduced.
  • Lossy compression relies upon factors of human perception to allow some information to be removed from the data.
    • For this reason, the term perceptual coding may be used to describe lossy compression methods.
4.5 Digital audio compression [4/7]
• Audio perceptual compression (lossy compression):
  • Audio perceptual compression removes the parts of the signal that have been found to be inaudible to human listeners.
  • This is usually done by analysing the frequency and amplitude content of a digital audio signal and comparing it to a model of human auditory perception.
  • A decoder then reconstructs the signal, which should be perceived by the listener to be the same as the original signal.
4.5 Digital audio compression [5/7]
• MP3 audio compression (a lossy compression):
  • The Moving Picture Experts Group (MPEG) defines a set of video and audio compression and transmission standards.
  • MPEG was formed by the International Organization for Standardization (ISO).
  • The MPEG-2 standard is one of these standards.
  • The MPEG audio standards define three layers of compression: Layers 1, 2 and 3.
    • Layer 3 has become the most used and is now almost universally known as MP3.
  • MP3 audio compression offers acceptable audio quality with a high compression ratio, in the region of 11:1.
4.5 Digital audio compression [6/7]
• MP3 coding (lossy compression):
  • The stream of digital audio samples is sent first to a filter bank which splits the audio into 32 frequency bands that match the frequency characteristics of the human ear.
  • The sound content of each band is analysed and coded using a psychoacoustic algorithm such as to require the lowest possible amount of data for the given content.
  • Sounds that cannot be heard are removed. Examples:
    • Sounds masked by louder ones and sounds below the hearing threshold.
    • As the ear cannot determine the position of sounds with frequencies below 100 Hz, the stereo information for those frequencies is also discarded.
4.5 Digital audio compression [7/7]
• MP3 coding (lossy compression):
  • By varying the bit allocation, the coder can allocate more bits to complex sounds and fewer to less complex sounds.
  • The compressed digital audio data is divided into blocks, and lossless Huffman coding is used to reduce the data requirement to a minimum.
• MP3 decoding (lossy decompression):
  • When an MP3-coded sound is to be played back, decoding requires the data blocks to be separated out.
  • Then the frequency data can be reconstructed and used to rebuild the original waveform.
Session Outline
• Part 2: Sound capture and processing (Cont.)
  • Sound transducers
  • Recording sound
  • Processing sound
    • Adding Effects
  • Listening to sound
5. Processing sound
• In its digital form, any sort of processing of the sound is simply a matter of ‘number crunching’ (see the sketch below):
  • If you wish to reduce the level of the sound by one half, then you simply divide each sample value by two.
  • If you want to cut a part of a sound, then you simply delete the affected samples.
  • If you want to mix two sounds, then you multiply each sample from each sound signal by the required fraction and add the results together.
    • The same sampling rate should be used and the two sounds must be synchronised.
Let us discover some editing effects!
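A minimal sketch of this ‘number crunching’, with invented sample values:

```python
# Illustrative sketch (sample values invented): processing digital sound is
# just arithmetic on the list of sample values.
a = [0.2, 0.5, -0.3, 0.8, -0.6]     # samples of sound A
b = [0.1, -0.4, 0.7, 0.0, 0.3]      # samples of sound B (same sample rate,
                                    # synchronised with A)

halved = [s / 2 for s in a]                            # halve the level
cut = a[:2] + a[4:]                                    # cut out samples 2..3
mixed = [0.5 * sa + 0.5 * sb for sa, sb in zip(a, b)]  # mix A and B 50/50

print(halved, cut, mixed, sep="\n")
```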
5.2 Adding effects [1/5]
• Sound effects can be subdivided into three broad categories:
  • Those that change the amplitude of the sound in some way
  • Those that affect the frequency composition of the sound
  • Other effects that carry out specialist operations
5.2 Adding effects [2/5]
• Amplitude effects:
  • The most commonly used effects that affect just the amplitude of a sound are amplification, normalisation and fading.
  • Amplification is simply increasing (or decreasing) the level of a complete sound by a fixed amount – it’s just like turning the volume up or down.
  • Fading is the process whereby the sound level is varied in some predetermined way as the sound progresses (see the sketch below).
    • The most obvious example of this is fading the sound in at the start and fading it out at the end.
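A minimal sketch of a linear fade-in, with an assumed fade length and test signal:

```python
# Illustrative sketch (names and lengths assumed): a linear fade-in applied
# to the first `fade_len` samples of a sound.
def fade_in(samples, fade_len):
    """Scale the first fade_len samples from silence up to full level."""
    faded = list(samples)
    for n in range(min(fade_len, len(faded))):
        faded[n] *= n / fade_len        # gain rises linearly from 0 to 1
    return faded

sound = [1.0] * 10                      # a constant test signal
print(fade_in(sound, 5))                # [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, ...]
```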
5.2 Adding effects [3/5]
• Amplitude effects: normalisation
• How does it work?
  • The sound is first scanned to determine the level of the loudest part.
  • The factor by which this largest sample value needs to be multiplied so that it attains a predefined ‘normalisation level’ is then calculated.
  • Every sample value is multiplied by this factor (see the sketch below).
• What is normalisation used for?
  • This effect increases the level of a sound so that it uses the top end of the available amplitude range.
  • This is done so that low-level noise and interference that may be introduced during later processing become less audible.
  • Recall that one way to decrease the quantisation error is to “make the signal large”.
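A minimal sketch of normalisation as described above, assuming a target peak level of 1.0 (names and values are my own):

```python
# Illustrative sketch (names and level assumed): normalise a sound so its
# loudest sample reaches a predefined normalisation level.
NORMALISATION_LEVEL = 1.0                     # assumed target peak level

def normalise(samples):
    peak = max(abs(s) for s in samples)       # level of the loudest part
    factor = NORMALISATION_LEVEL / peak       # factor the peak needs
    return [s * factor for s in samples]      # scale every sample by it

quiet = [0.05, -0.2, 0.1, -0.25]
print(normalise(quiet))     # [0.2, -0.8, 0.4, -1.0]: peak now at full level
```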
5.2 Adding effects [4/5]
• Frequency effects:
  • The group of effects that change the frequency composition of a sound in some way is often called equalisation.
  • Equalisation boosts frequency ranges that have been reduced through a transmission medium or audio recorder, in order to ‘equalise’ signal frequencies in these ranges back to their original levels.
  • Today, equalisation covers not only boosting certain frequency ranges but also cutting them.
  • Sometimes equalisation is fixed and is inbuilt into a system.
5.2 Adding effects [5/5]
• Specialist effects:
  • Specialist effects involve altering the amplitude and/or frequency composition of the sound, as well as processing the sound in other ways.
  • Many of these are more relevant to recordings of music than of speech.
• Examples:
  • Echo is the process whereby a delayed version of the sound is added to the un-delayed sound (see the sketch below).
  • Reverberation is the effect heard in a large building such as a cathedral, where the sound bounces round the building as it is reflected multiple times by the walls, floor and ceiling.
    • To recreate this effect artificially, it is necessary to produce multiple delays and add them back to the live sound in proportions that reduce as the delays get longer.
  • Chorus is an effect only heard with music, and occurs when a number of similar instruments or voices play/sing the same tune together.
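A minimal sketch of the echo effect, with an assumed delay and echo gain:

```python
# Illustrative sketch (delay and gain assumed): a simple echo - a delayed,
# attenuated copy of the sound added back to the un-delayed sound.
def add_echo(samples, delay, gain=0.5):
    """Add an echo `delay` samples late, at `gain` times the original level."""
    out = list(samples) + [0.0] * delay        # room for the echo tail
    for n, s in enumerate(samples):
        out[n + delay] += gain * s             # delayed copy mixed back in
    return out

dry = [1.0, 0.0, 0.0, 0.0]                     # a single click
print(add_echo(dry, delay=2))                  # [1.0, 0.0, 0.5, 0.0, 0.0, 0.0]
```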
Session Outline
• Part 2: Sound capture and processing (Cont.)
  • Sound transducers
  • Recording sound
  • Processing sound
  • Listening to sound
    • Distortion
    • Noise
    • Hum
6. Listening to sound
• Some technical defects commonly occur in sound recordings.
• This explains why, when we hear sounds, the sound might sometimes not be as expected.
6.1 Distortion
• One of the major concerns in any recording is to ensure that the range of sound volumes to be recorded can be accommodated by the recording equipment.
• If the sound level is too low, then the background noise in the equipment may become significant.
• If the sound level is too high, the recording equipment may not be able to cope with the very loud parts of the sound and may become overloaded.
• When this happens, distortion is introduced into the recorded sound.
6.2 Noise [1/2]
• All recording and sound-reproducing equipment will introduce noise.
• Every analogue signal process, including amplifying, mixing and some kinds of filtering, adds noise.
• In technical terms, audible noise implies a sound that contains a random mixture of all audible frequencies.
• Audible noise is often used to describe any objectionable, unwanted sound, like mains hum.
• The designer should ensure that the noise level is very much lower than the softest sound that needs to be recorded.
6.2 Noise [2/2]
How is noise reduced?
• Audio compression is not to be confused with digital audio compression:
  • Audio compression keeps signal levels as high as possible, so that added noise will be much quieter in comparison.
  • Before the sound is recorded, the signal is compressed into a smaller dynamic range.
  • The signal is then amplified to a level such that it is just below the level that would cause distortion (see the sketch below).
• Another way of minimising noise in the analogue domain is to make sure analogue recordings are not continually re-recorded.
• Note: Noise in digital systems occurs during the analogue-to-digital and digital-to-analogue processes.
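A minimal sketch of this kind of (analogue-style) audio compression applied to sample values, with an assumed threshold and compression ratio:

```python
# Illustrative sketch (threshold and ratio assumed): simple dynamic range
# compression - samples above a threshold are scaled down, squeezing the
# signal into a smaller dynamic range before it is amplified.
THRESHOLD = 0.5          # assumed level above which compression applies
RATIO = 2.0              # 2:1 compression above the threshold

def compress(sample):
    """Reduce the amount by which a sample exceeds the threshold."""
    magnitude = abs(sample)
    if magnitude <= THRESHOLD:
        return sample
    squeezed = THRESHOLD + (magnitude - THRESHOLD) / RATIO
    return squeezed if sample > 0 else -squeezed

signal = [0.1, 0.6, -0.9, 1.0]
print([compress(s) for s in signal])  # loud peaks pulled towards the threshold
```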
6.3 Hum
• In most countries the mains electricity supply is provided as an AC (alternating current) signal:
  • The voltage between the live and neutral connectors of a mains plug alternates sinusoidally between a positive and a negative voltage.
  • The electrical signal’s frequency is usually 50 Hz.
• This frequency is within the normal range of human hearing and, heard as sound, produces a low-pitched humming sound.
• The mains supply can ‘contaminate’ a sound signal and produce an unwanted low sound.
• This unwanted sound is called mains hum.
6.4 Unintentional equalisation
• Unintentional equalisation occurs when equalisation is added unintentionally through defects in the sound system.
• Example 1:
  • The sort of loudspeakers found in small portable radios, where the bass response of the loudspeakers is severely lacking.
  • Low-pitched sounds played through such loudspeakers will not be heard as loud as they should be (if they are heard at all).
• Example 2:
  • The way the sound is processed in the telephone system, where both the low and the high frequencies in the sound are deliberately removed.
  • This helps in maximising the efficiency of the telephone system.
  • This degrades the quality of the sound, but is acceptable for the telephone system, which is only designed for speech.
The Final Exam is approaching!
Be Well Prepared!