3 Audio - dooncomputing
Higher Computing
Multimedia technology
Audio
Higher
Int. 2
Sound cards
Sound cards carry out the following jobs:
recording audio, playback of digitised audio,
playback of audio CDs, sound synthesis,
interfacing with MIDI instruments, digital input
and output for transferring files.
To help them carry out these tasks, sound cards
have ADCs and DACs as well as DSPs.
Sound cards use the following techniques for
capturing sound data.
Higher
Analogue to Digital Convertors
(ADCs)
The analogue signal from a sound source, such as a
microphone or line-in socket, is fed to an analogue
to digital convertor. The ADC receives a
continuous analogue signal which it converts
into digital data representing the sound.
Higher
Digital to Analogue Convertors
(DACs)
The digital to analogue converter (DAC) takes the
digital data encoding the sound and changes it
into a varying analogue signal.
This is fed out through the sound line out socket
and is used to control the diaphragm in the
speaker which creates the sound waves you hear.
Higher
Pulse Code Modulation (PCM)
A method of encoding information in a signal by
varying the amplitude of pulses, with the pulse
amplitude limited to a set of predefined values.
This technique is used by codecs to convert an
analogue signal into a digital bit stream: the
amplitude of the analogue signal is sampled at
regular intervals and each sample is converted into
a digital value. PCM is described as raw because the
digitised data has not been processed further, for
example by compressing it.
Raw PCM sound files can be very large indeed.
Higher
PCM (Cont.)
The original signal is given.
Higher
PCM (Cont.)
This sound wave is then sampled at
regular intervals.
Higher
PCM (Cont.)
The signal is then split into different levels.
Higher
PCM (Cont.)
Each pulse level is then recorded,
giving us the digital signal.
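As a rough illustration of these steps, here is a short Python sketch that samples a sine wave at regular intervals and records the nearest quantisation level for each sample. The tone frequency, sampling rate and bit depth are example values chosen for the sketch, not figures from the slides.

```python
import math

# Example values chosen for the sketch: a 440 Hz tone, 8 kHz sampling rate, 4-bit depth
TONE_HZ = 440
SAMPLE_RATE = 8000          # samples taken per second
BIT_DEPTH = 4               # 2**4 = 16 quantisation levels
LEVELS = 2 ** BIT_DEPTH

def pcm_encode(duration_s=0.001):
    samples = []
    for n in range(int(SAMPLE_RATE * duration_s)):
        # 1. Sample the analogue signal at regular intervals
        value = math.sin(2 * math.pi * TONE_HZ * n / SAMPLE_RATE)   # range -1 to 1
        # 2. Split the range into levels and record the nearest pulse level
        level = round((value + 1) / 2 * (LEVELS - 1))
        samples.append(level)                                       # the digital signal
    return samples

print(pcm_encode())   # a list of 8 levels, each between 0 and 15
```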
Higher
Int. 2
Bit rate
This term describes the number of bits that are
sent in one second when a sound file is
transmitted or played. Stereo CD audio needs a
bit rate of about 1378 kbps, while an MP3 file
needs only 384 kbps.
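The CD figure can be checked from the way CD audio is recorded (44.1 kHz sampling rate, 16 bits per sample, 2 channels); a quick calculation, assuming 1 kb = 1024 bits:

```python
# Bit rate = sampling frequency x sampling depth (bits) x number of channels
cd_bits_per_second = 44_100 * 16 * 2      # 1,411,200 bits per second
print(cd_bits_per_second / 1024)          # ≈ 1378 kbps, the stereo CD figure above
```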
Higher
Adaptive Delta Pulse Code
Modulation (ADPCM)
This compresses data that has been encoded in PCM
form. It stores only the changes between the samples, not
the samples themselves. This compresses PCM data by a
ratio of 4:1 since it uses only 4 bits for the sample change
rather than the 16 bits for the original PCM value.
The Microsoft WAV format can use ADPCM, which means that
many Windows programs can play WAV files using the
Windows sound driver. WAV is the standard format for storing
sound files on Windows systems; it can be sampled at a
bit depth of either 8 or 16 bits and at one of the following
sampling rates: 11.025 kHz, 22.05 kHz or 44.1 kHz. WAV
files can be very large: one minute of CD-quality stereo sound
takes up around 10 MB of storage.
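The sketch below shows the delta idea behind ADPCM in a much simplified form: it stores only a small, clamped difference between successive samples. Real ADPCM codecs, such as IMA or Microsoft ADPCM, also adapt the step size represented by each 4-bit code, which this sketch leaves out.

```python
def delta_encode(samples):
    """Store only the change between successive samples (simplified, non-adaptive)."""
    deltas, previous = [], 0
    for sample in samples:
        diff = sample - previous
        # A 4-bit code can only hold a small range of changes, so clamp the difference
        diff = max(-8, min(7, diff))
        deltas.append(diff)
        previous += diff          # track the same running value the decoder will see
    return deltas

def delta_decode(deltas):
    samples, value = [], 0
    for diff in deltas:
        value += diff
        samples.append(value)
    return samples

print(delta_decode(delta_encode([0, 3, 5, 6, 4, 1])))   # recovers the original samples
```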
Higher
ADPCM (Cont.)
Higher
Int. 2
Resource Interchange File Format
(RIFF)
This is a file format for multimedia data on
PCs. It can contain bit-mapped graphics,
animation, digital audio and MIDI data.
The WAV file format is the RIFF format for
storing sound data.
Higher
Int. 2
MP3
Its full title is MPEG-1/2 Layer 3. It is a format for
compressing sound which uses a lossy technique: it
filters out aspects of the original sound that the human
ear cannot detect, so the quality is not seriously
degraded. After filtering it applies further compression
techniques; a form of coding called Huffman encoding is
used to compress the data once it has been captured.
One minute of music takes up around 1 MB of space.
MP3 allows compression of CD-quality audio files by a
factor of 12 with little loss in quality. This explains why it
is such a widely used format.
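The figure of around 1 MB per minute follows from the CD figures and the compression factor of 12:

```python
# One minute of CD-quality stereo audio, then compressed by a factor of 12
cd_bytes_per_minute = 44_100 * 2 * 2 * 60          # ≈ 10.1 MB uncompressed
mp3_bytes_per_minute = cd_bytes_per_minute / 12
print(mp3_bytes_per_minute / (1024 * 1024))        # ≈ 0.84 MB, i.e. roughly 1 MB
```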
Higher
Int. 2
MIDI
The Musical Instrument Digital Interface
(MIDI) is a standard interface used by musical
instruments like keyboards, synthesisers and
drum machines which enables notes played on
an instrument to be saved on a computer
system, edited and played back through a
MIDI device.
The information about the sound is stored in a
MIDI file which the computer can then use to
tell the instrument which notes to play.
http://www.cool-midi.com/
Higher
Int. 2
MIDI (Cont.)
When a MIDI sound is stored in a computer system the
following attributes or properties of the sound are stored:
instrument, pitch, volume, duration and tempo.
• Instrument: defines the instrument being played. Each built-in
sound on a MIDI keyboard has an instrument number assigned
to it. When selected, the instrument number is saved by the
computer so that, on playback, the notes in the musical
piece are played with the sound of that specific instrument.
• Pitch: sets the musical tone of a note, which is determined
by the frequency.
• Volume: controls the loudness or amplitude of the note.
• Duration: determines the length of a note (the number of
beats).
• Tempo: the rate or speed at which the piece of music is
set.
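As an illustration only, the sketch below stores these attributes for a single note. The class and field names are made up for the example and are not part of any real MIDI library.

```python
from dataclasses import dataclass

@dataclass
class MidiNote:
    """One note as a MIDI file might describe it (illustrative names only)."""
    instrument: int    # instrument number, e.g. 0 = acoustic grand piano in General MIDI
    pitch: int         # MIDI note number, e.g. 60 = middle C
    volume: int        # loudness (velocity), 0-127
    duration: float    # length of the note in beats

# A middle C played at medium volume on a piano for one beat, at 120 beats per minute
note = MidiNote(instrument=0, pitch=60, volume=64, duration=1.0)
tempo_bpm = 120
print(note, tempo_bpm)
```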
Higher
Int. 2
MIDI (Cont.)
Advantages of MIDI
• Allows musical pieces or messages to be exchanged and edited on different
computers.
• It is an easily manipulated form of data: changing the tempo is a
straightforward matter of changing one of the attributes.
• A musician can store the messages generated by many instruments in one
file. This enables a musician to put together and edit a piece of music
generated on different MIDI instruments with complete control over each note
of each instrument.
• Produces much smaller file sizes than other sound formats.
• Because it is digital it is easy to interface instruments, such as keyboards, to
computers. The musician can store music on the computer and the computer
can then play the music back on the instrument.
Disadvantage
• Browsers require separate plug-ins to play MIDI files.
Higher
Normalising sound files
When sound files are sampled, some sounds are louder
than others. Background noises or voices might be too
loud or too quiet, and different music tracks might play
back at different levels. To avoid this, sound files are
normalised.
This means that the signal levels are adjusted so that they
all fall into line with the average volume of all of the
sounds on the recording. The normalising function in
sound editing software scans the uncompressed audio file
to determine the peak or average level and then increases
or decreases the levels throughout the file to obtain the
desired volume level.
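A minimal sketch of peak normalisation, assuming the samples are floating-point values between -1 and 1 and a target level chosen for the example:

```python
def normalise(samples, target_peak=0.9):
    """Scale every sample so the loudest one reaches the target peak level."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return samples               # a silent file: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]

print(normalise([0.1, -0.45, 0.3]))  # the loudest sample is scaled up to 0.9
```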
Higher
Clipping Sound Files
We have all listened to sound files that do not
sound good: part of the sound seems unclear or
missing. The most probable cause of this is
clipping.
If a sound is recorded at too high a level then the
sound wave will be automatically clipped. This
means that the top of the sound wave is cut off.
Some sound editing software will indicate to the
user which amplitudes in a recording are being
clipped and offer the option of reducing the
recording volume.
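A simple sketch of how an editor might flag clipped samples, again assuming floating-point samples between -1 and 1:

```python
def find_clipped(samples, limit=1.0):
    """Return the positions of samples that have hit the maximum level."""
    return [i for i, s in enumerate(samples) if abs(s) >= limit]

wave = [0.2, 0.8, 1.0, 1.0, 0.7, -1.0]
print(find_clipped(wave))            # [2, 3, 5]: these samples were cut off at the limit
```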
Higher
Clipping Sound Files
[Diagram: a sound wave before amplification and the same sound wave after amplification, showing where clipping has occurred.]
Higher
Int. 2
Fade
This means to gradually reduce the recording
volume of a sound so that it dies away slowly.
Many sound editors give the user graphical
controls which they can use to set the length
of the fade-out and the rate at which the volume
drops.
Most sound editing software comes provided with
fade settings, sometimes called 'envelopes', and
also lets the user define and save their own
'envelopes'.
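A sketch of the simplest possible envelope, a linear fade-out applied over the last few samples of a clip (the sample values and fade length are chosen for the example):

```python
def fade_out(samples, fade_length):
    """Linearly reduce the volume of the last fade_length samples down to zero."""
    faded = list(samples)
    n = len(faded)
    for i in range(fade_length):
        position = n - fade_length + i
        gain = 1 - (i + 1) / fade_length     # drops steadily until it reaches 0
        faded[position] *= gain
    return faded

print(fade_out([0.5] * 8, fade_length=4))    # the last four samples die away to 0
```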
Higher
Int. 2
Fade
[Diagrams: a sample of music, and the same sample after a fade has been applied.]
Higher
Monophonic Sound
Commonly called mono sound, mono, or
non-stereo sound, this early sound system
used a single channel of audio for sound
output. In monophonic sound systems, the
signal sent to the sound system encodes
one single stream of sound and it usually
uses just one speaker. Monophonic sound is
the most basic format of sound output.
Higher
Stereophonic Sound
This means the audio is recorded on two
sound channels using a separate
microphone for each channel. Sounds
nearest the left microphone will record
loudest on the left channel, similarly for the
right channel.
Higher
Surround sound
This uses speakers to surround the
listener with a circle of sound.
The Dolby Surround Pro Logic
system uses two speakers in front
and two behind.
• The software uses algorithms to
create an 'all round' sound effect. It is
based on a mathematical filter which
is applied to the sound data.
• This distinguishes between the original sound and the
listener's perception of that sound in different environments
and from different directions.
• This creates the illusion of sound coming from a specific
location or reflecting off different surfaces.
Higher
Digital signal processor (DSP)
The DSP is an integrated circuit designed for high
speed data manipulation. It is used in audio
processing as well as in other applications, for
example image manipulation and communications.
In a sound card, the DSP's main function is to
compress and decompress sound files as well as to
provide enhancements to sounds, for example
reverberation.
Higher
Int. 2
Calculating the size of a sound file
We use the following formula to calculate
the size of a sound file:
File size (bytes) = sampling frequency (Hz) ×
sound time (s) × sampling depth (bytes) ×
number of channels
Higher
Calculating the size of a sound file
(Cont.)
Use this formula to calculate the file size of
1 minute of mono sound sampled at a
frequency of 22.05 kHz and a bit depth of 8
bits.
File size = 22.05 × 10³ (sampling frequency, Hz)
× 60 (time, s) × 1 (sampling depth, bytes)
× 1 (number of channels)
= 1 323 000 bytes
= 1.26 MB
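The same calculation written as a short function, using the figures from the slide:

```python
def sound_file_size(freq_hz, time_s, depth_bytes, channels):
    """File size in bytes = sampling frequency x time x sampling depth x channels."""
    return freq_hz * time_s * depth_bytes * channels

size = sound_file_size(freq_hz=22_050, time_s=60, depth_bytes=1, channels=1)
print(size)                     # 1 323 000 bytes
print(size / (1024 * 1024))     # ≈ 1.26 MB
```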