Audio Technology - Materi Perkuliahan-Iwan Sonjaya


Audio Technology
Introduction
(Konsep Dasar Audio Digital)
Iwan Sonjaya,MT
0816 956 829
[email protected]
www.iwankuliah.wordpress.com
What is sound?
Sound is a physical phenomenon caused by the vibration of matter
(e.g., a violin string). As the matter vibrates, pressure variations are
created in the surrounding air. These pressure waves propagate through the
air, and when a wave reaches the human eardrum, a sound is heard.
Ear: receives one-dimensional pressure waves.
Cochlea: converts them into frequency-dependent nerve firings,
which are sent to the brain.
Sound
• A physical phenomenon produced by the vibration of an object
• The vibration of an object takes the form of an analog
signal whose amplitude varies continuously
over time
The waveform repeats at regular intervals, or periods. A sound
with a recognizable periodicity is called music; non-periodic
sounds are called noise.
[Figure: waveform of air pressure over time, with one period and the amplitude labeled]
A sound's frequency is the reciprocal of its period. The
frequency represents the number of periods per second and is
measured in hertz (Hz). One kilohertz (kHz) is 1000 oscillations per
second, i.e. 1000 Hz.
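The reciprocal relationship above can be checked with a short sketch (Python; the function name and example period are illustrative):

```python
# Frequency is the reciprocal of the period: f = 1 / T.
def frequency_hz(period_s):
    """Return the frequency in Hz for a wave with the given period in seconds."""
    return 1.0 / period_s

# A wave that repeats every 1/1000 of a second completes
# 1000 oscillations per second, i.e. 1 kHz.
print(frequency_hz(0.001))  # 1000.0
```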
Frequency Ranges
The frequency range is divided into:
Infrasonic: 0 Hz to 20 Hz
Audio-sonic: 20 Hz to 20 kHz (the range of human hearing)
Ultrasonic: 20 kHz to 1 GHz
Hypersonic: 1 GHz to 10 THz
• In multimedia we are concerned with
sounds in the audio-sonic range.
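The band boundaries above can be expressed as a small classifier (a sketch; the function name is illustrative, and the cutoffs follow the slide rather than any formal standard):

```python
# Classify a frequency (in Hz) into the bands listed above.
def frequency_band(f_hz):
    if f_hz < 20:
        return "infrasonic"
    elif f_hz <= 20_000:
        return "audio-sonic"
    elif f_hz <= 1_000_000_000:
        return "ultrasonic"
    else:
        return "hypersonic"

print(frequency_band(440))     # audio-sonic (concert pitch A)
print(frequency_band(40_000))  # ultrasonic
```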
Frequency determines pitch: changing the frequency changes the pitch.
Amplitude
The amplitude of a sound is the displacement of the air pressure from its quiescent state,
which humans perceive subjectively as loudness or volume. Sound pressure levels are
measured in decibels (dB).
Equivalently, the amplitude of a sound measures the deviation of the pressure wave from its
mean value.
0 dB - no sound
20 dB - rustling of paper
35 dB - quiet home
70 dB - noisy street
130 dB - pain threshold
Amplitude determines the loudness of a sound:
changing the amplitude changes the volume.
Audio Representation on Computers
A computer measures the amplitude of the waveform at regular time intervals and
generates a series of sample values. The mechanism that converts an audio signal into
digital samples is the analog-to-digital converter (ADC); a digital-to-analog converter
(DAC) performs the opposite conversion.
Audio Sampling
1. Determine the number of samples per second;
2. At each time interval, measure the amplitude;
3. Store the sample rate and the individual amplitude values.
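The three steps can be sketched as follows (a minimal illustration, assuming a pure 440 Hz sine as the input signal; the function and variable names are hypothetical):

```python
import math

def sample_wave(freq_hz, sample_rate, duration_s):
    """Steps 1-3: choose a rate, read the amplitude at each interval,
    and store the rate together with the individual samples."""
    n_samples = int(sample_rate * duration_s)           # step 1: samples per second
    amplitudes = [math.sin(2 * math.pi * freq_hz * n / sample_rate)
                  for n in range(n_samples)]            # step 2: amplitude per interval
    return {"sample_rate": sample_rate, "samples": amplitudes}  # step 3: store both

recording = sample_wave(440, 8000, 0.01)
print(len(recording["samples"]))  # 80 samples for 10 ms at 8 kHz
```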
Sampling Rate: The number of samples per second. The CD standard rate of 44100 Hz
means that the wave form is sampled 44100 times per second.
Quantization: the process of mapping each sample to a discrete value. The resolution of
a sample value depends on the number of bits used to measure the height of the waveform.
A waveform sampled with 3-bit quantization has only eight possible
values: 0.75, 0.50, 0.25, 0.00, -0.25, -0.50, -0.75 and -1.00. An 8-bit quantization
yields 256 possible values; 16 bits yield 65536 values.
Quantization introduces noise: the lower the quantization resolution, the lower the
quality of the sound.
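Quantization can be illustrated by snapping an amplitude to an n-bit grid, which reproduces the eight 3-bit levels listed above (a sketch, not a production quantizer; the rounding scheme is one of several possible choices):

```python
def quantize(x, bits):
    """Map an amplitude in [-1, 1) to one of 2**bits discrete levels."""
    levels = 2 ** bits
    step = 2.0 / levels
    # take the nearest level at or below x, clipped to the top level
    k = min(int((x + 1.0) / step), levels - 1)
    return -1.0 + k * step

# 3 bits -> 8 levels: -1.0, -0.75, -0.5, -0.25, 0.0, 0.25, 0.5, 0.75
print(quantize(0.6, 3))   # 0.5 -- the 0.1 error is quantization noise
print(quantize(0.6, 16))  # far closer to 0.6 with 16-bit resolution
```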
File size versus quality
1: Sampling at higher rates more accurately captures the high frequency content.
2: Audio resolution determines the accuracy with which a sound can be digitized.
3: Using more bits yields a recording that sounds more like its original.
4: High sample rate with high resolution = large files.
Here are the formulas for determining the size (in bytes) of a digital recording:
For a monophonic recording:
sampling rate * duration of recording in seconds * bit resolution / 8
For a stereo recording:
sampling rate * duration of recording in seconds * bit resolution / 8 * 2
Thus, for a 10-second mono recording at 22.05 kHz, 8-bit resolution:
22050 * 10 * 8 / 8 * 1 = 220,500 bytes
A ten-second stereo recording at 44.1 kHz, 16-bit resolution:
44100 * 10 * 16 / 8 * 2 = 1,764,000 bytes
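The two worked examples can be reproduced directly from the formula (a sketch; `channels` is 1 for mono, 2 for stereo, and the function name is illustrative):

```python
def recording_size_bytes(sample_rate, duration_s, bit_depth, channels):
    """Size in bytes = sampling rate * duration * (bit resolution / 8) * channels."""
    return int(sample_rate * duration_s * bit_depth / 8 * channels)

print(recording_size_bytes(22050, 10, 8, 1))   # 220500   (the mono example)
print(recording_size_bytes(44100, 10, 16, 2))  # 1764000  (the stereo example)
```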
3D Sound Projection: The shortest path between the sound source and
the listener is called the direct sound path. All other sound paths
are reflected, which means they are delayed before
they reach the listener's ear.
MIDI ( Musical Instrument Digital Interface ) versus digital audio
A MIDI interface is a small device that plugs directly into the computer's serial port and
allows the transmission of music signals. Note that MIDI does not produce sound;
it only produces the parameters that are sent to a device that
translates those numbers into sound.
Physical Specification:
1: 5 pin DIN
a - pin 2: ground
b - pins 4 and 5: data
c - pins 1 and 3: unconnected
2: Shielded twisted pair of 50 feet max length
Electrical Specification:
1: Asynchronous serial interface.
2: 8 data bits, 1 start bit, and 1 stop bit
3: Logic 0 is current ON
4: Rise and fall time <= 2 microseconds
The data format carries the instrument specification, the notion of the beginning and end
of a note, frequency, and sound volume. This data is grouped into MIDI messages, each of
which specifies a musical event.
A message contains one, two, or three bytes:
1: The first byte is a status byte, used to address the message to a specific channel.
2: The remaining bytes are data bytes;
the number of data bytes depends on the status byte.
MIDI standard specifies 16 channels and identifies 128 instruments. For
example, 0 is for piano, 40 for violin, 73 for the flute, etc.
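As an illustration of the status-byte/data-byte layout, a channel voice message such as Note On packs the channel number into the low nibble of the status byte and carries the note and velocity in two data bytes (a sketch; the function name is hypothetical, the byte layout follows the standard MIDI 1.0 framing):

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message.

    Status byte: 0x90 | channel (channels 0-15);
    data bytes: note number and velocity (0-127, high bit clear)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60) on channel 0 at moderate velocity.
msg = note_on(0, 60, 64)
print(msg.hex())  # 903c40
```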
Q1: Does one channel correspond to one instrument?
If an instrument is defined as a MIDI device, then typically yes: one channel
sends information to one of the instruments in the MIDI chain. If, on the other
hand, the instrument is defined as the sound patch or voice being played (e.g. piano,
tuba, violin), then again yes: one MIDI channel carries voice message information for
a single patch only.
Q2: Can a user define a new instrument?
No. MIDI contains only control information. The instrument heard
depends entirely on the MIDI device used to decode the stream.
Q3: Can physical modeling be included?
No. Physical modeling cannot be represented using the MIDI protocol.
Q4: What goes through the MIDI cable?
Voltage:
1: Timed pulses of electricity – 31,250 per second.
2: MIDI data encoding: each 8-bit byte on the wire is framed by a start bit and a stop bit.
[Figure: voltage on the MIDI cable alternating between low and high over time, encoding
the bit stream 1 1 0 1 0 0 0 1 1 0 1 0 1 0 0 0 1 1 0 0 1; one byte is shown framed by
its start bit and stop bit]
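The framing above — one start bit (0), eight data bits, one stop bit (1), sent at 31,250 bits per second — can be sketched like this (illustrative only; in hardware a UART does this, and the LSB-first bit order is the standard serial convention):

```python
def frame_byte(b):
    """Return the 10 bits put on the wire for one MIDI byte:
    start bit 0, then 8 data bits LSB first, then stop bit 1."""
    data_bits = [(b >> i) & 1 for i in range(8)]  # least significant bit first
    return [0] + data_bits + [1]

# The Note On status byte for channel 0 (0x90) becomes 10 wire bits.
print(frame_byte(0x90))  # [0, 0, 0, 0, 0, 1, 0, 0, 1, 1]
```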
Q5: What kinds of MIDI messages are there?
MIDI messages are divided into two types:
1: Channel messages: channel messages go only to specified devices.
A - Channel voice messages describe music by defining pitch,
amplitude, duration, and other sound qualities.
B - Channel mode messages determine the way a receiving MIDI
device responds to channel voice messages.
2: System messages: system messages go to all devices in a MIDI system because no channel
number is specified.
A - System real-time messages are short and simple, consisting of only
one byte. These messages synchronize the timing of MIDI devices in
performance; therefore, it is important that they be sent at precisely
the time they are required.
B - System common messages are commands used to play a
song. These messages enable you to select a song, find a common
starting place in the song, and tune all the synthesizers.
C - System exclusive messages allow MIDI manufacturers to create
customized MIDI messages to send between their MIDI devices.
MIDI versus digital audio
In contrast to MIDI data, digital audio data are the actual representation of sound, stored in
the form of thousands of individual samples (i.e., MIDI data are device dependent; digital
audio data are not).
MIDI data has several advantages:
1. MIDI data are much more compact than digital audio files, and the size of a MIDI file is
completely independent of playback quality. In general, MIDI files are 200 to 1000
times smaller than digital audio files.
2. Because MIDI files are small, they don't take up as much RAM, disk space, or CPU
resources.
3. MIDI data are completely editable.
Now for the disadvantages:
1. Because MIDI data represent musical instructions rather than sound itself, you can be
certain that playback will be accurate only if the MIDI playback device is identical to the
device used for production.
2. MIDI cannot easily be used to play back spoken dialog.
MIDI Devices:
1. Sound generator: The principal purpose of the generator is to produce an audio signal that
becomes sound when fed into a loudspeaker.
2. Microprocessor: The microprocessor communicates with the keyboard to know what notes
the musician is playing, and with the control panel to know what commands the musician
wants to send to the microprocessor.
3. Keyboard: The sound intensity of a tone depends on the speed and acceleration with which
the key is pressed.
4: Control panel: The control panel controls those functions that are not directly concerned
with notes and duration.
5. Memory: Memory is used to store patches for the sound generator and settings on the
control panel.
Coding Methods
Off to the rice fields by bus....
Whew ... today's lecture material
is all finished....
Thank you.....
To the students who
did not doze off and stayed
focused throughout
this lecture.