Audio Signal Processing
-- Quantization
Shyh-Kang Jeng
Department of Electrical Engineering/
Graduate Institute of Communication Engineering
Overview
• Audio signals are typically continuous-time
and continuous-amplitude in nature
• Sampling allows for a discrete-time
representation of audio signals
• Amplitude quantization is also needed to
complete the digitization process
• Quantization determines how much
distortion is present in the digital signal
Binary Numbers
• Decimal notation
– Symbols: 0, 1, 2, 3, 4, …, 9
– e.g., 1999 = 1×10³ + 9×10² + 9×10¹ + 9×10⁰
• Binary notation
– Symbols: 0, 1
– e.g., [01100100] = 0×2⁷ + 1×2⁶ + 1×2⁵ + 0×2⁴ + 0×2³ + 1×2² + 0×2¹ + 0×2⁰ = 100
Negative Numbers
• Folded binary
– Use the highest order bit as an indicator of sign
• Two’s complement
– The code following the highest positive number wraps around to the lowest negative number
– e.g., with 3 bits: 3 = [011] and −4 = [100] (= −2²)
• We use folded binary notation when we
need to represent negative numbers
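
To make the two conventions concrete, here is a small Python sketch (an illustration added here, not from the slides; the function names are only illustrative) that prints 3-bit folded-binary and two's-complement codes:

```python
# Illustrative sketch: 3-bit folded-binary (sign-magnitude)
# vs. two's-complement encodings.

def folded_binary(value, bits=3):
    """Sign bit followed by the magnitude in the remaining bits."""
    sign = 1 if value < 0 else 0
    magnitude = abs(value)
    assert magnitude < 2 ** (bits - 1)
    return f"{sign}{magnitude:0{bits - 1}b}"

def twos_complement(value, bits=3):
    """Negative values wrap around past the highest positive code."""
    assert -(2 ** (bits - 1)) <= value < 2 ** (bits - 1)
    return f"{value & (2 ** bits - 1):0{bits}b}"

print(folded_binary(3), twos_complement(3))    # 011 011
print(folded_binary(-3), twos_complement(-4))  # 111 100
```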
Quantization Mapping
• Quantization: continuous values → Q(x) → binary codes
• Dequantization: binary codes → Q⁻¹(x) → continuous values
Quantization Mapping (cont.)
• Symmetric quantizers
– Equal number of levels (codes) for positive and
negative numbers
• Midrise and midtread quantizers
Uniform Quantization
• Equally sized ranges of input amplitudes are
mapped onto each code
• Midrise or midtread
• Maximum non-overload input value, xmax
• Size of input range per R-bit code: Δ
• Midrise: Δ = 2·xmax / 2^R
• Midtread: Δ = 2·xmax / (2^R − 1)
• Let xmax = 1
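
For example (a worked number, not spelled out on the slide): with R = 2 and xmax = 1, the midrise step is Δ = 2/2² = 0.5 and the midtread step is Δ = 2/(2² − 1) = 2/3, which matches the 2-bit quantizer diagrams on the next two slides.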
2-Bit Uniform Midrise Quantizer
[Figure: input/output staircase of the 2-bit midrise quantizer. Inputs between 0 and 1/2 map to code 00 (output 1/4), inputs between 1/2 and 1 to code 01 (output 3/4); negative inputs map symmetrically to code 10 (output −1/4) and code 11 (output −3/4).]
Uniform Midrise Quantizer
• Quantize: code(number) = [s][|code|]
  – s = 0 if number ≥ 0; s = 1 if number < 0
  – |code| = 2^(R−1) − 1 when |number| ≥ 1; |code| = int(2^(R−1)·|number|) elsewhere
• Dequantize: number(code) = sign·|number|
  – sign = +1 if s = 0; sign = −1 if s = 1
  – |number| = (|code| + 0.5) / 2^(R−1)
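
A minimal Python sketch of these midrise rules, assuming xmax = 1 and the sign/magnitude code layout above (function names are illustrative, not from the slides):

```python
# Minimal sketch of the midrise rules above (xmax = 1 assumed);
# the code word is a sign bit s plus an (R-1)-bit magnitude |code|.

def midrise_quantize(number, R):
    s = 0 if number >= 0 else 1
    mag = abs(number)
    if mag >= 1.0:                          # clamp at the non-overload limit
        code = 2 ** (R - 1) - 1
    else:
        code = int(2 ** (R - 1) * mag)
    return s, code

def midrise_dequantize(s, code, R):
    sign = 1.0 if s == 0 else -1.0
    return sign * (code + 0.5) / 2 ** (R - 1)

# 2-bit example: 0.6 falls in the (1/2, 1] input bin and reconstructs to 3/4.
s, code = midrise_quantize(0.6, R=2)
print(s, code, midrise_dequantize(s, code, R=2))   # 0 1 0.75
```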
2-Bit Uniform Midtread Quantizer
[Figure: input/output staircase of the 2-bit midtread quantizer. Codes 00 and 10 both reconstruct to 0 (the zero step around small inputs), code 01 reconstructs to 2/3, and code 11 to −2/3.]
Uniform Midtread Quantizer
• Quantize: code(number) = [s][|code|]
  – s = 0 if number ≥ 0; s = 1 if number < 0
  – |code| = 2^(R−1) − 1 when |number| ≥ 1; |code| = int(((2^R − 1)·|number| + 1) / 2) elsewhere
• Dequantize: number(code) = sign·|number|
  – sign = +1 if s = 0; sign = −1 if s = 1
  – |number| = 2·|code| / (2^R − 1)
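
A matching Python sketch of the midtread rules, under the same xmax = 1 assumption (illustrative, not from the slides):

```python
# Minimal sketch of the midtread rules above (xmax = 1 assumed);
# note that codes [0][0] and [1][0] both reconstruct to zero.

def midtread_quantize(number, R):
    s = 0 if number >= 0 else 1
    mag = abs(number)
    if mag >= 1.0:
        code = 2 ** (R - 1) - 1
    else:
        code = int(((2 ** R - 1) * mag + 1) / 2)
    return s, code

def midtread_dequantize(s, code, R):
    sign = 1.0 if s == 0 else -1.0
    return sign * 2 * code / (2 ** R - 1)

# 2-bit examples: small inputs land on the zero step, larger ones on +/- 2/3.
print(midtread_dequantize(*midtread_quantize(0.2, 2), 2))    # 0.0
print(midtread_dequantize(*midtread_quantize(-0.5, 2), 2))   # -0.666...
```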
Two Quantization Methods
• Uniform quantization
– Constant limit of Δ/2 on the absolute round-off error
– Poor SNR performance at low input power
• Floating point quantization
– Some bits for an exponent
– The rest for a mantissa
– SNR is determined by the number of mantissa
bits and remains roughly constant
– Gives up accuracy for high signals but gains
much greater accuracy for low signals
Floating Point Quantization
• Number of scale factor (exponent) bits : Rs
• Number of mantissa bits: Rm
• Low inputs
– Roughly equivalent to uniform quantization
with R = 2^Rs − 1 + Rm
• High inputs
– Roughly equivalent to uniform quantization
with R = Rm + 1
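
As a worked check (using the Rs = 3, Rm = 5 example on the next slide): low-level inputs see roughly R = 2³ − 1 + 5 = 12 bits of accuracy, while high-level inputs see roughly R = 5 + 1 = 6 bits.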
Floating Point Quantization
Example
• Rs = 3, Rm = 5
  – [s0000000abcd] → scale = [000], mant = [sabcd] → reconstructs as [s0000000abcd]
  – [s0000001abcd] → scale = [001], mant = [sabcd] → reconstructs as [s0000001abcd]
  – [s000001abcde] → scale = [010], mant = [sabcd] → reconstructs as [s000001abcd1]
  – [s1abcdefghij] → scale = [111], mant = [sabcd] → reconstructs as [s1abcd100000]
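
A string-based Python sketch of how the rows above can be read. The general rule is inferred from the four rows shown, so treat the details (scale clamping, the trailing "1" padding for dropped bits) as assumptions:

```python
# Rs = 3, Rm = 5: a 12-bit sign-magnitude pattern (sign + 11 magnitude bits)
# is split into a 3-bit scale factor and a 5-bit mantissa (sign + 4 bits),
# then reconstructed.

RS, RM = 3, 5
MAG_BITS = 11                     # magnitude bits in the 12-bit pattern
MAX_SCALE = 2 ** RS - 1           # 7

def fp_encode(pattern):
    """pattern: 12-character bit string, sign bit first."""
    sign, mag = pattern[0], pattern[1:]
    leading_zeros = len(mag) - len(mag.lstrip("0"))
    scale = MAX_SCALE - min(leading_zeros, MAX_SCALE)
    if scale == 0:
        mant_bits = mag[MAX_SCALE:MAX_SCALE + RM - 1]   # the 4 bits after 7 zeros
    else:
        start = MAX_SCALE - scale + 1                   # skip the zeros and the leading 1
        mant_bits = mag[start:start + RM - 1]
    return f"{scale:03b}", sign + mant_bits

def fp_decode(scale_bits, mant):
    scale = int(scale_bits, 2)
    sign, mant_bits = mant[0], mant[1:]
    if scale == 0:
        mag = "0" * MAX_SCALE + mant_bits
    else:
        mag = "0" * (MAX_SCALE - scale) + "1" + mant_bits
        if len(mag) < MAG_BITS:                         # dropped bits come back as 100...0
            mag += "1" + "0" * (MAG_BITS - len(mag) - 1)
    return sign + mag

scale, mant = fp_encode("000000101101")     # an [s000001abcde] pattern
print(scale, mant, fp_decode(scale, mant))  # 010 00110 000000101101
```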
Quantization Error
• Main source of coder error
• Characterized by the average squared error ⟨q²⟩
• A better measure: SNR = 10·log10(⟨x²⟩ / ⟨q²⟩)
• Does not reflect auditory perception
• Cannot describe how perceivable the errors are
• A satisfactory objective error measure that
reflects auditory perception does not exist
Quantization Error (cont.)
• Round-off error
• Overload error
Round-Off Error
• Comes from mapping ranges of input
amplitudes onto single codes
• Worse when the range of input amplitudes
mapped onto a code is wider
• Assume that the error follows a uniform
distribution
/2
• Average error power q 2   q 2 1 dq  2 / 12
 / 2
• For a uniform quantizer

2
q 2  xmax
/(3 * 22 R )
37
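
A quick numerical sanity check of the Δ²/12 result (an added illustration, not from the slides), using an 8-bit midrise quantizer on uniformly distributed samples:

```python
# Check that uniform round-off noise has power close to Δ²/12.
import random

R = 8
delta = 2.0 / 2 ** R                              # midrise step with xmax = 1

def midrise(x):                                   # quantize-then-dequantize in one step
    sign = 1.0 if x >= 0 else -1.0
    code = min(int(2 ** (R - 1) * abs(x)), 2 ** (R - 1) - 1)
    return sign * (code + 0.5) / 2 ** (R - 1)

samples = [random.uniform(-1, 1) for _ in range(100_000)]
noise_power = sum((x - midrise(x)) ** 2 for x in samples) / len(samples)
print(noise_power, delta ** 2 / 12)               # both come out near 5.1e-6
```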
Round-Off Error (cont.)
SNR = 10·log10(⟨x²⟩ / ⟨q²⟩)
    = 10·log10(⟨x²⟩ / xmax²) + 20·R·log10(2) + 10·log10(3)
    = 10·log10(⟨x²⟩ / xmax²) + 6.021·R + 4.771  (dB)
[Figure: SNR (dB) versus input power (dB), with lines for 16-, 8-, and 4-bit uniform quantizers.]
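
As a worked number (not on the slide): for a full-scale sinusoid ⟨x²⟩ = xmax²/2, so a 16-bit uniform quantizer gives SNR ≈ −3.01 + 6.021·16 + 4.771 ≈ 98 dB, the figure commonly quoted for 16-bit audio.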
Overload Error
• Comes from signals where |x(t)| > xmax
• Depends on the probability distribution of
signal values
• Reduced for high xmax
• But a high xmax implies wider quantization steps and
therefore higher round-off error
• Requires balancing the need to reduce both
types of error
Entropy
• A measure of the uncertainty about the next
code to come out of a coder
• Very low when we are pretty sure what code
will come out
• High when we have little idea which
symbol is coming
• Entropy = Σ_n p_n · log2(1/p_n)
• Shannon: this entropy equals the lowest
possible number of bits per sample a coder could
produce for this signal
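
A direct Python transcription of the entropy formula (illustrative sketch):

```python
# Expected bits per symbol from a list of symbol probabilities.
import math

def entropy(probabilities):
    """Symbols with zero probability contribute nothing."""
    return sum(p * math.log2(1.0 / p) for p in probabilities if p > 0)

print(entropy([0.75, 0.1, 0.075, 0.075]))  # ≈ 1.2 bits (the Huffman example later)
print(entropy([0.25] * 4))                 # 2.0 bits for four equally likely symbols
```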
Entropy with 2-Code Symbols
Entropy = p·log2(1/p) + (1 − p)·log2(1/(1 − p))
[Figure: entropy of a two-symbol source versus p, from p = 0 to p = 1, peaking at 1 bit when p = 0.5.]
• When p ≠ 0.5 there exist lower bit-rate ways
to encode the symbols than just using one bit
for each code symbol
Entropy with N-Code Symbols
• Entropy = Σ_n p_n · log2(1/p_n)
• Equals zero when one symbol has probability 1
• Any symbol with probability zero does not
contribute to entropy
• Maximum when all probabilities are equal
• For 2^R equal-probability code symbols:
  Entropy = 2^R · (2^(−R) · log2(2^R)) = R
• Optimal coders only allocate bits to
differentiate symbols with near equal
probabilities
Huffman Coding
• Create code symbols based on the
probability of each symbol's occurrence
• Code length is variable
• Shorter codes for common symbols
• Longer codes for rare symbols
• Shannon: Entropy ≤ R_Huffman ≤ Entropy + 1
• Reduces bits relative to fixed-length coding if the
symbols are not evenly distributed
Huffman Coding (cont.)
• Depends on the probabilities of each symbol
• Created by recursively allocating bits to
distinguish between the lowest probability
symbols until all symbols are accounted for
• To decode, we need to know how the bits
were allocated
– Recreate the allocation given the probabilities
– Pass the allocation with the data
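
The allocation procedure described above can be sketched in a few lines of Python (a heap-based illustration, not the slides' own code; the symbol names and tie-breaking rule are assumptions):

```python
# Repeatedly merge the two lowest-probability entries, prepending a
# distinguishing bit to each side at every merge.
import heapq
import itertools

def huffman_code(probabilities):
    """probabilities: dict symbol -> probability; returns dict symbol -> bit string."""
    counter = itertools.count()                    # tie-breaker so dicts are never compared
    heap = [(p, next(counter), {sym: ""}) for sym, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, codes0 = heapq.heappop(heap)        # the two least probable subtrees
        p1, _, codes1 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in codes0.items()}
        merged.update({s: "1" + c for s, c in codes1.items()})
        heapq.heappush(heap, (p0 + p1, next(counter), merged))
    return heap[0][2]

# The 4-symbol example from the next slide: code lengths 1, 2, 3, 3 bits.
probs = {"00": 0.75, "01": 0.1, "10": 0.075, "11": 0.075}
codes = huffman_code(probs)
print(codes)
print(sum(probs[s] * len(codes[s]) for s in probs))   # 1.4 bits/symbol on average
```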
Example of Huffman Coding
• A 4-symbol case
  – Symbol:      00     01    10     11
  – Probability: 0.75   0.1   0.075  0.075
[Tree diagram: the two 0.075 symbols are merged first (combined probability 0.15), that pair is then merged with the 0.1 symbol, and finally with the 0.75 symbol; each merge assigns one more 0/1 bit.]
• Results
  – Symbol: 00   01   10    11
  – Code:   0    10   110   111
  – Average R = 0.75·1 + 0.1·2 + 0.15·3 = 1.4 bits
Example (cont.)
• Normally 4 symbols require 2 bits/sample
• Huffman coding requires 1.4 bits/sample on
average
• Close to the minimum possible, since the
entropy is ≈ 1.2 bits
• 0 is a “comma code” here
– Example: [01101011011110]
Another Example
• A 4-symbol case
  – Symbol:      00    01    10    11
  – Probability: 0.25  0.25  0.25  0.25
• Results
  – Symbol: 00   01   10   11
  – Code:   00   01   10   11
• Adds nothing when symbol probabilities are
roughly equal
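
For comparison, feeding these equal probabilities to the huffman_code sketch shown after the Huffman Coding (cont.) slide (assuming that function is in scope) produces only fixed-length code words:

```python
codes = huffman_code({"00": 0.25, "01": 0.25, "10": 0.25, "11": 0.25})
print(codes)   # every code word comes out 2 bits long, so nothing is gained
```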