Lossless coding - Hanyang University

Computer Vision – Compression (2)
Hanyang University
Jong-Il Park
Topics in this lecture
• Practical techniques
• Lossless coding
• Lossy coding
• Optimum quantization
• Predictive coding
• Transform coding
Department of Computer Science and Engineering, Hanyang University
Lossless coding
= Error-free compression
= Information-preserving coding
• General steps:
1. Devising an alternative representation of the image in which its interpixel redundancies are reduced
2. Coding the representation to eliminate coding redundancies
Huffman coding
• The most popular variable-length code (Huffman [1952])
• Two-step approach:
1. Create a series of source reductions by ordering the probabilities of the symbols and combining the lowest-probability symbols into a single symbol that replaces them in the next source reduction
2. Code each reduced source, starting with the smallest source and working back to the original source
• Instantaneous, uniquely decodable block code
• Optimal code for a given set of symbols and probabilities, subject to the constraint that the symbols be coded one at a time
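The two-step procedure above can be sketched in Python (a minimal illustration, not the lecture's own code); the six probabilities below are a common textbook example:

```python
import heapq

def huffman_codes(probabilities):
    """Build a Huffman code table from a {symbol: probability} mapping.

    Mirrors the two-step procedure: repeatedly merge the two
    lowest-probability nodes (source reduction), then assign a prefix
    bit to each branch while working back to the original source.
    """
    # Heap entries: (probability, tie-breaker, [(symbol, code-so-far), ...])
    heap = [(p, i, [(sym, "")]) for i, (sym, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        p1, _, lo = heapq.heappop(heap)   # lowest probability
        p2, _, hi = heapq.heappop(heap)   # second lowest
        # The merged node replaces both; the two branches get prefix bits 0/1.
        merged = [(s, "0" + c) for s, c in lo] + [(s, "1" + c) for s, c in hi]
        tie += 1
        heapq.heappush(heap, (p1 + p2, tie, merged))
    return dict(heap[0][2])

codes = huffman_codes({"a2": 0.4, "a6": 0.3, "a1": 0.1,
                       "a4": 0.1, "a3": 0.06, "a5": 0.04})
```

For this distribution the average code length comes out to 2.2 bits/symbol, close to the source entropy.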
Eg. Huffman coding
Arithmetic coding
• Non-block code
• A one-to-one correspondence between source symbols and code words does not exist.
• An entire sequence of source symbols is assigned a single arithmetic code word.
• As the length of the sequence increases, the resulting arithmetic code approaches the bound established by the noiseless coding theorem.
• Practical limiting factors:
• The addition of the end-of-message indicator
• The use of finite-precision arithmetic
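The interval-narrowing idea can be sketched as follows. Assumption: the four-symbol source, probabilities, and message here are taken from the standard textbook example whose final interval contains 0.068, matching the example slide:

```python
def arithmetic_encode(message, probs):
    """Assign the whole message a single interval in [0, 1): each symbol
    narrows the current interval in proportion to its probability."""
    # Cumulative sub-intervals per symbol, in dict insertion order.
    edges, cum = {}, 0.0
    for sym, p in probs.items():
        edges[sym] = (cum, cum + p)
        cum += p
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        lo_s, hi_s = edges[sym]
        low, high = low + span * lo_s, low + span * hi_s
    return low, high   # any number in [low, high) codes the message

probs = {"a1": 0.2, "a2": 0.2, "a3": 0.4, "a4": 0.2}
low, high = arithmetic_encode(["a1", "a2", "a3", "a3", "a4"], probs)
# 0.068 lies inside the final interval [0.06752, 0.0688)
```

In practice the real encoder also appends an end-of-message symbol and uses finite-precision integer arithmetic, the two limiting factors noted above.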
Eg. Arithmetic code (resulting code value: 0.068)
LZW coding
• Lempel-Ziv-Welch coding
• Assigns fixed-length code words to variable-length sequences of source symbols, but requires no a priori knowledge of the probability of occurrence of the symbols to be encoded
• Generates a dictionary (= codebook) as the encoding proceeds.
• The size of the dictionary is an important parameter => trade-off
• Applied in the GIF, TIFF, and PDF formats and in many zip algorithms
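A minimal LZW encoder sketch (GIF-style initialization with the 256 single-byte strings; dictionary growth is unbounded here, whereas real codecs cap the dictionary size, which is the trade-off noted above):

```python
def lzw_encode(data):
    """LZW: emit fixed-length dictionary indices for growing byte strings;
    the dictionary is built on the fly, so no symbol statistics are needed."""
    # Initialize the dictionary with all single 8-bit values.
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    current = b""
    output = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate                 # keep extending the match
        else:
            output.append(dictionary[current])  # emit code for longest match
            dictionary[candidate] = next_code   # grow the codebook
            next_code += 1
            current = bytes([byte])
    if current:
        output.append(dictionary[current])
    return output

codes = lzw_encode(b"ababababab")
# → [97, 98, 256, 258, 257, 98]
```

The decoder rebuilds the same dictionary from the code stream, so nothing besides the codes needs to be transmitted.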
Eg. LZW coding
2D Run-length coding
• Relative address coding (RAC)
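RAC itself codes each row's run transitions relative to the previous row; as the simpler 1-D building block, a plain run-length encoder looks like this (my illustration, not RAC proper):

```python
def run_lengths(row):
    """Encode a binary row as (value, run-length) pairs - the 1-D building
    block; RAC then codes each row's transitions relative to the row above."""
    runs = []
    prev, count = row[0], 1
    for pixel in row[1:]:
        if pixel == prev:
            count += 1            # extend the current run
        else:
            runs.append((prev, count))
            prev, count = pixel, 1
    runs.append((prev, count))    # flush the final run
    return runs

runs = run_lengths([0, 0, 0, 1, 1, 0, 1, 1, 1, 1])
# → [(0, 3), (1, 2), (0, 1), (1, 4)]
```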
Lossless predictive coding
Principle: de-correlating the data by prediction = entropy reduction
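With the simplest predictor (the previous pixel), the principle looks like this (a sketch; the pixel values are invented):

```python
def predictive_residuals(pixels):
    """Previous-pixel prediction: transmit the first sample plus the
    prediction errors. The decorrelated residuals cluster near zero, so
    their entropy (and hence the Huffman/arithmetic code length) drops."""
    residuals = [pixels[0]]               # first sample sent as-is
    for i in range(1, len(pixels)):
        residuals.append(pixels[i] - pixels[i - 1])
    return residuals

def reconstruct(residuals):
    """Inverse: a running sum restores the original losslessly."""
    pixels = [residuals[0]]
    for e in residuals[1:]:
        pixels.append(pixels[-1] + e)
    return pixels

row = [100, 102, 105, 105, 104, 110]
res = predictive_residuals(row)   # → [100, 2, 3, 0, -1, 6]
assert reconstruct(res) == row    # error-free round trip
```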
Eg. Lossless predictive coding (histogram)
Lossy compression
• Approaches:
• Predictive coding
• Transform coding
• Vector quantization
• Etc.
• Significant data reduction compared with lossless compression, at the expense of quality degradation
Lossy predictive coding
• Prevents error accumulation: the predictor operates on the reconstructed (quantized) values that the decoder also has, not on the original inputs.
Delta modulation (DM)
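A minimal DM sketch (my illustration, not the lecture's code): the predictor is simply the previous reconstructed value and the quantizer has only two levels, so each sample costs one bit. The step size of 4.0 and the input samples are invented; too small a step causes slope overload, too large causes granular noise.

```python
def delta_modulate(samples, step=4.0, start=0.0):
    """Delta modulation: predictor = previous reconstructed value,
    quantizer = two levels (+step / -step), i.e. one bit per sample."""
    bits, recon = [], []
    value = start
    for s in samples:
        bit = 1 if s >= value else 0        # sign of the prediction error
        value += step if bit else -step     # the decoder tracks this exactly
        bits.append(bit)
        recon.append(value)
    return bits, recon

bits, recon = delta_modulate([10, 12, 13, 12, 10], step=4.0)
# bits = [1, 1, 1, 1, 0]; recon = [4.0, 8.0, 12.0, 16.0, 12.0]
```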
DPCM
(Differential pulse code modulation)
• Optimal predictor: minimize the mean-square prediction error

  E\{e_n^2\} = E\{[f_n - \hat{f}_n]^2\}

subject to the constraint that

  \dot{f}_n = \dot{e}_n + \hat{f}_n \approx e_n + \hat{f}_n = f_n

and

  \hat{f}_n = \sum_{i=1}^{m} \alpha_i f_{n-i}
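The optimal-predictor criterion above can be solved empirically. A sketch (assumptions: a 1-D training sequence, and the expectations replaced by sample autocorrelations, which yields the usual normal equations):

```python
def optimal_predictor(samples, m=2):
    """Estimate DPCM predictor weights alpha_i minimizing the empirical
    mean-square prediction error: solve the normal equations R a = r
    built from sample autocorrelations."""
    n = len(samples)

    def r(k):  # empirical autocorrelation at lag k
        return sum(samples[t] * samples[t - k] for t in range(k, n)) / (n - k)

    A = [[r(abs(i - j)) for j in range(m)] for i in range(m)]
    b = [r(i + 1) for i in range(m)]
    # Solve A x = b by Gaussian elimination (no pivoting; fine for a sketch).
    for col in range(m):
        for row in range(col + 1, m):
            factor = A[row][col] / A[col][col]
            for j in range(col, m):
                A[row][j] -= factor * A[col][j]
            b[row] -= factor * b[col]
    alphas = [0.0] * m
    for row in range(m - 1, -1, -1):
        s = sum(A[row][j] * alphas[j] for j in range(row + 1, m))
        alphas[row] = (b[row] - s) / A[row][row]
    return alphas

# For a signal with f_n ≈ 0.9 f_{n-1}, a one-tap predictor recovers ≈ 0.9.
alphas = optimal_predictor([0.9 ** t for t in range(50)], m=1)
```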
Practical prediction
• Prediction for a 2D Markov source

  E\{f(x, y)\, f(x-i, y-j)\} = \sigma^2 \rho_v^{i} \rho_h^{j}

• Reduction of accumulated transmission error:

  \sum_{i=1}^{m} \alpha_i \le 1

• Typical predictors:

  A) \hat{f}(x, y) = 0.97 f(x, y-1)
  B) \hat{f}(x, y) = 0.5 f(x, y-1) + 0.5 f(x-1, y)
  C) \hat{f}(x, y) = 0.75 f(x, y-1) + 0.75 f(x-1, y) - 0.5 f(x-1, y-1)
  D) \hat{f}(x, y) = \begin{cases} 0.97 f(x, y-1) & \text{if } \Delta h \le \Delta v \\ 0.97 f(x-1, y) & \text{otherwise} \end{cases}
Eg. Predictors A, B, C, D
Optimal quantization
• Minimization of the mean-square quantization error:

  E\{(s - t_i)^2\}
Lloyd-Max quantizer
• The optimal quantizer in the mean-square sense
• Method:
• Reconstruction level: the centroid of its decision interval
• Decision level: halfway between neighboring reconstruction levels
• No explicit closed-form solutions for most pdfs
• An iterative design procedure is applied in many cases
• Optimum uniform quantizer:
• (uniform quantizer + VLC) outperforms (non-uniform quantizer + FLC)
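The iterative design procedure can be sketched by alternating the two optimality conditions on empirical data (a minimal 1-D illustration, not the lecture's algorithm):

```python
def lloyd_max(samples, levels, iters=50):
    """Iterative Lloyd-Max design on empirical data, alternating:
      1) decision levels = halfway between neighboring reconstruction levels
      2) reconstruction levels = centroid (mean) of the samples in each cell"""
    samples = sorted(samples)
    lo, hi = samples[0], samples[-1]
    # Start from a uniform spread of reconstruction levels.
    recon = [lo + (hi - lo) * (k + 0.5) / levels for k in range(levels)]
    for _ in range(iters):
        # Condition 1: boundaries halfway between reconstruction levels.
        bounds = [(recon[k] + recon[k + 1]) / 2 for k in range(levels - 1)]
        # Condition 2: each reconstruction level = centroid of its cell.
        cells = [[] for _ in range(levels)]
        for s in samples:
            cells[sum(s > b for b in bounds)].append(s)
        recon = [sum(c) / len(c) if c else recon[k] for k, c in enumerate(cells)]
    return recon

# For uniformly distributed data the result is (near) a uniform quantizer.
recon = lloyd_max([i / 999 for i in range(1000)], levels=4)
```

For a non-uniform pdf (e.g. Laplacian prediction errors) the same loop places levels densely where the samples concentrate.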
Adaptive quantization
• Different quantization for each subimage (e.g., block)
• Improved performance
• Increased complexity
• Eg. Four different quantizers: scaled versions of the same quantizer
• Notice: substantial decrease in error BUT only a small improvement in compression ratio
Eg. DPCM vs. Adaptive DPCM
Adaptive DPCM shows a substantial decrease in perceived error.
Transform coding
• A reversible, linear transform is used
• Goal:
• To decorrelate the pixels of each subimage, or
• To pack as much information as possible into the smallest number of transform coefficients
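As a small illustration of energy packing (a sketch assuming the orthonormal DCT-II; the sample row is invented):

```python
import math

def dct_1d(x):
    """Orthonormal 1-D DCT-II (the transform family used in JPEG)."""
    N = len(x)
    out = []
    for u in range(N):
        scale = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
        out.append(scale * sum(x[n] * math.cos(math.pi * (2 * n + 1) * u / (2 * N))
                               for n in range(N)))
    return out

# A smooth, highly correlated row: the energy packs into a few coefficients.
row = [100, 101, 103, 104, 106, 107, 109, 110]
coeffs = dct_1d(row)
energy = sum(c * c for c in coeffs)    # equals sum(v*v for v in row): orthonormal
top2 = sum(c * c for c in coeffs[:2])  # energy in the first 2 of 8 coefficients
```

For this row, well over 99% of the energy lands in the first two coefficients; the rest can be coarsely quantized or discarded.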
Basis images: WHT
Basis images: DCT
Comparison: Energy compaction (DFT vs. WHT vs. DCT)
• KLT is optimal, BUT it is image-dependent!
• DCT is a good compromise: the best performance among the fixed transforms shown.
DFT vs. DCT
• The DCT's implicit 2n-point periodicity avoids boundary discontinuities, so it produces less blocking artifact than the DFT.
Effect of subimage size
• As the subimage size increases, complexity increases and performance improves.
Eg. Block size
Original vs. 2x2, 4x4, and 8x8 blocks at a 25% coefficient reduction; error image shown for 8x8.
Bit allocation
• Zonal coding
• Allocation of appropriate bits for each coefficient according to its statistics
• Rate-distortion theory, e.g., for a Gaussian pdf:

  R = \frac{1}{2} \log_2 \frac{\sigma^2}{D}

• Threshold coding
• Global threshold
• Local threshold
• Fixed (N-largest coding) => constant rate
• Variable => variable rate, good performance
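The N-largest variant of threshold coding can be sketched as follows (the coefficient values below are invented):

```python
def n_largest_mask(coeffs, n):
    """Threshold coding, N-largest variant: keep only the n coefficients of
    largest magnitude (constant rate), zeroing the rest before run-length
    and entropy coding."""
    order = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]), reverse=True)
    keep = set(order[:n])          # indices of the n largest magnitudes
    return [c if i in keep else 0 for i, c in enumerate(coeffs)]

kept = n_largest_mask([296.9, -12.4, 0.8, 3.1, -0.2, 0.1, 0.05, -0.01], 3)
# → [296.9, -12.4, 0, 3.1, 0, 0, 0, 0]
```

A variable (global or local) threshold keeps whatever exceeds it instead, giving a variable rate but better performance.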
Zonal vs. Threshold
Eg. Zonal vs. Threshold
Threshold coding performs better than zonal coding.
Quantization table Z
• Different scaling for each coefficient.
• The same quantization curve for all coefficients.
Eg. Quality control by scaling Z (compression ratios 34:1 and 67:1)
Wavelet coding
• New technique in the 1990s
• Computationally efficient
• No subdivision => no blocking artifact
• Good performance!
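A one-level analysis/synthesis step can be sketched with the Haar wavelet, the simplest case (my illustration; practical coders such as JPEG 2000 use longer filters):

```python
import math

def haar_step(signal):
    """One level of the Haar wavelet transform: normalized pairwise sums
    (approximation) and differences (detail). Smooth regions give near-zero
    details, which quantize well - and since the transform needs no block
    subdivision, there are no blocking artifacts."""
    s = math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfect-reconstruction synthesis step."""
    s = math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

a, d = haar_step([4.0, 4.0, 6.0, 6.0, 10.0, 12.0, 12.0, 10.0])
```

Recursing on the approximation band yields the familiar multi-resolution decomposition.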
Eg. Wavelet transform