Transcript Slides

Compressive Sensing
Lecture notes by Richard G. Baraniuk
IEEE Signal Processing Magazine, July 2007
Compressive Sensing Tutorial PART 1
Svetlana Avramov-Zamurovic
January 15, 2009.
Motivation

In the vast majority of practical applications, data acquisition is based on the
Shannon/Nyquist sampling theorem, which requires a sampling rate of at least
twice the message signal bandwidth in order to achieve exact recovery.

Source frequency characteristic: This requirement is not practical for the video
industry, since the signal bandwidth is very wide and the technology to achieve
the processing rates demanded by the Shannon/Nyquist sampling theorem is not
feasible. The practical solution bandlimits the signals and prevents aliasing.

Channel frequency characteristic: There is a significant class of signals
(pictures, for example) that are compressible: not all of the data needs to be
transmitted in order to get a 'good enough' representation of the original
message. The practical solution introduces lossy compression processing at the
source level.

Compressed sensing is a new method to capture and represent compressible
signals at a rate well below the Nyquist rate.



• Employs nonadaptive linear projections (random measurement matrix)
• Preserves the signal structure (the length of sparse vectors is conserved)
• Reconstructs the signal from the projections using an optimization process (L1 norm)
Classical Approach: Transform coding
PICTURE: Classical transform-coding pipeline. All N samples (measurements) are taken; orthogonalization finds the full set of projections; the coefficients are sorted and only the K largest are selected and coded (K << N), while the remaining N−K coefficients are dumped; decoding is straightforward. Selecting the K largest coefficients amounts to an exhaustive search.
Compressed sensing
PICTURE: Compressed-sensing pipeline. Coding captures only the significant components: only M (K ≈ M) samples (measurements) are taken. (a) The measurements must be carefully designed; (b) the original signal (picture) must be sparse. Signal reconstruction: (1) the system is underdetermined (M < N); (2) the reconstructed signal must have N components; (3) the L1 norm is used to find the sparse representation.
Inefficiencies of transform coding
(classical)
• Sampling at the Nyquist rate produces a huge number of samples N.
• Finding the signal representation in an orthonormal basis (Fourier, etc.) is computationally intensive, since ALL si must be found and sorted in order to identify the K significant coefficients.
• Along with the magnitudes of the K coefficients, their locations must be coded, introducing overhead.
Definitions
x: target signal, the representation of an image in the time or space domain; x lies in an N-dimensional domain:
x = [x1, ..., xN]^T (N × 1)
Ψ = [ψ1 ψ2 ... ψN]: orthonormal basis (N × N matrix with entries ψ11 ... ψNN); each ψi = [ψi1, ..., ψiN]^T is an orthonormal column vector.
Classical approach: find the signal's projections onto a given basis Ψ:
x = Σ_{i=1}^{N} si ψi, where si = ψi^T x is the representation of the image in the Ψ domain.
x is K-sparse if only K basis vectors have si ≠ 0 (theoretical).
x is compressible if it has just a few large si coefficients and many small ones (practical).
Compressible signals are well represented by K-sparse representations.
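As a concrete illustration of the classical pipeline, here is a minimal Python sketch (mine, not from the lecture; the DCT basis and the toy signal are assumptions) that computes the projections si, keeps only the K largest, and reconstructs the K-sparse approximation:

import numpy as np
from scipy.fft import dct, idct

N, K = 256, 16                       # signal length and number of kept coefficients
t = np.linspace(0, 1, N, endpoint=False)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)   # toy compressible signal

s = dct(x, norm='ortho')             # si = psi_i^T x (projections onto the DCT basis)
idx = np.argsort(np.abs(s))[::-1]    # sort coefficients by magnitude
s_K = np.zeros_like(s)
s_K[idx[:K]] = s[idx[:K]]            # keep the K largest, discard the remaining N-K

x_K = idct(s_K, norm='ortho')        # K-sparse approximation of x
print("relative error:", np.linalg.norm(x - x_K) / np.linalg.norm(x))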
Compressed sensing: measure only the significant components (M < N).
y = [y1, ..., yM]^T: measurements; y lies in an M-dimensional domain (M × 1).
Φ: measurement matrix (M × N).
y = Φx = ΦΨs = Θs.  Goal: recover s from y.
(a) Compressive sensing measurement process with a random Gaussian
measurement matrix Φ and discrete cosine transform (DCT) matrix Ψ. The
vector of coefficients s is sparse with K = 4.
Φ (phi, measurement matrix), Ψ (psi, orthonormal basis),
Θ (theta, compressed sensing reconstruction matrix, Θ = ΦΨ)
(b) Measurement process with Θ = ΦΨ. There are four columns that
correspond to nonzero si coefficients; the measurement vector y is a
linear combination of these columns.
From IEEE Signal Processing Magazine, July 2007, R. G. Baraniuk, Compressive Sensing
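A minimal numerical sketch of this measurement model (mine, not from the slides; N, M, K and the DCT basis are illustrative assumptions):

import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
N, M, K = 256, 64, 4

# K-sparse coefficient vector s and the corresponding signal x = Psi s,
# with Psi the (orthonormal) DCT basis.
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Psi = idct(np.eye(N), norm='ortho', axis=0)   # columns are DCT basis vectors
x = Psi @ s

# Random Gaussian measurement matrix Phi: zero mean, variance 1/N (as on the slide).
Phi = rng.normal(0.0, 1.0 / np.sqrt(N), size=(M, N))
Theta = Phi @ Psi

y = Phi @ x                                   # M << N incoherent measurements
print(np.allclose(y, Theta @ s))              # True: y = Phi x = Phi Psi s = Theta s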
Compressive sensing solution



• The measurement matrix must be stable and allow reconstruction of the original signal x (length N) from M measurements.
• 1) An orthonormal basis Ψ must be selected; 2) a stable measurement matrix Φ must be designed (CS measures PROJECTIONS of the signal onto a basis); and 3) the signal x must be reconstructed from the underdetermined system using optimization (L1 norm). Since only the projections (linear combinations) are measured, the sparse components are extracted with an optimization algorithm (L1) subject to restrictions (RIP and incoherence).
The following conditions must hold in order to find a unique sparse solution:

Restricted isometry property (RIP)
If x is K-sparse and the K magnitudes and locations are known, then for M > K we can find the solution under the following condition:
Θ = ΦΨ MUST preserve the length of vectors sharing the same K non-zero coefficients as s:
1 − ε ≤ ||Θv||2 / ||v||2 ≤ 1 + ε,  where ||a||2 = (Σ_{i=1}^{R} ai^2)^{1/2}
ε is a small number and v is any vector sharing the same K non-zero coefficients as s.
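A quick numerical illustration of this inequality (my own sketch, not from the slides): for an iid Gaussian Θ, the ratio ||Θv||2 / ||v||2 stays in a narrow band around 1 for random K-sparse vectors v. Here the entries are scaled by 1/√M so the expected ratio is 1; the slides' 1/N scaling would only rescale the ratio by a constant.

import numpy as np

rng = np.random.default_rng(1)
N, M, K = 512, 128, 8
# iid Gaussian Theta, scaled so that E||Theta v||^2 = ||v||^2.
Theta = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))

ratios = []
for _ in range(1000):
    v = np.zeros(N)
    v[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # random K-sparse v
    ratios.append(np.linalg.norm(Theta @ v) / np.linalg.norm(v))

print("min ratio:", min(ratios), "max ratio:", max(ratios))       # both near 1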
Incoherence
• The rows of Φ (measurement matrix) CANNOT sparsely represent the columns of Ψ (orthonormal basis), and vice versa. The representations MUST be DENSE!
Direct construction of
measurement matrix (Φ)

• Verifying RIP directly requires checking all C(N, K) possible combinations of K non-zero entries of length-N vectors.
• However, BOTH RIP and incoherence can be achieved by simply selecting Φ randomly:
the φij are independent and identically distributed (iid) random variables drawn from a Gaussian probability density with zero mean and 1/N variance.
The measurements y = Φx are then randomly weighted linear combinations of the elements of x.
iid Gaussian Φ (MxN) properties


• Φ is incoherent with the basis Ψ = I (so Θ = ΦΨ = Φ) with high probability if M ≥ cK log(N/K), where c is a small constant (log2(N/K + 1)), so M << N.
• Φ is universal: Θ = ΦΨ will be iid Gaussian and have the RIP with high probability regardless of the choice of the orthonormal basis Ψ.
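For a sense of scale (illustrative numbers, not from the slides): with N = 4,096 pixels, sparsity K = 100, the natural logarithm, and c = 1, the rule gives M ≥ 100 · ln(4096/100) ≈ 100 · 3.7 ≈ 370 measurements, far fewer than the 4,096 Nyquist-rate samples.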
Designing a signal reconstruction
algorithm



• M measurements y, the random measurement matrix Φ, and the basis Ψ are used to reconstruct the compressible signal x (length N), or equivalently its sparse coefficient vector s.
• Since M < N there are infinitely many solutions: Θ(s + r) = Θs = y for any vector r in the null space N(Θ). But we have the restriction that the solution must be sparse.
• The signal reconstruction algorithm aims to find the signal's sparse coefficient vector in the (N − M)-dimensional translated null space H = N(Θ) + s.



• L2 norm (energy) minimization: the pseudo-inverse gives a closed-form solution, but it DOES NOT find a sparse solution.
• L0 norm counts the number of non-zero elements of s. This optimization can recover a K-sparse signal exactly with high probability using only M = K + 1 measurements, but solving it is unstable and NP-complete, requiring exhaustive enumeration of all C(N, K) possible locations of the non-zero entries in s.
• L1 norm (the sum of the absolute values of all elements) can exactly recover K-sparse signals and closely approximate compressible signals with high probability using only M ≥ cK log(N/K) iid Gaussian measurements.
• This is a convex optimization that reduces to linear programming, known as basis pursuit, with computational complexity of about O(N^3).
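A minimal basis-pursuit sketch (mine, not the lecture's code): the L1 problem min ||s||1 subject to Θs = y is rewritten as a linear program with s = u − w, u, w ≥ 0, and solved with scipy.optimize.linprog; the sizes are illustrative.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N, M, K = 128, 40, 5

# Ground-truth K-sparse s, iid Gaussian Theta, and measurements y = Theta s.
s_true = np.zeros(N)
s_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Theta = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
y = Theta @ s_true

# Basis pursuit as an LP: minimize sum(u) + sum(w) s.t. Theta u - Theta w = y, u, w >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([Theta, -Theta])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
s_hat = res.x[:N] - res.x[N:]

print("recovery error:", np.linalg.norm(s_hat - s_true))   # ~0 when M >= cK log(N/K)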
(a) The subspaces containing two sparse vectors in R^3 lie close to the
coordinate axes.
(b) Visualization of the L2 minimization (5) that finds the non-sparse
point-of-contact s between the L2 ball (hyper-sphere, in red) and the
translated measurement matrix null space (in green).
(c) Visualization of the L1 minimization solution that finds the sparse
point-of-contact s with high probability thanks to the pointiness of the L1 ball.
From IEEE Signal Processing Magazine, July 2007, R. G. Baraniuk, Compressive Sensing
Compressive Imaging Camera Architecture
• Single detector: By time multiplexing a single detector, we can use a less expensive and yet more sensitive
photon detector. A single detector camera can also be adapted to image at wavelengths that are currently
impossible with conventional CCD and CMOS imagers (for example IR, security applications).
• Universality: Random and pseudorandom measurement bases are universal in the sense that they can be
paired with any sparse basis. This allows exactly the same encoding strategy to be applied in a variety of
different sensing environments; knowledge of the nuances of the environment is needed only at the
decoder. Random measurements are also future-proof: if future research in image processing yields a
better sparsity-inducing basis, then the same set of random measurements can be used to reconstruct an
even better quality image.
• Encryption: A pseudorandom basis can be generated using a simple algorithm according to a random seed.
Such encoding effectively implements a form of encryption: the randomized measurements will
themselves resemble noise and be meaningless to an observer who does not know the associated seed.
• Robustness and progressivity: Random coding is robust in that the randomized measurements have equal
priority, unlike the Fourier or wavelet coefficients in current transform coders. Thus they allow a
progressively better reconstruction of the data as more measurements are obtained; one or more
measurements can also be lost without corrupting the entire reconstruction.
• Scalability: We can adaptively select how many measurements to compute in order to trade off the amount of
compression of the acquired image versus acquisition time; in contrast, conventional cameras trade off
resolution versus the number of pixel sensors.
• Computational asymmetry: Finally, CI places most of its computational complexity in the decoder, which will
often have more substantial computational resources than the encoder/imager. The encoder is very
simple; it merely computes incoherent projections and makes no decisions.
From D. Takhar et al.,
A New Compressive Imaging Camera Architecture Using Optical Domain Compression
Compressive imaging test bed
• Micro-actuated mirrors => commercially viable MEMS technology for the video/projector display market as well as laser systems and telescope optics: the Texas Instruments (TI) digital micromirror device (DMD).
• TI DMD developer's kit and accessory light modulator package (ALP).
• The DMD consists of an array of electrostatically actuated micro-mirrors where each mirror in the array is suspended above an individual SRAM cell. The DMD micro-mirrors form a pixel array of size 1024×768.
• Each mirror rotates about a hinge and can be positioned in one of two states (+12 degrees and −12 degrees from horizontal); thus light falling on the DMD may be reflected in two directions depending on the orientation of the mirrors.
• The desired image is formed on the DMD plane with the help of a biconvex lens; this image acts as an object for the second biconvex lens, which focuses the image onto the photodiode.
• The light is collected from one of the two directions in which it is reflected (e.g., the light reflected by mirrors in the +12 degree state). The light from a given configuration of the DMD mirrors is summed at the photodiode to yield an absolute voltage that yields a coefficient y(m) for that configuration. The output is amplified through an op-amp circuit and then digitized by a 12-bit ADC.
• The user decides how many measurements to take (M).
• The object is illuminated by an LED light source at 1 kHz.
Compressed Sensing:
• The system directly acquires a reduced set of M incoherent projections of an N-pixel image x without first acquiring the N pixel values.
• The mirror positions (+12 or −12 degrees) are random; the pattern is programmable, so we can decide on the pattern (e.g., Rademacher).
• Mirrors can be programmed to stay in a position longer, producing better resolution. Assumption: the object is stationary!
• Because the light is summed at the photodiode, each measurement multiplexes several pixel values (the DMD is activated in blocks); the CS reconstruction algorithm extracts them (see the sketch after the citation below).
From D. Takhar et al.,
A New Compressive Imaging Camera Architecture Using Optical Domain Compression
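A minimal simulation of this single-pixel measurement model (my own sketch; the scene, sizes, and the ±1 Rademacher patterns are illustrative assumptions, not the hardware's actual parameters):

import numpy as np

rng = np.random.default_rng(3)
side = 32
N = side * side                 # number of "pixels" in the scene
M = 400                         # number of mirror patterns / measurements

scene = np.zeros((side, side))
scene[10:20, 12:22] = 1.0       # toy scene: one bright square
x = scene.ravel()

# One random +/-1 (Rademacher) DMD pattern per row; each y[m] is the photodiode
# output (a weighted sum of pixel values) for that pattern.
Phi = rng.choice([-1.0, 1.0], size=(M, N))
y = Phi @ x

# Reconstruction would solve min ||s||1 s.t. Phi Psi s = y (see the basis-pursuit
# sketch earlier); here we only report the compression ratio.
print(f"{M} measurements for {N} pixels ({100 * M / N:.0f}%)")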
PICTURE: Single-pixel compressive sensing camera. The scene is imaged onto the digital micromirror device array, whose mirror pattern is driven by a random number generator; the reflected light is collected by a photodiode, digitized by the A/D converter into a bitstream, and passed to reconstruction.
Major challenges
(1) Acquisition speed
(2) DSP processing speed
The images in (a) and (b) are not meant to be aligned.
(a) Conventional digital camera image of a soccer ball.
(b) 64 × 64 black-and-white image x of the same ball (N = 4,096 pixels) recovered from M = 1,600 random measurements taken by the single-pixel camera.
From IEEE Signal Processing Magazine, July 2007, R. G. Baraniuk, Compressive Sensing
Experimental results
PICTURE: Original image and CS reconstructions: from 16,384 pixels using 1,600 measurements (10%) and 3,300 measurements (20%); from 65,536 pixels using 1,300 measurements (2%) and 3,300 measurements (5%).
Sources of noise:
(1) Nonlinearities in the photodiode
(2) Non-uniform reflectance of the mirrors through the lens focus onto the photodiode (changing the weighting of the pattern blocks)
(3) Non-uniform mirror positions
Robustness of the CS reconstruction algorithm: it suppresses quantization noise from the ADC and photodiode circuit noise.
Notes
• RB in TI: one can use a 10-megapixel camera to get the resolution of a 25-megapixel picture! CS is scalable!
• The classical approach takes the whole picture at intervals (or as a stream in video). How fast is signal reconstruction in CS? Can it keep up with a moving target? In the classical approach the records are taken, and after the event the possibility of zooming in remains open. CS never takes ALL of the information and relies on a random approach that statistically gives good results, but this method may not be acceptable in some applications.
• But in D. Takhar et al. there is the possibility of zooming in on a part of the image, as opposed to acquiring the whole image and then extracting the piece of interest: adaptively sectioned to highlight.
• RB: just choose the measurement matrix to have a random iid Gaussian distribution.
• Michael Lustig: just choosing samples at random is not a good idea, because the distances between samples are not preserved. Globally, such a sampling scheme has uniform density, but locally you will get high-density areas and 'holes'; these holes really mess up the receiver array as a source of redundancy. Using a Poisson-disk sampling distribution provides locally uniform sampling, so the redundancy is maximally exploited.