Transcript: Imaging

Imaging
Imaging is the process of acquiring images: sensing our surroundings and then representing the measurements that are made in the form of an image.
Topics:
Passive and Active Imaging
The electronic Camera
Image Formation by a Converging Lens
Charge Coupled Devices
The Human Eye
Review (Electromagnetic Radiation)
Energy in the form of electromagnetic radiation interacts with earth
materials in several ways, as illustrated below. In most situations,
incident energy from the sun interacts with a specific material and
is either reflected, scattered, emitted, transmitted, or completely
absorbed.
In remote sensing, we are predominantly interested in energy that has
been reflected, scattered, or emitted.
All regions of the EM spectrum are suited to imaging.
Remote Sensing Instruments
This reflected/scattered/emitted energy can then be
measured using various kinds of remote sensing
instruments. Thankfully, many earth materials have distinctive
spectral signatures, almost like fingerprints.
Shown above are the reflected spectral signatures of two important
alteration minerals, kaolinite in blue and alunite in red.
Wavelength is along the x-axis and is given in microns, from 2.0 to 2.5
um. Reflectance is given as a fraction from 0 to 1.0 on the y-axis.
Passive and Active Imaging
Passive imaging employs energy sources that are already
present in the scene, whereas active imaging involves the
use of artificial energy sources to probe our surroundings.
Passive imaging is subject to the limitations of existing
energy sources: the Sun, for example, is a convenient
source of illumination. Active imaging is not restricted in
this way, but it is a more complicated and expensive
procedure, since we must supply and control a source of
radiation in addition to an imaging instrument.
Passive Imaging
Day or night, passive optical imaging systems can detect
light reflected or emitted from any external illumination
source.
Active Imaging
Unlike passive systems, active optical imaging systems
detect light reflected from an internal illumination source.
This internal illumination usually comes from one or more
white light sources (arc lamps, etc.) or lasers. The internal
illumination source may also be pulsed, allowing for depth- or distance-profiling.
Passive and Active Sensors
Passive sensors detect naturally reflected or radiated energy.
Active sensors supply, or send out, their own electromagnetic
energy and then record what comes back to them. An example of a
passive remote sensing satellite is Space Imaging's IKONOS. A
common type of active remote sensing is radar.
Data: SRTM Image credit: NASA
Characteristics: C and X band, 30m pixel,
16m absolute vertical height accuracy
Processing Shown: Interferometric techniques
were used to create a Digital Elevation Model
(DEM) which was then color coded for
elevation, where browns are the highest points.
Notes: These radar data were taken from the
space shuttle using a large antenna that sent
out EM energy to the surface of the Earth
from the shuttle, and then received it back.
Mt. Fuji figures prominently in the image with
Tokyo in the foreground.
Data: IKONOS Image credit:
Space Imaging
Characteristics: Five bands, 1m
sharpened pixel resolution,
satellite based
Processing Shown: True color
image, georeferenced
Notes: Light reflecting off of the
shallow waters and reef complexes
was received "passively" by
IKONOS. The sun is the natural
illumination source.
Common types of radar interaction
Typical Sensors:
Active: RADARSAT, SRTM, ERS, SIR-C
Passive: Landsat, IKONOS, HyMap
http://www.es.ucsc.edu/~hyperwww/chevron/Ikonos
See for more information:
http://www.ccrs.nrcan.gc.ca/ccrs/learn/tutorials/stereosc/chap5/chapter5_3_e.html
Hyperspectral Remote Sensing
What is it?
Hyperspectral remote sensing is the science of
acquiring digital imagery of earth materials in many
narrow contiguous spectral bands. Hyperspectral
sensors or imaging spectrometers measure earth
materials and produce complete spectral signatures
with no wavelength omissions. Such instruments are
flown aboard spaceborne and airborne platforms.
Handheld versions also exist and are used for
accuracy assessment missions and small-scale
investigations.
Hyperspectral Remote Sensing - How it works?
The samples and lines of a hyperspectral image
cube simply represent the x and y directions of
the image collection. The number of bands in
the z direction of the cube varies depending on
the instrument, but is on the order of 100 or
greater. Each pixel has one spectral signature
associated with it, relating radiance or
reflectance to each individual wavelength
interval. Taken together, the bands form one
continuous spectrum.
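As a rough illustrative sketch (in Python, with made-up array dimensions and wavelengths rather than any particular instrument's), the cube can be treated as a 3-D array indexed by line, sample, and band, and the spectral signature of a pixel is then a 1-D slice along the band axis:

import numpy as np

# Hypothetical cube: 300 lines (y) x 400 samples (x) x 128 contiguous bands (z).
lines, samples, bands = 300, 400, 128
cube = np.random.rand(lines, samples, bands)   # placeholder radiance values
wavelengths = np.linspace(2.0, 2.5, bands)     # assumed band centers, in microns

# One pixel's spectral signature: radiance/reflectance against wavelength.
y, x = 150, 200
signature = cube[y, x, :]

# One band, by contrast, is a 2-D image over all pixels.
band_image = cube[:, :, 64]
print(signature.shape, band_image.shape)       # (128,) (300, 400)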
Hyperspectral Remote Sensing Examples
The electronic Camera
A camera uses a lens to focus part of the
visual environment onto a sensor.
The most important characteristics of a
lens are its
1) magnifying power and its
2) light gathering capacity.
Image Formation by a Converging Lens
[Figure: an object and its image formed by a converging lens; the focal points f, the optical axis, and the distances u (object to lens) and v (lens to image) are labeled.]
The figure shows two arrows, a converging lens, and rays of light
emitted by the red arrow. The red arrow is the object, while the
green arrow is the image formed after the rays have passed through
the lens. The figure also shows the two focal points as blue dots.
Image Formation by a Converging Lens
[Figure: the same converging-lens diagram, showing the three principal rays.]
The image formed by a converging lens can be made using only three principal rays.
· Ray 1 travels parallel to the axis and, after passing through the lens, goes through the
focal point.
· Ray 2 passes through the center of the lens undeviated.
· Ray 3 goes through the focal point and then travels parallel to the axis after passing
through the lens.
Thus any point on the object can be mapped, using the rays above, onto a corresponding
point on the image; this point lies at the intersection of the rays.
http://www.phys.ksu.edu/perg/vqm/laserweb/Java/Javapm/java/Clens/index.html
The most important characteristics of a lens are its
magnifying power and its light gathering capacity.
The magnification factor is
m = image size / object size
By similar triangles, we can also say that
image size / object size = v / u
where u is the distance from the object to the lens and v is the
distance from the lens to the image plane. Hence
m = v / u    (1)
It is usual to express the magnifying power of a lens
in terms of its focal length, f, the distance from the
lens to the point at which parallel incident rays
converge.
The focal length is given by the lens equation
1/f = 1/u + 1/v    (2)
From (1) and (2), substituting v = mu gives
f = um / (m + 1)    (3)
Example: We need to form an image of a 10-cm wide
object, 50cm away, on a sensor measuring 10mm across.
The magnification factor we require is
m = image size / object size = 10 / 100 = 0.1
hence the focal length should be
f = um / (m + 1) = 500 x 0.1 / 1.1 ≈ 45.5 mm
so we need a lens with a focal length of approximately 45 mm.
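The same calculation can be sketched in a few lines of Python (a minimal illustration; the sizes and distance are those of the example above, converted to millimetres):

def focal_length_mm(object_size_mm, image_size_mm, object_distance_mm):
    # m = image size / object size, and from (3): f = u*m / (m + 1)
    m = image_size_mm / object_size_mm
    return object_distance_mm * m / (m + 1)

# 10 cm object, 10 mm sensor, 50 cm away -> about 45.5 mm
print(focal_length_mm(100.0, 10.0, 500.0))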
The light gathering capacity of a camera
lens is determined by its aperture.
The aperture can be no larger than the diameter of the lens itself, and it
is usually made smaller than this by means of a diaphragm – a circular
hole of adjustable size incorporated into the lens. It is normal to
express the aperture of a lens as an “f-number”: the focal length
divided by the aperture diameter.
F-number
The f-number is the ratio of the focal length of the lens to the
diameter of its limiting opening (aperture): f-number = focal
length / iris diameter. Note that the f-number becomes smaller as the
aperture grows larger, and that light gathering capacity depends on the
aperture area, which scales with the square of the aperture diameter.
Most lenses offer a sequence of fixed apertures (f/2.8, f/4, f/8, f/11).
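As a small sketch of these relations (the focal length and iris diameters below are assumed values chosen to land near the listed apertures):

def f_number(focal_length_mm, iris_diameter_mm):
    # f-number = focal length / iris diameter
    return focal_length_mm / iris_diameter_mm

focal_length = 50.0                       # assumed 50 mm lens
for diameter in (17.9, 12.5, 6.25, 4.5):  # assumed iris diameters in mm
    n = f_number(focal_length, diameter)
    # light gathering scales with aperture area, i.e. with 1/(f-number)^2
    print("f/%.1f  relative light gathering %.4f" % (n, 1.0 / n**2))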
All lenses suffer from defects or aberrations,
which can affect image quality
Spherical Aberration
As a result we have BLURRED IMAGES
Any spherical mirror or lens
will bring to a sharp focus only
those light rays that emanate
from its center of curvature.
Light from a distant source,
which is essentially parallel,
will not come to a precise
focus.
The effects of aberrations can be reduced by
making the lens aperture as small as possible.
Charge Coupled Devices
A charge coupled device (CCD) is a detector that provides digital
images.
- The digital format allows the images to be manipulated by a
computer, which can electronically sharpen, modify, or copy them.
- A CCD relies on semiconductor properties for its operation.
Basically, photons of certain energies strike a layer of silicon,
promoting electrons into the conduction band. A CCD collects and
counts these electrons to determine how many photons were
absorbed by the silicon.
- The CCD is a solid-state chip that turns light into electric signals.
- CCDs have become the sensor of choice in imaging applications
because they do not suffer from geometric distortion.
CCD Sensors - BASIC DESIGN
Every CCD starts with a backing of some sort, usually glass. The
backing is then covered with metal electrodes, as shown in Figure
1 below. The bottom row of electrodes is designated as the readout
register.
Figure 1
A thin layer of silicon dioxide is placed on top of the
electrodes. Above the silicon dioxide is a layer of n-type
silicon. Finally, a thin layer of p-type silicon lies atop the n-type silicon. This creates a p-n junction that covers the
electrodes. The purpose of the silicon dioxide is to separate
electrically the silicon from the electrodes. This helps to
isolate the electrons in the silicon.
Between each column of electrodes there is a channel stop,
indicated by dashed lines in Figure 1. Channel stops prevent
electrons from flowing horizontally across the CCD array.
There are no channel stops in the readout register; in this row,
charges are free to move horizontally to the detection device.
A side view of a CCD is shown in Figure 2.
Figure 2
HOW DO THEY WORK?
Each CCD is divided into many groups of electrodes called pixels. The
exact number of pixels depends on the individual pixel size and the cost
of the array. Typically, a low-cost commercial array has dimensions of
600 x 300 pixels, while high cost arrays for scientific applications have
dimensions closer to 3000 x 3000 pixels. Each pixel has two, three, or
four electrodes in it with three being the most common.
As photons of light enter the CCD, they are absorbed by the silicon layer
if they have sufficient energy. Each absorbed photon causes an electron
to be promoted into the conduction band. During CCD photography, for
every one hundred incident photons, between forty and eighty are
absorbed by the silicon. (Conventional photographic film absorbs
approximately 2% of all incident photons.) The electrons then collect
above the center electrode, which has been positively charged.
Accumulated charge is shifted out
After the exposure is completed, all of the collected electrons are
sequentially moved to the detection device where they are counted. This
is accomplished by changing the charge on the electrodes in a timed,
sequential manner.
To illustrate, electrodes 2 and 5 are initially positively charged and so
collect all the electrons photogenerated around them, Figure 4a. Next,
electrodes 3 and 6 gradually begin to acquire positive charge while 2 and
5 gradually become negatively charged. This causes the electrons to
move to electrodes 3 and 6, Figure 4b. Then, electrodes 1, 4, and 7 gain
positive charge while 3 and 6 become negatively charged. This causes
the electrons to move once more, Figure 4c. The electrons from the
bottom-most pixel (pixel #2) are now in the readout register, electrode 7.
The readout register now begins to deliver the electrons to the detection
device by the same process of varying the voltage applied to the
electrodes, but now across the bottom row. The detection device counts
the number of electrons by applying a known voltage across the final
electrode. (The final electrode is located inside the detection device, not
on the CCD array.) The voltage will increase slightly as it picks up
electrons from the CCD. By subtracting the background voltage, the
voltage from the CCD electrons can be found. The number of electrons
can then be calculated using Faraday's constant and this voltage increase.
Electrons from pixel #1 would follow the same procedure. They would
be the next group to be moved into the readout register and to the
detection device.
Finally a computer takes the numbers from the detection device and uses
them as relative intensities to display the image on the screen. It takes as
little as 100 ms to completely move all electrons from a 2000 x 2000
pixel CCD to the detection device. Exposure time may be much longer
for low level light sources.
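The shifting scheme can be caricatured with a toy simulation (a sketch only, with invented charge values; one column is modelled, with three electrodes per pixel and the last electrode standing in for the readout register):

# Toy model of charge transfer down one CCD column.
def shift_once(column):
    # Move every charge packet one electrode toward the readout end.
    shifted = [0] * len(column)
    for i, charge in enumerate(column[:-1]):
        if charge:
            shifted[i + 1] += charge
    return shifted

# Assumed column: 2 pixels (electrodes 1-6) plus a readout electrode at the end.
# Photoelectrons have collected under electrodes 2 and 5.
column = [0, 120, 0, 0, 85, 0, 0]
readout = []

while any(column):
    if column[-1]:                 # charge reaching the last electrode is read out
        readout.append(column[-1])
        column[-1] = 0
    column = shift_once(column)

print(readout)                     # [85, 120]: the bottom-most pixel reaches the register first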
CCD – Charge-Coupled Devices
• rectangular array of photodetector sites built into
silicon.
• free of geometric distortion.
• heat can generate thermal electrons – dark current
• accumulated charge is shifted out
• full-frame – shifts out one row of pixels at a time
• frame-transfer – the entire contents of the imaging
area are shifted into a storage buffer
For more information please see :
http://mrsec.wisc.edu/edetc/CCD
The Human Eye
1. Flexible lens can adjust its focal length to image objects near or far.
- lens flattens for distant objects
- lens thickens for near objects
2. Retina contains photoreceptive cells that form the image.
3. Cones (bright-light vision)
- highly sensitive to color
- 6-8 million of them, centered in the fovea
4. Rods (dim-light vision)
- 75-150 million, distributed across the retina
- sensitive to low levels of illumination
- not sensitive to color
The Human Eye
• Brightness adaptation
• Perceived brightness is not a simple function of intensity.
• Although there are nearly 130 million photoreceptors in the retina,
the optic nerve contains only a million fibres.
Biologically Motivated System for Feature
Extraction from Sequences of Images
[Diagram: processing stages Retina → Lateral Geniculate Nucleus → Primary Visual Cortex → Higher Visual Area, with connections to and from the higher visual area.]
Retina – suppression of noise. The retina performs excellent spatial
decorrelation; the receptive fields of retinal ganglion cells are well
described by center-surround whitening filters (Difference of Gaussians).
Lateral Geniculate Nucleus – linear predictive coding.
Primary Visual Cortex – feature detection. The main purpose of the PVC
processing is to complete the temporal decorrelation of the retinal
signal; the receptive fields of simple cells are well described by a
Gabor function.
Higher Visual Area – sparse coding.
Biologically Motivated System for Feature
Extraction from Sequences of Images
[Diagram: the same Retina → Lateral Geniculate Nucleus → Primary Visual Cortex → Higher Visual Area pipeline.]
The basis functions that result from the contrast-sensitivity evaluation
procedure are similar to the receptive field shapes of cells in the
retina (predictive coding).
The basis functions that result from sparse code learning are localized,
oriented, and bandpass, similar to the receptive field shapes of cells
in the cortex (sparse coding).
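A minimal sketch of the centre-surround (Difference of Gaussians) receptive field mentioned above, with assumed standard deviations; convolving an image with such a kernel is one simple way to approximate the retinal spatial decorrelation described here:

import numpy as np

def dog_kernel(size=15, sigma_center=1.0, sigma_surround=3.0):
    # Centre-surround receptive field: narrow excitatory Gaussian minus a wider inhibitory one.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_center**2)) / (2 * np.pi * sigma_center**2)
    surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return center - surround

kernel = dog_kernel()
print(kernel.shape, kernel.sum())   # sum is near zero: the filter is bandpass and removes the mean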
Three-Dimensional Imaging
http://www.stereovision.net/
http://www.nvnews.net/articles/3dimagery/sterview.shtml
To assist you, I have placed a yellow dot in each picture. The idea is
to make yourself go cross-eyed so that the two dots join and form a
third dot in between them, which appears closer to you. Below is a
diagram of what you should make your eyes do.
Stereoscopy
The separation of the points is called the disparity.
There is an inverse relationship between disparity and depth in the
scene: disparity will be relatively large for points in the scene that
are near to us and relatively small for points that are far away.
http://www.vision3d.com/book.html
Viewing Machines
There are numerous different types of machines that
show a stereo pair of images to the viewer. The most
popular kind is probably the "View-Master", which
most of you have probably seen in toy stores.
These machines are basically an assisted version of
the parallel viewing method. They contain lenses to
magnify the image and make sure that each eye only
looks at the image it was meant to see.
Introduction to Stereo Imaging -Theory
The figure shows:
• Two cameras with their optical axes parallel and separated by a distance d.
• The line connecting the camera lens centers is called the baseline.
• Let the baseline be perpendicular to the line of sight of the cameras.
• Let the x axis of the three-dimensional world coordinate system be parallel to the baseline.
• Let the origin O of this system be mid-way between the lens centers.
Introduction to Stereo Imaging -Theory
Consider a point (x, y, z), in three-dimensional world coordinates, on an object.
Let f be the focal length of both cameras, the perpendicular distance between the
lens center and the image plane, and let (xl, yl) and (xr, yr) be the image
coordinates of the point in the left and right cameras. Then by similar triangles:
xl / f = (x + d/2) / z,   xr / f = (x - d/2) / z,   yl / f = yr / f = y / z
Solving for (x, y, z) gives:
x = d (xl + xr) / (2 (xl - xr)),   y = d (yl + yr) / (2 (xl - xr)),   z = d f / (xl - xr)
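Those relations translate directly into a small triangulation routine (a sketch under the parallel-axis geometry assumed above; the baseline, focal length, and matched points are made-up values):

def triangulate(xl, yl, xr, f, d):
    # Recover (x, y, z) from matched left/right image coordinates.
    disparity = xl - xr             # larger for near points, smaller for far ones
    z = f * d / disparity
    x = d * (xl + xr) / (2 * disparity)
    y = d * yl / disparity          # yl == yr in this geometry
    return x, y, z

# Assumed: 60 mm baseline, 6 mm focal length, image coordinates in mm.
print(triangulate(xl=1.2, yl=0.5, xr=0.9, f=6.0, d=60.0))   # approximately (210, 100, 1200)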
Introduction to Stereo Imaging -Theory
In order to measure the depth of a point it must be visible to
both cameras and we must also be able to identify this point
in both images.
As the camera separation increases so do the differences in
the scene as recorded by each camera.
Thus it becomes increasingly difficult to match
corresponding points in the images.
This problem is known as the stereo correspondence
problem.
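One common and simple way to attack the correspondence problem is block matching along the same scan line; the sketch below assumes rectified images and uses a sum-of-absolute-differences cost, with a synthetic image pair for illustration:

import numpy as np

def match_along_row(left, right, row, col, patch=5, max_disp=16):
    # Find the disparity whose right-image patch best matches the left-image patch.
    h = patch // 2
    ref = left[row - h:row + h + 1, col - h:col + h + 1]
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        c = col - d                       # candidate column in the right image
        if c - h < 0:
            break
        cand = right[row - h:row + h + 1, c - h:c + h + 1]
        cost = np.abs(ref - cand).sum()   # sum of absolute differences
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Synthetic pair: the right image is the left image shifted by 3 pixels.
left = np.random.rand(40, 60)
right = np.roll(left, -3, axis=1)
print(match_along_row(left, right, row=20, col=30))   # expected disparity: 3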
Active Stereo Vision System
The vision system consists of:
• a matched pair of high-sensitivity CCD cameras, and
• a laser scanner,
all mounted on an optical bench to reduce vibration.
Computed Tomography
http://imaginis.com/ct-scan/
Imaginis - Computed Tomography Imaging (CT Scan,
CAT Scan)
Computed Tomography (CT) imaging, also known as
"CAT scanning" (Computed Axial Tomography), was
developed in the early to mid 1970s and is now available
at over 30,000 locations throughout the world.
Computed Tomography
Computed tomography is used in several ways:
• To detect or confirm the presence of a tumor;
• To provide information about the size and location of the tumor and
whether it has spread;
• To guide a biopsy (the removal of cells or tissues for examination
under a microscope);
• To help plan radiation therapy or surgery; and
• To determine whether the cancer is responding to treatment by
getting smaller.
See also:
Computed Tomography (CT): Questions and Answers, Cancer Facts 5.2
http://cis.nci.nih.gov/fact/5_2.htm