Digitální zpracování obrazu (PV131)
Image Analysis Phases
• Image pre-processing
– Noise suppression, linear and non-linear filters, deconvolution, etc.
• Image segmentation
– Detection of objects using thresholding, edge detection, region
growing, template matching, mathematical morphology, etc.
• Object description
– Determination of object attributes such as area, volume, perimeter,
surface, boundary, roundness, etc.
• Object classification
– Dividing the detected objects into several classes based on the object
attributes.
• Image understanding
– Making sense of the detected and classified objects – complex
understanding of the image data.
Object Description
• Why object description?
– After segmentation, objects need to be described in order to perform
consequent classification phase. Usually those parameters which are
needed for classification are computed.
• The basic parameters are:
• coordinates
• size (area or volume)
• perimeter or surface area
• mean or peak intensity
• boundary
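As an illustration of how these basic parameters might be computed from a segmentation result, here is a minimal sketch assuming 2D images, a binary mask, and the scikit-image library (the function name describe_objects and the dictionary keys are illustrative, not from the course materials).

```python
import numpy as np
from skimage.measure import label, regionprops

def describe_objects(binary_mask, intensity_image):
    """Compute basic descriptors (coordinates, size, perimeter, mean/peak
    intensity) for every object in a binary segmentation mask."""
    labelled = label(binary_mask)                        # connected components
    descriptions = []
    for region in regionprops(labelled, intensity_image=intensity_image):
        descriptions.append({
            "coordinates": region.centroid,              # (row, col) of the centroid
            "area": region.area,                         # size in pixels
            "perimeter": region.perimeter,               # boundary length estimate
            "mean_intensity": region.mean_intensity,     # average intensity
            "peak_intensity": region.max_intensity,      # brightest pixel
        })
    return descriptions
```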
Object Description
• Boundary
– The boundary is usually represented as an encoded chain of points:
only the absolute position of the first point is stored, followed by
the direction from each point to the next. This is the so-called
Freeman chain code (Freeman 1961); a small encoding sketch follows
this slide.
– The boundary has its own properties which can also be calculated.
For example, curvature can be computed, defined as the ratio of the
number of boundary pixels where the boundary changes its direction
significantly to the total number of boundary pixels.
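A minimal sketch of Freeman chain-code encoding for an 8-connected boundary, assuming the boundary is already available as an ordered list of (row, column) points; the function name and the toy example are illustrative.

```python
# 8-connectivity direction codes (Freeman 1961): 0 = east, then counter-clockwise
# (north corresponds to a decreasing row index in image coordinates).
DIRECTIONS = {
    (0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
    (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7,
}

def freeman_chain_code(boundary_points):
    """Encode an ordered, 8-connected boundary: store the first point's
    absolute position and only the direction to each following point."""
    start = boundary_points[0]
    codes = []
    for (r0, c0), (r1, c1) in zip(boundary_points, boundary_points[1:]):
        codes.append(DIRECTIONS[(r1 - r0, c1 - c0)])
    return start, codes

# Example: a tiny closed boundary around a 2x2 block of pixels.
points = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(freeman_chain_code(points))   # ((0, 0), [0, 6, 4, 2])
```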
Object Description
Many other special object parameters can be computed, such as:
• Center of mass
(coordinate average weighted by intensity)
• Minimal bounding rectangle in 2D (or bounding box in 3D)
(in 2D: a rectangle of minimal area that contains the given object)
(in 3D: a parallelepiped of minimal volume that contains the given object)
• Elongatedness
(A/(2d)² where A is the object area and d is the number of erosion steps
that must be applied before the object completely disappears)
• Direction of an elongated object
(direction of the longer side of a minimum bounding rectangle or box)
• Circularity (=roundness)
(4πA/P² where A is the area and P the perimeter of the object; a computation sketch follows this list)
• Convex hull
• Skeleton
• etc.
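A rough sketch of two of the descriptors above (circularity and elongatedness) for a single 2D binary object, following the formulas given; the perimeter is approximated by counting boundary pixels, the erosion uses SciPy, and the function names are illustrative.

```python
import numpy as np
from scipy import ndimage

def circularity(mask):
    """Circularity (roundness) 4*pi*A / P^2 for one binary object.
    P is approximated here by counting object pixels that touch the
    background, which is only a rough perimeter estimate."""
    mask = mask.astype(bool)
    area = mask.sum()
    boundary = mask & ~ndimage.binary_erosion(mask)
    return 4 * np.pi * area / boundary.sum() ** 2

def elongatedness(mask):
    """Elongatedness A / (2d)^2, where d is the number of erosion steps
    after which the object disappears completely."""
    mask = mask.astype(bool)
    area = mask.sum()
    d = 0
    current = mask
    while current.any():
        current = ndimage.binary_erosion(current)
        d += 1
    return area / (2 * d) ** 2
```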
Object Classification
• What is object classification?
– The classification step tries to divide the objects detected during the
segmentation step into several classes.
– The classification is impossible without a priori knowledge:
the properties of individual classes must be known beforehand.
– The number of classes is also usually known beforehand; it is derived from the problem specification.
– The objects are usually classified according to the object descriptions
which are compared to the descriptions of individual classes.
– An example of a classification task is dividing cells into G0, S, and
G2/M classes (cell cycle stages) according to their total DNA intensity
parameter (a small threshold-based sketch follows this slide). In
practice, however, more parameters are usually taken into account.
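A minimal sketch of the cell-cycle example, assuming the total DNA (counterstain) intensity has already been measured for each nucleus; the threshold values are purely illustrative placeholders, not values from the course.

```python
def classify_cell_cycle(total_dna_intensity, g0_upper=1.2e6, s_upper=2.0e6):
    """Assign a cell-cycle class from total DNA (counterstain) intensity.
    The two thresholds are hypothetical and would normally be calibrated
    on the measured intensity histogram."""
    if total_dna_intensity <= g0_upper:
        return "G0"
    if total_dna_intensity <= s_upper:
        return "S"
    return "G2/M"

# Example usage with arbitrary intensity values:
for value in (0.9e6, 1.6e6, 2.4e6):
    print(value, classify_cell_cycle(value))
```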
Object Classification
• Two main approaches to the classification step:
1) A formal description is constructed.
• If a formal description can be written, the classifier can quite easily be
realized by means of an appropriate programming language. The formal
description of more complicated classes is often written precisely by
means of formal grammars (formal languages), predicate logic,
production rules or other mathematical tools.
2) A classifier is trained on a set of examples.
• The computer learns step by step which input corresponds to which class.
The most frequent approach to classification based on learning from a set
of examples is the neural network approach (a simple learning-based
sketch follows this slide).
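The slides name neural networks as the most frequent learning-based approach; as a simpler, self-contained illustration of training a classifier on labelled examples, the sketch below uses nearest-neighbour classification on object descriptors (the class name and the toy data are illustrative).

```python
import numpy as np

class NearestNeighbourClassifier:
    """Learn from labelled example descriptors and classify new objects
    by the label of the closest training example (1-NN)."""

    def fit(self, descriptors, labels):
        self.descriptors = np.asarray(descriptors, dtype=float)
        self.labels = list(labels)
        return self

    def predict(self, descriptor):
        distances = np.linalg.norm(self.descriptors - np.asarray(descriptor), axis=1)
        return self.labels[int(np.argmin(distances))]

# Toy training set: (area, circularity) descriptors with known classes.
training = [(120, 0.92), (130, 0.88), (400, 0.35), (390, 0.40)]
classes = ["round cell", "round cell", "elongated cell", "elongated cell"]

clf = NearestNeighbourClassifier().fit(training, classes)
print(clf.predict((125, 0.90)))   # -> "round cell"
```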
Image Understanding
• What is image understanding?
– Image understanding is the most complicated task and often requires
interaction with the other phases of image analysis. Its aim is to make
sense of the recognized and classified objects.
– Sometimes, object classification is sufficient and no other image
understanding is required. However, if we want to analyze objects
in context with each other, the understanding phase is required.
– The approaches used for this task are specific to each problem.
– In cytometry (cell measurements), the task is usually only to measure
and classify the individual cells and the cells are not treated in context
with each other. Only during the final statistical evaluation of cell
attributes (e.g. in a spreadsheet program) are all cells taken into
account (e.g. it is found that 20% of cells are normal and 80% are
aberrant).
Image Understanding
• Four main approaches to the understanding step:
1) Bottom-up control (control by the image data).
• Processing proceeds from the raster image to the segmented image, to
region (object) descriptions, and then to their classification and the
recognition of the scene.
2) Top-down control (model-based control).
• A set of assumptions and expected properties is constructed from a priori
knowledge. The satisfaction of these properties is tested in image
representations at different processing levels in a top-down direction,
down to the original data. The image understanding is internal model
verification, and the model is either accepted or rejected.
3) Combined control strategy.
• Bottom-up and top-down control mechanisms are combined in order
to obtain a more flexible and powerful vision control strategy.
4) Non-hierarchical control.
• The next action is chosen based on the current state and the information
acquired so far about the problem being solved.
Segmentation: Biological Applications
Human genome visualization
Principle: Selected genes and chromosomes within cell nuclei are visualized
using short DNA probes (100–300 base pairs) which are complementary to
the target gene or chromosome. Several probes are used for one gene; many
probes are used for one chromosome. The probes are stained with a certain
fluorescent dye (of a certain color). Probes for one gene or one chromosome
are stained with the same color. Different genes and chromosomes are
stained with different colors.
Input: Images of cells at different stages of the cell cycle (i.e. with different
amounts of DNA). The nuclear DNA is stained with a certain color called the
counterstain; in this way the cell nuclei are visualized. Cells at different stages
of the cell cycle have different intensities (proportional to their DNA
content). Using various colors (different from the counterstain), genes
or chromosomes within cell nuclei are visualized.
Tasks:
1) Find cell nuclei within the counterstain image (see the segmentation sketch after this list).
2) Find genes (chromosomes) within individual color channels.
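A minimal sketch of Task 1 under the assumption that scikit-image and SciPy are available; Otsu thresholding and the min_size parameter are illustrative choices, not necessarily the method used in the course.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def segment_nuclei(counterstain, min_size=200):
    """Find cell nuclei in the counterstain channel: global Otsu
    thresholding, hole filling, connected-component labelling, and
    removal of small spurious objects (min_size is in pixels)."""
    mask = counterstain > threshold_otsu(counterstain)   # global threshold
    mask = ndimage.binary_fill_holes(mask)                # close holes inside nuclei
    labels, n = ndimage.label(mask)                       # one label per nucleus
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    for i, size in enumerate(sizes, start=1):
        if size < min_size:
            labels[labels == i] = 0                       # drop tiny objects
    return labels
```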
Segmentation: Biological Applications
Segmentation of cells
[Figure: panels (a)–(f)]
Segmentation: Biological Applications
Typical example of gene behaviour in a 3D image
[Figure: panels R 1–R 11 and G 1–G 11 showing the two colour channels across the image slices, plus R max and G max]