DNA Microarrays K-means, a Clustering Technique


DNA Chips and Their Analysis
Comp. Genomics: Lectures 10-11
Based on many sources, primarily Zohar Yakhini
DNA Microarrays: Basics
• What are they?
• Types of arrays (cDNA arrays, oligo arrays).
• What is measured using DNA microarrays.
• How are the measurements done?
DNA Microarrays:
Computational Questions
• Design of arrays.
• Techniques for analyzing experiments.
• Detecting differential expression.
• Similar expression: clustering.
• Other analysis techniques (many).
• Machine learning techniques, and applications for advanced diagnosis.
What is a DNA Microarray (I)
• A surface (nylon, glass, or plastic).
• Containing hundreds to thousands of pixels.
• Each pixel holds many copies of a sequence of single-stranded DNA (ssDNA).
• Each such sequence is called a probe.
What is a DNA Microarray (II)
• An experiment with 500-10k elements.
• A way to concurrently explore the function of multiple genes.
• A snapshot of the expression levels of 500-10k genes under given test conditions.
Some Microarray Terminology
• Probe: ssDNA printed on the solid substrate (nylon or glass). These are short substrings of the genes we are going to be testing.
• Target: cDNA which has been labeled and is to be washed over the probes.
Back to Basics: Watson and Crick
James Watson and Francis Crick discovered the double helix structure of DNA in 1953.
From Zohar Yakhini
Watson-Crick Complementarity
A binds to T
C binds to G

AATGCTTAGTC
TTACGAATCAG
Perfect match

AATGCGTAGTC
TTACGAATCAG
One-base mismatch
From Zohar Yakhini
Array Based Hybridization Assays (DNA Chips)
• Array of probes.
• Thousands to millions of different probe sequences per array.
• The target: an unknown sequence or mixture, in many copies.
From Zohar Yakhini
Array Based Hyb Assays
• The target hybridizes to its Watson-Crick complementary probes only.
• Therefore, the fluorescence pattern is indicative of the target sequence.
From Zohar Yakhini
Central Dogma of Molecular Biology (reminder)
Gene (DNA) --Transcription--> mRNA --Translation--> Protein
Cells express different subsets of their genes in different tissues and under different conditions.
From Zohar Yakhini
Expression Profiling on Microarrays
• Differentially label the query sample and the control (1-3).
• Mix and hybridize to an array.
• Analyze the image to obtain expression-level information.
From Zohar Yakhini
Microarray: 2 Types of Fabrication
1. cDNA arrays: deposition of DNA fragments
– Deposition of PCR-amplified cDNA clones
– Printing of already-synthesized oligonucleotides
2. Oligo arrays: in situ synthesis
– Photolithography
– Ink jet printing
– Electrochemical synthesis
Based on a lecture by Steve Hookway and Sorin Draghici’s book “Data Analysis Tools for DNA Microarrays”
cDNA Microarrays vs. Oligonucleotide Probes and Cost

cDNA arrays:
• Long sequences
• Spot unknown sequences
• More variability
• Arrays cheaper

Oligonucleotide arrays:
• Short sequences
• Spot known sequences
• More reliable data
• Arrays typically more expensive
Based on a lecture by Steve Hookway and Sorin Draghici’s book “Data Analysis Tools for DNA Microarrays”
Photolithography (Affymetrix)
• Similar to the process used to generate VLSI circuits.
• Photolithographic masks are used to add each base.
• If a base is present, there will be a “hole” in the corresponding mask.
• Can create high density arrays, but sequence length is limited.
[Figure: photodeprotection through a mask, adding base C.]
From “Data Analysis Tools for DNA Microarrays” by Sorin Draghici
Photolithography (Affymetrix)
From Zohar Yakhini
Ink Jet Printing
• Four cartridges are loaded with the four nucleotides: A, G, C, T.
• As the printer head moves across the array, the nucleotides are deposited in the pixels where they are needed.
• This way (many copies of) a 20-60 base long oligo is deposited in each pixel.
Based on a lecture by Steve Hookway and Sorin Draghici’s book “Data Analysis Tools for DNA Microarrays”
Ink Jet Printing (Agilent)
The array is built as a stack of images in the colors A, C, G, T, ...
From Zohar Yakhini
Inkjet Printed Microarrays
[Photo: an inkjet head squirting phosphoramidites.]
From Zohar Yakhini
Electrochemical Synthesis
• Electrodes are embedded in the substrate to manage individual reaction sites.
• Electrodes are activated in the necessary positions, in a predetermined sequence that allows the probe sequences to be constructed base by base.
• Solutions containing specific bases are washed over the substrate while the electrodes are activated.
From “Data Analysis Tools for DNA Microarrays” by Sorin Draghici
Expression Profiling on Microarrays
• Differentially label the query sample and the control (1-3).
• Mix and hybridize to an array.
• Analyze the image to obtain expression-level information.
From Zohar Yakhini
Expression Profiling:
a FLASH Demo
URL:
http://www.bio.davidson.edu/courses/genomics/chip/chip.html
Expression Profiling – Probe Design Issues
• Probe specificity and sensitivity.
• Special designs for splice variations or other custom purposes.
• Flat thermodynamics.
• Generic and universal systems.
From Zohar Yakhini
Hybridization Probes
• Sensitivity: strong interaction between the probe and its intended target, under the assay's conditions. How much target is needed for the reaction to be detectable or quantifiable?
• Specificity: no potential cross-hybridization.
From Zohar Yakhini
Specificity
• Symbolic specificity.
• Statistical protection in the unknown part of the genome.
Methods, software and applications in collaboration with Peter Webb, Doron Lipson.
From Zohar Yakhini
Reading Results: Color Coding
Campbell & Heyer, 2003
• Numeric tables are difficult to read.
• Data are presented with a color scale.
• Coding scheme:
– Green = repressed (less mRNA) gene in experiment
– Red = induced (more mRNA) gene in experiment
– Black = no change (1:1 ratio)
• Or:
– Green = control condition (e.g. aerobic)
– Red = experimental condition (e.g. anaerobic)
• We usually use the ratio.
Thermal Ink Jet Arrays, by Agilent Technologies
In-situ synthesized oligonucleotide array, 25-60-mers.
cDNA array, inkjet deposition.
Application of Microarrays
• We know the function of only about 30% of the 30,000 genes in the Human Genome
– Gene exploration
– Functional genomics
• DNA microarrays are just the first among many high-throughput genomic devices (first used approx. 1996)
http://www.gene-chips.com/sample1.html
Based on a lecture by Steve Hookway and Sorin Draghici’s book “Data Analysis Tools for DNA Microarrays”
A Data Mining Problem
• On a given microarray, we test on the order of 10k elements at a time.
• The number of microarrays used in a typical experiment is no more than 100.
• Insufficient sampling.
• Data are obtained faster than they can be processed.
• High noise.
• Algorithmic approaches are needed to work through this large data set and make sense of the data.
Informative Genes in a Two-Class Experiment
• Differentially expressed in the two classes.
• Identifying (statistically significant) informative genes:
– Provides biological insight
– Indicates promising research directions
– Reduces data dimensionality
– Supports diagnostic assays
From Zohar Yakhini
Scoring Genes
[Figure: the expression pattern and pathological diagnosis information (annotation, + or −) for a single gene, over samples a1-a15.]
Permute the annotation by sorting the expression pattern (ascending, say).
Informative genes (the sorted annotation separates cleanly), e.g.:
+ + + + + + + + - - - - - - -
- - - - - - - + + + + + + + +
Non-informative genes (the sorted annotation stays mixed), e.g.:
+ - + - + + + + - - + + - - -
- - - - + - - + + - + + + + +
+ - - - + + - + + - + + - + -
etc.
From Zohar Yakhini
Separation Score
• Compute a Gaussian fit for each class: (μ1, σ1), (μ2, σ2).
• The Separation Score is (μ1 − μ2) / (σ1 + σ2).
Threshold Error Rate (TNoM) Score
Find the threshold that best separates tumors from normals, and count the number of errors committed there.
Ex 1:
- + + - + - - + + - + + - - +
# of errors = min(7,8) = 7. Not informative.
Ex 2: A perfect single-gene classifier gets a score of 0.
+ + + + + + + + - - - - - - -
# of errors = 0. Very informative.
From Zohar Yakhini
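A small Python sketch of the TNoM computation (our own illustration, not code from the lecture): sort the samples by the gene's expression, then for every cut point take the error count of the better of the two possible labelings.

    import numpy as np

    def tnom_score(expr, labels):
        # Sort the sample labels by the gene's expression values.
        y = np.asarray(labels, bool)[np.argsort(expr)]
        n, n_pos = len(y), int(y.sum())
        best = min(n_pos, n - n_pos)          # threshold before all samples
        pos_left = 0
        for i in range(1, n + 1):
            pos_left += int(y[i - 1])
            neg_left = i - pos_left
            neg_right = (n - n_pos) - neg_left
            pos_right = n_pos - pos_left
            # errors for left='-' / right='+', and for the opposite labeling
            best = min(best, pos_left + neg_right, neg_left + pos_right)
        return best

    # Ex 2 from the slide: a perfect classifier scores 0
    assert tnom_score(list(range(15)), [True] * 8 + [False] * 7) == 0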
p-Values
• Relevance scores are more useful when we can compute their significance:
– p-value: the probability of finding a gene with a given score if the labeling is random.
• p-values allow for higher-level statistical assessment of data quality.
• p-values provide a uniform platform for comparing relevance across data sets.
• p-values enable class discovery.
From Zohar Yakhini
Breast Cancer BRCA1/BRCA2 Data
BRCA1 differential expression
[Heat map: ~700 genes (rows) across tissue samples (columns: BRCA1 mutants vs. BRCA1 wild type). Genes over-expressed in BRCA1 mutants appear at the top, genes under-expressed in BRCA1 mutants at the bottom. One sporadic sample, s14321, shows a BRCA1-mutant expression profile.]
Collaboration with NIH; NEJM 2001.
From Zohar Yakhini
Lung Cancer Informative Genes
Data from Naftali Kaminski's lab, at Sheba.
• 24 tumors (various types and origins)
• 10 normals (normal edges and normal lung pools)
[Heat map: ~550 informative genes (rows) across the tissue samples (columns); LUCA, 38 samples, 14.May.2001, Kaminski.]
From Zohar Yakhini
And Now: Global Analysis of Gene Expression Data
Most common tasks:
1. Construct gene networks from experiments.
2. Cluster either genes or experiments.
Pearson Correlation Coefficient, r
Values are in the [-1, 1] interval.
• Gene expression over d experiments is a vector in R^d, e.g. for gene C: (0, 3, 3.58, 4, 3.58, 3).
• Given two vectors X and Y that contain N elements, we calculate r as follows:

r = Σ_i (X_i − mean(X)) (Y_i − mean(Y)) / √( Σ_i (X_i − mean(X))² · Σ_i (Y_i − mean(Y))² )

Cho & Won, 2003
Intuition for Pearson Correlation Coefficient
• r(v1,v2) close to 1: v1, v2 highly correlated.
• r(v1,v2) close to -1: v1, v2 anti-correlated.
• r(v1,v2) close to 0: v1, v2 uncorrelated.
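A minimal Python sketch of r for two expression vectors (our illustration; the gene C values are taken from the slide above):

    import numpy as np

    def pearson_r(x, y):
        # Centered dot product over the product of the centered norms.
        x, y = np.asarray(x, float), np.asarray(y, float)
        xc, yc = x - x.mean(), y - y.mean()
        return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

    gene_c = [0, 3, 3.58, 4, 3.58, 3]      # from the slide
    print(pearson_r(gene_c, gene_c))       # 1.0: perfectly correlated with itself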
Pearson Correlation and p-Values
When the entries of v1, v2 are normally distributed, one can assign (and efficiently compute) p-values for a given result.
These p-values are determined by the Pearson correlation coefficient, r, and the dimension, d, of the vectors.
For the same r, vectors of higher dimension will be assigned a more significant (smaller) p-value.
Spearman Rank Order Coefficient
(a close relative of Pearson, non-parametric)
• Replace each entry x_i by its rank in vector x.
• Then compute the Pearson correlation coefficient of the rank vectors.
• Example: X = Gene C = (0, 3.00, 3.41, 4, 3.58, 3.01)
Y = Gene D = (0, 1.51, 2.00, 2.32, 1.58, 1)
• Ranks(X) = (1, 2, 4, 6, 5, 3)
• Ranks(Y) = (1, 3, 5, 6, 4, 2)
• Ties should be taken care of, but: (1) they are rare; (2) one can randomize (small effect).
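A sketch of the rank-based variant, using the example vectors above (our illustration; np.corrcoef keeps it self-contained, and ties are ignored as the slide suggests):

    import numpy as np

    def spearman_r(x, y):
        # Rank of each entry (1-based); ties are not handled specially.
        rx = np.argsort(np.argsort(x)) + 1
        ry = np.argsort(np.argsort(y)) + 1
        return np.corrcoef(rx, ry)[0, 1]   # Pearson r of the rank vectors

    x = [0, 3.00, 3.41, 4, 3.58, 3.01]     # ranks (1, 2, 4, 6, 5, 3)
    y = [0, 1.51, 2.00, 2.32, 1.58, 1]     # ranks (1, 3, 5, 6, 4, 2)
    print(spearman_r(x, y))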
From Pearson Correlation Coefficients to a Gene Network
• Compute the correlation coefficient for all pairs of genes (what about missing data?).
• Choose a p-value threshold.
• Put an edge between gene i and gene j iff the p-value is below the threshold.
A sketch of this construction follows.
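One way this could look in Python (our illustration; scipy.stats.pearsonr supplies the two-sided p-value, and missing data would need masking before this step):

    import numpy as np
    from itertools import combinations
    from scipy import stats

    def gene_network(expr, alpha=1e-3):
        # expr: genes x experiments matrix; returns the edges (i, j) whose
        # correlation p-value falls below the chosen threshold alpha.
        edges = []
        for i, j in combinations(range(expr.shape[0]), 2):
            r, p = stats.pearsonr(expr[i], expr[j])
            if p < alpha:
                edges.append((i, j))
        return edges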
Things May Get Messy
• What to do with significant yet negative correlation coefficients? Usually we care only about the p-value and put a “normal edge”.
• Cases composed of multiple experiments, where the distribution is far from normal.
Things Do Get Messy
[Bar chart: percentage of significantly correlated gene pairs in each of about twenty public experiments (ArrayExpress accessions such as E-TABM-…, E-MEXP-…, E-NASC-…, E-GEOD-…); the proportions vary widely, from near 0 to near 1.]
What to Do when Things Get Messy?
[The same bar chart of per-experiment proportions of significantly correlated pairs.]
What to do when things Get Messy
1) Create a single vector of all experiments per gene, and compute correlations based on these vectors. This is the common approach.
Disadvantage: the outcome is dominated by the larger experiments.
What to do when things Get Messy
2) For each edge, count the number of experiments where it appears significant; take the edges exceeding some threshold.
Disadvantage: the outcome is somewhat dominated by experiments with many significant correlations.
What to do when things Get Messy
3) For each edge, make a weighted count of the experiments where it appears significant; weights are higher if the experiment has few significant correlations. Take the edges exceeding some threshold.
Disadvantage: no solid mathematical justification.
Summary of the Procedure
• Public microarray data sets (samples × genes).
• Pair-wise gene co-expression matrices (genes × genes), computed via Pearson correlation.
• Gene pair score:

score(g_i, g_j) = Σ_{k=1..n} x^k_{i,j} · ln(1/p_k) / Σ_{k=1..n} ln(1/p_k)

where <g_i, g_j> is a gene pair; n is the number of datasets; x^k_{i,j} is 1 if g_i and g_j are significantly correlated in dataset k, 0 otherwise; and p_k is the proportion of significantly correlated gene pairs in dataset k.
• Network of conserved co-expression links: nodes represent genes, edges represent highly correlated expression. A sketch of the score follows.
• Cluster detection: highly inter-connected clusters.
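A sketch of this score in Python (names are ours; inputs follow the definitions above):

    import numpy as np

    def gene_pair_score(x, p):
        # x[k]: 1 if the pair is significantly correlated in dataset k, else 0
        # p[k]: proportion of significantly correlated pairs in dataset k
        x, p = np.asarray(x, float), np.asarray(p, float)
        w = np.log(1.0 / p)   # datasets with few significant pairs weigh more
        return (x * w).sum() / w.sum()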
The Outcome (Whole Network); Outcome after Clustering
[Figure: the whole network, then the clusters that emerge as the node score cutoff is raised (0.05, 0.1, 0.15, 0.2). Panel A: ER- and mitochondrion-related, chloroplast-related, and ribosome-related clusters. Panel B: ribosome-related, chloroplast- and ribosome-related, chloroplast-related, and chloroplast- and ER-related clusters.]
But what is Clustering?
Grouping and Reduction
• Grouping: partition items into groups. Items in the same group should be similar; items in different groups should be dissimilar.
• Grouping may help discover patterns in the data.
• Reduction: reduce the complexity of the data by removing redundant probes (genes).
Unsupervised Grouping: Clustering
• Pattern discovery via clustering similar entities together.
• Techniques most often used:
– k-Means clustering
– Hierarchical clustering
– Biclustering
– Alternative methods: self-organizing maps (SOMs), plaid models, singular value decomposition (SVD), order-preserving submatrices (OPSM), ...
Clustering Overview
• Different similarity measures in use:
– Pearson correlation coefficient
– Cosine coefficient
– Euclidean distance
– Information gain
– Mutual information
– Signal-to-noise ratio
– Simple matching (for nominal data)
Clustering Overview (cont.)
• Different clustering methods:
– Unsupervised
• k-means clustering
• Hierarchical clustering
• Self-organizing maps
– Supervised
• Support vector machines
• Ensemble classifiers
Clustering Limitations
• Any data can be clustered; therefore we must be careful what conclusions we draw from our results.
• Clustering is often randomized. It can, and will, produce different results for different runs on the same data.
k-means Clustering
• Given: a set of m data points in d-dimensional space, and an integer k.
• We want to find the set of k “centers” in d-dimensional space that minimizes the mean squared (Euclidean) distance from each data point to its nearest center.
• No exact polynomial-time algorithm is known for this problem (no wonder: it is NP-hard!).
“A Local Search Approximation Algorithm for k-Means Clustering” by Kanungo et al.
K-means Heuristic (Lloyd’s Algorithm)
• Has been shown to converge to a locally optimal solution.
• But it can converge to a solution that is arbitrarily bad compared to the optimal solution.
[Figure, K=3: data points, with the optimal centers vs. the centers found by the heuristic.]
• “K-means-type algorithms: A generalized convergence theorem and characterization of local optimality” by Selim and Ismail
• “A Local Search Approximation Algorithm for k-Means Clustering” by Kanungo et al.
Euclidean Distance

d_E(x, y) = √( Σ_{i=1..n} (x_i − y_i)² )

Now to find the distance between two points, say the origin O and the point A = (3, 4):

d_E(O, A) = √(3² + 4²) = 5

Simple and fast! Remember this when we consider the complexity!
Finding a Centroid
We use the following equation to find the n-dimensional centroid point (center of mass) amid k (n-dimensional) points x¹, ..., xᵏ:

CP(x¹, x², ..., xᵏ) = ( Σ_{i=1..k} x^i_1 / k , Σ_{i=1..k} x^i_2 / k , ..., Σ_{i=1..k} x^i_n / k )

Example: the centroid of three 2D points, say (2,4), (5,2), (8,9):

CP = ( (2+5+8)/3 , (4+2+9)/3 ) = (5, 5)
K-means Iterative Heuristic
1. Choose k initial center points “randomly”.
2. Cluster the data using Euclidean distance (or another distance metric).
3. Calculate new center points for each cluster, using only the points within the cluster.
4. Re-cluster all the data using the new center points. (This step could cause some data points to be placed in a different cluster.)
5. Repeat steps 3 & 4 until no data points are moved from one cluster to another (stabilization), or until some other convergence criterion is met.
A Python sketch of this loop follows the attribution below.
From “Data Analysis Tools for DNA Microarrays” by Sorin Draghici
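A compact sketch of the heuristic above (our illustration, assuming NumPy; the initial centers are drawn from the data points themselves, one of the fixes mentioned later in these slides):

    import numpy as np

    def kmeans(points, k, max_iter=100, seed=None):
        rng = np.random.default_rng(seed)
        points = np.asarray(points, float)
        # Step 1: choose k initial centers randomly (here: from the data points).
        centers = points[rng.choice(len(points), size=k, replace=False)]
        labels = None
        for _ in range(max_iter):
            # Steps 2/4: assign each point to its nearest center (Euclidean).
            d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
            new_labels = d.argmin(axis=1)
            # Step 5: stop when no point changes cluster.
            if labels is not None and np.array_equal(new_labels, labels):
                break
            labels = new_labels
            # Step 3: move each center to the centroid of its cluster.
            for c in range(k):
                members = points[labels == c]
                if len(members):
                    centers[c] = members.mean(axis=0)
        return centers, labels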
An example with 2 clusters
1. We pick 2 centers at random.
2. We cluster our data around these center points.
Figure Reproduced From “Data Analysis Tools for DNA
Microarrays” by Sorin Draghici
K-means example with k=2
3. We recalculate
centers based on
our current clusters
Figure Reproduced From “Data Analysis Tools for DNA
Microarrays” by Sorin Draghici
K-means example with k=2
4. We re-cluster our
data around our
new center points
Figure Reproduced From “Data Analysis Tools for DNA
Microarrays” by Sorin Draghici
k-means example with k=2
5. We repeat the last
two steps until no
more data points
are moved into a
different cluster
Figure Reproduced From “Data Analysis Tools for DNA
Microarrays” by Sorin Draghici
Choosing k
• Run algorithm on data with several
different values of k
• Use prior knowledge about the
characteristics of your test (e.g.
cancerous vs non-cancerous tissues,
in case it is the experiments that are
being clustered)
Cluster Quality
• Since any data can be clustered, how do we
know our clusters are meaningful?
– The size (diameter) of the cluster
vs. the inter-cluster distance
– Distance between the members of a cluster and the
cluster’s center
– Diameter of the smallest sphere containing the cluster
From “Data Analysis Tools for DNA Microarrays” by Sorin Draghici
Cluster Quality Continued
[Figure: two pairs of clusters, each cluster of diameter 5; in one pair the inter-cluster distance is 20, in the other it is 5.]
Quality of a cluster is assessed by the ratio of the distance to the nearest cluster and the cluster diameter.
Figure reproduced from “Data Analysis Tools for DNA Microarrays” by Sorin Draghici
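One way this ratio could be computed, as a rough Python sketch (the exact definitions vary; here diameter is the maximum pairwise distance and the inter-cluster distance is measured between centroids; inputs are NumPy arrays of points, one row per point):

    import numpy as np

    def quality_ratio(cluster_a, cluster_b):
        # Diameter: maximum pairwise distance inside cluster_a.
        diameter = max(np.linalg.norm(p - q) for p in cluster_a for q in cluster_a)
        # Inter-cluster distance: distance between the two centroids.
        distance = np.linalg.norm(cluster_a.mean(axis=0) - cluster_b.mean(axis=0))
        return distance / diameter   # larger = tighter and better separated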
Cluster Quality Continued
Quality can be assessed simply by looking at the diameter of a cluster (alone????).
Warning: a cluster can be formed by the heuristic even when there is no similarity between the clustered patterns. This occurs because the algorithm forces k clusters to be created.
From “Data Analysis Tools for DNA Microarrays” by Sorin Draghici
Properties of k-means Clustering
• The random selection of initial center points implies the following properties:
– Non-determinism (randomized)
– May produce incoherent clusters
• One solution is to choose the centers randomly from the existing points.
From “Data Analysis Tools for DNA Microarrays” by Sorin Draghici
Heuristic’s Complexity
• Linear in the number of data points, N.
• Can be shown to have run time cN, where c depends not on N but on the number of clusters, k (and on the dimension d and the number of iterations: each iteration costs O(Nkd)).
⇒ efficient
From “Data Analysis Tools for DNA Microarrays” by Sorin Draghici
Hierarchical Clustering
- a different clustering paradigm
Figure Reproduced From “Data Analysis Tools for DNA
Microarrays” by Sorin Draghici
Hierarchical Clustering (cont.)
[Table: pairwise correlation matrix for genes C through N (Campbell & Heyer, 2003); entries range from −1 to 1, e.g. corr(C,D) = 0.94, corr(C,E) = 0.96, corr(C,G) = 0.95, corr(C,H) = −0.95.]
Hierarchical Clustering (cont.)
The two most similar genes, C and E (correlation 0.96), are merged into cluster 1. The “similarity” of cluster 1 to each remaining gene is the average similarity of its members to that gene:
• Gene D: (0.94 + 0.84)/2 = 0.89
• Gene F: (−0.40 + (−0.57))/2 = −0.485
• Gene G: (0.95 + 0.89)/2 = 0.92
[The matrix is collapsed accordingly and the dendrogram gains its first internal node.]
Hierarchical Clustering (cont.)
In the collapsed matrix (cluster 1 vs. gene D: 0.89, gene F: −0.485, gene G: 0.92; D vs. F: −0.10; D vs. G: 0.94; F vs. G: −0.35), the most similar pair is now genes D and G (0.94), which are merged into cluster 2.
Hierarchical Clustering (cont.)
Clusters 1 and 2 are now the most similar pair (average similarity (0.89 + 0.92)/2 = 0.905) and are merged into cluster 3. Gene F remains, with similarity −0.485 to cluster 1 and −0.225 to cluster 2.
Hierarchical Clustering (cont.)
Finally, gene F is merged with cluster 3 (average similarity (−0.485 + (−0.225))/2 = −0.355), completing the tree as cluster 4.
Hierarchical Clustering (cont.)
[Final dendrogram over genes C, E, G, D and F.]
Does the algorithm look familiar?
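The walkthrough above is average-linkage agglomerative clustering; a rough Python sketch of that greedy loop (our illustration, not code from the lecture):

    import numpy as np

    def average_linkage(sim, names):
        # sim: symmetric similarity matrix; names: labels of the items.
        clusters = [[i] for i in range(len(names))]
        merges = []
        while len(clusters) > 1:
            # Find the most similar cluster pair: average over member pairs.
            best, bi, bj = -np.inf, 0, 1
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    s = np.mean([sim[a][b]
                                 for a in clusters[i] for b in clusters[j]])
                    if s > best:
                        best, bi, bj = s, i, j
            merges.append(([names[a] for a in clusters[bi]],
                           [names[a] for a in clusters[bj]], best))
            clusters[bi] += clusters.pop(bj)   # bj > bi, so pop is safe
        return merges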
Clustering of entire yeast
genome
Campbell & Heyer, 2003
Hierarchical Clustering:
Yeast Gene Expression Data
Eisen et al., 1998
A SOFM Example With Yeast
“Interpreting patterns of gene expression with self-organizing maps: Methods and application to hematopoietic differentiation” by Tamayo et al.
SOM Description
• Each unit of the SOM has a weighted connection to all inputs.
• As the algorithm progresses, neighboring units are grouped by similarity.
[Diagram: an input layer fully connected to a grid of output-layer units.]
From “Data Analysis Tools for DNA Microarrays” by Sorin Draghici
An Example Using Color
Each color in
the map is
associated with
a weight
From http://davis.wpi.edu/~matt/courses/soms/
Cluster Analysis of Microarray Expression Data Matrices
Application of cluster analysis techniques in the elucidation of gene expression data.
Important paradigm: guilt by association.
Cluster Analysis
• Cluster analysis is an unsupervised procedure which involves grouping of objects based on their similarity in feature space.
• In the gene expression context, genes are grouped based on the similarity of their condition feature profiles.
• Cluster analysis was first applied to gene expression data from brewer's yeast (Saccharomyces cerevisiae) by Eisen et al. (1998).
[Figure: a genes × conditions expression matrix is clustered; the resulting clusters A, B and C, shown in a reduced X-Y-Z space, represent groups of related genes.]
Two general conclusions can be drawn from these clusters:
• Genes clustered together may be related within a biological module/system.
• If there are genes of known function within a cluster, these may help to classify this biological module/system.
From Data to Biological Hypothesis
[Figure: a gene expression microarray over conditions A-Z yields a cluster set; cluster C, with four genes, may represent a biological System C, and relating these genes aids in the elucidation of System C. In the example, System C responds to an external stimulus (condition X): a toxin at the cell membrane activates a regulator protein, which drives expression of genes a-d from the DNA, producing a toxin pump.]
Some Drawbacks of Clustering Biological Data
1. Clustering works well over small numbers of conditions, but a typical microarray study may have hundreds of experimental conditions. A global clustering may not offer sufficient resolution with so many features.
2. As with other clustering applications, it may be difficult to cluster noisy expression data.
3. Biological systems tend to be inter-related and may share numerous factors (genes). Clustering enforces partitions, which may not accurately represent these intimacies.
4. Clustering genes over all conditions only finds the strongest signals in the dataset as a whole. More ‘local’ signals within the data matrix may be missed.
[Figure: overlapping clusters A, B and C illustrate inter-relation (3), which may represent more complex systems; a sub-block of the matrix illustrates local signals (4).]
•
One technique that allows detection of all signals in the data is
biclustering.
•
Instead of clustering genes over all conditions biclustering clusters
genes with respect to subsets of conditions.
This enables better representation of:
-interrelated clusters (genes may belong more than one bicluster).
-local signals (genes correlated over only a few conditions).
-noisy data (allows erratic genes to belong to no cluster).
Biclustering
[Figure: a 9-gene × 8-condition (A-H) expression matrix. Clustering (of genes) groups genes 1, 4 and 9 over all conditions, and misses the local signal {(B,E,F), (1,4,6,7,9)} present over a subset of the conditions. Biclustering (of genes AND conditions) discovers such local coherences over a subset of conditions.]
• The technique was first described by J.A. Hartigan in 1972 and termed ‘direct clustering’.
• It was first introduced to microarray expression data by Cheng and Church (2000).
Approaches to Biclustering Microarray Gene Expression
• First applied to gene expression data by Cheng and Church (2000).
– Used a sub-matrix scoring technique to locate biclusters.
• Tanay et al. (2002)
– Modelled the expression data on bipartite graphs and used graph techniques to find ‘complete graphs’, or biclusters.
• Lazzeroni and Owen
– Used matrix reordering to represent different ‘layers’ of signals (biclusters): ‘plaid models’ representing multiple signals within the data.
• Ben-Dor et al. (2002)
– “Biclusters” depending on order relations (OPSM).
Bipartite Graph Modelling
First proposed in: “Discovering statistically significant biclusters in gene expression data”, Tanay et al., Bioinformatics 2002.
[Figure: a genes × conditions data matrix M (genes 1-7, conditions A-F) is modelled as a bipartite graph G; a sub-matrix such as (A,D) × (1,4,6) corresponds to a complete bipartite sub-graph H, i.e. a bicluster.]
Within the graph modelling paradigm, biclusters are equivalent to complete bipartite sub-graphs.
Tanay and colleagues used probabilistic models to determine the least probable sub-graphs (those showing most order, and consequently most surprising) to identify biclusters.
•
The Cheng and Church Approach
The core element in this approach is the development of a scoring to
prioritise sub-matrices.
This scoring is based on the concept of the residue of an entry in a
matrix.
In the Matrix (I,J) the residue score of element
a ij is given by:
R(aij )  aij  aiJ  aIj  aIJ
In words, the residue of an entry
is the value of the entry minus the
row average, minus the column
average, plus the average value
in the matrix.
J
I
j
aij
This score gives an idea of how
the value
fits into the data in
the surrounding matrix.
i
a
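A sketch of the residue, and of the mean squared residue H(I,J) built from it (the H score mentioned in the conclusions below), in Python, as our own illustration:

    import numpy as np

    def residues(A):
        # R(a_ij) = a_ij - row mean - column mean + overall mean.
        return (A - A.mean(axis=1, keepdims=True)
                  - A.mean(axis=0, keepdims=True) + A.mean())

    def h_score(A):
        # Mean squared residue of the sub-matrix: low H = coherent bicluster.
        return float((residues(A) ** 2).mean())

    # A perfectly additive sub-matrix has H = 0:
    A = np.array([[1.0, 2.0, 4.0],
                  [3.0, 4.0, 6.0]])
    print(h_score(A))   # 0.0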
Conclusions:
• High-throughput functional genomics (microarrays) requires data mining applications.
• Biclustering resolves expression data more effectively than single-dimensional cluster analysis.
Future Research/Questions:
• Implement a simple H score program to facilitate study of the H score concept.
• Are there alternative scorings which would better apply to gene expression data?
• Do un-biclustered genes have any significance? Horizontally transferred genes?
• Implement a full-scale biclustering program and look at better adaptation to expression data sets and the biological context.
References
• Basic microarray analysis: grouping and feature reduction, by Soumya Raychaudhuri, Patrick D. Sutphin, Jeffery T. Chang and Russ B. Altman; Trends in Biotechnology, Vol. 19, No. 5, May 2001.
• Self Organizing Maps, by Tom Germano; http://davis.wpi.edu/~matt/courses/soms
• Data Analysis Tools for DNA Microarrays, by Sorin Draghici; Chapman & Hall/CRC, 2003.
• Self-Organizing-Feature-Maps versus Statistical Clustering Methods: A Benchmark, by A. Ultsch and C. Vetter; FG Neuroinformatik & Künstliche Intelligenz, Research Report 0994.
• Interpreting patterns of gene expression with self-organizing maps: Methods and application to hematopoietic differentiation, by Tamayo et al.
• A Local Search Approximation Algorithm for k-Means Clustering, by Kanungo et al.
• K-means-type algorithms: A generalized convergence theorem and characterization of local optimality, by Selim and Ismail.