
Ex
Information Extraction System
Martin Labsky
[email protected]
KEG seminar, March 2006
Agenda
• Purpose
• Use cases
• Sources of knowledge
• Identifying attribute candidates
• Parsing instance candidates
• Implementation status
Purpose
• Extract objects from documents
– object = instance of a class from an ontology
– document = text, possibly with formatting, possibly accompanied by other documents from the same source
• Usability
– make simple things simple
– make complex things possible
Use Cases
• Extraction of objects of known, well-defined classes
• From document collections of any size
– Structured, semi-structured, free-text
– Extraction should improve if:
• documents contain some formatting (e.g. HTML)
• this formatting is similar within or across document(s)
• Examples
– Product catalogues (e.g. detailed product descriptions)
– Weather forecast sites (e.g. forecasts for the next day)
– Restaurant descriptions (cuisine, opening hours etc.)
– Emails on a certain topic
– Contact information
Use 3 sources of knowledge
• Ontology
– the only mandatory source
– class definitions + IE “hooks” (e.g. regexps)
• Sample instances
– possibly coupled with referring documents
– get to know typical content and context of
extractable items
• Common formatting structure
– of instances presented in a single document, or
– among documents from the same source
Ontology sample
• see monitors.xml
Sample Instances
• see monitors.tsv and *.html
Common Formatting
• If a document or a group of documents has a common or similar regular structure, this structure can be identified by a wrapper and used to improve extraction (esp. recall)
Document “understanding”
• Known pattern spotting [4]
• ID of possible wrappers [2]
• ID of attribute candidates [2]
• Parsing attribute candidates [4]
Known pattern spotting (1)
• Sources of known patterns
– attribute content patterns
• specified in EOL
• induced automatically by generalizing attribute
contents in sample instances
– attribute context patterns
• specified in EOL
• induced automatically by generalizing attribute
context observed in referring documents
Known pattern spotting (2)
• Known phrases and patterns represented using a single datastructure
• Example: TokenInfo entries for the tokens of the phrase "LCD monitor VIEWSONIC VP201s"

    token     | token ID | lwrcase ID | lemma ID | token type | capitalization
    LCD       | 230      | 211        | 211      | AL         | UC
    monitor   | 215      | 215        | 215      | AL         | LC
    VIEWSONIC | 567      | 456        | 456      | AL         | UC
    VP201s    | 719      | 718        | 718      | AN         | MX

  (AL = alphabetic, AN = alphanumeric; UC/LC/MX = upper/lower/mixed case)
Known pattern spotting (3)
• Known phrases and patterns represented using a single datastructure
• Example: PhraseInfo entry for the phrase "LCD monitor VIEWSONIC VP201s" (the role counts are kept ∀ attribute)

    phrase ID                     | 989
    lemma phrase ID               | 567
    cnt as monitor_name content   | 3
    cnt as monitor_name L-context | 0
    cnt as monitor_name R-context | 0
    cnt as garbage                | 0
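The two structures translate naturally into code. A minimal sketch (Python; field names follow the tables above, while the per-attribute count map in PhraseInfo is an assumed layout for the "cnt as ..." counters):

    from dataclasses import dataclass, field

    @dataclass
    class TokenInfo:
        """One Vocabulary entry, as in the TokenInfo table above."""
        token_id: int        # e.g. 719 for "VP201s"
        lwrcase_id: int      # id of the lowercased form, e.g. 718
        lemma_id: int        # id of the lemma, e.g. 718
        token_type: str      # AL = alphabetic, AN = alphanumeric
        capitalization: str  # UC / LC / MX

    @dataclass
    class PhraseInfo:
        """One PhraseBook entry; role counts are kept per attribute."""
        phrase_id: int
        lemma_phrase_id: int
        # assumed layout: counts keyed by (attribute, role), with
        # role in {"content", "L-context", "R-context"}
        counts: dict = field(default_factory=dict)
        cnt_as_garbage: int = 0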
Known pattern spotting (4)
• Pattern generalizing the content of attribute monitor_name:
  lcd monitor viewsonic <1-2 tokens, AN & MX>

    pattern token | token ID | lwrcase ID | lemma ID | token type | capitalization
    lcd           | -1       | -1         | 211      | -1         | -1
    monitor       | -1       | -1         | 215      | -1         | -1
    viewsonic     | -1       | -1         | 456      | -1         | -1
    <1-2 tokens>  | -1       | -1         | -1       | AN         | MX

  (-1 = unconstrained)
Known pattern spotting (5)
• Pattern generalizing the content of attribute monitor_name
• PhraseInfo entry for the pattern above (counts kept ∀ attribute)

    pattern ID                    | 345
    cnt as monitor_name content   | 27
    cnt as monitor_name L-context | 0
    cnt as monitor_name R-context | 0
    cnt as garbage                | 0
Known pattern spotting (6)
• Data structures
– all known tokens are stored in the Vocabulary (a character trie) along with their features
– all known phrases and patterns are stored in the PhraseBook (a token trie), also with features
• Precision and recall of a known pattern
– using the stored count features, we get the precision & recall of each pattern with respect to each attribute's content, L-context and R-context:
– precision = c(pattern & attr_content) / c(pattern)
– recall = c(pattern & attr_content) / c(attr_content)
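Both ratios follow directly from the stored counts; a minimal sketch (function and argument names are illustrative):

    def pattern_precision(c_pattern_and_content: int, c_pattern: int) -> float:
        # precision = c(pattern & attr_content) / c(pattern)
        return c_pattern_and_content / c_pattern if c_pattern else 0.0

    def pattern_recall(c_pattern_and_content: int, c_attr_content: int) -> float:
        # recall = c(pattern & attr_content) / c(attr_content)
        return c_pattern_and_content / c_attr_content if c_attr_content else 0.0

    # the monitor_name pattern above: counted 27x as content, 0x in other roles
    print(pattern_precision(27, 27 + 0 + 0 + 0))  # -> 1.0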
Document “understanding”
• Known phrase/pattern spotting [4]
• ID of possible wrappers [2]
• ID of attribute candidates [2]
• Parsing attribute candidates [4]
ID of possible wrappers (1)
• Given a collection of documents from the same source, ∀ attribute:
– identify all high-precision phrases (hpp's)
– apply a wrapper induction algorithm, specifying the hpp's as labeled samples
– get the n-best wrapper hypotheses
ID of possible wrappers (2)
• Start with a simple wrapper induction algorithm. ∀ attribute:
– list the L-contexts, R-contexts and XPaths (LRPs) leading to labeled attribute samples
– find clusters of samples with similar LRPs; ∀ cluster with |cluster| > threshold:
  • compute the most specific generalization of the LRP that covers the whole cluster
  • this generalized LRP will hopefully also cover unlabeled attributes
– the (single) wrapper on output is the set of generalized LRPs (see the sketch below)
• Able to plug in different wrapper induction algorithms
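A sketch of the clustering and generalization steps (Python). Clustering by identical contexts stands in for the similarity-based clustering, and wildcarding differing path steps is one simple choice of "most specific generalization", not necessarily the one used:

    from collections import defaultdict

    def generalize_paths(paths):
        """Most specific generalization of a cluster of XPaths: keep steps
        shared by every path, wildcard the steps that differ."""
        depth = min(len(p) for p in paths)
        steps = []
        for i in range(depth):
            names = {p[i] for p in paths}
            steps.append(names.pop() if len(names) == 1 else "*")
        return "/".join(steps)

    def induce_wrapper(samples, threshold=3):
        """samples: (l_context, r_context, xpath) triples for labeled hpp's.
        Returns the set of generalized LRPs for large-enough clusters."""
        clusters = defaultdict(list)
        for l_ctx, r_ctx, xpath in samples:
            clusters[(l_ctx, r_ctx)].append(xpath.split("/"))
        return [(contexts, generalize_paths(paths))
                for contexts, paths in clusters.items() if len(paths) > threshold]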
Document “understanding”
• Known phrase/pattern spotting [4]
• ID of possible wrappers [2]
• ID of attribute candidates [2]
• Parsing attribute candidates [4]
Attribute candidate (AC) generation
• ∀ known phrase P in the document collection:
– if P is known as the content of some attribute A:
  • create a new AC from this P
– if P is known as a high-precision L-(R-)context of some attribute A:
  • create new ACs from the phrases P' to the right (left) of P
  • in each such AC, set the feature has_context_of_attribute_A = 1
• ∀ wrapper W_A for attribute A:
– ∀ phrase P covered by W_A:
  • if P is not already an AC, create a new AC
  • in the AC, set the feature in_wrapper_of_attribute_A = 1
(a condensed sketch of these rules follows)
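A condensed sketch of the generation rules above (Python); phrase spotting, the neighbor_span helper and the wrapper coverage are assumed inputs:

    from dataclasses import dataclass, field

    @dataclass
    class AC:
        span: tuple                              # (start_token, end_token)
        attributes: set = field(default_factory=set)
        features: dict = field(default_factory=dict)

    def generate_acs(known_phrases, wrappers, neighbor_span):
        """known_phrases: objects with .span and .roles, where .roles maps
        'content'/'L-context'/'R-context' to attribute names; wrappers maps
        an attribute to the spans its wrapper covers (illustrative layout)."""
        acs = {}
        def ac_at(span):
            return acs.setdefault(span, AC(span))
        for p in known_phrases:
            for a in p.roles.get("content", ()):       # P is content of A
                ac_at(p.span).attributes.add(a)
            for a in p.roles.get("L-context", ()):     # hp left context of A
                ac_at(neighbor_span(p, "right")).features["has_context_of_" + a] = 1
            for a in p.roles.get("R-context", ()):     # hp right context of A
                ac_at(neighbor_span(p, "left")).features["has_context_of_" + a] = 1
        for a, spans in wrappers.items():              # wrapper coverage
            for span in spans:
                ac_at(span).features["in_wrapper_of_" + a] = 1
        return list(acs.values())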
Attribute candidates
• Properties
– many overlapping attribute candidates
– maximum recall, precision is low
[Figure: many overlapping candidates for attributes Att_X, Att_Y and Att_Z spanning tokens a–l]
Document “understanding”
• Known phrase/pattern spotting [4]
• ID of possible wrappers [2]
• ID of attribute candidates [2]
• Parsing attribute candidates [4]
Parsing of attribute candidates
• The table below can be converted to a lattice
• A parse is a single path through the lattice
• Many paths are impossible due to ontology constraints
• Many paths still remain possible; we must determine the most probable one
[Figure: lattice of candidates Att_X, Att_Y, Att_Z and Garbage over tokens a–l]
Sample parse tree
[Figure: sample parse tree. The Doc node spans two ICLASS (instance) nodes; each ICLASS dominates attribute nodes AX, AY and AZ, with Garbage between instances, over tokens a, b, c, ... l m n ...]
AC parsing algorithm
• Left-to-right bottom-up parsing
• Decoding phase
– in each step, the algorithm selects the n most probable nonterminals to become heads of the observed (non)terminal sequence
– we support nested attributes, therefore some ACs may
become heads of other ACs
– an instance candidate (IC) may become a head of ACs that
do not violate ontology constraints
– the most probable heads are determined using features of
ACs in the examined AC sequence
• features = all features assigned directly to the AC or to the
underlying phrase
• features have weights assigned during parser training
AC parser training
• Iterative training
– initial feature weights are set
• based on counts observed in sample instances
• based on parameters defined in ontology
– document collection is parsed (decoded)
with current features
• feature weights are modified in the direction
that improves the current parsing result
• repeat while time allows, or until convergence (a minimal sketch follows)
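The loop resembles structured-perceptron training (cf. the Collins references at the end); a minimal sketch under that assumption, with the parser and feature extraction passed in as black boxes:

    def train(weights, data, parse, features, epochs=10, lr=1.0):
        """data: (document, gold_parse) pairs; parse(doc, weights) returns the
        current best parse; features(parse) returns {feature: value}.
        Weights move toward the gold parse's features and away from the
        current best parse's (perceptron-style update)."""
        for _ in range(epochs):
            mistakes = 0
            for doc, gold in data:
                best = parse(doc, weights)
                if best == gold:
                    continue
                mistakes += 1
                for f, v in features(gold).items():
                    weights[f] = weights.get(f, 0.0) + lr * v
                for f, v in features(best).items():
                    weights[f] = weights.get(f, 0.0) - lr * v
            if mistakes == 0:   # converged on the training data
                break
        return weights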
work-in-progress notes
AC Parser – revised
• Attribute candidates (AC)
– AC identification by patterns
• matching pattern indicates AC with some probability
• patterns given by user or induced by trainer
– assignment of conditional P(att_A|phrase,context)
• computation from
– single-pattern conditional probabilities
– single-pattern reliabilities (weights)
• AC parsing
– trellis representation
– algorithm
Pattern types by area
• Patterns can be defined for attribute:
– content
• lcd monitor viewsonic ALPHANUM&CAP
• <FLOAT> <unit>
• a special case of content pattern: list of example attribute values
– L/R context
• monitor name :
– content+L/R context (units are better modeled as content)
• <int> x <int> <unit>
• <float> <unit>
– DOM context
• BLOCK_LEVEL_ELEMENT A
Pattern types by generality
• General patterns
– expected to appear across multiple websites, used when parsing new
websites
• Local (site-specific) patterns
– all pattern types from previous slide can have their local variants for a
specific website
– we can have several local variants plus a general variant of the same pattern; these will differ in statistics (esp. pattern precision and weight)
– local patterns are induced while joint-parsing documents with supposed
similar structure (e.g. from a single website)
– for example, local DOM context patterns can get more detailed than
general DOM context patterns, e.g.
• TD{class=product_name} A {precision=1.0, weight=1.0}
– statistics for local patterns are only computed based on the local website
– local patterns are stored for each website (similar to a wrapper) and used
when re-parsing the website next time. When deleted, they will be induced
again the next time the website is parsed.
Pattern match types
• Types of pattern matches
– exact match,
– approximate phrase match, if the pattern definition allows it, or
– approximate numeric match for numeric types (int, float)
• Approximate phrase match
– can use any general phrase distance or similarity measure
  • phrase distance: dist = f(phrase1, phrase2); 0 ≤ dist < ∞
  • phrase similarity: sim = f(phrase1, phrase2); 0 ≤ sim < 1
– currently using a nested edit distance defined on tokens and their types
  • this distance is a black box for now; it returns dist and can compare against a set of [phrase2]
• Approximate numeric match
– when searching for values of a numeric attribute, all int or float values found in the analyzed documents are considered, except those not satisfying min or max constraints. The user specifies, or the trainer estimates:
  • a probability function, e.g. a simple value-probability table (for discrete values), or
  • a probability density function (pdf), e.g. weighted Gaussians (for continuous values)
– each specific number NUM found in a document can then be represented as (a sketch follows):
  • pdf(NUM)
  • P(less probable value than NUM | attribute) = ∫_{t : pdf(t) < pdf(NUM)} pdf(t) dt
  • or the likelihood relative to the pdf maximum: lik(NUM | attribute) = pdf(NUM) / max_t pdf(t)
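A sketch of the three numeric representations, assuming a single Gaussian pdf (the slide allows weighted Gaussians). For a single Gaussian, values less probable than NUM are exactly those farther from the mean, so the integral reduces to a two-sided tail mass:

    import math

    def gauss_pdf(x, mu, sigma):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def lik(num, mu, sigma):
        """lik(NUM | attribute) = pdf(NUM) / max_t pdf(t)."""
        return gauss_pdf(num, mu, sigma) / gauss_pdf(mu, mu, sigma)

    def p_less_probable(num, mu, sigma):
        """P(less probable value than NUM | attribute): for one Gaussian,
        the mass of {t : |t - mu| > |NUM - mu|}, i.e. 2 * (1 - Phi(z))."""
        z = abs(num - mu) / sigma
        return math.erfc(z / math.sqrt(2.0))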
AC conditional probability – computing P(att_A|pat)
• P(att_A|phrase,ctx) = Σ_pat w_pat · P(att_A|pat), with Σ_pat w_pat = 1
• How do we get P(att_A|pat)? (how strongly the pattern indicates the AC)
– exact pattern matches
  • the pattern's precision is estimated by the user, or
  • P(att_A|pat) = c(pat indicates att_A) / c(pat) in training data
– approximate pattern matches
  • train a cumulative probability on held-out data (the phrase similarity is trained on training data)
  • P(att_A | PHR) = interpolate(examples)
  • examples:
    – scored using similarity to (distance from) the pattern, and
    – classified into positive (examples of att_A) or negative
  • (note: the values shown in red on the slide must come from training data; the examples should be both positive and negative)
– approximate numeric matches
  • for a discrete p.d.: the user estimates precisions for all discrete values as if they were separate exact matches, or they are computed from training data:
    P(att_A|value) = p.d.(value|att_A) · P(att_A) / P(value)
  • for a continuous pdf (also possible for a discrete p.d.): train a cumulative probability on held-out data (pdfs/p.d. trained on training data):
    P(att_A | NUM) = interpolate(examples)
  • examples:
    – scored using pdf(NUM), or P(less probable value than NUM | att_A), or lik(NUM | att_A)
    – classified into positive or negative
(a sketch of the exact-match and discrete estimates follows)
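The exact-match and discrete-value estimates are plain counting and Bayes' rule; a minimal sketch (argument names are illustrative):

    def p_att_given_pat_exact(c_pat_indicates_att: int, c_pat: int) -> float:
        # P(attA|pat) = c(pat indicates attA) / c(pat), from training data
        return c_pat_indicates_att / c_pat if c_pat else 0.0

    def p_att_given_value(pd_value_given_att: float, p_att: float, p_value: float) -> float:
        # P(attA|value) = p.d.(value|attA) * P(attA) / P(value)
        return pd_value_given_att * p_att / p_value

    print(p_att_given_value(0.2, 0.1, 0.05))  # -> 0.4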
Approximate matches
    Att_A | dist(P, A) | Phrase P
    1     | 0.00       | LCD Monitor Viewsonic VP201D
    1     | 0.01       | LCD Viewsonic V800
    1     | 0.06       | Monitor VIEWSONIC V906D
    0     | 0.10       | LCD Monitor
    1     | 0.12       | LCD VP201D
    0     | 0.30       | View
    0     | 0.31       | VP
    0     | 0.50       | LCD

• From the above examples, derive a mapping "distance → P(att_A|dist)" (a sketch follows):

[Plot: P(att_A|dist) decreasing from 1 toward 0 as dist(P, att_A) grows, with P = 0.5 marked and x ticks at 0.06, 0.12 and 0.50]

– other mappings are possible: we could fit a linear or logarithmic curve, e.g. by least squares
– an analogous approach is taken for numeric approximate matches: pdf(NUM), lik(NUM|att_A) or P(less probable value than NUM|att_A) replaces dist(P, att_A), and the x scale is reversed
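A sketch of deriving P(att_A|dist) from the labeled table above. Smoothing each point with a k-nearest-neighbour label average before linear interpolation is an assumption on our part; the slide equally allows fitting a linear or logarithmic curve:

    import bisect

    def fit_p_given_dist(examples, k=3):
        """examples: (dist, label) pairs, label 1 if the phrase was att_A.
        Returns a function dist -> P(att_A|dist)."""
        pts = sorted(examples)
        xs = [d for d, _ in pts]
        ys = []
        for i in range(len(pts)):                  # smooth labels locally
            lo = max(0, i - k // 2)
            window = pts[lo:lo + k]
            ys.append(sum(label for _, label in window) / len(window))
        def p(dist):                               # piecewise-linear interpolation
            if dist <= xs[0]:
                return ys[0]
            if dist >= xs[-1]:
                return ys[-1]
            j = bisect.bisect_left(xs, dist)
            t = (dist - xs[j - 1]) / (xs[j] - xs[j - 1])
            return ys[j - 1] + t * (ys[j] - ys[j - 1])
        return p

    table = [(0.00, 1), (0.01, 1), (0.06, 1), (0.10, 0),
             (0.12, 1), (0.30, 0), (0.31, 0), (0.50, 0)]
    p_att_given_dist = fit_p_given_dist(table)
    print(p_att_given_dist(0.08))   # ~0.67 between the 0.06 and 0.10 examples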
AC conditional probability – computing w_pat
• P(att_A|phrase,ctx) = Σ_pat w_pat · P(att_A|pat), with Σ_pat w_pat = 1
• How do we get w_pat? (represents the pattern's reliability)
• For general patterns (site-independent):
– the user specifies the pattern's "importance", or
– the reliability is initially computed from:
  • the number of pattern examples seen in training data (irrelevant whether the pattern means att_A or not)
  • the number of different websites showing this pattern with similar site-specific precision for att_A (this indicates the pattern's general usefulness)
– with held-out data from multiple websites, we can re-estimate w_pat using the EM algorithm
  • we could probably first use the held-out data to update pattern precisions, and then, keeping precisions fixed, update pattern weights via EM
  • EM: for each labeled held-out instance, accumulate each pattern's contribution to P(att_A|phrase,ctx) in accumulator_pat += w_pat · P(att_A|pat). After a single run through the held-out data, the new weights are given by normalizing the accumulators (a sketch follows).
• For site-specific patterns:
– since local patterns are established while joint-parsing documents with similar structure, both their w_pat and P(att_A|pat) develop as the joint parse proceeds. w_pat will again be based on the number of times the pattern was seen.
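The single EM pass described above, as a sketch (pattern precisions P(att_A|pat) are held fixed; the data layout is illustrative):

    def em_reestimate_weights(weights, held_out):
        """held_out: iterable of labeled instances; each instance yields the
        patterns that matched it as (pat, p_att_given_pat) pairs.
        Accumulate each pattern's contribution w_pat * P(attA|pat), then
        normalize the accumulators into new weights."""
        acc = {pat: 0.0 for pat in weights}
        for matched in held_out:
            for pat, p_att in matched:
                acc[pat] += weights[pat] * p_att
        total = sum(acc.values())
        return {pat: a / total for pat, a in acc.items()} if total else weights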
Pattern statistics
• Each pattern needs:
– precision P(att_A|pat) = a/(a+b)
– reliability w_pat
• Maybe we also need:

            att_A | ¬att_A
    pat   | a     | b
    ¬pat  | c     | d

– negative precision P(att_A|¬pat) = c/(c+d), or
– recall P(pat|att_A) = a/(a+c) (this could be relatively easy for users to enter)
– these are slightly related, e.g. when recall = 1 then negative precision = 0
• Conditional model variants
– A. P(att_A|phrase,ctx) = Σ_{pat matched} w_pat · P(att_A|pat), with Σ_{pat matched} w_pat = 1
  • the Σ only goes over patterns that match (phrase, ctx) (uses 2 parameters per pattern)
– B. P(att_A|phrase,ctx) = Σ_{pat matched} w_pat · P(att_A|pat) + Σ_{pat not matched} w_neg_pat · P(att_A|¬pat)
  • Σ_{pat matched} w_pat + Σ_{pat not matched} w_neg_pat = 1
  • the Σ goes over all patterns, using the negative precision for patterns not matched, and a negative reliability w_neg_pat (the negative reliability of a pattern in general != its reliability). This model uses 4 parameters per pattern.
• Generative model (only for contrast)
– assumes independence among patterns (naive Bayes assumption, which is never true in our case)
– P(phrase,ctx|att_A) = P(att_A) · Π_pat P(pat|att_A) / Π_pat P(pat) (the denominator can be ignored in the argmax_A P(phrase,ctx|att_A) search; P(att_A) is another parameter)
– however, patterns are typically very much dependent, and thus the probability produced by dependent patterns is very much overestimated (and often > 1 ;-) )
– smoothing would be necessary, while conditional models (maybe) avoid it
Normalizing weights for conditional models
• Need to ensure Σ_pat w_pat = 1
• Conditional model A (only matching patterns used)
– Σ_{pat matched} w_pat = 1
• Conditional model B (all patterns are always used)
– Σ_{pat matched} w_pat + Σ_{pat not matched} w_neg_pat = 1
• Both models:
– need an appropriate estimation of pattern reliabilities (weights), and possibly negative reliabilities, so that we can normalize with no harm
– it may be problematic that some reliabilities are estimated by users (e.g. on a 1..9 scale) while others are computed from observed pattern frequencies in training documents and across training websites. How shall we integrate these? First, let's look at how we can integrate them separately:
  • if all weights to be normalized are given by the user: w_pat' = w_pat / Σ_patX w_patX
  • if all weights are estimated from training data counts, then something like:
    w_pat = log(c_occurrences(pat)) + log(c_documents(pat)) + log(c_websites(pat))
  • and then normalize as usual (including user-estimated reliabilities): w_pat' = w_pat / Σ_patX w_patX
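The two recipes in code form (a sketch; the count arguments are the ones named on the slide and must be at least 1 for the logs to be defined):

    import math

    def normalize(weights: dict) -> dict:
        # w_pat' = w_pat / sum over patX of w_patX
        total = sum(weights.values())
        return {p: w / total for p, w in weights.items()}

    def weight_from_counts(c_occurrences: int, c_documents: int, c_websites: int) -> float:
        # w_pat = log(c_occurrences(pat)) + log(c_documents(pat)) + log(c_websites(pat))
        return math.log(c_occurrences) + math.log(c_documents) + math.log(c_websites)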
Parsing
• AC – attribute candidate
• IC – instance candidate (set of ACs)
• the goal is to parse a set of documents into valid instances of classes defined in the extraction ontology
AC scoring (1)
• The main problem seems to be the integration of:
– conditional probabilities P(att_A|phrase,ctx), which we computed on the previous slides, with
– generative probabilities P(proposition | instance of class C)
• a proposition can be e.g.:
– "price_with_tax > price_without_tax"
– "product_name is the first attribute mentioned"
– "the text in product_picture's alt attribute is similar to product_name"
– "price follows name"
– "instance has 1 value for attribute price_with_tax"
• if a proposition is not true, the complementary probability 1 - P is used
• a proposition is taken into account whenever its source attributes are present in the parsed instance candidate (let's call this proposition set PROPS)
• Combination of proposition probabilities
– assume that propositions are mutually independent (seems OK)
– then we can multiply their generative probabilities and normalize by the number of propositions used, obtaining an averaged generative probability of all propositions together:
– P_AVG(PROPS | instance of C) = (Π_{prop ∈ PROPS} P(prop | instance of C))^(1/|PROPS|)
– (computed as logs; a sketch follows)
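The geometric mean above, computed in log space as the slide notes (a minimal sketch; returning 1.0 for an empty PROPS set is our assumption for the unconstrained case):

    import math

    def p_avg_props(prop_probs):
        """P_AVG(PROPS | instance of C) = (prod of P(prop|C)) ** (1/|PROPS|)."""
        if not prop_probs:
            return 1.0          # assumed: no applicable propositions
        if any(p == 0.0 for p in prop_probs):
            return 0.0
        return math.exp(sum(math.log(p) for p in prop_probs) / len(prop_probs))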
AC scoring (2)
• Combination of pattern probabilities
– view the parsed instance candidate IC as a set of attribute candidates
– P_AVG(instance of class C | phrases, contexts) = Σ_{A ∈ IC} P(att_A|phrase,ctx) / |IC|
– extension: each P(att_A|phrase,ctx) may be further multiplied by the "engaged-ness" of the attribute, P(part_of_instance | att_A), since some attributes appear alone (outside of instances) more often than others
• Combination of
– P_AVG(propositions | instance of class C)
– P_AVG(instance of class C | phrases, contexts)
– into a single probability used as a score for the instance candidate
– intuitively, multiplying seems reasonable, but is incorrect – we must justify it somehow:
  • we use propositions & their generative probabilities to discriminate among the possible parse candidates for the assembled instance
  • we need probabilities here to compete with the probabilities given by patterns
  • if P_AVG(propositions | instance of class C) = 0, then the result must be 0
  • but finally, we want something like the conditional P(instance of class C | attributes' phrases, contexts, and relations between them) as an IC's score
  • so let's take P_AVG(instance of class C | phrases, contexts) as a basis and multiply it by the portion of training instances that exhibit the observed propositions; this lowers the base probability proportionally to the scarcity of the observed propositions
– result: use multiplication (a sketch follows):
  score(IC) = P_AVG(propositions | instance of class C) · P_AVG(instance of class C | phrases, contexts)
– but experiments are necessary (can be tested in approx. 1 month)
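Putting the two averages together as proposed (a sketch; p_avg_props is from the earlier snippet, and the inputs are the per-attribute and per-proposition probabilities of one IC):

    def score_ic(att_probs, prop_probs):
        """score(IC) = P_AVG(props | C) * P_AVG(instance of C | phrases, ctxs)."""
        p_avg_patterns = sum(att_probs) / len(att_probs)  # mean of P(attA|phrase,ctx)
        return p_avg_props(prop_probs) * p_avg_patterns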
Parsing algorithm (1)
• bottom-up parser
• driven by the candidates with the highest current scores (both instance and attribute candidates), not a left-to-right parser
• uses the DOM to guide the search
• joint parse of multiple documents from the same source
• local patterns (especially DOM context patterns) are added/changed as the joint parse continues, recalculating the probabilities/weights of local patterns
• configurable beam width
Parsing algorithm (2)

    treat all documents D from a single source as a single document; identify and score ACs;
    INIT_AC_SET = {}; VALID_IC_SET = {};
    do {
        BAC = the best AC not in INIT_AC_SET (from atts with card = 1 or >= 1, if any);
        if (BAC's score < threshold) break;
        add BAC to INIT_AC_SET;
        INIT_IC = {BAC};
        IC_SET = {INIT_IC};
        curr_block = parent_block(BAC);
        while (curr_block != top_block) {
            for all AC in curr_block (ordered by linear token distance from BAC) {
                for all IC in IC_SET {
                    if (IC.accepts(AC)) {                      // *
                        create IC2 = {IC + AC};
                        add IC2 to IC_SET;                     // **
                    }
                }
                if (IC_SET contains a valid IC and too many ACs were refused due to ontology constraints) break;
            }
            curr_block = curr_block.next_parent_block();       // ***
        }
        add all valid ICs from IC_SET to VALID_IC_SET;
        find new local patterns in VALID_IC_SET; if found, recompute scores in VALID_IC_SET;
    } while (true);
    find and output the most likely sequence of non-overlapping ICs from VALID_IC_SET;

* accepts() returns true if the IC can accommodate the AC according to ontology constraints and if the AC does not overlap with any other AC already present in the IC, with the exception of being embedded in that AC (a sketch follows below).
** adding the new IC2 at the end of the list prolongs the loop going through IC_SET.
*** next_parent_block() returns a single parent block for most block elements. For table cells, it returns 4 aggregates of horizontally and vertically neighboring cells, plus the encapsulating table row and column. Calling next_parent_block() on each of these aggregates yields the next aggregate; a call on the last aggregate returns the whole table body.
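A sketch of accepts() per footnote * above: a cardinality check against the ontology plus the overlap check with the embedding exception (the span arithmetic and max_card lookup are assumptions about the data layout):

    def accepts(ic, ac, max_card):
        """True if the IC can accommodate the AC: the attribute's cardinality
        is not exceeded, and the AC overlaps no member AC unless it is fully
        embedded in that member."""
        if sum(1 for m in ic if m.attribute == ac.attribute) >= max_card[ac.attribute]:
            return False
        for m in ic:
            overlap = ac.start < m.end and m.start < ac.end
            embedded = m.start <= ac.start and ac.end <= m.end
            if overlap and not embedded:
                return False
        return True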
[Figure series: step-by-step joint-parse example. Class C has attributes X (card = 1, may contain Y), Y (card = 1..n) and Z (card = 0..n); attribute candidates AX, AY, AZ and Garbage span tokens a, b, c, ...; the DOM block structure (A → TD → TD → TR → TABLE) guides the search. Instance candidates grow from {AX} and {AY} through {AXAY} and {AX[AY]} (AY embedded in AX) up to {AX[AY]AZ} as enclosing blocks are expanded.]
Aggregation of overlapping ACs
• Performance and clarity improvement: before parsing, aggregate those overlapping ACs that have the same relation to the ACs of other attributes, and let the aggregate take the maximum score of its child ACs. This prevents some multiplication of new ICs. An aggregate only breaks down if features appear during the parse that support only some of its children. At the end of the parse, all remaining aggregates are reduced to their best child. (A sketch follows.)
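A sketch of the pre-parse aggregation step (the grouping key, here a caller-supplied function describing each AC's relation to other attributes' ACs, is an assumption):

    from collections import defaultdict

    def aggregate_acs(acs, relation_key):
        """Group overlapping ACs that relate identically to other attributes'
        ACs; each aggregate scores as the max of its children, which are kept
        so the aggregate can later break down or reduce to its best child."""
        groups = defaultdict(list)
        for ac in acs:
            groups[relation_key(ac)].append(ac)
        aggregates = []
        for children in groups.values():
            best = max(children, key=lambda a: a.score)
            aggregates.append({"score": best.score, "children": children})
        return aggregates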
Focused or global parsing
• The algorithm above is focused, since it concentrates in detail on a single AC at a time. All ICs built by the parser in a single loop have the chosen AC as a member. More complex ICs are built incrementally from existing simpler ICs as we take a larger neighboring area of the document into account. A stopping criterion for taking in further ACs from more distant parts of the document is needed.
• Alternatively, we may do global parsing by first creating a single-member IC={AC} for each AC in the document. Then, in a loop, always choose the best-scoring IC and add the next AC found in the growing context of that IC. Here the IC's score is computed without certain ontological constraints that would penalize partially populated ICs (e.g. missing mandatory attributes). Again, a stopping criterion is needed to prevent high-scoring ICs from growing all over the document. Validity itself is not a good criterion, since (a) valid ICs may still need further attributes, and (b) some ICs will never be valid because they are wrong from the beginning.
Global parsing
• How should IC merging work when extending existing ICs during global parsing? Shall we only merge ICs with single-AC ICs? Should the original ICs always be retained for other possible merges?
References
• M. Collins: Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms, 2002.
• M. Collins, B. Roark: Incremental Parsing with the Perceptron Algorithm, 2004.
• D. W. Embley: A Conceptual-Modeling Approach to Extracting Data from the Web, 1998.
• V. Crescenzi, G. Mecca, P. Merialdo: RoadRunner: Towards Automatic Data Extraction from Large Web Sites, 2000.
• F. Ciravegna: LP2, an Adaptive Algorithm for Information Extraction from Web-related Texts, 2001.