CS276A Text Information Retrieval, Mining, and Exploitation
Web Characteristics II
Adapted from lectures by Prabhakar Raghavan (Yahoo, Stanford) and Christopher Manning (Stanford)
Today’s topics
Estimating web size and search engine index size
Near-duplicate document detection
Size of the web
What is the size of the web?
Issues
The web is really infinite
Dynamic content, e.g., calendars
Soft 404: www.yahoo.com/<anything> is a valid page
The static web contains syntactic duplication, mostly due to mirroring (~30%)
Some servers are seldom connected
Who cares?
The media, and consequently the user
Engine design
Engine crawl policy, and its impact on recall
What can we attempt to measure?
The relative sizes of search engines
The notion of a page being indexed is still reasonably well defined.
Already there are problems
Document extension: e.g., engines index pages not yet crawled by indexing their anchor text.
Document restriction: all engines restrict what is indexed (first n words, only relevant words, etc.)
The coverage of a search engine relative to another particular crawling process.
New definition?
The statically indexable web is whatever search engines index.
(IQ is whatever IQ tests measure.)
Different engines have different preferences: max URL depth, max count/host, anti-spam rules, priority rules, etc.
Different engines index different things under the same URL: frames, meta-keywords, document restrictions, document extensions, ...
Statistical methods
Random queries
Random searches
Random IP addresses
Random walks
Relative Size from Overlap [Bharat & Broder, 98]
Sample URLs randomly from A; check if contained in B, and vice versa.
A ∩ B = (1/2) · Size(A)
A ∩ B = (1/6) · Size(B)
(1/2) · Size(A) = (1/6) · Size(B)
∴ Size(A) / Size(B) = (1/6) / (1/2) = 1/3
Each test involves: (i) sampling, (ii) checking
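A minimal sketch of this estimator in Python (the containment predicates in_a and in_b are hypothetical stand-ins for real engine-containment probes):

    def relative_size(sample_a, in_b, sample_b, in_a):
        """Estimate Size(A) / Size(B) by overlap sampling.

        sample_a: URLs sampled uniformly from engine A; in_b(url)
        tests whether engine B indexes url (and symmetrically for
        sample_b, in_a). If a fraction x of A's sample lies in B and
        a fraction y of B's sample lies in A, then x*|A| and y*|B|
        both estimate |A ∩ B|, so |A| / |B| ≈ y / x.
        """
        x = sum(in_b(u) for u in sample_a) / len(sample_a)
        y = sum(in_a(u) for u in sample_b) / len(sample_b)
        return y / x

With the fractions in the figure above (x = 1/2, y = 1/6) this returns 1/3.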
Sampling URLs
Ideal strategy: generate a random URL and check for containment in each index.
Problem: random URLs are hard to find!
It is enough to generate a random URL contained in a given engine.
Key lesson from this lecture.
Random URLs from random queries [Bharat & B, 98]
Generate random query: how?
Lexicon: 400,000+ words from a crawl of Yahoo!
Conjunctive queries: w1 AND w2
e.g., vocalists AND rsi
Get 100 result URLs from the source engine
Choose a random URL as the candidate to check for presence in other engines.
This distribution induces a probability weight W(p) for each page.
Conjecture: W(p) is correlated with the intrinsic quality of p.
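As a sketch, the sampling step looks like this (lexicon is assumed to be the word list above, and search_top100 is a hypothetical engine interface, not a real API):

    import random

    def random_candidate_url(lexicon, search_top100):
        """Pick two random lexicon words, issue the conjunctive
        query, and return one random URL from the top-100 results."""
        w1, w2 = random.sample(lexicon, 2)        # e.g. "vocalists", "rsi"
        results = search_top100(f"{w1} AND {w2}")
        return random.choice(results) if results else None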
Query-Based Checking
Strong query to check for a document D:
Download the document. Get its list of words.
Use 8 low-frequency words as an AND query.
Check if D is present in the result set.
Problems:
Near-duplicates
Frames
Redirects
Engine time-outs
Might be better to use, e.g., 5 distinct conjunctive queries of 6 words each.
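A minimal sketch of strong-query construction (corpus_freq, a word-to-document-frequency table, is an assumed input):

    def strong_query(doc_words, corpus_freq, k=8):
        """AND together the k words of the document that are rarest
        in the corpus; an engine that has indexed D should then
        return D in the result set for this query."""
        rare = sorted(set(doc_words), key=lambda w: corpus_freq.get(w, 0))
        return " AND ".join(rare[:k])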
Computing Relative Sizes and Total Coverage [BB98]
a = AltaVista, e = Excite, h = HotBot, i = Infoseek
f_xy = fraction of x in y
Six pair-wise overlaps:
f_ah · h − f_ha · a = ε1
f_ai · i − f_ia · a = ε2
f_ae · e − f_ea · a = ε3
f_hi · i − f_ih · h = ε4
f_he · e − f_eh · h = ε5
f_ei · i − f_ie · e = ε6
Arbitrarily, let a = 1.
We have 6 equations and 3 unknowns.
Solve for e, h, and i to minimize Σ εi².
Compute engine overlaps.
Re-normalize so that the total joint coverage is 100%.
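A sketch of the least-squares step with numpy (the overlap fractions below are illustrative placeholders, not BB98's measurements):

    import numpy as np

    # f[(x, y)] plays the role of f_xy in the equations above
    f = {('a','h'): .35, ('h','a'): .40, ('a','i'): .30, ('i','a'): .45,
         ('a','e'): .32, ('e','a'): .50, ('h','i'): .38, ('i','h'): .42,
         ('h','e'): .36, ('e','h'): .44, ('e','i'): .33, ('i','e'): .41}

    idx = {'e': 0, 'h': 1, 'i': 2}           # unknowns; a is fixed at 1
    rows, rhs = [], []
    for x, y in [('a','h'), ('a','i'), ('a','e'),
                 ('h','i'), ('h','e'), ('e','i')]:
        row, b = np.zeros(3), 0.0
        # f_xy * y - f_yx * x = eps; known terms (a = 1) move right
        for coef, var in ((f[(x, y)], y), (-f[(y, x)], x)):
            if var == 'a':
                b -= coef
            else:
                row[idx[var]] += coef
        rows.append(row); rhs.append(b)

    # least squares minimizes sum(eps_i^2), as required
    e, h, i = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    print(f"relative sizes: a=1.00, e={e:.2f}, h={h:.2f}, i={i:.2f}")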
Advantages & disadvantages
Statistically sound under the induced weight.
Biases induced by the random queries:
Query bias: favors content-rich pages in the language(s) of the lexicon
Ranking bias: solution: use conjunctive queries & fetch all results
Checking bias: duplicates and impoverished pages omitted
Document or query restriction bias: an engine might not deal properly with an 8-word conjunctive query
Malicious bias: sabotage by an engine
Random searches
Choose random searches extracted from a local log [Lawrence & Giles 97] or build “random searches” [Notess]
Use only queries with small result sets
Count normalized URLs in the result sets
Use ratio statistics
Advantages & disadvantages
Advantage
Might be a better reflection of the human perception of coverage
Issues
Samples are correlated with the source of the log
Duplicates
Technical statistical problems (must have non-zero result counts; ratio averages; use the harmonic mean?)
Random searches [Lawr98, Lawr99]
575 & 1050 queries from the NEC RI employee logs
6 engines in 1998, 11 in 1999
Implementation:
Restricted to queries with < 600 results in total
Counted URLs from each engine after verifying the query match
Computed size ratio & overlap for individual queries
Estimated index size ratio & overlap by averaging over all queries
Queries from the Lawrence and Giles study
adaptive access control
neighborhood preservation topographic
hamiltonian structures
right linear grammar
pulse width modulation neural
unbalanced prior probabilities
ranked assignment method
internet explorer favourites importing
karvel thornber
zili liu
softmax activation function
bose multidimensional system theory
gamma mlp
dvi2pdf
john oliensis
rieke spikes exploring neural
video watermarking
counterpropagation network
fat shattering dimension
abelson amorphous computing
Random IP addresses [Lawrence & Giles ’99]
Generate random IP addresses
Find a web server at the given address
If there is one, collect all pages from that server
Method first used by O’Neill, McClain, & Lavoie, “A Methodology for Sampling the World Wide Web”, 1997.
http://digitalarchive.oclc.org/da/ViewObject.jsp?objid=0000003447
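One sampling step might look like this (a bare TCP probe of port 80 stands in here for a full HTTP request):

    import random, socket

    def random_web_server(timeout=2.0):
        """Draw a uniformly random IPv4 address and test whether
        anything answers on port 80; the full method would then
        crawl all pages on any server found."""
        ip = ".".join(str(random.randint(0, 255)) for _ in range(4))
        try:
            with socket.create_connection((ip, 80), timeout=timeout):
                return ip        # a server answered
        except OSError:
            return None          # nothing crawlable at this address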
Random IP addresses [ONei97, Lawr99]
HTTP requests to random IP addresses
Ignored: empty, authorization-required, or excluded pages
[Lawr99] estimated 2.8 million IP addresses running crawlable web servers (16 million total) from observing 2500 servers
OCLC, using IP sampling, found 8.7M hosts in 2001
Netcraft [Netc02] accessed 37.2 million hosts in July 2002
[Lawr99] exhaustively crawled the 2500 servers and extrapolated
Estimated the size of the web to be 800 million pages
Advantages & disadvantages
Advantages
Clean statistics
Independent of crawling strategies
Disadvantages
Doesn’t deal with duplication
Many hosts might share one IP, or not accept requests
No guarantee all pages are linked to the root page, e.g., employee pages
Power law for # pages/host generates a bias towards sites with few pages
But the bias can be accurately quantified IF the underlying distribution is understood
Potentially influenced by spamming (multiple IP addresses for the same server to avoid sampling)
Random walks [Henzinger et al WWW9]
View the web as a directed graph
Build a random walk on this graph
Includes various “jump” rules back to visited sites
Converges to a stationary distribution
Does not get stuck in spider traps!
Can follow all links!
Must assume the graph is finite and independent of the walk
These conditions are not satisfied (cookie crumbs, flooding)
Time to convergence not really known
Sample from the stationary distribution of the walk
Use the “strong query” method to check coverage by search engines
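A rough sketch of such a walk (the jump rule here, back to a uniformly random previously visited page, is a simplification of the paper's rules):

    import random

    def random_walk_sample(graph, seeds, steps=100000, jump_prob=0.15):
        """graph: dict mapping a URL to its out-links. After many
        steps, the current node approximates a draw from the walk's
        stationary distribution."""
        visited = list(seeds)
        current = random.choice(visited)
        for _ in range(steps):
            out = graph.get(current, [])
            if not out or random.random() < jump_prob:
                current = random.choice(visited)   # "jump" rule
            else:
                current = random.choice(out)       # follow a link
            visited.append(current)
        return current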
Dependence on seed list
How well connected is the graph? [Broder et al., WWW9]
Advantages & disadvantages
Advantages
“Statistically clean” method, at least in theory!
Could work even for an infinite web (assuming convergence) under certain metrics
Disadvantages
The list of seeds is a problem
Practical approximation might not be valid
Non-uniform distribution
Subject to link spamming
Conclusions
No sampling solution is perfect.
Lots of new ideas ...
... but the problem is getting harder
Quantitative studies are fascinating and a good research problem
Duplicate detection
Duplicate documents
The web is full of duplicated content
Strict duplicate detection = exact match
Not as common
But many, many cases of near-duplicates
E.g., last-modified date the only difference
Duplicate/Near-Duplicate Detection
Duplication: exact match can be detected with fingerprints
Near-duplication: approximate match
Overview
Compute syntactic similarity with an edit-distance measure
Use a similarity threshold to detect near-duplicates
E.g., similarity > 80% => documents are “near duplicates”
Not transitive, though sometimes used transitively
Computing Similarity
Features:
Segments of a document (natural or artificial breakpoints)
Shingles (word n-grams)
a rose is a rose is a rose →
  a_rose_is_a
  rose_is_a_rose
  is_a_rose_is
Similarity measure between two docs (= sets of shingles):
Set intersection [Brod98]
(Specifically, Size_of_Intersection / Size_of_Union, i.e., the Jaccard measure)
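A sketch of shingling and the Jaccard measure:

    def shingles(text, n=4):
        """The set of word n-grams of a document."""
        words = text.lower().split()
        return {"_".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    def jaccard(s1, s2):
        """Size_of_Intersection / Size_of_Union."""
        return len(s1 & s2) / len(s1 | s2)

    # shingles("a rose is a rose is a rose") ==
    #   {"a_rose_is_a", "rose_is_a_rose", "is_a_rose_is"}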
Shingles + Set Intersection
Computing the exact set intersection of shingles between all pairs of documents is expensive/intractable
Approximate using a cleverly chosen subset of the shingles from each document (a sketch)
Estimate (size_of_intersection / size_of_union) based on a short sketch
Sketch of a document
Create a “sketch vector” (of size ~200) for each document
Documents that share ≥ t (say 80%) of corresponding vector elements are near duplicates
For doc D, sketch_D[i] is computed as follows:
Let f map all shingles in the universe to 0..2^m (e.g., f = fingerprinting)
Let π_i be a random permutation on 0..2^m
sketch_D[i] = MIN { π_i(f(s)) } over all shingles s in D
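A sketch of the computation (SHA-1 stands in for the fingerprint f, and an affine map a·x + b mod 2^m with odd a stands in for a truly random permutation π_i):

    import hashlib, random

    def fingerprint(shingle, m=64):
        """f: map a shingle to an integer in 0..2^m."""
        return int(hashlib.sha1(shingle.encode()).hexdigest(), 16) % (2 ** m)

    def sketch(doc_shingles, num_perms=200, m=64, seed=0):
        """sketch[i] = MIN over shingles s of pi_i(f(s))."""
        rng, mod = random.Random(seed), 2 ** m
        vec = []
        for _ in range(num_perms):
            a = rng.randrange(1, mod, 2)  # odd => a*x+b is a bijection mod 2^m
            b = rng.randrange(mod)
            vec.append(min((a * fingerprint(s) + b) % mod
                           for s in doc_shingles))
        return vec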
Computing Sketch[i] for Doc1
[Figure: Document 1’s shingles on the number line 0..2^64: start with the 64-bit values f(shingles), permute them on the number line with π_i, then pick the min value]
Test if Doc1.Sketch[i] = Doc2.Sketch[i]
[Figure: the permuted fingerprints of Document 1 and Document 2 on the line 0..2^64, with respective minima A and B]
Are these equal?
Test for 200 random permutations: π1, π2, … π200
However…
[Figure: the same two documents, with minima A and B under π_i]
A = B iff the shingle with the MIN value in the union of Doc1 and Doc2 is common to both (i.e., lies in the intersection)
This happens with probability Size_of_intersection / Size_of_union
Why?
Set Similarity
Set similarity (Jaccard measure):
Jaccard(C_i, C_j) = |C_i ∩ C_j| / |C_i ∪ C_j|
View sets as columns of a matrix, one row for each element in the universe; a_ij = 1 indicates presence of item i in set j
Example
  C1  C2
   0   1
   1   0
   1   1
   0   0
   1   1
   0   1
Jaccard(C1, C2) = 2/5 = 0.4
Key Observation
For columns C_i, C_j, there are four types of rows:
  type  C_i  C_j
   A     1    1
   B     1    0
   C     0    1
   D     0    0
Overload notation: A = # of rows of type A (similarly B, C, D)
Claim: Jaccard(C_i, C_j) = A / (A + B + C)
“Min” Hashing
Randomly permute the rows
Hash h(C_i) = index of the first row with a 1 in column C_i
Surprising property: P[ h(C_i) = h(C_j) ] = Jaccard(C_i, C_j)
Why? Both equal A / (A + B + C):
look down columns C_i, C_j until the first non-type-D row;
h(C_i) = h(C_j) exactly when that row is a type-A row
Min-Hash sketches
Pick P random row permutations
MinHash sketch: Sketch_D = list of the P indexes of the first rows with a 1 in column C
Similarity of signatures:
Let sim[sketch(C_i), sketch(C_j)] = fraction of the permutations where the MinHash values agree
Observe E[sim(sig(C_i), sig(C_j))] = Jaccard(C_i, C_j)
Example
      C1  C2  C3
  R1   1   0   1
  R2   0   1   1
  R3   1   0   0
  R4   1   0   1
  R5   0   1   0
Signatures
                     S1  S2  S3
  Perm 1 = (12345)    1   2   1
  Perm 2 = (54321)    4   5   4
  Perm 3 = (34512)    3   5   4
Similarities
            1-2   1-3   2-3
  Col-Col  0.00  0.50  0.25
  Sig-Sig  0.00  0.67  0.00
Implementation Trick
Permuting the rows even once is prohibitive
Row hashing
Pick P hash functions h_k: {1,…,n} → {1,…,O(n)}
Ordering under h_k gives a random row permutation
One-pass implementation
For each C_i and h_k, keep a “slot” for the min-hash value
Initialize all slot(C_i, h_k) to infinity
Scan rows in arbitrary order looking for 1’s
Suppose row R_j has a 1 in column C_i
For each h_k: if h_k(j) < slot(C_i, h_k), then slot(C_i, h_k) ← h_k(j)
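A sketch of this one-pass scheme, checked against the worked example that follows (rows are given as (j, columns-with-a-1) pairs):

    def minhash_signatures(rows, num_cols, hash_fns):
        """Returns slot[c][k], the min of h_k(j) over rows R_j
        that have a 1 in column c."""
        INF = float("inf")
        slot = [[INF] * len(hash_fns) for _ in range(num_cols)]
        for j, cols in rows:                   # arbitrary row order
            hashed = [h(j) for h in hash_fns]  # h_k(j), once per row
            for c in cols:
                for k, hk in enumerate(hashed):
                    if hk < slot[c][k]:        # keep the minimum
                        slot[c][k] = hk
        return slot

    # The example below: C1 has 1's in rows 1, 3, 4; C2 in rows 2, 3, 5
    rows = [(1, [0]), (2, [1]), (3, [0, 1]), (4, [0]), (5, [1])]
    sig = minhash_signatures(rows, 2, [lambda x: x % 5,
                                       lambda x: (2 * x + 1) % 5])
    # sig == [[1, 2], [0, 0]] -- matching the final slots below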
Example
      C1  C2
  R1   1   0
  R2   0   1
  R3   1   1
  R4   1   0
  R5   0   1
h(x) = x mod 5
g(x) = 2x + 1 mod 5
                           C1 slots   C2 slots
  R1: h(1) = 1, g(1) = 3     1, 3       -, -
  R2: h(2) = 2, g(2) = 0     1, 3       2, 0
  R3: h(3) = 3, g(3) = 2     1, 2       2, 0
  R4: h(4) = 4, g(4) = 4     1, 2       2, 0
  R5: h(5) = 0, g(5) = 1     1, 2       0, 0
Comparing Signatures
Signature matrix S: rows = hash functions, columns = the columns (sets), entries = signatures
Compute pair-wise similarity of the signature columns
Problem
MinHash fits the column signatures in memory
But comparing all signature pairs takes too much time
A technique to limit the candidate pairs?
Locality Sensitive Hashing (LSH)
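LSH is covered later; as a preview, a minimal sketch of the standard banding trick over these signatures (the 50 × 4 band split of a 200-element sketch is an illustrative choice):

    from collections import defaultdict

    def lsh_candidates(signatures, bands=50, rows_per_band=4):
        """signatures: dict doc_id -> signature list. Documents that
        agree on every row of some band fall into the same bucket
        and become candidate near-duplicate pairs."""
        candidates = set()
        for b in range(bands):
            buckets = defaultdict(list)
            for doc_id, sig in signatures.items():
                band = tuple(sig[b * rows_per_band:(b + 1) * rows_per_band])
                buckets[band].append(doc_id)
            for ids in buckets.values():
                for i in range(len(ids)):
                    for j in range(i + 1, len(ids)):
                        candidates.add((ids[i], ids[j]))
        return candidates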
Resources
IIR Chapter 19
See also:
Phelps & Wilensky. Robust Hyperlinks and Locations. 2002.
Ziv Bar-Yossef and Maxim Gurevich. Random Sampling from a Search Engine’s Index. WWW 2006.
Broder et al. Estimating corpus size via queries. CIKM 2006.
More resources
Related papers:
[Bar-Yossef et al., VLDB 2000], [Rusmevichientong et al., 2001], [Bar-Yossef et al., 2003]