Probabilistic Information Retrieval
Ahmet Selman Bozkır
Introduction to conditional, total probability & Bayesian
theorem
Historical background of probabilistic information retrieval
Why probabilities in IR?
Document ranking problem
Binary Independence Model
Given some event B with nonzero probability, P(B) > 0, we can define the conditional probability of an event A, given B, by

P(A|B) = P(A ∩ B) / P(B)

The probability P(A|B) simply reflects the fact that the probability of an event A may depend on a second event B. So if A and B are mutually exclusive, A ∩ B = ∅ and P(A|B) = 0.
Resistance (Ω)   Tolerance 5%   Tolerance 10%   Total
22               10             14              24
47               28             16              44
100              24              8              32
Total            62             38              100
Let’s define three events:
1. A as “draw a 47 Ω resistor”
2. B as “draw a resistor with 5% tolerance”
3. C as “draw a 100 Ω resistor”
P(A) = P(47 Ω) = 44/100
P(B) = P(5%) = 62/100
P(C) = P(100 Ω) = 32/100
The joint probabilities are:
P(A ∩ B) = P(47 Ω ∩ 5%) = 28/100
P(A ∩ C) = P(47 Ω ∩ 100 Ω) = 0
P(B ∩ C) = P(5% ∩ 100 Ω) = 24/100
If we use them, the conditional probabilities are:
P(A|B) = P(A ∩ B) / P(B) = 28/62
P(A|C) = P(A ∩ C) / P(C) = 0
P(B|C) = P(B ∩ C) / P(C) = 24/32
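
To make the arithmetic concrete, here is a minimal Python sketch (the variable names are mine) that recomputes these conditional probabilities from the joint counts in the table above:

    # Joint counts from the resistor table: counts[(resistance, tolerance)]
    counts = {
        (22, "5%"): 10, (22, "10%"): 14,
        (47, "5%"): 28, (47, "10%"): 16,
        (100, "5%"): 24, (100, "10%"): 8,
    }
    N = sum(counts.values())  # 100 resistors in total

    def p(event):
        """P(event), where event is a predicate over (resistance, tolerance)."""
        return sum(c for (r, t), c in counts.items() if event(r, t)) / N

    def p_cond(a, b):
        """Conditional probability P(A|B) = P(A and B) / P(B)."""
        return p(lambda r, t: a(r, t) and b(r, t)) / p(b)

    A = lambda r, t: r == 47     # "draw a 47-ohm resistor"
    B = lambda r, t: t == "5%"   # "draw a 5%-tolerance resistor"
    C = lambda r, t: r == 100    # "draw a 100-ohm resistor"

    print(p_cond(A, B))  # 28/62 = 0.451...
    print(p_cond(A, C))  # 0.0
    print(p_cond(B, C))  # 24/32 = 0.75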
The probability P(A) of any event A defined on a sample space S can be expressed in terms of conditional probabilities. Suppose we are given N mutually exclusive events Bn, n = 1, 2, ..., N, whose union equals S, as illustrated in the figure.

[Figure: Venn diagram of the sample space S partitioned into B1, B2, ..., BN, with the event A overlapping several of the Bn.]

A = A ∩ S = A ∩ (∪_{n=1}^{N} Bn) = ∪_{n=1}^{N} (A ∩ Bn)

Since the events A ∩ Bn are mutually exclusive, the total probability of A is

P(A) = Σ_{n=1}^{N} P(A ∩ Bn) = Σ_{n=1}^{N} P(A|Bn) P(Bn)
The definition of conditional probability applies to any two events. In particular, let Bn be one of the events defined above in the subsection on total probability. Then

P(Bn|A) = P(Bn ∩ A) / P(A)

if P(A) ≠ 0, or, alternatively,

P(A|Bn) = P(A ∩ Bn) / P(Bn)

if P(Bn) ≠ 0. One form of Bayes’ theorem is obtained by equating these two expressions:

P(Bn|A) = P(A|Bn) P(Bn) / P(A)

Another form derives from a substitution of P(A) as given by total probability:

P(Bn|A) = P(A|Bn) P(Bn) / [P(A|B1) P(B1) + ... + P(A|BN) P(BN)]
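
As a quick numeric illustration, here is a minimal Python sketch of both forms, reusing the resistor example (the partition is by resistance value, and A is “draw a 5% resistor”):

    # Priors P(Bn) and likelihoods P(A|Bn) read off the resistor table
    priors = {22: 24/100, 47: 44/100, 100: 32/100}
    likelihoods = {22: 10/24, 47: 28/44, 100: 24/32}

    # Total probability: P(A) = sum over n of P(A|Bn) P(Bn)
    p_a = sum(likelihoods[b] * priors[b] for b in priors)  # = 62/100

    # Bayes' theorem: P(Bn|A) = P(A|Bn) P(Bn) / P(A)
    posteriors = {b: likelihoods[b] * priors[b] / p_a for b in priors}
    print(posteriors[47])  # P(47 ohm | 5% tolerance) = 28/62 = 0.451...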
The first attempts to develop a probabilistic theory of retrieval were made over
30 years ago [Maron and Kuhns 1960; Miller 1971], and since then there has been
a steady development of the approach. There are already several operational IR
systems based upon probabilistic or semiprobabilistic models.
One major obstacle in probabilistic or semiprobabilistic IR models is finding
methods for estimating the probabilities used to evaluate the probability of
relevance that are both theoretically sound and computationally efficient.
The first models to be based upon such assumptions were the “binary
independence indexing model” and the “binary independence retrieval model”.
One area of recent research investigates the use of an explicit network
representation of dependencies. The networks are processed by means of
Bayesian inference or belief theory, using evidential reasoning techniques such as
those described by Pearl 1988. This approach is an extension of the earliest
probabilistic models, taking into account the conditional dependencies present in
a real environment.
[Figure: The core matching problem in IR. A user with an information need produces a query representation; our understanding of the user’s need is uncertain. Documents are turned into document representations; whether a document has relevant content is an uncertain guess. In between: how to match?]
In traditional IR systems, matching between each document and
query is attempted in a semantically imprecise space of index terms.
Probabilities provide a principled foundation for uncertain reasoning.
Can we use probabilities to quantify our uncertainties?
Classical probabilistic retrieval model
Probability ranking principle, etc.
(Naïve) Bayesian Text Categorization
Bayesian networks for text retrieval
Probabilistic methods are one of the oldest but also one of the
currently hottest topics in IR.
Traditionally: neat ideas, but they’ve never won on
performance. It may be different now.
In probabilistic information retrieval, the goal is the estimation of the
probability of relevance P(R | qk, dm) that a document dm will be judged
relevant by a user with request qk. In order to estimate this probability, a
large number of probabilistic models have been developed.
Typically, such a model is based on representations of queries and
documents (e.g., as sets of terms); in addition to this, probabilistic
assumptions about the distribution of elements of these representations
within relevant and nonrelevant documents are required.
By collecting relevance feedback data from a few documents, the model
then can be applied in order to estimate the probability of relevance for
the remaining documents in the collection.
We have a collection of documents
User issues a query
A list of documents needs to be returned
Ranking method is core of an IR system:
In what order do we present documents to the
user?
We want the “best” document to be first, second best
second, etc….
Idea: Rank by probability of relevance of the
document w.r.t. information need
P(relevant|documenti, query)
For events a and b, Bayes’ Rule:

p(a, b) = p(a ∩ b) = p(a|b) p(b) = p(b|a) p(a)

so

p(a|b) = p(b|a) p(a) / p(b) = p(b|a) p(a) / Σ_{x ∈ {a, ā}} p(b|x) p(x)

where p(a) is the prior and p(a|b) is the posterior.

Odds:

O(a) = p(a) / p(ā) = p(a) / (1 − p(a))
Let x be a document in the collection.
Let R represent relevance of a document w.r.t. given (fixed)
query and let NR represent non-relevance.
(Equivalently, relevance is a binary variable: R = 1, with NR denoting R = 0.)
Need to find p(R|x) - probability that a document x is relevant.
p(R|x) = p(x|R) p(R) / p(x)
p(NR|x) = p(x|NR) p(NR) / p(x)

p(R), p(NR) - prior probability of retrieving a (non-)relevant document
p(x|R), p(x|NR) - probability that if a relevant (non-relevant) document is retrieved, it is x
p(R|x) + p(NR|x) = 1
Bayes’ Optimal Decision Rule
x is relevant iff p(R|x) > p(NR|x)
PRP in action: Rank all documents by p(R|x)
More complex case: retrieval costs.
Let d be a document
C - cost of retrieval of a relevant document
C′ - cost of retrieval of a non-relevant document
Probability Ranking Principle: if

C · p(R|d) + C′ · (1 − p(R|d)) ≤ C · p(R|d′) + C′ · (1 − p(R|d′))

for all d′ not yet retrieved, then d is the next document to be retrieved
We won’t further consider loss/utility from
now on
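
A minimal sketch of the PRP in action (document names and probability estimates are invented for illustration): rank by the estimated p(R|x).

    # Hypothetical estimates of p(R|x) for four documents
    p_relevant = {"d1": 0.15, "d2": 0.72, "d3": 0.40, "d4": 0.08}

    # PRP: present documents in order of decreasing probability of relevance
    ranking = sorted(p_relevant, key=p_relevant.get, reverse=True)
    print(ranking)  # ['d2', 'd3', 'd1', 'd4']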
How do we compute all those probabilities?
Do not know exact probabilities, have to use
estimates
Binary Independence Retrieval (BIR) – which we
discuss later today – is the simplest model
Questionable assumptions
“Relevance” of each document is independent of
relevance of other documents.
▪ Really, it’s bad to keep on returning duplicates
Boolean model of relevance
Estimate how terms contribute to relevance
How do tf, df, and document length influence your judgments
about document relevance?
▪ One answer is the Okapi formulae (S. Robertson)
Combine to find document relevance
probability
Order documents by decreasing probability
Basic concept:
"For a given query, if we know some documents
that are relevant, terms that occur in those
documents should be given greater weighting in
searching for other relevant documents.
By making assumptions about the distribution of
terms and applying Bayes Theorem, it is possible
to derive weights theoretically."
Van Rijsbergen
Traditionally used in conjunction with PRP
“Binary” = Boolean: documents are represented as binary
incidence vectors of terms (cf. lecture 1):
x = (x1, ..., xn)
xi = 1 iff term i is present in document x.
“Independence”: terms occur in documents independently
Different documents can be modeled as same vector
Bernoulli Naive Bayes model (cf. text categorization!)
Queries: binary term incidence vectors
Given query q,
for each document d need to compute p(R|q,d).
Interested only in ranking, so replace this with computing p(R|q,x), where x is the binary term incidence vector representing d.
Will use odds and Bayes’ Rule:

O(R | q, x) = p(R | q, x) / p(NR | q, x)
            = [p(R | q) p(x | R, q) / p(x | q)] / [p(NR | q) p(x | NR, q) / p(x | q)]
            = [p(R | q) / p(NR | q)] · [p(x | R, q) / p(x | NR, q)]

The first factor is constant for a given query; only the second needs estimation.
• Using the independence assumption:

p(x | R, q) / p(x | NR, q) = ∏_{i=1}^{n} p(xi | R, q) / p(xi | NR, q)

• So:

O(R | q, x) = O(R | q) · ∏_{i=1}^{n} p(xi | R, q) / p(xi | NR, q)

• Since xi is either 0 or 1:

O(R | q, x) = O(R | q) · ∏_{xi=1} [p(xi = 1 | R, q) / p(xi = 1 | NR, q)] · ∏_{xi=0} [p(xi = 0 | R, q) / p(xi = 0 | NR, q)]
• Let pi = p(xi = 1 | R, q) and ri = p(xi = 1 | NR, q).
• Assume, for all terms not occurring in the query (qi = 0), that pi = ri. (This can be changed, e.g., in relevance feedback.) Then...
O(R | q, x) = O(R | q) · ∏_{xi=qi=1} pi / ri · ∏_{xi=0, qi=1} (1 − pi) / (1 − ri)

(the first product runs over all matching terms, the second over non-matching query terms)

O(R | q, x) = O(R | q) · ∏_{xi=qi=1} [pi (1 − ri)] / [ri (1 − pi)] · ∏_{qi=1} (1 − pi) / (1 − ri)

(the first product now runs over all matching terms, the second over all query terms; the latter is constant for each query)
• Retrieval Status Value, the only quantity that needs to be estimated for ranking:

RSV = log ∏_{xi=qi=1} [pi (1 − ri)] / [ri (1 − pi)] = Σ_{xi=qi=1} log { [pi (1 − ri)] / [ri (1 − pi)] }
• Estimating RSV coefficients.
• For each term i, look at this table of document counts:

Documents   Relevant   Non-Relevant    Total
xi = 1      s          n − s           n
xi = 0      S − s      N − n − S + s   N − n
Total       S          N − S           N
• Estimates:

pi ≈ s / S
ri ≈ (n − s) / (N − S)

ci = K(N, n, S, s) = log { [s / (S − s)] / [(n − s) / (N − n − S + s)] }

(For now, assume no zero terms.)
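
A minimal Python sketch of this estimation. To sidestep the zero-term problem deferred above, it adds 0.5 to each table cell (a common smoothing choice; the constant and all counts below are my assumptions):

    import math

    def c_i(N, n, S, s, eps=0.5):
        """Term weight ci = K(N, n, S, s), with eps added to each cell
        (so each denominator grows by 2 * eps)."""
        p = (s + eps) / (S + 2 * eps)           # pi ~ s/S
        r = (n - s + eps) / (N - S + 2 * eps)   # ri ~ (n-s)/(N-S)
        return math.log((p * (1 - r)) / (r * (1 - p)))

    def rsv(query_terms, doc_terms, stats):
        """RSV: sum of ci over terms present in both query and document."""
        return sum(c_i(*stats[t]) for t in query_terms & doc_terms if t in stats)

    # Hypothetical counts: N=1000 docs, S=20 relevant; term in n=50 docs, s=12 relevant
    stats = {"jaguar": (1000, 50, 20, 12)}
    print(rsv({"jaguar", "car"}, {"jaguar", "speed"}, stats))  # ~3.58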
If non-relevant documents are approximated by the whole
collection, then ri (prob. of occurrence in non-relevant
documents for query) is n/N and
log (1– ri)/ri = log (N– n)/n ≈ log N/n = IDF!
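
A quick numeric check of this approximation, with invented counts (N = 1,000,000 documents and a term occurring in n = 1,000 of them):

    import math
    N, n = 1_000_000, 1_000
    r = n / N
    print(math.log((1 - r) / r))  # 6.9068...
    print(math.log(N / n))        # 6.9078..., essentially IDF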
pi (probability of occurrence in relevant documents) can be
estimated in various ways:
from relevant documents, if we know some
▪ Relevance weighting can be used in feedback loop
constant (Croft and Harper combination match) – then just get idf
weighting of terms
proportional to prob. of occurrence in collection
▪ more accurately, to log of this (Greiff, SIGIR 1998)
1. Assume that pi is constant over all xi in the query: pi = 0.5 (even odds) for any given doc
2. Determine a guess of the relevant document set: V is a fixed-size set of the highest-ranked documents on this model (note: now a bit like tf.idf!)
3. We need to improve our guesses for pi and ri, so use the distribution of xi in the docs in V. Let Vi be the set of documents containing xi:
▪ pi = |Vi| / |V|
▪ Assume that if a document is not retrieved, it is not relevant:
ri = (ni − |Vi|) / (N − |V|)
4. Go to 2. until the ranking converges, then return it
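
A minimal sketch of this iteration, under simplifying assumptions of my own (documents as term sets, a fixed |V|, and light smoothing to keep the logs finite):

    import math

    def bim_iterate(docs, query, v_size=10, iters=5):
        """docs: list of term sets; query: set of terms. Returns ranked doc indices."""
        N = len(docs)
        ni = {t: sum(t in d for d in docs) for t in query}
        p = {t: 0.5 for t in query}                    # step 1: even odds
        r = {t: (ni[t] + 0.5) / (N + 1) for t in query}
        ranking = list(range(N))
        for _ in range(iters):
            def score(d):
                return sum(math.log(p[t] * (1 - r[t]) / (r[t] * (1 - p[t])))
                           for t in query if t in docs[d])
            ranking = sorted(range(N), key=score, reverse=True)
            V = ranking[:v_size]                       # step 2: guessed relevant set
            for t in query:                            # step 3: re-estimate pi and ri
                vi = sum(t in docs[d] for d in V)
                p[t] = (vi + 0.5) / (len(V) + 1)
                r[t] = (ni[t] - vi + 0.5) / (N - len(V) + 1)
        return ranking                                 # step 4: iterate, then return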
1. Guess a preliminary probabilistic description of R and use it to retrieve a first set of documents V, as above.
2. Interact with the user to refine the description: learn some definite members of R and NR
3. Reestimate pi and ri on the basis of these
4. Or can combine the new information with the original guess (use a Bayesian prior):

pi(2) = (|Vi| + κ · pi(1)) / (|V| + κ)     (κ is the prior weight)

Repeat, thus generating a succession of approximations to R.
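
A one-line sketch of that update (function and argument names are mine):

    def update_pi(vi, v, pi_prev, kappa=5.0):
        """Combine the counts from V with the previous estimate via a Bayesian prior.
        vi = |Vi|, v = |V|, kappa = prior weight."""
        return (vi + kappa * pi_prev) / (v + kappa)

    print(update_pi(vi=3, v=10, pi_prev=0.5))  # 0.366..., pulled toward the prior 0.5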
Getting reasonable approximations of
probabilities is possible.
Requires restrictive assumptions:
term independence
terms not in query don’t affect the outcome
boolean representation of documents/queries/relevance
document relevance values are independent
Some of these assumptions can be removed
Problem: either requires partial relevance information or
can only derive somewhat inferior term weights
In general, index terms aren’t
independent
Dependencies can be complex
van Rijsbergen (1979) proposed a
model of simple tree
dependencies
Exactly Friedman and
Goldszmidt’s Tree Augmented
Naive Bayes (AAAI 13, 1996)
Each term dependent on one
other
In the 1970s, estimation problems
held back the success of this model
What is a Bayesian network?
A directed acyclic graph
Nodes
▪ Events or Variables
▪ Assume values.
▪ For our purposes, all Boolean
Links
▪ model direct dependencies between nodes
• Bayesian networks model causal
relations between events
[Figure: a minimal network a → c ← b. The roots carry priors p(a) and p(b); the child carries p(c|a,b) for all values of a, b, c: conditional dependence.]
• Inference in Bayesian Nets:
• Given probability distributions for the roots and conditional probabilities, we can compute the a priori probability of any instance
• Fixing assumptions (e.g., b was observed) will cause recomputation of probabilities
For more information see:
R.G. Cowell, A.P. Dawid, S.L. Lauritzen, and D.J. Spiegelhalter.
1999. Probabilistic Networks and Expert Systems. Springer Verlag.
J. Pearl. 1988. Probabilistic Reasoning in Intelligent Systems:
Networks of Plausible Inference. Morgan-Kaufman.
[Figure: Toy example network. Finals (f) and Project Due (d) are root nodes; Finals causes No Sleep (n); Finals and Project Due together cause Gloom (g); Gloom causes Triple Latte (t).]

Conditional probability tables:

P(f) = 0.3, P(¬f) = 0.7
P(d) = 0.4, P(¬d) = 0.6
P(n|f) = 0.9, P(n|¬f) = 0.3; P(¬n|f) = 0.1, P(¬n|¬f) = 0.7
P(g|f,d) = 0.99, P(g|f,¬d) = 0.9, P(g|¬f,d) = 0.8, P(g|¬f,¬d) = 0.3
P(t|g) = 0.99, P(t|¬g) = 0.1; P(¬t|g) = 0.01, P(¬t|¬g) = 0.9

• Independence assumption: P(t|g, f) = P(t|g)
• Joint probability:
P(f d n g t) = P(f) P(d) P(n|f) P(g|f d) P(t|g)
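
A minimal sketch that evaluates this joint for one full assignment of the variables (CPT values read from the tables above):

    P_f, P_d = 0.3, 0.4                                  # priors: Finals, Project Due
    P_n_f = {True: 0.9, False: 0.3}                      # P(n=1 | f)
    P_g_fd = {(True, True): 0.99, (True, False): 0.9,
              (False, True): 0.8, (False, False): 0.3}   # P(g=1 | f, d)
    P_t_g = {True: 0.99, False: 0.1}                     # P(t=1 | g)

    def joint(f, d, n, g, t):
        """P(f, d, n, g, t) = P(f) P(d) P(n|f) P(g|f,d) P(t|g)."""
        pf = P_f if f else 1 - P_f
        pd = P_d if d else 1 - P_d
        pn = P_n_f[f] if n else 1 - P_n_f[f]
        pg = P_g_fd[(f, d)] if g else 1 - P_g_fd[(f, d)]
        pt = P_t_g[g] if t else 1 - P_t_g[g]
        return pf * pd * pn * pg * pt

    print(joint(True, True, True, True, True))  # 0.3*0.4*0.9*0.99*0.99 = 0.1059...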
Goal
Given a user’s information need (evidence), find
probability a doc satisfies need
Retrieval model
Model docs in a document network
Model information need in a query network
[Figure: The inference network. The Document Network (large, but computed once per document collection) contains document nodes d1, d2, ..., dn; document-representation nodes t1, t2, ..., tn; and “concept” nodes r1, r2, ..., rk. The Query Network (small, computed once for every query) contains query-concept nodes c1, c2, ..., cm; high-level concept nodes q1, q2; and the goal node I.]

di - documents
ti - document representations
ri - “concepts”
ci - query concepts
qi - high-level concepts
I - goal node
Construct Document Network (once!)
For each query
Construct best Query Network
Attach it to Document Network
Find subset of di’s which maximizes the
probability value of node I (best subset).
Retrieve these di’s as the answer to query.
[Figure: A small instance. In the Document Network, documents d1 and d2 link to terms/concepts r1, r2, r3. In the Query Network, concepts c1, c2, c3 feed query operators q1 and q2 (AND/OR/NOT), which feed the information-need node i.]
Prior doc probability: P(d) = 1/n
P(r|d): within-document term frequency; tf.idf-based
P(c|r): 1-to-1 thesaurus
P(q|c): canonical forms of query operators. Always use things like AND and NOT; never store a full CPT*

*conditional probability table
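
A minimal sketch of what canonical forms buy us: probabilistic AND/OR/NOT computed directly from parent beliefs, so no conditional probability table is stored. (The closed forms below assume independent parents; this is my illustration, not Turtle and Croft's exact operator definitions.)

    from functools import reduce

    def p_and(parents):
        """P(q=1) for a canonical AND node: the product of parent beliefs."""
        return reduce(lambda acc, p: acc * p, parents, 1.0)

    def p_or(parents):
        """P(q=1) for a canonical OR node: 1 - prod(1 - p)."""
        return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), parents, 1.0)

    def p_not(p):
        """P(q=1) for a canonical NOT node."""
        return 1.0 - p

    # e.g. two concept nodes believed with probabilities 0.8 and 0.6
    print(p_and([0.8, 0.6]))  # 0.48
    print(p_or([0.8, 0.6]))   # 0.92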
[Figure: Worked example. The Document Network links the documents Hamlet and Macbeth to term nodes “reason”, “trouble”, and “double”. The Query Network maps “double” to the concept “two” and combines the concepts with OR and NOT operators into the user query.]
Prior probs don’t have to be 1/n.
“User information need” doesn’t have to be a
query - can be words typed, documents read, any
combination …
Phrases, inter-document links
Link matrices can be modified over time.
User feedback.
The promise of “personalization”
Document network built at indexing time
Query network built/scored at query time
Representation:
Link matrices from docs to any single term are like
the postings entry for that term
Canonical link matrices are efficient to store and
compute
Attach evidence only at roots of network
Can do single pass from roots to leaves
All sources served by Google!