Identifying Collocations for Recognizing Opinions
Janyce Wiebe, Theresa Wilson, Matthew Bell
University of Pittsburgh
Office of Naval Research grant N00014-95-1-0776
ACL01 Workshop on Collocation
Introduction
Subjectivity: aspects of language used to express opinions and evaluations (Banfield 1982)
Relevant for many NLP applications, such as information extraction and text categorization
This paper: identifying collocational clues of subjectivity
Outline
Subjectivity
Data and annotation
Unigram features
N-gram features
Generalized N-gram features
Document classification
Subjectivity Tagging
Recognizing opinions and evaluations (subjective sentences) as opposed to material objectively presented as true (objective sentences)
Banfield 1982, Fludernik 1993, Wiebe 1994, Stein & Wright 1995
Examples
At several different levels, it’s a fascinating tale. (subjective)
Bell Industries Inc. increased its quarterly to 10 cents from 7 cents a share. (objective)
Subjectivity
“Complained”  “You Idiot!”  “Terrible product”
“Enthused”  “Wonderful!”  “Great product”
“Speculated”  “Maybe”
Examples
Strong addressee-oriented negative evaluation:
I had in mind your facts, buddy, not hers.
Nice touch. “Alleges” whenever facts posted are not in your persona of what is “real.”
Applications: recognizing flames (Spertus 1997), personal e-mail filters (Kaufer 2000)
Examples
Opinionated, editorial language:
Look, this is a man who has great numbers.
We stand in awe of the Woodstock generation’s ability to be unceasingly fascinated by the subject of itself.
Applications: IR, text categorization (Kessler et al. 1997); do the writers purport to be objective?
Examples
Belief and speech reports:
Northwest Airlines settled the remaining lawsuits, a federal judge said.
“The cost of health care is eroding our standard of living and sapping industrial strength,” complains Walter Maher.
Applications: information extraction, summarization, intellectual attribution (Teufel & Moens 2000)
Other Applications
Review mining (Terveen et al. 1997)
Clustering documents by ideology (Sack 1995)
Style in machine translation and generation
(Hovy 1987)
Potential Subjective Elements
The lexeme “sap” is a potential subjective element; its instance “sapping” in the sentence below is a subjective element:
“The cost of health care is eroding standards of living and sapping industrial strength,” complains Walter Maher.
Subjectivity
Multiple types, sources, and targets
Somehow grown-ups believed that wisdom adhered to youth.
We stand in awe of the Woodstock generation’s ability to be unceasingly fascinated by the subject of itself.
Annotations
Manually tagged + existing annotations
Three levels:
expression level
sentence level
document level
Expression Level Annotations
[Perhaps you’ll forgive me] for reposting his response
They promised [e+ 2 yet] more for [e+ 3 really good] [e? 1 stuff]
Expression Level Annotations
Probably the most natural level
Difficult for manual and automatic tagging:
detailed
no predetermined classification unit
To date:
used for training and bootstrapping
Expression Level Data
1000 WSJ sentences (2 judges)
462 newsgroup messages (2 judges)
15,413 words of newsgroup data (1 judge)
Single round of tagging; results promising
Used to generate features, not for evaluation
Sentence Level Annotations
A sentence is labeled subjective if any significant expression of subjectivity appears
“The cost of health care is eroding our standard of living and sapping industrial strength,” complains Walter Maher.
“What an idiot,” the idiot presumably complained.
Document Level Annotations
This work: opinion pieces in the WSJ (editorials, letters to the editor, arts & leisure reviews)
Other work: flames, 1-star to 5-star reviews
+ Free source of data
+ More directly related to applications
Document Level Annotations
Opinion pieces contain objective sentences
Non-opinion pieces contain subjective sentences
News reports present reactions (van Dijk 1988)
“Critics claim …”
“Supporters argue …”
Editorials contain facts supporting the argument
Reviews contain information about the product
Class Proportions in WSJ Sample
Opinion pieces: 70% subjective sentences, 30% objective sentences (noise)
Non-opinion pieces: 43% subjective sentences (noise), 57% objective sentences
Word Distribution
13-17% of words are in opinion pieces
83-87% of words are in non-opinion pieces
Evaluation Metric for Feature S with Respect to Opinion Pieces
Precision(S) = # instances of S in opinion pieces / total # instances of S
Baseline for comparison: # words in opinion pieces / total # words
Given the distributions, the precision of even a perfect subjectivity clue would be low
Improvement over baseline is taken as evidence of a promising PSE
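
As a minimal sketch of this metric (the function names and data layout are illustrative assumptions, not from the talk):

    def feature_precision(in_opinion_flags):
        # Fraction of the feature's instances that occur inside opinion pieces.
        # in_opinion_flags: one boolean per instance of the feature S.
        flags = list(in_opinion_flags)
        return sum(flags) / len(flags) if flags else 0.0

    def baseline_precision(words_in_opinions, total_words):
        # Proportion of all corpus words that lie inside opinion pieces;
        # feature precisions are judged by their improvement over this value.
        return words_in_opinions / total_words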
Data
[Figure: the corpus divided into opinion pieces and non-opinion pieces]
Document Level Data
3 WSJ editions, each more than 150K words
Existing opinion-piece annotations used for training
Manually refined classifications used for testing
Refinement identified editorials not marked as such (3 hours per edition)
Kappa = .93 for 2 judges
Automatically Generated Unigram Features
Adjective and verb features were generated using distributional similarity (Lin 1998, Wiebe 2000)
Existing opinion-piece annotations used for training
Manually refined annotations used for testing
Unigram Feature Results (+prec / freq)

Feature   WSJ-10 (baseline 17%)   WSJ-33 (baseline 13%)
Adjs      +21 / 373               +09 / 2137
Verbs     +16 / 721               +07 / 3193
Example Adjective Feature
conclusive, undiminished, brute, amazing,
unseen, draconian, insurmountable, unqualified,
poetic, foxy, vintage, jaded, tropical, distributional,
discernible, adept, paltry, warm, reprehensible,
astonishing, surprising, commonplace, crooked,
dreary, virtuoso, trashy, sandy, static, virulent,
desolate, ours, proficient, noteworthy, insistent,
daring, unforgiving, agreeable, uncritical,
homicidal, comforting, erotic, resonant, ephemeral,
believable, epochal, dense, exotic, topical, …
Unique Words (Hapax Legomena)
Subjective elements contain more single-instance words than expected
Unique-1-gram feature: all words that appear exactly once in the test data
Precision is 1.5 times baseline precision
A frequent feature!
Unigram Feature Results (+prec / freq)

Feature         WSJ-10 (baseline 17%)   WSJ-33 (baseline 13%)
Adjs            +21 / 373               +09 / 2137
Verbs           +16 / 721               +07 / 3193
Unique-1-gram   +10 / 6065              +06 / 6048

Results are consistent, even with different identification procedures (similarly for WSJ-22)
Collocational PSEs
Started with the observation that low-precision words often compose higher-precision collocations:
get out
what a
for the last time
just as well
here we go again
Identifying Collocational PSEs
Searching for 2-grams, 3-grams, and 4-grams
No grammatical generalizations or constraints yet
Train on the data annotated with subjective elements (expression level)
Test on the manually refined opinion-piece data (document level)
Identifying Collocational PSEs: Training Data (reminder)
1000 WSJ sentences (2 judges)
462 newsgroup messages (2 judges)
15,413 words of newsgroup data (1 judge)
[Perhaps you’ll forgive me] for reposting his response
They promised [e+ 2 yet] more for [e+ 3 really good] [e? 1 stuff]
N-Grams
Each position is filled by a word|POS pair:
in|prep the|det air|noun
Identifying Collocational PSEs: Training, Step 1
Precision with respect to subjective elements is calculated for all 1-, 2-, 3-, and 4-grams in the training data
Precision(n-gram) = # subjective instances of n-gram / total # instances of n-gram
An instance of an n-gram is subjective if each word in the instance is in a subjective element
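
A minimal sketch of this step, assuming the expression-level data is flattened to a list of (token, in_subjective_element) pairs (this representation is an assumption for illustration):

    from collections import defaultdict

    def ngram_precisions(tagged_tokens, n):
        # tagged_tokens: list of (token, in_subjective_element) pairs built
        # from the expression-level annotations. An n-gram instance counts
        # as subjective only if every word in it lies inside a subjective
        # element.
        subjective = defaultdict(int)
        total = defaultdict(int)
        for i in range(len(tagged_tokens) - n + 1):
            window = tagged_tokens[i:i + n]
            gram = tuple(tok for tok, _ in window)
            total[gram] += 1
            if all(in_subj for _, in_subj in window):
                subjective[gram] += 1
        return {gram: subjective[gram] / total[gram] for gram in total}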
Identifying Collocational PSEs: Training
An instance of an n-gram is subjective if each word in the instance is in a subjective element
[Perhaps you’ll forgive me] for reposting his response
They promised [e+ 2 yet] more for [e+ 3 really good] [e? 1 stuff]
Identifying Collocational PSEs: Training, Step 2
N-gram PSEs selected based on their precisions, using two criteria:
1. Precision >= 0.1
2. Precision >= maximum precision of its constituents
Identifying Collocational PSEs: Training, Step 2
Precision >= maximum precision of its constituents:
prec(w1,w2) >= max(prec(w1), prec(w2))
prec(w1,w2,w3) >= max(prec(w1,w2), prec(w3)) or
prec(w1,w2,w3) >= max(prec(w1), prec(w2,w3))
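
A sketch of the selection test under the same assumed representation as above; an n-gram is kept if at least one binary split satisfies the criterion (extending the slide's 3-gram rule to 4-grams is my extrapolation):

    def select_collocational_pses(prec, min_precision=0.1):
        # prec: dict mapping word tuples (1-grams through 4-grams) to their
        # training precisions (unseen sub-grams default to 0.0).
        # "p >= max(parts) for SOME split" is equivalent to
        # "p >= the minimum over splits of that maximum".
        selected = set()
        for gram, p in prec.items():
            if len(gram) < 2 or p < min_precision:
                continue
            split_maxima = [max(prec.get(gram[:i], 0.0), prec.get(gram[i:], 0.0))
                            for i in range(1, len(gram))]
            if p >= min(split_maxima):
                selected.add(gram)
        return selected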
Results (+prec / freq)

Feature         WSJ-10 (baseline 17%)   WSJ-33 (baseline 13%)
Adjs            +21 / 373               +09 / 2137
Verbs           +16 / 721               +07 / 3193
Unique-1-gram   +10 / 6065              +06 / 6048
2-grams         +07 / 2182              +04 / 2080
3-grams         +09 / 271               +06 / 262
4-grams         +05 / 32                -03 / 30
Generalized Collocational PSEs
Replace each single-instance word in the training data with “UNIQUE”
Rerun the same training procedure, finding collocations such as highly|adverb UNIQUE|adj
To test the new collocations, first replace each single-instance word in the test data with “UNIQUE”
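
A minimal sketch of the generalization step (function and variable names are assumptions):

    from collections import Counter

    def generalize_uniques(tokens):
        # Replace every word that occurs exactly once in the data with the
        # placeholder UNIQUE; run on the training data before re-training,
        # and on the test data before matching the learned collocations.
        counts = Counter(tokens)
        return [tok if counts[tok] > 1 else "UNIQUE" for tok in tokens]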
Results (+prec / freq)

Feature         WSJ-10 (baseline 17%)   WSJ-33 (baseline 13%)
Adjs            +21 / 373               +09 / 2137
Verbs           +16 / 721               +07 / 3193
Unique-1-gram   +10 / 6065              +06 / 6048
2-grams         +07 / 2182              +04 / 2080
3-grams         +09 / 271               +06 / 262
4-grams         +05 / 32                -03 / 30
U-2-grams       +24 / 294               +14 / 288
U-3-grams       +27 / 132               +13 / 144
U-4-grams       +83 / 3                 +15 / 27
Example
highly|adverb UNIQUE|adj
highly unsatisfactory
highly unorthodox
highly talented
highly conjectural
highly erotic
Example
UNIQUE|verb out|IN
farm out
chuck out
ruling out
crowd out
flesh out
blot out
spoken out
luck out
Examples
UNIQUE|adj to|TO UNIQUE|verb
impervious to reason
strange to celebrate
wise to temper
they|pronoun are|verb UNIQUE|noun
they are fools
they are noncontenders
UNIQUE|noun of|IN its|pronoun
sum of its
usurpation of its
proprietor of its
How do Fixed and U-Collocations Compare?
Recall the original motivation for investigating fixed n-gram PSEs: the observation that low-precision words often compose higher-precision collocations
But unique words are probably not low precision
Are we finding the same collocations two different ways? Or are we finding new PSEs?
Comparison

WSJ-10                   2-grams   3-grams   4-grams
Intersecting instances   4         2         0
% overlap                0.0016    0.0049    0

WSJ-33: all 0s
Opinion-Piece Recognition using Linear Regression

Features                       %correct   TP   FP
Adjs, verbs                    .896        5    4
N-grams                        .899        5    3
Adjs, verbs, n-grams           .909        9    4
All features (+ max density)   .912       11    4

Max density: the maximum feature count in an 11-word window
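
A sketch of the max-density feature as described (the list representation is an assumption): slide a fixed 11-word window over the document and take the largest count of feature instances.

    def max_density(feature_flags, window=11):
        # feature_flags: one 0/1 value per word in the document, 1 where the
        # word is part of some feature instance. Returns the largest count
        # found in any window of `window` consecutive words.
        current = sum(feature_flags[:window])
        best = current
        for i in range(window, len(feature_flags)):
            current += feature_flags[i] - feature_flags[i - window]
            best = max(best, current)
        return best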
Future Work
Methods for recognizing non-compositional phrases (e.g., Lin 1999)
Mutual bootstrapping (Riloff and Jones 1999) to alternately recognize sequences and subjective fillers
Sentence Classification
Probabilistic classifier
Binary features: pronoun, adjective, number, modal other than “will”, adverb other than “not”, new paragraph
Lexical feature: good for subjective; good for objective; good for neither
10-fold cross validation; 51% baseline
72% average accuracy across folds
82% average accuracy on sentences rated certain
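
A hedged sketch of the binary feature extraction (the Penn Treebank tag names are my assumption; the talk does not specify a tagset):

    def sentence_features(tagged_sentence, starts_new_paragraph):
        # tagged_sentence: list of (word, pos) pairs for one sentence.
        pairs = [(w.lower(), t) for w, t in tagged_sentence]
        return {
            "pronoun":        any(t.startswith("PRP") for _, t in pairs),
            "adjective":      any(t.startswith("JJ") for _, t in pairs),
            "number":         any(t == "CD" for _, t in pairs),
            "modal_not_will": any(t == "MD" and w != "will" for w, t in pairs),
            "adverb_not_not": any(t.startswith("RB") and w != "not" for w, t in pairs),
            "new_paragraph":  starts_new_paragraph,
        }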
Test for Bias: Marginal Homogeneity
Tests whether the two judges’ marginal distributions are the same: p(i+) = p(+i) for all categories i
The worse the fit, the greater the bias
[Figure: 4x4 contingency table over categories C1-C4; each row marginal i+ = Xi is constrained to equal the corresponding column marginal +i = Xi]
Test for Symmetric Disagreement: Quasi-Symmetry
Tests relationships among the off-diagonal counts
The better the fit, the higher the correlation
[Figure: 4x4 contingency table over categories C1-C4 with the off-diagonal cells marked *]
Unigram PSEs
Adjectives and verbs identified using Lin’s distributional similarity (Lin 1998)
Distributional similarity is often used in NLP to find synonyms
Motivating hypothesis: words may be similar because they have similar subjective usages
Unigram Feature Generation
The slide’s pseudocode, rendered as Python (adjectives_in, most_similar, and precision are assumed helpers):

    adj_feature = set()
    for a in adjectives_in(training_data):
        s = {a} | most_similar(a, n)        # seed adjective plus its N most similar words
        p = precision(s, training_data)     # precision of the word set in the training data
        if p > t:
            adj_feature |= s

Many runs with various settings for N and T
Choose values of N and T on a validation set
Evaluate on a new test set
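
The parameter sweep might look like this minimal sketch (the candidate values and the helpers build_adj_feature and precision_on are assumptions for illustration):

    best = None
    for n in (5, 10, 20):               # candidate cluster sizes (assumed values)
        for t in (0.2, 0.3, 0.4):       # candidate precision thresholds (assumed values)
            feature = build_adj_feature(training_data, n, t)
            score = precision_on(validation_data, feature)
            if best is None or score > best[0]:
                best = (score, n, t)
    _, best_n, best_t = best            # then evaluate once with best_n, best_t on the test set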
Lin’s Distributional Similarity
[Figure: dependency parse of “I have a brown dog”, with relations R1-R4 linking the words]
Each word is described by the dependency triples it participates in:

Word    R     W
I       R1    have
have    R2    dog
brown   R3    dog
...
(Lin 1998)
Filtering
Pipeline: seed words -> words + clusters -> filtered set
A word and its cluster are removed if their precision on the training set is below a threshold
Parameters
The pipeline has two parameters: the cluster size (seed words -> words + clusters) and the precision threshold (words + clusters -> filtered set)
Lin’s Distributional Similarity
Each word is represented by the set of (R,W) pairs statistically correlated with it
sim(Word1, Word2) =
  [ Sum over shared pairs RWint of ( I(Word1,RWint) + I(Word2,RWint) ) ]
  / [ Sum over Word1’s pairs RWw1 of I(Word1,RWw1) + Sum over Word2’s pairs RWw2 of I(Word2,RWw2) ]
where I is the mutual information of a word with an (R,W) pair
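
A minimal sketch of the measure, assuming each word’s triples are stored as a dict from (R, W) pairs to mutual-information scores (this layout is an assumption):

    def lin_similarity(triples1, triples2):
        # triples: dict mapping (R, W) -> I(word, R, W) mutual-information
        # scores. The numerator sums both words' scores over the (R, W)
        # pairs they share; the denominator sums each word's scores over
        # all of its own pairs.
        shared = triples1.keys() & triples2.keys()
        numerator = sum(triples1[rw] + triples2[rw] for rw in shared)
        denominator = sum(triples1.values()) + sum(triples2.values())
        return numerator / denominator if denominator else 0.0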