Price Premium - Search and the New Economy


Search and the New Economy
Session 5
Mining User-Generated Content
Prof. Panos Ipeirotis
Today’s Objectives
• Tracking preferences using social networks
– Facebook API
– Trend tracking using Facebook
• Mining positive and negative opinions
– Sentiment classification for product reviews
– Feature-specific opinion tracking
• Economic-aware opinion mining
– Reputation systems in marketplaces
– Quantifying sentiment using econometrics
Top-10, Zeitgeist, Pulse, …
• Tracking top preferences has been around forever
Online Social Networking Sites
• Preferences listed and easily accessible
Facebook API
• Content easily extractable
• Easy to “slice and dice”
– List the top-5 books for 30-year old New Yorkers
– List the book that had the highest increase across
female population last week
– …
Demo
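The demo itself is not reproduced here; as an illustration of the kind of "slice and dice" queries above, here is a minimal sketch in Python, assuming a hypothetical table of profile data (columns user_id, age, city, gender, favorite_book, week) rather than the actual Facebook API.

```python
# Minimal sketch (not the Facebook API itself): "slice and dice" queries over a
# hypothetical table of user profiles with columns:
# user_id, age, city, gender, favorite_book, week.
import pandas as pd

profiles = pd.read_csv("profiles.csv")  # hypothetical export of profile data

# Top-5 books for 30-year-old New Yorkers
top5 = (profiles[(profiles.age == 30) & (profiles.city == "New York")]
        .favorite_book.value_counts().head(5))

# Book with the highest week-over-week increase among female users
weekly = (profiles[profiles.gender == "F"]
          .groupby(["week", "favorite_book"]).size().unstack(fill_value=0))
increase = (weekly.iloc[-1] - weekly.iloc[-2]).idxmax()

print(top5)
print("Largest weekly increase among women:", increase)
```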
Today’s Objectives
• Tracking preferences using social networks
– Facebook API
– Trend tracking using Facebook
• Mining positive and negative opinions
– Sentiment classification for product reviews
– Feature-specific opinion tracking
• Economic-aware opinion mining
– Reputation systems in marketplaces
– Quantifying sentiment using econometrics
Customer-generated Reviews
• Amazon.com started with books
• Today there are review sites for almost everything
• In contrast to "favorites" lists, we can get information about less popular products
Questions
• Are reviews representative?
• How do people express sentiment?
[Example Amazon review, annotated: the review text, its rating (1 … 5 stars), and its helpfulness votes from other customers]
Do People Trust Reviews?
• Law of large numbers: a single review, no; multiple reviews, yes
• Peer feedback: number of useful votes
• Perceived usefulness is affected by:
– Identity disclosure: Users trust real people
– Mixture of objective and subjective elements
– Readability, grammaticality
• Negative reviews that are useful may increase
sales! (Why?)
Are Reviews Representative?
What is the Shape of the Distribution of Number of Stars?
[Empty histograms: review counts by star rating (1–5). Guess the shape?]
Observation 1: Reporting Bias
[Histogram: observed review counts by star rating (1–5)]
Why?
Implications for WOM strategy?
Possible Reasons for Biases
• People don’t like to be critical
• People do not post if they do not feel strongly
about the product (positively or negatively)
Observation 2: The SpongeBob Effect
SpongeBob SquarePants versus the Oscar winners:
– Oscar winners 2000–2005: average rating 3.7 stars
– SpongeBob DVDs: average rating 4.1 stars
And the winner is… SpongeBob!
If the SpongeBob effect is common, then ratings do not accurately signal the quality of the resource
What is Happening Here?
• People choose movies they think they will like, and often they are
right
– Ratings only tell us that “fans of SpongeBob like SpongeBob”
– Self-selection
• Oscar winners draw a wider audience
– Rating is much more representative of the general population
• When SpongeBob gets a wider audience, his ratings drop
Title                   | # Ratings | Avg Rating
SpongeBob Season 2      | 3047      | 4.12
Tide and Seek           | 3114      | 4.05
SpongeBob the Movie     | 21,918    | 3.49
Home Sweet Pineapple    | 2007      | 4.10
Fear of a Krabby Patty  | 1641      | 4.06
Effect of Self-Selection: Example
• 10 people see SpongeBob’s 4-star ratings
– 3 are already SpongeBob fans, rent movie, award 5 stars
– 6 already know they don’t like SpongeBob, do not see
movie
– Last person doesn’t know SpongeBob, impressed by high
ratings, rents movie, rates it 1-star
Result:
• Average rating remains unchanged: (5+5+5+1)/4
= 4 stars
• 9 of 10 consumers did not really need rating
system
• Only consumer who actually used the rating
system was misled
Bias-Resistant Reputation System
• Want P(S) but we collect data on P(S|R)
S = Are satisfied with resource
R = Resource selected (and reviewed)
• However, P(S|E) ≈ P(S|E,R)
E = Expects that will like the resource
– Likelihood of satisfaction depends primarily on expectation of
satisfaction, not on the selection decision
– If we can collect prior expectation, the gap between evaluation
group and feedback group disappears
• whether you select the resource or not doesn’t matter
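To make the argument concrete, here is a minimal simulation sketch (illustrative numbers, not lecture data) of a model in which satisfaction depends only on prior expectation and fans are far more likely to self-select into reviewing: P(S|R) overstates P(S), while P(S|E) and P(S|E,R) coincide.

```python
# Minimal simulation sketch of self-selection bias (illustrative model).
# Assumption: satisfaction depends on prior expectation E (fan vs. non-fan),
# and fans are far more likely to select (and hence review) the resource.
import random

random.seed(0)
N = 100_000
satisfied_all, satisfied_reviewed, reviewed = 0, 0, 0
sat_fans, sat_fans_reviewed, fans, fans_reviewed = 0, 0, 0, 0

for _ in range(N):
    fan = random.random() < 0.3            # E: expects to like the resource
    p_satisfied = 0.9 if fan else 0.2      # satisfaction driven by expectation
    p_select = 0.8 if fan else 0.05        # fans self-select into reviewing
    satisfied = random.random() < p_satisfied
    selects = random.random() < p_select

    satisfied_all += satisfied
    if selects:
        reviewed += 1
        satisfied_reviewed += satisfied
    if fan:
        fans += 1
        sat_fans += satisfied
        if selects:
            fans_reviewed += 1
            sat_fans_reviewed += satisfied

print("P(S)     =", satisfied_all / N)                  # what we want
print("P(S|R)   =", satisfied_reviewed / reviewed)      # what ratings measure (inflated)
print("P(S|E)   =", sat_fans / fans)                    # condition on expectation...
print("P(S|E,R) =", sat_fans_reviewed / fans_reviewed)  # ...and the gap disappears
```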
Bias-Resistant Reputation System
• Before viewing, ask: "I think I will:"
  – Love this movie
  – Like this movie
  – It will be just OK
  – Somewhat dislike this movie
  – Hate this movie
  (this separates big fans, skeptics, and everyone else)
• After viewing, ask: "I liked this movie:"
  – Much more than expected
  – More than expected
  – About the same as I expected
  – Less than I expected
  – Much less than I expected
Conclusions
1. Reporting bias and self-selection bias exist in most cases of consumer choice
2. Bias means that user ratings do not reflect the distribution of satisfaction in the evaluation group
   – Consumers have no idea what "discount" to apply to ratings to get a true idea of quality
3. Many current rating systems may be self-defeating
   – Accurate ratings promote self-selection, which leads to inaccurate ratings
4. Collecting prior expectations may help address this problem
OK, we know the biases
• Can we get more knowledge?
• Can we dig deeper than the numeric ratings?
– “Read the reviews!”
– "There are too many!"
Independent Sentiment Analysis
• Often we need to analyze opinions
– Can we provide review summaries?
– What should the summary be?
Basic Sentiment classification
• Classify full documents (e.g., reviews, blog
postings) based on the overall sentiment
– Positive, negative and (possibly) neutral
• Similar but also different from topic-based text
classification.
– In topic-based classification, topic words are important
• Diabetes, cholesterol → health
• Election, votes → politics
– In sentiment classification, sentiment words are more
important, e.g., great, excellent, horrible, bad, worst, etc.
– Sentiment words are usually adjectives or adverbs or
some specific expressions (“it rocks”, “it sucks” etc.)
• Useful when doing aggregate analysis
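As a concrete illustration of document-level sentiment classification, here is a minimal sketch assuming scikit-learn and a tiny hypothetical set of labeled reviews; it is not the lecture's classifier, just the standard bag-of-words approach described above.

```python
# Minimal sketch of document-level sentiment classification
# (assumes scikit-learn and a hypothetical list of labeled reviews).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = ["Great camera, excellent pictures", "Horrible battery, worst purchase",
           "It rocks, absolutely love it", "It sucks, broke after a week"]
labels = ["positive", "negative", "positive", "negative"]

# Unlike topic classification, the useful signal comes mostly from sentiment-bearing
# words (adjectives/adverbs and expressions like "it rocks"), which a bag-of-words
# representation still captures.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(reviews, labels)

print(clf.predict(["excellent lenses but bad battery"]))
```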
Can we go further?
• Sentiment classification is useful, but it does not
find what the reviewer liked and disliked.
– Negative sentiment does not mean that the reviewer
does not like anything about the object.
– Positive sentiment does not mean that the reviewer likes
everything
• Go to the sentence level and feature level
Extraction of features
• Two types of features: explicit and implicit
• Explicit features are mentioned and evaluated directly
– “The pictures are very clear.”
– Explicit feature: picture
• Implicit features are evaluated but not mentioned
– “It is small enough to fit easily in a coat pocket or purse.”
– Implicit feature: size
• Extraction: frequency-based approach (see the sketch after this slide)
– Focus on frequent features (the main features)
– Infrequent features can be listed as well
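A minimal sketch of the frequency-based idea, assuming NLTK (with its tokenizer and POS-tagger models installed) and an illustrative support threshold; real systems use fuller noun-phrase chunking.

```python
# Minimal sketch of frequency-based explicit-feature extraction
# (assumes NLTK with the 'punkt' and POS-tagger models downloaded;
# the support threshold is illustrative).
from collections import Counter
import nltk

reviews = ["The pictures are very clear.",
           "Great picture quality, but the battery dies quickly.",
           "Battery life is short; the pictures are amazing."]

nouns = Counter()
for review in reviews:
    for word, tag in nltk.pos_tag(nltk.word_tokenize(review.lower())):
        if tag.startswith("NN"):           # nouns approximate product features
            nouns[word] += 1

min_support = 2                            # keep frequent features (main features)
frequent_features = [w for w, c in nouns.items() if c >= min_support]
print(frequent_features)                   # e.g., ['pictures', 'battery']
```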
Identify opinion orientation of features
• Using sentiment words and phrases
– Identify words that are often used to express positive or
negative sentiments
– There are many ways (dictionaries, WordNet, collocation with known adjectives, …)
• Use orientation of opinion words as the sentence
orientation, e.g.,
– Sum: +1 for each positive word near the feature, -1 for each negative word near the feature (a sketch follows below)
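A minimal sketch of this orientation-sum rule; the opinion-word lists and the window size are illustrative assumptions, not part of the lecture.

```python
# Minimal sketch of the +1/-1 orientation rule for a feature within a sentence
# (the opinion-word lists are illustrative; a real system would use a lexicon/WordNet).
POSITIVE = {"great", "excellent", "clear", "amazing", "good"}
NEGATIVE = {"bad", "horrible", "poor", "awful", "worst"}

def feature_orientation(sentence, feature, window=4):
    """Sum +1 for each positive and -1 for each negative word near the feature."""
    words = sentence.lower().replace(".", "").split()
    if feature not in words:
        return 0
    idx = words.index(feature)
    nearby = words[max(0, idx - window): idx + window + 1]
    return sum(+1 for w in nearby if w in POSITIVE) + sum(-1 for w in nearby if w in NEGATIVE)

print(feature_orientation("The pictures are very clear.", "pictures"))        # +1
print(feature_orientation("The battery is absolutely horrible.", "battery"))  # -1
```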
Two types of evaluations
• Direct Opinions: sentiment expressions on some
objects/entities, e.g., products, events, topics,
individuals, organizations, etc
– E.g., “the picture quality of this camera is great”
– Subjective
• Comparisons: relations expressing similarities, differences, or ordering of more than one object
– E.g., "car x is cheaper than car y."
– Objective or subjective
– Compares feature quality
– Compares feature existence
Visual Summarization & Comparison
[Summary chart: positive (+) and negative (–) opinion counts per feature (picture, battery, zoom, size, weight) for Digital camera 1]
[Comparison chart: Digital camera 1 vs. Digital camera 2 across the same features]
Example: iPod vs. Zune
Today’s Objectives
• Tracking preferences using social networks
– Facebook API
– Trend tracking using Facebook
• Mining positive and negative opinions
– Sentiment classification for product reviews
– Feature-specific opinion tracking
• Economic-aware opinion mining
– Reputation systems in marketplaces
– Quantifying sentiment using econometrics
Comparative Shopping in e-Marketplaces
Customers Rarely Buy the Cheapest Item
Are Customers Irrational?
[Example: BuyDig.com gets a price premium of $11.04, i.e., customers pay more than the minimum price]
Price Premiums @ Amazon
[Histogram: number of transactions (0–10,000) by price premium (-100 to +100)]
Why Not Buy the Cheapest?
You buy more than a product:
• Customers do not pay only for the product
• Customers also pay for a set of fulfillment characteristics
  – Delivery
  – Packaging
  – Responsiveness
  – …
Customers care about the reputation of sellers!
Reputation Systems are Review Systems for Humans
Example of a reputation profile
Basic idea
Conjecture:
Price premiums measure reputation
Reputation is captured in text feedback
Examine how text affects price premiums
(and do sentiment analysis as a side effect)
Outline
• How we capture price premiums
• How we structure text feedback
• How we connect price premiums and text
Data
Overview
• Panel of 280 software products sold by Amazon.com × 180 days
• Data from the "used goods" market
  – Amazon Web Services facilitates capturing transactions
  – No need for any proprietary Amazon data
Data: Secondary Marketplace
Data: Capturing Transactions
[Timeline: Jan 1 – Jan 8]
We repeatedly "crawl" the marketplace using Amazon Web Services
While a listing appears → the item is still available → no sale
Data: Capturing Transactions
[Timeline: Jan 1 – Jan 10]
We repeatedly "crawl" the marketplace using Amazon Web Services
When a listing disappears → the item was sold
Data: Transactions
Capturing transactions and “price premiums”
[Timeline: Jan 1 – Jan 10; item sold on 1/9]
When an item is sold, its listing disappears
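A minimal sketch of this sale-detection logic, using hypothetical listing snapshots in place of the actual Amazon Web Services crawls.

```python
# Minimal sketch of the sale-detection logic (hypothetical data; the study crawled
# listings via Amazon Web Services). A listing present on one crawl but absent on
# the next is treated as sold on the later date.
def detect_sales(snapshots):
    """snapshots: list of (date, set_of_listing_ids) ordered by date."""
    sales = []
    for (prev_date, prev_ids), (curr_date, curr_ids) in zip(snapshots, snapshots[1:]):
        for listing_id in prev_ids - curr_ids:      # disappeared between crawls
            sales.append((listing_id, curr_date))   # item sold (approximately) on curr_date
    return sales

snapshots = [("Jan 8", {"A", "B", "C"}),
             ("Jan 9", {"A", "C"}),      # listing B disappeared -> sold on Jan 9
             ("Jan 10", {"A"})]          # listing C disappeared -> sold on Jan 10
print(detect_sales(snapshots))           # [('B', 'Jan 9'), ('C', 'Jan 10')]
```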
Data: Variables of Interest
Price Premium
• Difference between the price charged by a seller and the listed price of a competitor:
  Price Premium = (Seller Price – Competitor Price)
• Calculated for each seller-competitor pair, for each transaction
• Each transaction generates M observations (M: number of competing sellers)
Alternative definitions (a small computational sketch follows this slide):
• Average Price Premium (one per transaction)
• Relative Price Premium (relative to seller price)
• Average Relative Price Premium (combination of the above)
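A minimal computational sketch of these definitions, with hypothetical prices:

```python
# Minimal sketch of the price-premium definitions (hypothetical listing data).
def price_premiums(seller_price, competitor_prices):
    premiums = [seller_price - p for p in competitor_prices]     # one per competitor (M observations)
    avg_premium = sum(premiums) / len(premiums)                  # average price premium
    rel_premiums = [d / seller_price for d in premiums]          # relative price premium
    avg_rel_premium = sum(rel_premiums) / len(rel_premiums)      # average relative price premium
    return premiums, avg_premium, rel_premiums, avg_rel_premium

# A transaction at $55 against three competing listings
premiums, avg_p, rel_p, avg_rel_p = price_premiums(55.0, [50.0, 52.0, 60.0])
print(premiums)      # [5.0, 3.0, -5.0]
print(avg_p)         # 1.0
print(avg_rel_p)     # about 0.018
```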
Price Premiums @ Amazon
[Histogram: number of transactions (0–10,000) by price premium (-100 to +100)]
Average Price Premiums @ Amazon
[Histogram: number of transactions (0–1,200) by average price premium (-100 to +100)]
Relative Price Premiums
[Histogram: number of transactions (0–20,000) by relative price premium]
Average Relative Price Premiums
[Histogram: number of transactions (0–2,500) by average relative price premium]
Outline
• How we capture price premiums
• How we structure text feedback
• How we connect price premiums and text
Decomposing Reputation
Is reputation just a scalar metric?
• Many studies assumed a "monolithic" reputation
• Instead, break down reputation into individual components
• Sellers are characterized by a set of fulfillment characteristics (packaging, delivery, and so on)
What are the characteristics valued by consumers?
• We think of each characteristic as a dimension, represented by a noun, noun phrase, verb, or verbal phrase ("shipping", "packaging", "delivery", "arrived")
• Use (simple) Natural Language Processing tools
• Scan the textual feedback to discover these dimensions
Decomposing and Scoring Reputation
• We think of each characteristic as a dimension, represented by a noun or verb phrase ("shipping", "packaging", "delivery", "arrived")
• Sellers are rated on these dimensions by buyers using modifiers (adjectives or adverbs), not numerical scores
  – "Fast shipping!"
  – "Great packaging"
  – "Awesome unresponsiveness"
  – "Unbelievable delays"
  – "Unbelievable price"
How can we find out the meaning of these adjectives?
Structuring Feedback Text: Example
Parsing the feedback
P1: I was impressed by the speedy delivery! Great Service!
P2: The item arrived in awful packaging, but the delivery was speedy
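A minimal sketch of this parsing step, assuming NLTK; it only looks for adjectives immediately preceding nouns, so it finds pairs such as (speedy, delivery), (great, service), and (awful, packaging) but misses predicative uses like "the delivery was speedy".

```python
# Minimal sketch of the parsing step: extract (modifier, dimension) pairs.
# Assumes NLTK with its tokenizer/POS-tagger models; a real system would use
# fuller NLP than adjacent adjective-noun pairs.
import nltk

feedback = ["I was impressed by the speedy delivery! Great Service!",
            "The item arrived in awful packaging, but the delivery was speedy"]

pairs = []
for post in feedback:
    tagged = nltk.pos_tag(nltk.word_tokenize(post.lower()))
    for (w1, t1), (w2, t2) in zip(tagged, tagged[1:]):
        if t1.startswith("JJ") and t2.startswith("NN"):   # adjective immediately before a noun
            pairs.append((w1, w2))

print(pairs)   # e.g., [('speedy', 'delivery'), ('great', 'service'), ('awful', 'packaging')]
```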
Deriving a reputation score
• We assume that a modifier assigns a "score" to a dimension
  – α(μ, k): score assigned when modifier μ evaluates the k-th dimension
  – w(k): weight of the k-th dimension
• Thus, the overall (text) reputation score Π(i) is a weighted sum over the observed (modifier, dimension) pairs:
  Π(i) = 2·α(speedy, delivery)·w(delivery)
       + 1·α(great, service)·w(service)
       + 1·α(awful, packaging)·w(packaging)
  (both the scores α and the weights w are unknown)
Outline
• How we capture price premiums
• How we structure text feedback
• How we connect price premiums and text
Sentiment Scoring with Regressions
Scoring the dimensions
• Use price premiums as the "true" reputation score Π(i)
• Use regression to estimate the scores (coefficients):
  Price Premium = 2·α(speedy, delivery)·w(delivery)
                + 1·α(great, service)·w(service)
                + 1·α(awful, packaging)·w(packaging)
  (the α·w products are the estimated regression coefficients)
Regressions
• Control for all variables that affect price premiums
• Control for all numeric scores of reputation
• Examine the effect of text: e.g., a seller with "fast delivery" in its feedback commands a $10 premium over a seller with "slow delivery", everything else being equal
  → "fast delivery" is $10 better than "slow delivery"
(a regression sketch follows this slide)
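A minimal sketch of such a regression, assuming scikit-learn and hypothetical transaction data; the real analysis includes the controls listed above, but the mechanics are the same: phrase counts as regressors, price premium as the dependent variable, coefficients read as dollar values.

```python
# Minimal sketch of the regression step (assumes scikit-learn; data is hypothetical).
# Each row is one transaction: counts of opinion phrases in the seller's feedback;
# the target is the observed price premium. Coefficients are read as dollar values.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_extraction import DictVectorizer

transactions = [
    ({"fast delivery": 12, "great packaging": 5, "slow delivery": 0}, 14.0),
    ({"fast delivery": 2,  "great packaging": 1, "slow delivery": 7}, -6.0),
    ({"fast delivery": 8,  "great packaging": 0, "slow delivery": 1}, 7.5),
    ({"fast delivery": 0,  "great packaging": 3, "slow delivery": 5}, -3.0),
]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform([counts for counts, _ in transactions])
y = np.array([premium for _, premium in transactions])

model = LinearRegression().fit(X, y)
for phrase, coef in zip(vec.get_feature_names_out(), model.coef_):
    print(f"{phrase}: ${coef:+.2f}")     # estimated dollar value of each phrase
```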
Some Indicative Dollar Values
[Chart: opinion phrases ranked by estimated dollar value, from negative to positive; the method captures misspellings as well]
• A natural method for extracting sentiment strength and polarity
• Example: "good packaging" gets an estimated value of -$0.56. Positive or negative? In this marketplace, merely "good" packaging falls short of what buyers expect, so the phrase carries a negative value
• The estimates naturally capture the pragmatic meaning of a phrase within the given context
Results
Some dimensions that matter:
• Delivery and contract fulfillment (extent and speed)
• Product quality and appropriate description
• Packaging
• Customer service
• Price (!)
• Responsiveness/communication (speed and quality)
• Overall feeling (transaction)
More Results
Further evidence: Who will make the sale?
• A classifier that predicts the sale given a set of sellers
• Binary decision between a seller and a competitor
• Used decision trees (for interpretability); a sketch follows this slide
• Trained on data from Oct–Jan, tested on data from Feb–Mar
Predictive performance:
• Only prices and product characteristics: 55%
• + numerical reputation (stars), lifetime: 74%
• + encoded textual information: 89%
• Text only: 87%
Text carries more information than the numeric metrics
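A minimal sketch of such a seller-vs-competitor classifier, assuming scikit-learn; the features and data are hypothetical stand-ins for the price, star-rating, lifetime, and text-derived differences described above.

```python
# Minimal sketch of the "who makes the sale" classifier (hypothetical features/data;
# the study trained on Oct-Jan and tested on Feb-Mar). Each row compares a seller
# with a competitor; the label is whether the seller made the sale.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features per seller-competitor pair: price difference, star-rating difference,
# lifetime difference, and a text-derived reputation-score difference.
X = [
    [ +5.0, +0.4, +200, +2.1],
    [ -3.0, -0.2,  -50, -1.5],
    [+10.0, +0.1,  +30, +0.8],
    [ -8.0, -0.5, -300, -2.0],
    [ +2.0,  0.0,  +10, +1.2],
    [ -1.0, +0.3, +100, -0.4],
]
y = [1, 0, 1, 0, 1, 0]   # 1 = seller made the sale

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["d_price", "d_stars", "d_lifetime", "d_text_score"]))
```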
Other applications
Summarize and query reputation data
• Give me all merchants that deliver fast:
  SELECT merchant FROM reputation
  WHERE delivery > 'fast'
• Summarize the reputation of seller XYZ Inc.:
  – Delivery: 3.8/5
  – Responsiveness: 4.8/5
  – Packaging: 4.9/5
Pricing reputation
• Given the competition, merchant XYZ can charge $20 more and still make the sale (confidence: 83%)
Reputation Pricing Tool for Sellers
[Mockup: dashboard for seller uCameraSite.com]
• Your last 5 transactions in Cameras: Canon PowerShot x300, Kodak EasyShare 5.0MP, Nikon Coolpix 5.1MP, Fuji FinePix 5.1, Canon PowerShot x900
• For the Canon PowerShot x300: Your Price $399, Your Reputation Price $419, Your Reputation Premium $20 (5%) left on the table
• Your competitive landscape, price (reputation): Seller 1 $431 (4.8), Seller 2 $409 (4.65), You $399 (4.7), Seller 3 $382 (3.9), Seller 4 $379 (3.6), Seller 5 $376 (3.4)
Tool for Seller Reputation Management
Quantitatively understand & manage seller reputation
[Mockup: dashboard showing (a) how your customers see you relative to other sellers, as percentiles of all merchants per dimension (Service 35%, Packaging 69%, Delivery 89%, Quality 95%, Overall 82%), and (b) the dimensions of your reputation with their relative importance to your customers (Quality 45%, Delivery 25%, Other 14%, Service 9%, Packaging 7%)]
• RSI products automatically identify the dimensions of reputation from textual feedback
• Dimensions are quantified relative to other sellers and relative to buyer importance
• Sellers can understand their key dimensions of reputation and manage them over time
• Arms sellers with vital information to compete on reputation dimensions other than low price
Tool for Buyers
[Mockup: marketplace search for "Canon PS SD700" in the used market (e.g., Amazon), price range $250–$300; sellers are compared along reputation dimensions (price, service, packaging, delivery), and results can be sorted by price, service, delivery, or other dimensions]
Summary
• User feedback defines reputation → price premiums
• Generalize: user-generated content affects "markets"
• Reviews and product sales
• News/blogs and elections
Product Reviews and Product Sales
• Examine changes in demand and estimate the weights of features and the strength of evaluations (a small sketch follows this slide)
  [Estimated demand effects, reconstructed from the slide figure: "excellent photos" +6%, "excellent lenses" +3%, "poor photos" -2%, "poor lenses" -1%]
  – The feature "photos" is two times more important than "lenses"
  – "Excellent" is positive, "poor" is negative
  – "Excellent" is three times stronger than "poor"
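A minimal sketch showing how the two ratios follow from a multiplicative model, effect(evaluation, feature) = strength(evaluation) × weight(feature); the -2% figure for "poor photos" is reconstructed so that the stated ratios hold.

```python
# Minimal sketch of the multiplicative model behind this slide:
# effect(evaluation, feature) = strength(evaluation) * weight(feature)
# (the -2% figure for "poor photos" is reconstructed, not taken from the lecture data).
effects = {("excellent", "photos"): +6.0, ("excellent", "lenses"): +3.0,
           ("poor", "photos"): -2.0, ("poor", "lenses"): -1.0}

weight_ratio = effects[("excellent", "photos")] / effects[("excellent", "lenses")]
strength_ratio = abs(effects[("excellent", "photos")] / effects[("poor", "photos")])
print(weight_ratio)     # 2.0 -> "photos" is twice as important as "lenses"
print(strength_ratio)   # 3.0 -> "excellent" is three times stronger than "poor"
```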
Question: Reviews and Ads
Given product review summaries (potentially with
economic impact), can we improve ad generation?
• How?
• Is your strategy incentive-compatible?
Sentiment & Presidential Election
Political News and Prediction Markets
[Charts: news coverage and prediction-market prices for Hillary Clinton and for Mitt Romney, including the days around Feb 2nd]
Summary
• We can quantify unstructured, qualitative data. We need:
• A context in which content is influential and not redundant
(experiential content for instance)
• A measurable economic variable: price (premium),
demand, cost, customer satisfaction, process cycle time
• Methods for structuring unstructured content
• Methods for aggregating the variables in a business
context-aware manner
Question:
• What needs to be done for other types of user-generated content?
– Structuring: Opinions are expressed in many ways
– Independent summaries: Not all scenarios have
associated economic outcomes, or difficult to measure
(e.g., discussion about product pre-announcement)
– Personalization: The weight of the opinion of each
person varies (interesting future direction!)
– Data collection: evaluations are rarely all in one place