Eye-Tracking Analysis of User Behavior in WWW
Eye-Tracking Analysis of User Behavior in WWW Search
Laura Granka
Thorsten Joachims
Geri Gay
why use eye-tracking for information retrieval?
• Understand how searchers evaluate online search results
• Enhanced interface design
• More accurate interpretation of implicit feedback (e.g., clickthrough data)
• More targeted metrics for evaluating retrieval performance
Figure: popular regions are highlighted through shadow intensity
key research questions
• How long does it take searchers to select a document?
• How many abstracts do searchers look at before making a selection?
• Do searchers look at abstracts ranked lower than the selected document?
• Do searchers view abstracts linearly?
• Which parts of the abstract are most likely to be viewed?
what is eye-tracking?
• Device to detect and record where and what people look at
• Multiple applications: reading, usability, visual search, in both physical and virtual contexts
Figures: eye-tracking device; view of subject's pupil on monitor, used for calibration; Cornell HCI eye-tracking configuration
ocular indices for www tracking
• Fixations: ~200-300 ms; information is acquired (see the detection sketch below)
• Saccades: extremely rapid movements between fixations
• Pupil dilation: size of pupil indicates interest, arousal
Aggregate eye-tracking graphs depict viewing intensity in key regions.
"Scanpath" output depicts the pattern of movement throughout the screen; black markers represent fixations.
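As an illustration only, here is a minimal sketch of dispersion-threshold fixation detection (I-DT), one common way to segment raw gaze samples into fixations and saccades. The thresholds, sample format, and function names are assumptions chosen for illustration, not the device's or the study's actual processing.

from typing import List, Tuple

Sample = Tuple[float, float, float]  # (time in ms, x, y) gaze sample

def detect_fixations(samples: List[Sample],
                     max_dispersion: float = 25.0,  # pixels, assumed threshold
                     min_duration: float = 200.0    # ms, roughly the lower bound cited above
                     ) -> List[Tuple[float, float, float, float]]:
    """Return fixations as (start_ms, end_ms, centroid_x, centroid_y)."""
    def dispersion(window):
        xs = [x for _, x, _ in window]
        ys = [y for _, _, y in window]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations = []
    i = 0
    while i < len(samples):
        # Grow a window that spans at least the minimum fixation duration.
        j = i
        while j < len(samples) and samples[j][0] - samples[i][0] < min_duration:
            j += 1
        if j >= len(samples):
            break
        if dispersion(samples[i:j + 1]) <= max_dispersion:
            # Extend the window while the gaze points stay tightly clustered.
            while j + 1 < len(samples) and dispersion(samples[i:j + 2]) <= max_dispersion:
                j += 1
            window = samples[i:j + 1]
            xs = [x for _, x, _ in window]
            ys = [y for _, _, y in window]
            fixations.append((window[0][0], window[-1][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1  # samples between fixations are treated as saccades
        else:
            i += 1     # no fixation starting here; slide forward by one sample
    return fixations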
experimental search tasks
• Ten search tasks given to all participants
• Search topics included travel, science, movies, local, television, college, and trivia
• Searches evenly split between informational and navigational tasks
experimental procedures
• Users conducted live Google searches
• Users allowed to search freely, with any queries
• Script removed all ad content
• Proxy stored all pages and log files
Figure: Specific "zones" were created around each result, enabling eye movements to be analyzed specific to the rankings
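A hedged sketch of this zone-based analysis: each detected fixation is assigned to a per-result zone so that viewing time can be aggregated by rank. The zone coordinates and data layout are assumed values for illustration, not the study's actual page geometry.

from collections import defaultdict
from typing import Dict, List, Tuple

Fixation = Tuple[float, float, float, float]  # (start_ms, end_ms, x, y)
Zone = Tuple[int, float, float]               # (rank, top_y, bottom_y) in page pixels

def time_per_rank(fixations: List[Fixation], zones: List[Zone]) -> Dict[int, float]:
    """Sum fixation duration (ms) that falls inside each result's zone."""
    totals: Dict[int, float] = defaultdict(float)
    for start, end, _x, y in fixations:
        for rank, top, bottom in zones:
            if top <= y < bottom:
                totals[rank] += end - start
                break
    return dict(totals)

# Example layout: ten equally tall 90 px zones starting at y = 120 (assumed values).
zones = [(rank, 120 + 90 * (rank - 1), 120 + 90 * rank) for rank in range(1, 11)]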
sample eye-tracking output
overall searching behavior
How long does it take users to select a document?
Time to select document: time spent before a result is clicked
Figure: mean time (s) per search task, with tasks ordered from more difficult (greyhound, emeril, antibiotics, primary, housing) to less difficult (dude ranch, time machine, mountain, jordan, cornell)
Overall mean: 5.7 seconds, St.D: 5.4
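A small sketch of how per-task selection times like these could be aggregated from the proxy log files mentioned earlier. The record field names (task, results_shown_ms, click_ms) are assumptions about the log format, not the study's actual schema.

from collections import defaultdict
from statistics import mean, stdev

# Each record is assumed to hold the task name, the time the results page was
# shown, and the time of the click (all in ms); the field names are invented.
records = [
    # {"task": "cornell", "results_shown_ms": 0, "click_ms": 3200}, ...
]

per_task = defaultdict(list)
for r in records:
    per_task[r["task"]].append((r["click_ms"] - r["results_shown_ms"]) / 1000.0)

for task, times in sorted(per_task.items(), key=lambda kv: -mean(kv[1])):
    sd = stdev(times) if len(times) > 1 else 0.0
    print(f"{task:15s} mean {mean(times):4.1f}s  sd {sd:4.1f}s  n={len(times)}")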
overall viewing behavior
How many abstracts do we view, and in what order?
Most likely to view only two documents per results set
Figure: frequency distribution of the total number of abstracts viewed per page (1 to 10); notice the dip after the page break
Mean: 3.07; Median/Mode: 2.00
overall viewing behavior
How many abstracts do we view, and in what order?
Results viewed linearly
Figure: mean fixation value of arrival (instance of arrival to each result) by rank of result (1-10)
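One way to quantify the linearity of viewing plotted above is the "instance of arrival": for each results page, record the order in which each rank receives its first fixation, then average over pages. A minimal sketch with assumed inputs:

from typing import Dict, List

def arrival_order(fixated_ranks: List[int]) -> Dict[int, int]:
    """fixated_ranks: result ranks in the order they were fixated (repeats allowed).
    Returns rank -> position of its first fixation (1 = fixated first)."""
    order: Dict[int, int] = {}
    for rank in fixated_ranks:
        if rank not in order:
            order[rank] = len(order) + 1
    return order

# Example: a nearly linear scan in which rank 3 is skipped at first.
print(arrival_order([1, 2, 1, 4, 3, 5]))  # {1: 1, 2: 2, 4: 3, 3: 4, 5: 5}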
overall viewing behavior
How much time do we spend in each abstract?
Figure: time spent viewing each abstract (mean time, s) compared with the frequency that each rank is selected (# times rank selected), by rank of result. Error bars are 1 SE
overall viewing behavior
How thoroughly do we view the results set?
Number of abstracts viewed above and below selected link
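A small sketch of this above/below analysis: given the ranks fixated before a click and the clicked rank, count how many viewed abstracts lay above and below the selected link. The input structures are assumed for illustration.

from typing import Iterable, Tuple

def above_below(fixated_ranks: Iterable[int], clicked_rank: int) -> Tuple[int, int]:
    """Count viewed abstracts above and below the clicked result."""
    ranks = set(fixated_ranks)
    above = sum(1 for r in ranks if r < clicked_rank)
    below = sum(1 for r in ranks if r > clicked_rank)
    return above, below

# Example: ranks 1, 2, 3, 5 were fixated before clicking rank 3.
print(above_below({1, 2, 3, 5}, clicked_rank=3))  # (2, 1)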
overall viewing behavior
What information in the abstract is most useful?
Percentage of time spent viewing each part of the abstract:
Title: 30.5%
Snippet: 42.8%
URL: 21.1%
Category: 0.3%
Other: 5.3% (includes cached, similar pages, description)
overall searching behavior
Search task difficulty and satisfaction with Google
Figure: mean difficulty and satisfaction ratings per search task (Cornell, mansion, Michael Jordan, Dude ranch, Time Machine, Tallest mountain, CMU housing, NY Primary, First antibiotic, Emeril, Greyhound), with tasks ordered from more difficult to less difficult
*Difficulty and satisfaction were rated on a 1-10 scale; 10 means very difficult and very satisfied, respectively
overall searching behavior
Task difficulty influences rank of selected document and number of abstracts viewed
Mean rank of selected doc: 2.66; Median/Mode: 1.00
overall searching behavior
Top Query Terms (frequency):
1. Michael Jordan statistician (20)
2. Thousand acres dude ranch (11)
2. One thousand acres dude ranch (11)
3. 1000 acres dude ranch (9)
4. Time machine movie (7)
4. Carnegie mellon university graduate housing (7)
5. Imdb (6)
5. Emeril lagasse (6)
5. First modern antibiotic (6)
5. Greyhound bus (6)
5. Carnegie mellon graduate housing (6)
conclusions
Searching Trends: Popularity of specialized, vertical portals
• Majority of students preferred an internal imdb.com search over a general Google search
• Several students preferred conducting a Google search from the cmu.edu homepage
conclusions
• Document selected in under 5 seconds
• Users click on the first promising link they see
• Results viewed linearly
• Top 2 results most likely to be viewed
• Users would rather reformulate the query than scroll
• Task type and difficulty affect viewing behavior
• Presentation of results affects selection
future research
Impact on advertising
With such fast selections being made, will searchers even view ads?
Ads most likely to be seen:
• Difficult task
• Ambiguous info need
• Informational query
• Low searcher expertise
future research
Relevance judgments
• Do we spend more time viewing relevant abstracts?
• Do we click the first relevant abstract viewed?
• Does pupil dilation increase for more relevant documents?
If results were re-ranked, would viewing behavior differ?
Cornell University
Computer Science &
Human-Computer Interaction
Thorsten Joachims
[email protected]
Geri Gay
[email protected]
Laura Granka
[email protected]
Helene Hembrooke
[email protected]
Matthew Feusner
[email protected]