Using Dynamic Bayesian Networks and RFID Tags to Infer Human Activity
Robust Activity Recognition
Henry Kautz
University of Washington
Computer Science & Engineering
Graduate students: Don Patterson, Lin Liao, Krzysztof Gajos, Karthik Gopalratnam
CSE faculty: Dieter Fox, Gaetano Borriello
UW School of Medicine: Kurt Johnson, Pat Brown, Brian Dudgeon, Mark Harniss
Intel Research: Matthai Philipose, Mike Perkowitz, Ken Fishkin, Tanzeem Choudhury
In the Not Too Distant Future...
Pervasive sensing infrastructure
GPS enabled phones
RFID tags on all consumer products
Electronic diaries (MS SenseCam)
Healthcare crisis
Aging baby boomers – epidemic of Alzheimer's disease
Deinstitutionalization of the cognitively disabled
Nationwide shortage of caretaking professionals
...An Opportunity
Develop technology to
Support independent living by people with cognitive disabilities
At home
At work
Throughout the community
Improve health care
Long-term monitoring of activities of daily living (ADLs)
Intervention before a health crisis
The UW Assisted Cognition Project
Synthesis of work in
Ubiquitous computing
Artificial intelligence
Human-computer interaction
ACCESS
Support use of public transit
UW CSE & Rehabilitation Medicine
CARE
ADL monitoring and assistance
UW CSE & Intel Research
This Talk
Building models of everyday plans and goals
From sensor data
By mining textual descriptions
By engineering commonsense knowledge
Tracking and predicting a user's behavior
Noisy and incomplete sensor data
Recognizing user errors
First steps
ACCESS
Assisted Cognition in Community, Employment, & Support Settings
Supported by the National Institute on Disability & Rehabilitation Research (NIDRR)
Learning & Reasoning About Transportation Routines
Task
Given a data stream from a wearable GPS unit...
Infer the user's location and mode of transportation (foot, car, bus, bike, ...)
Predict where the user will go
Detect novel behavior
User errors?
Opportunities for learning?
Why Inference Is Not Trivial
People don’t have wheels
Systematic GPS error
We are not in the woods
Dead and semi-dead zones
Lots of multi-path propagation
Inside of vehicles
Inside of buildings
Not just location tracking
Mode, Prediction, Novelty
GPS Receivers We Used
GeoStats wearable GPS logger
Nokia 6600 Java cell phone with Bluetooth GPS unit
Geographic Information Systems
Street map (data source: Census 2000 TIGER/Line data)
Bus routes and bus stops (data source: Metro GIS)
Architecture
[Diagram: learning engine (goals, paths, modes, errors), GIS database, and inference engine]
Probabilistic Reasoning
Graphical model: dynamic Bayesian network
Inference engine: Rao-Blackwellised particle filters
Learning engine: Expectation-Maximization (EM) algorithm
Flat Model: State Space
Transportation Mode
Velocity
Location
Block
Position along block
At bus stop, parking lot, ...?
GPS Offset Error
GPS signal
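For concreteness, the flat state can be pictured as a record. A minimal Python sketch; the field names are invented here to match the bullets above, not taken from the actual system:

    # Hypothetical rendering of the flat model's state; names invented.
    from dataclasses import dataclass

    @dataclass
    class FlatState:
        mode: str            # transportation mode: foot, car, bus, ...
        velocity: float      # current speed estimate
        block: int           # street-map block the user is on
        position: float      # position along that block
        at_landmark: str     # "bus stop", "parking lot", ... or ""
        gps_offset: float    # systematic GPS offset error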
Motion Model for Mode of Transportation
Rao-Blackwellised Particle Filtering
Inference: estimate the current state distribution given all past readings
Particle filtering
Evolve an approximation to the state distribution using samples (particles)
Supports multi-modal distributions
Supports discrete variables (e.g., mode)
Rao-Blackwellisation
Particles include distributions over variables, not just single samples
Improved accuracy with fewer particles (a minimal sketch follows)
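A minimal, self-contained sketch of the idea, not the project's actual tracker: each particle samples only the discrete transportation mode, while a one-dimensional Kalman filter maintains the Gaussian over position analytically. All constants are assumptions for illustration.

    import copy
    import math
    import random

    MODES = ["foot", "car", "bus"]
    MODE_SPEED = {"foot": 1.5, "car": 12.0, "bus": 8.0}  # assumed mean speeds, m/s
    MODE_STICK = 0.95   # assumed probability of keeping the current mode
    Q, R = 4.0, 25.0    # assumed motion-noise and GPS-noise variances

    class Particle:
        def __init__(self):
            self.mode = random.choice(MODES)   # sampled discrete variable
            self.mean, self.var = 0.0, 100.0   # analytic Gaussian over position
            self.weight = 1.0

    def step(particles, z, dt=2.0):
        """One filter update given a GPS position reading z (1-D for brevity)."""
        for p in particles:
            # Sample the discrete mode; Rao-Blackwellisation keeps only
            # this part sampled.
            if random.random() > MODE_STICK:
                p.mode = random.choice(MODES)
            # Kalman predict: position advances at the mode's mean speed.
            p.mean += MODE_SPEED[p.mode] * dt
            p.var += Q
            # Kalman update: the innovation likelihood is the particle weight.
            s = p.var + R
            p.weight = math.exp(-(z - p.mean) ** 2 / (2 * s)) / math.sqrt(2 * math.pi * s)
            gain = p.var / s
            p.mean += gain * (z - p.mean)
            p.var *= 1.0 - gain
        # Resample in proportion to weight (multinomial, for brevity).
        new = random.choices(particles, [p.weight for p in particles], k=len(particles))
        return [copy.deepcopy(p) for p in new]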
Tracking
[Map figure: blue = foot, green = bus, red = car]
Learning
User model = DBN parameters
Transitions between blocks
Transitions between modes
Learning: Monte-Carlo EM
Unlabeled data
30 days of data from one user, logged at 2-second intervals (when outdoors)
3-fold cross-validation
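A minimal sketch of the M-step under these assumptions: given mode sequences sampled by the particle filter during the E-step (the input format is hypothetical, not the project's code), re-estimate the mode-transition matrix from smoothed relative frequencies.

    from collections import defaultdict

    MODES = ["foot", "car", "bus"]

    def m_step(mode_sequences):
        """Re-estimate P(mode_k | mode_{k-1}) from sampled mode sequences."""
        counts = defaultdict(lambda: defaultdict(lambda: 1.0))  # add-one smoothing
        for seq in mode_sequences:
            for prev, nxt in zip(seq, seq[1:]):
                counts[prev][nxt] += 1.0
        return {m: {n: counts[m][n] / sum(counts[m][k] for k in MODES)
                    for n in MODES} for m in MODES}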
Results

Model                        Mode prediction accuracy
Decision tree (supervised)   55%
Prior w/o bus info           60%
Prior with bus info          78%
Learned                      84%
Prediction Accuracy
[Plot: probability of correctly predicting the future, as a function of distance ahead in city blocks]
How can we improve predictive power?
Transportation Routines
[Map figure: home, bus stops A and B, workplace]
Goals: work, home, friends, restaurant, doctor's, ...
Trip segments:
Home to bus stop A on foot
Bus stop A to bus stop B on bus
Bus stop B to workplace on foot
Hierarchical Model
[Two-slice DBN diagram:]
g_{k-1} -> g_k : goal
t_{k-1} -> t_k : trip segment
m_{k-1} -> m_k : transportation mode
x_{k-1} -> x_k : x = <location, velocity>
z_{k-1}, z_k : GPS readings (one per slice)
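One plausible reading of this diagram as a joint distribution (the exact edge set is an assumption reconstructed from the flattened figure):

    P(g_{1:K}, t_{1:K}, m_{1:K}, x_{1:K}, z_{1:K})
        = \prod_k P(g_k \mid g_{k-1}) \, P(t_k \mid t_{k-1}, g_k) \, P(m_k \mid m_{k-1}, t_k)
                  \, P(x_k \mid x_{k-1}, m_k) \, P(z_k \mid x_k)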
Hierarchical Learning
Learn flat model
Infer goals: locations where the user is often motionless (see the sketch after this list)
Infer trip segment begin/end points: locations with high mode-transition probability
Infer trip segments: high-probability single-mode block-transition sequences between segment begin/end points
Perform hierarchical EM learning
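A minimal sketch of the goal-discovery step, finding dwell points; the thresholds and track format are assumptions:

    # Hypothetical goal finder: a candidate goal is any location where
    # the user stays nearly motionless longer than min_dwell_s seconds.
    def candidate_goals(track, min_dwell_s=300.0, speed_eps=0.5):
        """track: time-ordered list of (time_s, location, speed_mps)."""
        goals, start = [], None
        for i, (t, loc, speed) in enumerate(track):
            if speed < speed_eps:
                if start is None:
                    start = i          # dwell begins
            else:
                if start is not None and t - track[start][0] >= min_dwell_s:
                    goals.append(track[start][1])
                start = None           # dwell ends
        if start is not None and track[-1][0] - track[start][0] >= min_dwell_s:
            goals.append(track[start][1])
        return goals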
Inferring Goals
Inferring Trip Segments
[Map figures: "going to work" and "going home" routes; correct goal and route predicted 100 blocks away]
Application: Opportunity Knocks
Demonstrated at AAHA Future of Aging Services, Washington, DC, March 2004
Novelty Detection
Approach: model selection
Run two trackers in parallel
Tracker 1: learned hierarchical model
Tracker 2: untrained flat model
Estimate the likelihood of each tracker given the observations, as sketched below
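A minimal sketch of that comparison; the function names are invented:

    import math

    def obs_log_lik(weights):
        """Standard particle-filter estimate of log P(z_k | z_{1:k-1}):
        the log of the mean unnormalised particle weight at step k."""
        return math.log(sum(weights) / len(weights))

    def novelty_score(loglik_hierarchical, loglik_flat):
        """Positive when the untrained flat model explains the recent
        observations better than the learned hierarchical model."""
        return loglik_flat - loglik_hierarchical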
Novelty Detection
[Figure: detecting novel behavior, missing the bus stop]
CARE
Cognitive Assistance in Real-world Environments
Supported by the Intel Research Council
Learning & Inferring Activities of Daily Living
Research Hypothesis
Observation: activities of daily living involve the manipulation of many physical objects
Cooking, cleaning, eating, personal hygiene, exercise, hobbies, ...
Hypothesis: activities can be recognized from a time-sequence of object "touches"
Such models are robust and easily learned or engineered
Sensing Object Manipulation
RFID (radio-frequency identification) tags
Small
Semi-passive
Durable
Cheap
Where Can We Put Tags?
How Can We Sense Them?
Coming: wall-mounted "sparkle reader"
Example Data Stream
Technical Approach
Define (or learn) activities in a simple, high-level language
Multi-step, partially ordered activities
Varying durations
Probabilistic association between activities and objects
Compile to a DBN
Infer behavior using particle filtering
Making Tea
Activity Library
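The "Making Tea" and "Activity Library" slides show definitions in this high-level language. A hypothetical rendering in Python follows; every step name, duration, and probability here is invented for illustration:

    # Invented example of an activity-library entry; not the actual format.
    MAKE_TEA = {
        "name": "making tea",
        "steps": [   # partially ordered: "steep" needs both predecessors
            {"id": "heat water", "duration_s": (120, 300),
             "objects": {"teapot": 0.8, "faucet": 0.6}},
            {"id": "get tea",    "duration_s": (5, 60),
             "objects": {"tea box": 0.9, "cup": 0.7}},
            {"id": "steep",      "after": ["heat water", "get tea"],
             "duration_s": (60, 240), "objects": {"cup": 0.9}},
        ],
    }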
Building Models
Core ADLs are amenable to classic knowledge engineering
Open-ended, fine-grained models: infer them from natural-language texts?
Perkowitz et al., "Mining Models of Human Activities from the Web", WWW-2004
Translation to DBN
Tricky issues:
Time
Partial orders
Object-use probabilities
80% chance of using the teapot sometime during the "heat water" step
The instantaneous probability of seeing the teapot is not fixed! (worked example below)
Consider: a 100% chance of using the teapot if making tea
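A worked sketch of why: if an object must be seen with probability q sometime during a step spanning T timesteps, a constant per-timestep probability p would have to satisfy 1 - (1 - p)^T = q, and that breaks down as q approaches 1:

    def per_step_prob(q, steps):
        """Constant per-step touch probability p with 1 - (1-p)**steps == q."""
        return 1.0 - (1.0 - q) ** (1.0 / steps)

    # per_step_prob(0.8, 40) is about 0.039, but per_step_prob(1.0, 40) == 1.0:
    # an object that is certain to be used would have to be touched at every
    # step. The model instead conditions the instantaneous probability on the
    # history H_t, raising it as the step nears its end with no sighting.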
DBN Encoding: Duration
[DBN fragment: A_t -> A_{t+1} (activity), D_t -> D_{t+1} (duration state)]
DBN Encoding: Partial Orders
[DBN fragment: P_t -> P_{t+1} (partial-order state), A_t -> A_{t+1} (activity)]
DBN Encoding: Object Probabilities
[DBN fragment: activity A_t, duration D_t, object history H_t, object touched O_t, RFID reading z_t]
The instantaneous probability of touching an object cannot be a constant
DBN Encoding
[Full two-slice DBN: P_t -> P_{t+1}, A_t -> A_{t+1}, D_t -> D_{t+1}, H_t -> H_{t+1}, with object O_t and RFID reading z_t in each slice]
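One plausible factorization of this two-slice network; the exact arcs are an assumption reconstructed from the flattened diagram:

    \prod_t P(A_{t+1} \mid A_t, D_t, P_t) \, P(D_{t+1} \mid D_t, A_{t+1}) \, P(P_{t+1} \mid P_t, A_t)
            \, P(O_t \mid A_t, D_t, H_t) \, P(H_{t+1} \mid H_t, O_t) \, P(z_t \mid O_t)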
What's in a Particle?
Sample of the current activity
Starting time – sufficient to represent the distribution over duration
History list of objects
Partial-order "credits"
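A hypothetical rendering of those particle contents as a record (names invented):

    from dataclasses import dataclass, field

    @dataclass
    class ActivityParticle:
        activity: str                               # sampled current activity
        start_time: float                           # sufficient statistic for duration
        history: list = field(default_factory=list) # objects seen so far
        credits: set = field(default_factory=set)   # satisfied partial-order steps
        weight: float = 1.0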
Experimental Setup
Hand-built library of 14 ADLs
17 test subjects
Each asked to perform 12 of the ADLs
Data not segmented
No training on individual test subjects
Sample Output
Results

 #   ADL                  Precision (%)   Recall (%)
 1   Grooming                   92           92
 2   Tooth brushing             70           78
 3   Toileting                  73           73
 4   Dishwashing               100           33
 5   Housecleaning             100           75
 6   Appliance use              84           78
 7   Adjust furnace            100           73
 8   Laundry                   100           78
 9   Prepare snack              75           60
10   Prepare beverage           64           64
11   Use telephone             100           79
12   Leisure activities        100           58
13   Infant care               100           93
14   Take medication           100           82
     Overall                    88           73
Key Next Steps
Parameter learning
Timing
Object probabilities
Structure learning
New activities from sensor data
Efficient inference for
Interrupted activities
Abandoned activities
Malformed activities
Relational models
Hierarchical classes of objects
Hierarchical classes of activities
Ultimately...
Affective state
agitated, calm, attentive, ...
Physiological states
hungry, tired, dizzy, ...
Interactions between people
T. Choudhury – Social dynamics
Principled human-computer interaction
Decision-theoretic control of interventions
Why Now?
A goal of much AI work in the 1970s was to create programs that could understand the narrative of ordinary human experience
This area pretty much disappeared
Missing probabilistic tools
Systems not able to experience the world
Lacked focus – "understand" to what end?
Today: the tools, the sensors, and the motivation
That Other Talk...
Combining Component Caching and Clause Learning for Effective Model Counting
Beame, Bacchus, Kautz, Pitassi, & Sang (SAT 2004, Vancouver BC)
Unifies algorithms for SAT and Bayesian inference
DPLL-based; generalizes recursive conditioning
Exact inference in large, non-tree-like networks
Need to solve #P? Let me know!