An Inexpensive Method for Evaluating
the Localization Performance of a Mobile
Robot Navigation System
Harsha Kikkeri, Gershon Parent, Mihai Jalobeanu, and Stan Birchfield
Microsoft Robotics
Motivation
Goal: Automatically measure performance of mobile robot navigation system
Purpose:
• Internal comparison – how is my system improving over time?
• External comparison – how does my system compare to others?
Requirements:
• Repeatable – not just playback of recorded file, but run the system again (with
environment dynamics)
• Reproducible – others should be able to measure the performance of their system
in their environment
• Comparable – need to compare solutions with different hardware and sensors, in
different environments
• Inexpensive – cost should not be a barrier to use
We focus only on localization performance here
Scalability
• System should scale
• in space (large environments)
• in time (long runs)
• in variety (different types of environments)
• Simplicity is key to scalability:
• Low setup time
• Easy calibration
• Inexpensive components
• Non-intrusive
Previous work
• Datasets: Radish, New College, SLAM datasets
– do not always have ground truth
• SLAM with ground truth: Rawseeds, Freiburg, TUM
– use prerecorded data, do not scale easily
• Qualitative evaluation: RoboCup Rescue, RoboCup@Home
– focus is on achieving a particular task
• Benchmarking initiatives: EURON, RoSta, PerMIS, RTP
– have not yet produced a definitive set of metrics / benchmarks for navigation
• Comparison on a small scale: Teleworkbench
– limited to small scale
• Retroreflective markers and laser: Tong and Barfoot, ICRA 2011
– requires a laser, subject to occlusion
Our approach
[Figure: overhead checkerboard landmark with x and y axes]
• Checkerboard pattern
• Yields 3D pose of camera relative to target
• Convert to 2D pose of robot on floor
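A minimal sketch of this pose-estimation step, assuming OpenCV and a calibrated camera; the board dimensions, square size, and the names K and dist are illustrative placeholders, not the authors' code:

```python
# Sketch: recover the 3D camera pose relative to a checkerboard landmark.
# Board geometry values below are placeholders.
import cv2
import numpy as np

BOARD = (7, 5)   # inner corners per row/column (placeholder)
SQUARE = 0.03    # square size in meters (placeholder)

# 3D corner positions in the landmark's coordinate frame (z = 0 plane)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

def camera_pose_wrt_landmark(gray, K, dist):
    """Return (R, t): the camera pose in the landmark frame, or None."""
    ok, corners = cv2.findChessboardCorners(gray, BOARD)
    if not ok:
        return None
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    # solvePnP gives the landmark in camera coords; invert for the camera pose.
    return R.T, -R.T @ tvec
```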
A useful instrument
Laser level:
• Upward facing laser provides
plumb-up line
• Downward facing laser provides
plumb-down line
• Horizontal laser (not used)
• Self-leveling, so plumb lines are
parallel to gravity
• Used to determine point on
ground directly below origin of
target
Procedure
• Calibration
• Internal camera parameters
• External camera parameters w.r.t. robot (position, tilt)
• Floor parameters under each landmark (tilt)
• Map-building
• Build map
• When under landmark, user presses button
• Pose estimation + calibration → robot pose w.r.t. landmark
• Store robot pose w.r.t. map*
• Runtime
• Generate sequence of waypoints
• When the robot thinks it is under a landmark*
• Pose estimation + calibration → robot pose w.r.t. landmark
• Error is difference between pose at runtime and pose at map-building
*Note: Any type of map can be used
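The runtime error computation reduces to a pose difference; here is a minimal sketch, with the (x, y, θ) representation and names as illustrative assumptions:

```python
# Sketch: localization error as the difference between the robot pose measured
# under a landmark at runtime and the pose stored during map-building.
# Poses are (x, y, theta) w.r.t. the landmark; names are illustrative.
import math

def pose_error(map_pose, run_pose):
    """Return (position error in meters, heading error in radians)."""
    xm, ym, tm = map_pose
    xr, yr, tr = run_pose
    d = math.hypot(xr - xm, yr - ym)
    # Wrap the heading difference to (-pi, pi].
    dth = (tr - tm + math.pi) % (2 * math.pi) - math.pi
    return d, abs(dth)
```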
Coordinate systems
• Pose estimation: image → camera, using internal camera parameters; yields 3D Euclidean pose of camera w.r.t. landmark
• Calibration: camera → robot (external camera parameters)
• Landmark → world: 2D Euclidean (optional)
• Robot → world: 2D Euclidean (absolute metric) – this is LOCALIZATION, what we want
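A sketch of how this chain composes once each transform has been flattened to a 2D pose (x, y, θ); all names and example values are illustrative assumptions:

```python
# Sketch: compose 2D rigid transforms along the chain
# robot -> camera -> landmark -> world to get the robot pose in the world.
import math

def compose(a, b):
    """Pose of frame C in frame A, given b = C in B and a = B in A."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

landmark_in_world = (10.0, 4.0, math.pi / 2)  # from theodolite survey (example)
camera_in_landmark = (0.05, -0.02, 0.10)      # from pose estimation (example)
robot_in_camera = (-0.30, 0.0, 0.0)           # from calibration (example)

robot_in_world = compose(landmark_in_world,
                         compose(camera_in_landmark, robot_in_camera))
```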
Camera-to-robot calibration
• Need to determine:
• rotation between camera and robot (3 parameters)
• translation between camera and robot (+3 parameters)
→ 6 parameters total
• If the floor were completely flat and the camera mounted perfectly upright, then
$x_r = x - d_{rc} \cos\theta_{rc}$
$y_r = y - d_{rc} \sin\theta_{rc}$
$\theta_r = \theta - \theta_a$ (where $\theta_a$ is the camera roll)
with $(x_r, y_r, \theta_r)$ the robot pose, $(x, y, \theta)$ the camera pose, and $(d_{rc}, \theta_{rc})$ the camera offset from the robot center
[Figure: top-down view of robot showing wheel base, robot center, camera offset, and driving direction]
• But the floor is often not flat, and the camera is never upright
Camera-to-robot calibration
• When the floor is not flat and the camera is not upright, estimate:
• tilt of camera w.r.t. floor normal ($\phi_c$)
• azimuth of camera tilt plane w.r.t. forward direction of robot ($\theta_c$)
• tilt of floor w.r.t. gravity ($\phi_f$)
• azimuth of floor tilt plane w.r.t. positive x axis of landmark ($\theta_f$)
[Figure: gravity vector, floor normal tilted by $\phi_f$, and camera optical axis tilted by $\phi_c$]
• Rotate robot incrementally through 360 degrees
• Rotation axis is perpendicular to floor
• Optical axis traces a cone
$x_r = x - d_{rc} \cos\theta_{rc} - z \sin\phi_c \cos(\theta_c + \theta) - z \sin\phi_f \cos\theta_f$
$y_r = y - d_{rc} \sin\theta_{rc} - z \sin\phi_c \sin(\theta_c + \theta) - z \sin\phi_f \sin\theta_f$
$\theta_r = \theta - \theta_a$
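These equations transcribe directly into code. The sketch below assumes the calibration constants $(d_{rc}, \theta_{rc}, \theta_a, \phi_c, \theta_c, \phi_f, \theta_f)$ have already been estimated; the function name and signature are illustrative:

```python
# Sketch: convert a camera pose (x, y, theta) at height z under the landmark
# into the robot's 2D pose on the floor, using the tilt-corrected equations.
import math

def robot_pose(x, y, theta, z,
               d_rc, th_rc,   # camera offset from robot center
               th_a,          # camera roll
               phi_c, th_c,   # camera tilt and its azimuth
               phi_f, th_f):  # floor tilt and its azimuth
    xr = (x - d_rc * math.cos(th_rc)
            - z * math.sin(phi_c) * math.cos(th_c + theta)
            - z * math.sin(phi_f) * math.cos(th_f))
    yr = (y - d_rc * math.sin(th_rc)
            - z * math.sin(phi_c) * math.sin(th_c + theta)
            - z * math.sin(phi_f) * math.sin(th_f))
    return xr, yr, theta - th_a
```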
Calibration geometry
[Figure sequence: side view showing gravity, the landmark, the floor, and the camera center. The axis of rotation is the floor normal, tilted $\phi_f$ from gravity; the optical axis is tilted $\phi_c$ from the axis of rotation. Rotating the robot moves the optical axis from position 1 to position 2; two positions 180° apart give pose estimates $(x_1, z_1)$ and $(x_2, z_2)$.]
• $(x_1, z_1)$ and $(x_2, z_2)$ come from pose estimation
• Note: $x_1 + (x_2 - x_1)/2 = (x_2 + x_1)/2$
• Radius of the circle traced by the optical axis: $r_c$, with $\sin\phi_c = r_c / z = (x_2 - x_1) / 2z$
• Distance from landmark center to circle center: $r_f$, with $\sin\phi_f = r_f / z = (x_2 + x_1) / 2z$
where $z = (z_1 + z_2)/2$
[Figure: top-down view of the circle; tilt and azimuth angles recovered from real data]
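The two-measurement recovery is compact enough to state in code; this sketch assumes the two pose estimates are taken with the robot rotated 180° apart, as in the figure:

```python
# Sketch: recover camera tilt (phi_c) and floor tilt (phi_f) from two pose
# estimates (x1, z1), (x2, z2) taken 180 degrees apart in the rotation.
import math

def tilt_angles(x1, z1, x2, z2):
    z = 0.5 * (z1 + z2)                     # average height under the landmark
    phi_c = math.asin((x2 - x1) / (2 * z))  # camera tilt: sin(phi_c) = r_c / z
    phi_f = math.asin((x2 + x1) / (2 * z))  # floor tilt:  sin(phi_f) = r_f / z
    return phi_c, phi_f
```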
Evaluating accuracy
• Mounted camera to the carriage of a CNC machine
• Moved to different known poses (x, y, θ) and measured pose
• Covered area: 1.3 × 0.6 m
• Position error: μ = 5 mm, σ = 2 mm, max = 11 mm
• Angular error: μ = 0.3°, σ = 0.2°, max = 1°
Evaluating accuracy
• Placed robot at 20 random positions under one landmark
→ Position error usually < 20 mm, orientation error usually < 1°
Evaluating accuracy
• 15 landmarks across 2 buildings
• Placed robot at 5 canonical positions
→ Position error usually < 20 mm, orientation error usually < 1°
Evaluating accuracy
• Our accuracy is comparable to that of other systems:
• GTvision/GTlaser from Ceriani et al., AR 2009 (Rawseeds)
• mocap from Kümmerle et al., AR 2009
• retroreflective markers from Tong and Barfoot, ICRA 2011
• Our system is scalable to large environments:
• scales to arbitrarily large environments
• scales to very large single-floor environments (with an additional step)
Evaluating accuracy
[Figure: maps of two different buildings on the Microsoft campus]
Evaluating accuracy
• Automated runs in 2 different environments
• Accuracy comparable
• Easy to set up
• Easy to maintain
Computing global coordinates
Theodolite:
• Horizontal laser
• emanates from pan-tilt head
• Reflects off mirror
• Measures (w.r.t. gravity)
• horizontal distance to mirror
• pan angle to mirror
• tilt angle to mirror (not used)
Computing global coordinates (optional)
[Figure: network of targets surveyed by theodolite and reflector, with pairwise distances l12, l23, l34, l45, l15, l67, l78 and angles θ1 ... θ7]
For target positions:
• Repeatedly measure distance and angle for each triplet of targets with line-of-sight
• → 2D Euclidean coordinates of all targets in a common global coordinate system
• High accuracy of the theodolite removes nearly all drift
• Drift can be checked by adding all angles in a loop and comparing with 360 degrees (see the sketch below)
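A sketch of both the chain placement and the loop drift check; the turn-angle convention (turn = π − interior angle, counter-clockwise traversal) is an assumption about the survey geometry, not the authors' procedure:

```python
# Sketch: place a chain of targets in one global 2D frame from theodolite
# measurements, and check drift around a closed loop.
import math

def chain_coordinates(lengths, angles):
    """lengths[i]: distance from target i to i+1 (m);
    angles[i]: interior angle measured at target i+1 (rad)."""
    pts = [(0.0, 0.0)]
    heading = 0.0  # the first leg defines the global +x axis
    for i, l in enumerate(lengths):
        x, y = pts[-1]
        pts.append((x + l * math.cos(heading), y + l * math.sin(heading)))
        if i < len(angles):
            heading += math.pi - angles[i]  # turn at the next target
    return pts

def loop_residual(angles):
    """Drift check: turn angles around a closed loop should sum to 360 deg."""
    return sum(math.pi - a for a in angles) - 2 * math.pi
```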
Computing global coordinates
[Figure: theodolite measuring distances $l_1$, $l_2$ and included angle $\alpha$ to a reflector placed at multiple positions within the target (only 2 needed); $h$ is the perpendicular leg of the triangle]
For target orientation:
• Place reflector under several positions within the target
• Given $l_1$, $l_2$, $\alpha$ (from the theodolite) and $t_{length}$ (known), find $\theta$
• Naïve solution: $\sin\theta = (l_1 - l_2 \cos\alpha) / t_{length}$
• Better solution: $\tan\theta = (l_1 - l_2 \cos\alpha) / h$, where $(l_1 - l_2 \cos\alpha)^2 + h^2 = l_2^2$
• The naïve solution is sensitive to noise; the key is to use only measured values
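Both solutions transcribe directly from the formulas above; the sketch follows the slide's relation for h, and the function names are illustrative:

```python
# Sketch: target orientation theta from theodolite measurements l1, l2 (m)
# and included angle alpha (rad); t_length is the known target length.
import math

def theta_naive(l1, l2, alpha, t_length):
    # Sensitive to noise: relies on the nominal target length.
    return math.asin((l1 - l2 * math.cos(alpha)) / t_length)

def theta_better(l1, l2, alpha):
    # Uses only measured values: (l1 - l2 cos a)^2 + h^2 = l2^2.
    d = l1 - l2 * math.cos(alpha)
    h = math.sqrt(l2 * l2 - d * d)
    return math.atan2(d, h)
```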
Navigation contest
• Microsoft and Adept are organizing the Kinect Autonomous Mobile Robot Contest at IROS 2014 in Chicago
http://www.iros2014.org/program/kinect-robot-navigation-contest
Conclusion
• System for evaluating the localization accuracy of navigation:
• Inexpensive
• Easy to set up
• Easy to maintain
• Highly accurate
• Scalable to arbitrarily large environments
• Scalable to arbitrarily long runs (in time or space)
• With theodolite, global coordinates are possible
• We have begun long-term, large-scale comparisons
(results forthcoming)
• Mobile robot navigation contest at IROS 2014
Thanks!