Transcript Calibration

Calibration
Camera Calibration
• Geometric
– Intrinsics: Focal length, principal point, distortion
– Extrinsics: Position, orientation
• Radiometric
– Mapping between pixel value and scene radiance
– Can be nonlinear at a pixel (gamma, etc.)
– Can vary between pixels (vignetting, cos⁴, etc.)
– Dynamic range (calibrate shutter speed, etc.)
Geometric Calibration Issues
• Camera Model
– Orthogonal axes?
– Square pixels?
– Distortion?
• Calibration Target
– Known 3D points, noncoplanar
– Known 3D points, coplanar
– Unknown 3D points (structure from motion)
– Other features (e.g., known straight lines)
Geometric Calibration Issues
• Optimization method
– Depends on camera model, available data
– Linear vs. nonlinear model
– Closed form vs. iterative
– Intrinsics only vs. extrinsics only vs. both
– Need initial guess?
Caveat - 2D Coordinate Systems
• y axis up vs. y axis down
• Origin at center vs. corner
• Will often write (u, v) for image coordinates
[Figure: three possible arrangements of the (u, v) image axes]
Camera Calibration – Example 1
• Given:
– 3D ↔ 2D correspondences
– General perspective camera model (no distortion)
\begin{pmatrix} a & b & c & d \\ e & f & g & h \\ i & j & k & l \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
\;\xrightarrow{\text{homogeneous divide}}\;
\begin{pmatrix} \dfrac{ax + by + cz + d}{ix + jy + kz + l} \\[6pt] \dfrac{ex + fy + gz + h}{ix + jy + kz + l} \end{pmatrix}
• Don’t care about “z” after transformation
• Homogeneous scale ambiguity ⇒ 11 free parameters
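A minimal NumPy sketch of this projection step (the `project` helper and variable names are illustrative, not from the slides): multiply 3D points by the general 3×4 matrix and divide by the third homogeneous coordinate.

```python
import numpy as np

def project(M, points_3d):
    """M: 3x4 camera matrix [[a b c d],[e f g h],[i j k l]]; points_3d: Nx3 array or list."""
    pts_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # homogeneous (x, y, z, 1)
    proj = pts_h @ M.T                          # rows are (ax+by+cz+d, ex+fy+gz+h, ix+jy+kz+l)
    return proj[:, :2] / proj[:, 2:3]           # divide by third coordinate -> (u, v)
```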
Camera Calibration – Example 1
• Write equations:
\frac{a x_1 + b y_1 + c z_1 + d}{i x_1 + j y_1 + k z_1 + l} = u_1, \qquad
\frac{e x_1 + f y_1 + g z_1 + h}{i x_1 + j y_1 + k z_1 + l} = v_1, \qquad \ldots
Camera Calibration – Example 1
\begin{pmatrix}
x_1 & y_1 & z_1 & 1 & 0 & 0 & 0 & 0 & -u_1 x_1 & -u_1 y_1 & -u_1 z_1 & -u_1 \\
0 & 0 & 0 & 0 & x_1 & y_1 & z_1 & 1 & -v_1 x_1 & -v_1 y_1 & -v_1 z_1 & -v_1 \\
x_2 & y_2 & z_2 & 1 & 0 & 0 & 0 & 0 & -u_2 x_2 & -u_2 y_2 & -u_2 z_2 & -u_2 \\
0 & 0 & 0 & 0 & x_2 & y_2 & z_2 & 1 & -v_2 x_2 & -v_2 y_2 & -v_2 z_2 & -v_2 \\
 & & & & & \vdots & & & & & &
\end{pmatrix}
\begin{pmatrix} a \\ b \\ c \\ \vdots \\ l \end{pmatrix} = 0
• Linear equation
• Overconstrained (more equations than unknowns)
• Underconstrained (rank deficient matrix – any multiple
of a solution, including 0, is also a solution)
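A sketch of how the matrix above could be assembled in code, two rows per correspondence (the `build_A` helper is an illustrative name, not from the slides):

```python
import numpy as np

def build_A(points_3d, points_2d):
    """Stack the 2n x 12 system from (x, y, z) <-> (u, v) correspondences."""
    rows = []
    for (x, y, z), (u, v) in zip(points_3d, points_2d):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -u*x, -u*y, -u*z, -u])
        rows.append([0, 0, 0, 0, x, y, z, 1, -v*x, -v*y, -v*z, -v])
    return np.asarray(rows)   # solve A p = 0 for p = (a, b, ..., l)
```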
Camera Calibration – Example 1
• Standard linear least squares methods for
Ax=0 will give the solution x=0
• Instead, look for a solution with |x| = 1
• That is, minimize |Ax|² subject to |x|² = 1
Camera Calibration – Example 1
• Minimize |Ax|² subject to |x|² = 1
• |Ax|² = (Ax)ᵀ(Ax) = (xᵀAᵀ)(Ax) = xᵀ(AᵀA)x
• Expand x in terms of eigenvectors of AᵀA:
x = μ₁e₁ + μ₂e₂ + …
xᵀ(AᵀA)x = λ₁μ₁² + λ₂μ₂² + …
|x|² = μ₁² + μ₂² + …
Camera Calibration – Example 1
• To minimize
λ₁μ₁² + λ₂μ₂² + …
subject to
μ₁² + μ₂² + … = 1
set μmin = 1 and all other μi = 0
• Thus, the least squares solution is the eigenvector
corresponding to the minimum (non-zero)
eigenvalue of AᵀA
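One way to compute this in practice is via the SVD of A (the last right singular vector of A is the eigenvector of AᵀA with the smallest eigenvalue); `solve_homogeneous` is an illustrative name:

```python
import numpy as np

def solve_homogeneous(A):
    """Minimize |Ax|^2 subject to |x| = 1."""
    _, _, Vt = np.linalg.svd(A)     # singular values are sorted in decreasing order
    p = Vt[-1]                      # right singular vector for the smallest singular value
    return p.reshape(3, 4)          # reshape (a, ..., l) into the 3x4 camera matrix
```

With the earlier `build_A` sketch, the closed-form estimate would be `M = solve_homogeneous(build_A(points_3d, points_2d))`.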
Camera Calibration – Example 2
• Incorporating additional constraints into
camera model
– No shear (u, v axes orthogonal)
– Square pixels
– etc.
• Doing minimization in image space
• All of these impose nonlinear constraints on
camera parameters
Camera Calibration – Example 2
• Option 1: nonlinear least squares
– Usually “gradient descent” techniques
– e.g. Levenberg-Marquardt
• Option 2: solve for general perspective model,
find closest solution that satisfies constraints
– Use closed-form solution as initial guess for
iterative minimization
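A hedged sketch of Option 2: start from the closed-form (DLT) estimate and refine it by minimizing image-space reprojection error with Levenberg-Marquardt (here via SciPy's `least_squares`). The raw 12-parameter matrix used below is only for illustration; a real implementation would parameterize the constrained camera model instead.

```python
import numpy as np
from scipy.optimize import least_squares

def refine(M0, points_3d, points_2d):
    """Refine an initial 3x4 matrix M0 by minimizing reprojection error."""
    pts_h = np.hstack([points_3d, np.ones((len(points_3d), 1))])   # (x, y, z, 1)
    points_2d = np.asarray(points_2d)

    def residuals(p):
        proj = pts_h @ p.reshape(3, 4).T
        uv = proj[:, :2] / proj[:, 2:3]          # homogeneous divide
        return (uv - points_2d).ravel()          # image-space error

    result = least_squares(residuals, M0.ravel(), method='lm')    # Levenberg-Marquardt
    return result.x.reshape(3, 4)
```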
Radial Distortion
• Radial distortion cannot be represented
by a matrix
uimg = cu + u*img (1 + k ((u*img)² + (v*img)²))
vimg = cv + v*img (1 + k ((u*img)² + (v*img)²))
• (cu, cv) is the image center,
u*img = uimg – cu, v*img = vimg – cv,
k is the first-order radial distortion coefficient
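A small sketch of applying this first-order model, assuming the center (cu, cv) and coefficient k are known (function name illustrative):

```python
def distort(u, v, cu, cv, k):
    """Apply first-order radial distortion about the image center (cu, cv)."""
    us, vs = u - cu, v - cv          # centered coordinates u*, v*
    r2 = us**2 + vs**2
    return cu + us * (1 + k * r2), cv + vs * (1 + k * r2)
```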
Camera Calibration – Example 3
• Incorporating radial distortion
• Option 1:
– Find distortion first (e.g., straight lines in
calibration target)
– Warp image to eliminate distortion
– Run (simpler) perspective calibration
• Option 2: nonlinear least squares
Calibration Targets
• Full 3D (nonplanar)
– Can calibrate with one image
– Difficult to construct
• 2D (planar)
– Can be made more accurate
– Need multiple views
– Better constrained than full SFM problem
Calibration Targets
• Identification of features
–
–
–
–
Manual
Regular array, manually seeded
Regular array, automatically seeded
Color coding, patterns, etc.
• Subpixel estimation of locations
– Circle centers
– Checkerboard corners
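As one concrete example of automatic detection plus subpixel refinement on a planar checkerboard, a hedged sketch using OpenCV (board dimensions, square size, and image locations are assumptions for illustration):

```python
import glob
import cv2
import numpy as np

cols, rows, square = 9, 6, 0.025                      # inner-corner grid and square size (m), assumed
objp = np.zeros((rows * cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square   # planar target points (z = 0)

image_files = sorted(glob.glob("calib/*.png"))        # assumed location of calibration images
obj_points, img_points = [], []
for fname in image_files:
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (cols, rows))
    if found:
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                                   (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Multiple views of the planar target constrain intrinsics, distortion, and per-view extrinsics
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                 gray.shape[::-1], None, None)
```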
Calibration Target w. Circles
3D Target w. Circles
Planar Checkerboard Target
[Bouguet]
Coded Circles
[Marschner et al.]
Concentric Coded Circles
[Gortler et al.]
Color Coded Circles
[Culbertson]
Calibrating Projector
• Calibrate camera
• Project pattern onto a known object
(usually plane)
– Can use time-coded structured light
• Form (uproj, vproj, x, y, z) tuples
• Use regular camera calibration code
• Typically lots of keystoning relative to cameras
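A rough sketch of that pipeline, with every helper name hypothetical: decode the structured-light pattern, back-project each camera pixel onto the known plane using the already-calibrated camera, and feed the resulting (x, y, z) ↔ (uproj, vproj) tuples to the same calibration code as before.

```python
points_3d, proj_pixels = [], []
for (u_cam, v_cam), (u_proj, v_proj) in decoded_matches:          # from structured light (assumed)
    X = intersect_ray_with_plane(camera, (u_cam, v_cam), plane)   # hypothetical helper
    points_3d.append(X)
    proj_pixels.append((u_proj, v_proj))

M_proj = solve_homogeneous(build_A(points_3d, proj_pixels))       # DLT sketches from earlier
```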
Multi-Camera Geometry
• Epipolar geometry – relationship between
observed positions of points in multiple cameras
• Assume:
– 2 cameras
– Known intrinsics and extrinsics
Epipolar Geometry
[Figure: point P projects to p1 in camera C1 and to p2 in camera C2]
Epipolar Geometry
Epipolar Geometry
[Figure: the epipolar line l2 through p2, and the epipoles]
Epipolar Geometry
• Goal: derive equation for l2
• Observation: P, C1, C2 determine a plane
Epipolar Geometry
• Work in coordinate frame of C1
• Normal of plane is T × Rp2, where T is relative
translation, R is relative rotation
Epipolar Geometry
• p1 is perpendicular to this normal:
p1 · (T × Rp2) = 0
Epipolar Geometry
• Write cross product as matrix multiplication:

T \times x = T^{*} x, \qquad
T^{*} = \begin{pmatrix} 0 & -T_z & T_y \\ T_z & 0 & -T_x \\ -T_y & T_x & 0 \end{pmatrix}
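A small self-contained sketch of forming T* and composing it with R to get the essential matrix introduced on the next slide (T and R are assumed given; names are illustrative):

```python
import numpy as np

def skew(T):
    """Skew-symmetric matrix T* such that T* @ p == np.cross(T, p)."""
    Tx, Ty, Tz = T
    return np.array([[ 0., -Tz,  Ty],
                     [ Tz,  0., -Tx],
                     [-Ty,  Tx,  0.]])

def essential_matrix(R, T):
    """E = T* R, with T, R the relative translation and rotation."""
    return skew(T) @ R
```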
Epipolar Geometry
• p1 · (T* R p2) = 0
⇒  p1ᵀ E p2 = 0
• E is the essential matrix
Essential Matrix
• E depends only on camera geometry
• Given E, can derive equation for line l2
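As a sketch of that derivation: since points p2 on the line satisfy p1ᵀ E p2 = 0, the coefficient vector of l2 is Eᵀ p1 (with p1 a homogeneous 3-vector in camera 1's coordinates; function name illustrative):

```python
import numpy as np

def epipolar_line(E, p1):
    """Line l2 in camera 2 such that l2 . p2 = 0 for all matches p2 of p1."""
    l2 = E.T @ p1
    return l2 / np.linalg.norm(l2[:2])   # normalize so (a, b) is a unit normal
```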
Fundamental Matrix
• Can define fundamental matrix F analogously,
operating on pixel coordinates instead of
camera coordinates
u1ᵀ F u2 = 0
• Advantage: can sometimes estimate F without
knowing camera calibration
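A minimal sketch of that idea, the unnormalized eight-point estimate: each pixel correspondence gives one linear equation u1ᵀ F u2 = 0 in the nine entries of F, so F is again a homogeneous least-squares solution (in practice coordinates are normalized first and F is projected to rank 2).

```python
import numpy as np

def estimate_F(pix1, pix2):
    """Estimate F from >= 8 pixel correspondences (u1, v1) <-> (u2, v2)."""
    A = []
    for (u1, v1), (u2, v2) in zip(pix1, pix2):
        # Row of coefficients for F flattened row-major: u1^T F u2 = 0
        A.append([u1*u2, u1*v2, u1, v1*u2, v1*v2, v1, u2, v2, 1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)
```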