Monte Carlo Rendering


Monte Carlo Rendering
• Central theme is sampling:
– Determine what happens at a set of discrete
points, and extrapolate from there
• Central algorithm is ray tracing
– Rays from the eye or the light
• Theoretical basis is probability theory
– The study of random events
Random Variables
• A random variable, x
– Takes values from some domain, Ω
– Has an associated probability density function, p(x)
• A probability density function, p(x)
– Is positive over the domain, Ω:  p(x) ≥ 0
– “Integrates” to 1 over Ω:  ∫_Ω p(x) dx = 1
Expected Value
• The expected value of a random variable is
defined as:
E[x] = ∫_Ω x p(x) dx
• The expected value of a function is defined as:
E[f(x)] = ∫_Ω f(x) p(x) dx
Variance and Standard Deviation
• The variance of a random variable is
defined as:
v[x] = E[(x − E[x])²]
• The standard deviation of a random variable
is defined as the square root of its variance:
σ[x] = √v[x]
Sampling
• A process samples according to the distribution p(x) if it randomly chooses a value for x such that:
Pr(x ∈ A) = ∫_A p(y) dy
• Weak Law of Large Numbers: if the x_i are independent samples from p(x), then:
Pr( E[x] = lim_{n→∞} (1/n) Σ_{i=1}^n x_i ) = 1
Algorithms for Sampling
• Pseudo-random number generators give independent samples from the uniform distribution on [0,1), p(x) = 1
• Transformation method: take random samples from the uniform distribution and convert them to samples from another distribution
• Rejection sampling: sample from a different distribution, but reject some samples
Distribution Functions
• Assume a density function f(y) defined on [a,b]
• Define the probability distribution function, or
cumulative distribution function, as:
F(x) = ∫_a^x f(y) dy
• Monotonically increasing function with F(a)=0
and F(b)=1
Transformation Method for 1D
• Generate z_i uniform on [0,1)
• Compute x_i = F^{-1}(z_i)
• Then the x_i are distributed according to f(x)
• To apply, must be able to compute and invert the distribution function F
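As a concrete sketch of these steps (the function name and the choice of density are illustrative): for the exponential density f(y) = λe^(−λy) on [0, ∞), the distribution function F(x) = 1 − e^(−λx) inverts in closed form, so each uniform sample maps directly to an exponential sample.

```python
import math
import random

def sample_exponential(lam, n, rng=random.Random(0)):
    """Inverse-transform sampling for f(y) = lam * exp(-lam * y) on [0, inf).

    F(x) = 1 - exp(-lam * x), so F^{-1}(z) = -ln(1 - z) / lam.
    """
    # rng.random() is uniform on [0,1), so 1 - z is in (0,1] and log is safe
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

samples = sample_exponential(lam=2.0, n=100_000)
# The exponential distribution has expected value 1/lam = 0.5, so the
# sample mean should land close to 0.5.
mean = sum(samples) / len(samples)
```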
Multi-Dimensional Transformation
• Assume density function f(x,y) defined on [a,b]×[c,d]
• Distribution function:  F(x, y) = ∫_a^x ∫_c^y f(u, v) dv du
• Sample x_i according to:  F_{y=d}(x) = ∫_a^x ∫_c^d f(u, v) dv du
• Sample y_i according to:  F_{x=x_i}(y) / F_{y=d}(x_i)
Rejection Sampling
• Say we wish to sample xi according to f(x)
• Find a function g(x) such that g(x)>f(x) for all x in
the domain
• Geometric interpretation: generate sample under g,
and accept if also under f
• Transform a weighted uniform sample according to x_i = G^{-1}(z), where G is the distribution function of g, and generate y_i uniform on [0, g(x_i)]
• Keep the sample if y_i < f(x_i), otherwise reject
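A minimal sketch of this procedure for the illustrative target f(x) = 2x on [0,1] under the constant envelope g(x) = 2. Because g is constant, sampling x proportional to g is just uniform sampling, which keeps the example short.

```python
import random

def rejection_sample_linear(n, rng=random.Random(2)):
    """Rejection sampling for f(x) = 2x on [0,1] under the envelope
    g(x) = 2, which satisfies g(x) >= f(x) everywhere on the domain."""
    samples = []
    while len(samples) < n:
        x = rng.random()              # x_i sampled proportional to g (uniform)
        y = rng.uniform(0.0, 2.0)     # y_i uniform on [0, g(x_i)]
        if y < 2.0 * x:               # keep only points under the graph of f
            samples.append(x)
    return samples

s = rejection_sample_linear(100_000)
# Under f(x) = 2x, E[x] = ∫ 2x^2 dx = 2/3
mean = sum(s) / len(s)
```

About half the candidates are rejected here, since the area under f is half the area under g; a tighter envelope wastes fewer samples.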
Important Example
• Consider uniformly sampling a point on a sphere
– Uniformly means that the probability that the point is in a
region depends only on the area of the region, not its
location on the sphere
• Generate points inside the cube [-1,1]×[-1,1]×[-1,1]
• Reject if the point lies outside the sphere
• Push each accepted point onto the surface
• Fraction of points accepted: π/6 ≈ 0.52
• Bad strategy in higher dimensions: the ball occupies a vanishing fraction of the cube's volume
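The cube-rejection strategy above can be sketched directly (the function name is illustrative):

```python
import math
import random

def sample_sphere(n, rng=random.Random(3)):
    """Uniform points on the unit sphere: rejection from the cube [-1,1]^3,
    then projection of each accepted point onto the surface."""
    pts = []
    while len(pts) < n:
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        z = rng.uniform(-1.0, 1.0)
        r2 = x * x + y * y + z * z
        if 0.0 < r2 <= 1.0:           # reject points outside the ball
            r = math.sqrt(r2)
            pts.append((x / r, y / r, z / r))  # push onto the surface
    return pts

pts = sample_sphere(50_000)
# Every returned point has unit length, and by symmetry each coordinate
# should average to roughly zero.
```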
Estimating Integrals
• Say we wish to estimate ∫_y h(y) dy
• Write h = g·f, where f is something you choose
• If we sample x_i according to f, then:
E[g(x)] = ∫_y g(y) f(y) dy ≈ (1/n) Σ_{i=1}^n g(x_i)
• Hence:
∫_y h(y) dy ≈ (1/n) Σ_{i=1}^n h(x_i)/f(x_i)
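A sketch of the estimator for the illustrative integral ∫_0^1 e^y dy = e − 1 ≈ 1.71828, with the hand-picked density f(y) = (1 + y)/1.5, a rough match to e^y whose distribution function inverts in closed form:

```python
import math
import random

def estimate_integral(n, rng=random.Random(4)):
    """Estimate ∫_0^1 e^y dy by sampling from f(y) = (1 + y) / 1.5.

    F(x) = (x + x^2/2) / 1.5, so F^{-1}(z) = -1 + sqrt(1 + 3z).
    The estimate is the average of h(x_i) / f(x_i).
    """
    total = 0.0
    for _ in range(n):
        x = -1.0 + math.sqrt(1.0 + 3.0 * rng.random())  # x_i ~ f
        total += math.exp(x) / ((1.0 + x) / 1.5)        # h(x_i) / f(x_i)
    return total / n

est = estimate_integral(100_000)
# exact value: e - 1 ≈ 1.71828
```

Because 1 + y tracks e^y reasonably well on [0,1], the ratio h/f varies little and the estimate converges quickly.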
Standard Deviation of the Estimate
• Expected error in the estimate after n samples is measured by the standard deviation of the estimate:
σ[ (1/n) Σ_{i=1}^n h(x_i)/f(x_i) ] = (1/√n) σ[h/f]
• Note that error goes down with 1/√n
• This technique is called importance sampling
• f should match h as closely as possible (up to scale), so that g = h/f is nearly constant
• Same principle for higher dimensional integrals
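The 1/√n behavior can be checked empirically. This sketch (integrand, sample counts, and names are all illustrative) measures the standard deviation of a plain Monte Carlo estimator at two values of n; quadrupling n should roughly halve it.

```python
import math
import random

def mc_estimate(n, rng):
    """Plain Monte Carlo estimate of ∫_0^1 e^y dy with f uniform on [0,1)."""
    return sum(math.exp(rng.random()) for _ in range(n)) / n

def empirical_std(n, repeats=200, seed=0):
    """Standard deviation of the n-sample estimator over many repetitions."""
    rng = random.Random(seed)
    ests = [mc_estimate(n, rng) for _ in range(repeats)]
    m = sum(ests) / repeats
    return math.sqrt(sum((e - m) ** 2 for e in ests) / repeats)

# Error at n = 250 should be roughly twice the error at n = 1000.
s_small = empirical_std(250, seed=1)
s_large = empirical_std(1000, seed=2)
```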
Example: Form Factor Integrals
• We wish to estimate
F_ij = (1/A_i) ∫_{x∈P_i} ∫_{y∈P_j} (cos θ_x cos θ_y / (π r²)) V(x, y) dy dx
• Define f(x, y) = 1/(A_i A_j)
• Sample from f by sampling x_i uniformly on P_i and y_i uniformly on P_j
• The estimate is:
F̃_ij = (A_j/n) Σ_{k=1}^n (cos θ_k cos θ'_k / (π r_k²)) V(x_k, y_k)
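As a sketch of this estimator, consider two parallel unit squares facing each other at distance 1 with nothing in between (V = 1). This configuration is chosen because its form factor is known analytically to be about 0.1998, so the estimate can be checked.

```python
import math
import random

def form_factor_mc(n, rng=random.Random(5)):
    """Monte Carlo form factor between two parallel unit squares facing
    each other at distance 1: P_i on the z=0 plane, P_j on the z=1 plane.

    With f(x, y) = 1/(A_i A_j) and A_i = A_j = 1, the estimate reduces to
    the average of cos(theta) cos(theta') / (pi r^2) over sample pairs.
    """
    total = 0.0
    for _ in range(n):
        # x_k uniform on P_i, y_k uniform on P_j; only the offset matters
        dx = rng.random() - rng.random()
        dy = rng.random() - rng.random()
        r2 = dx * dx + dy * dy + 1.0      # squared distance, dz = 1
        cos_cos = 1.0 / r2                # both cosines equal 1/r here
        total += cos_cos / (math.pi * r2)
    return total / n                      # A_j = 1

est = form_factor_mc(200_000)
# reference value for this configuration is about 0.1998
```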
Basic Ray Tracing
• For each pixel in the image
– Shoot a ray from the eye through the pixel, to
determine what is seen through that pixel, and
what its intensity is
• Intensity takes contributions from:
– Direct illumination (shadow rays, diffuse, Phong)
– Reflected rays (recurse on reflected direction)
– Transmitted rays (recurse on refraction direction)
Casting Rays
• Given a ray, p(t) = q + t·d
• Determine the first surface hit by the ray
(intersection with lowest t)
• Algorithms exist for most representations of
surfaces, including splines, fractals, height
fields, CSG, implicit surfaces…
• Hence, algorithms based on ray tracing can
be very general with respect to the geometry
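A minimal sketch of one such routine, for a sphere: substituting p(t) = q + t·d into |p − c|² = R² gives a quadratic in t, and the first hit is the lowest positive root (names are illustrative).

```python
import math

def ray_sphere_first_hit(q, d, center, radius):
    """First intersection of the ray p(t) = q + t*d (t > 0) with a sphere.

    Returns the lowest positive t, or None if the ray misses; d need not
    be normalized.
    """
    oc = tuple(qi - ci for qi, ci in zip(q, center))
    a = sum(di * di for di in d)
    b = 2.0 * sum(di * oi for di, oi in zip(d, oc))
    c = sum(oi * oi for oi in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                      # ray misses the sphere
    s = math.sqrt(disc)
    for t in ((-b - s) / (2.0 * a), (-b + s) / (2.0 * a)):
        if t > 1e-9:                     # first hit in front of the origin
            return t
    return None

t = ray_sphere_first_hit(q=(0, 0, 0), d=(0, 0, 1), center=(0, 0, 5), radius=1.0)
# the ray along +z reaches the near side of the sphere at t = 4
```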
Distributed Ray Tracing
• Cook, Porter, Carpenter 1984
• Addresses the inability of ray tracing to capture:
– non-ideal reflection/transmission
– soft shadows
– motion blur
– depth of field
• Basic idea: Cast more than one ray for each pixel,
for each reflection, for each frame…
• Rays are distributed, not the algorithm. Should
probably be called distribution ray tracing.
Specific Cases
• Sample several directions around the reflected
direction to get non-ideal reflection, and specularities
• Send multiple rays to area light sources to get soft
shadows
• Cast multiple rays per pixel, spread in time, to get
motion blur
• Cast multiple rays per pixel, through different lens
paths, to get depth-of-field
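One common way to distribute the multiple rays per pixel is jittered (stratified) sampling: split the pixel into an n×n grid and place one sample uniformly at random in each cell. This sketch (names illustrative) generates sub-pixel positions that could seed any of the cases above.

```python
import random

def jittered_samples(px, py, n, rng=random.Random(6)):
    """Stratified ('jittered') sub-pixel sample positions for pixel (px, py):
    one uniformly random sample inside each cell of an n x n grid."""
    pts = []
    for i in range(n):
        for j in range(n):
            pts.append((px + (i + rng.random()) / n,
                        py + (j + rng.random()) / n))
    return pts

samples = jittered_samples(0, 0, 4)   # 16 sub-pixel positions in [0,1)^2
```

Compared with purely independent samples, stratification guarantees coverage of the whole pixel, which typically reduces the variance of the pixel estimate.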
What’s Good? What’s Bad?
• Easy to implement - standard ray tracer plus
simple sampling
• Which L(D|S)*E paths does it get?
• Which previous method could it be used
with to good effect?