Lecture notes


Last Time
• Participating Media
• Assignment 2
– A solution program now exists, so you can preview what your
solution should look like
– It uses OpenGL with smooth shading, so you would have to
implement Gouraud shading in your ray-tracer to see the same thing
– It is OK to use OpenGL to generate the image in this project (but
you’ll get better results if you “gather to a pixel” in a ray-tracer)
Today
• Monte-Carlo introduction
• Probability overview
• Sampling methods
Monte Carlo Rendering
• Central theme is sampling:
– Determine what happens at a set of discrete points, or for a set of
light paths, and extrapolate from there
• Central algorithm is ray tracing
– Rays from the eye, the lights or other surfaces
• Theoretical basis is probability theory
– The study of random events
Distributed Ray Tracing
(Cook, Porter, Carpenter 1984)
• Arguably the simplest Monte-Carlo method
• Addresses the inability of ray tracing to capture:
– non-ideal reflection/transmission
– soft shadows
– motion blur
– depth of field
• Basic idea: Use multiple sample rays to build the image for
each pixel
• Rays are distributed, not the algorithm (should probably be
called distribution ray tracing)
Specific Cases
• Cast multiple rays per pixel, through different lens paths, to
get depth-of-field
• Sample several directions around the reflected direction to
get non-ideal reflection
• Send multiple rays to area light sources to get soft shadows
• Cast multiple rays per pixel, spread in time, to get motion
blur
• Must be careful not to get correlated samples, which would
be particularly apparent in an image
– What should you do for lights?
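For an area light, one common answer is to jitter the shadow-ray targets independently over the light's surface for each shading point. A minimal sketch in Python, assuming a rectangular light given by a corner and two edge vectors; `occluded` is a hypothetical visibility test standing in for a shadow-ray cast:

```python
import random

def soft_shadow_fraction(point, light_corner, light_u, light_v, occluded, n=16):
    """Estimate the visible fraction of a rectangular area light.

    light_corner, light_u, light_v: corner and edge vectors of the light.
    occluded(a, b): hypothetical test returning True if the segment
    from a to b is blocked (i.e., a shadow ray hits something).
    """
    visible = 0
    for _ in range(n):
        # Independent uniform samples per shading point avoid correlation;
        # reusing one fixed pattern everywhere would band the penumbra.
        u, v = random.random(), random.random()
        sample = tuple(c + u * du + v * dv
                       for c, du, dv in zip(light_corner, light_u, light_v))
        if not occluded(point, sample):
            visible += 1
    return visible / n
```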
Depth-of-Field
[Figure: a lens with aperture, focal plane, and image plane; a point off the focal plane produces a circle of confusion on the image.]
• Regardless of where they pass through the lens, all the rays from a point
on the focal plane converge to the same point on the image plane
• Rays from a non-focal plane point converge somewhere else, resulting
in a circle on the image plane
• Note that the aperture also allows a sampling of directions through a
world point
Distributed Depth-of-Field
[Figure: rays from an image point passing through different parts of the lens aperture.]
• Sample multiple rays through the aperture from the image point
• If there are objects outside the focal plane, different rays will hit
different points, and contribute different colors
• Also cast rays from within the area of the pixel
• Average all rays to get resulting pixel color
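A minimal sketch of this loop under a thin-lens model. The lens is assumed to be a disk in a plane of constant z; `trace` is a hypothetical routine that casts a ray from the lens point toward the in-focus point and returns an (r, g, b) color, and `focal_point` is where the pixel's center ray meets the focal plane:

```python
import math
import random

def sample_lens_point(center, radius):
    """Uniform sample on a circular aperture (thin-lens model).

    Uses r = R*sqrt(u) so that area, not radius, is uniformly covered.
    """
    u, v = random.random(), random.random()
    r = radius * math.sqrt(u)
    phi = 2.0 * math.pi * v
    cx, cy, cz = center
    return (cx + r * math.cos(phi), cy + r * math.sin(phi), cz)

def depth_of_field_color(focal_point, lens_center, lens_radius, trace, n=32):
    """Average n rays that all pass through the in-focus point but enter
    through different lens positions; out-of-focus geometry then smears."""
    total = (0.0, 0.0, 0.0)
    for _ in range(n):
        origin = sample_lens_point(lens_center, lens_radius)
        color = trace(origin, focal_point)
        total = tuple(t + c for t, c in zip(total, color))
    return tuple(t / n for t in total)
```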
Distributed Directional Reflectance
• Cast multiple rays to determine what is reflected in a surface
• Rays should be sampled according to the BRDF
• Average contributions to get pixel color
• Why is this a poor strategy for very diffuse surfaces?
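For the ideal diffuse case, sampling "according to the BRDF" means cosine-weighted directions over the hemisphere, which is exactly why diffuse surfaces are expensive: the lobe covers the whole hemisphere. A sketch using Malley's method (sample the unit disk uniformly, project up), assuming the surface normal is the +z axis:

```python
import math
import random

def sample_cosine_hemisphere():
    """Sample a direction with density proportional to cos(theta) about
    the +z axis -- the lobe of an ideal diffuse BRDF.  For a glossy BRDF
    the same idea applies with a tighter lobe about the mirror direction."""
    u, v = random.random(), random.random()
    r = math.sqrt(u)                   # radius in the unit disk
    phi = 2.0 * math.pi * v
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u))   # project up onto the hemisphere
    return (x, y, z)
```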
Better Approaches
• Distributed ray tracing would need to cast extremely dense
ray-trees to get good diffuse effects
• Other approaches improve upon this:
– Don’t cast complete trees, just single paths
– Observe that the same point may be hit many times, but distributed
ray-tracing does not ever re-use partial ray-trees
• But first, some math…
Random Variables
• A random variable, x
– Takes values from some domain, Ω
– Has an associated probability density function, p(x)
• A probability density function, p(x)
– Is positive over the domain Ω
– Integrates to 1 over Ω (sums, if the domain is discrete):
\int_\Omega p(x)\,dx = 1 \qquad \sum_{x \in \Omega} p(x) = 1
Expected Value
• The expected value of a random variable, x, is defined as:
E[x] = \int_\Omega x\, p(x)\, dx
• The expected value of a function, f(x), is defined as:
E[f(x)] = \int_\Omega f(x)\, p(x)\, dx
• The sample mean, for samples x_i, is defined as:
\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i \qquad \overline{g(x)} = \frac{1}{n} \sum_{i=1}^{n} g(x_i)
Variance and Standard Deviation
• The variance of a random variable is defined as:
v[x] = E\left[ (x - E[x])^2 \right]
• The standard deviation of a random variable is defined as
the square root of its variance:
\sigma[x] = \sqrt{v[x]}
• The sample variance is:
\frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2
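A quick numeric check of these definitions, using g(x) = x² with x uniform on [0,1); the exact values are E[g] = 1/3 and v[g] = 1/5 − 1/9 = 4/45 ≈ 0.089:

```python
import math
import random

# Sample mean and sample variance of g(x) = x*x for x uniform on [0,1).
n = 100_000
samples = [random.random() ** 2 for _ in range(n)]
mean = sum(samples) / n
variance = sum((s - mean) ** 2 for s in samples) / n   # 1/n, as on the slide
print(mean, variance, math.sqrt(variance))             # ~0.333, ~0.089, ~0.298
```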
Sampling
• A process samples according to the distribution p(x) if it randomly
chooses a value for x such that:
\forall A \subseteq \Omega:\ \Pr[x \in A] = \int_A p(y)\,dy
• Weak Law of Large Numbers: If x_i are independent samples from p(x),
then in the limit of infinite samples, the sample mean is equal to the
expected value:
\Pr\left[ E[x] = \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} x_i \right] = 1
\Pr\left[ \int_\Omega x\, p(x)\, dx = \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} x_i \right] = 1
Algorithms for Sampling
• Pseudo-random number generators give independent
samples from the uniform distribution on [0,1), p(x)=1
• Transformation method: take random samples from the
uniform distribution and convert them to samples from
another distribution
• Rejection sampling: sample from a different distribution,
but reject some samples
• Importance sampling (to compute an expectation): Sample
from an easy distribution, but weight the samples
Distribution Functions
• Assume a density function f(y) defined on [a,b]
• Define the probability distribution function, or cumulative
distribution function, as:
F(x) = \int_a^x f(y)\,dy
• Monotonically increasing function with F(a)=0 and F(b)=1
Transformation Method for 1D
• Generate z_i uniform on [0,1)
• Compute x_i = F^{-1}(z_i)
• Then the x_i are distributed according to f(x)
• To apply, must be able to compute and invert distribution
function F
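A small worked instance, using the density f(y) = 2y on [0,1], for which F(x) = x² and hence F⁻¹(z) = √z:

```python
import math
import random

# Transformation method for f(y) = 2y on [0,1]:
#   F(x) = x^2,  so  F^{-1}(z) = sqrt(z).
def sample_linear_density():
    z = random.random()          # z uniform on [0,1)
    return math.sqrt(z)          # x distributed with density 2x

xs = [sample_linear_density() for _ in range(100_000)]
print(sum(xs) / len(xs))         # should approach E[x] = 2/3
```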
Multi-Dimensional Transformation
• Assume density function f(x,y) defined on [a,b] × [c,d]
• Distribution function:
F(x,y) = \int_a^x \int_c^y f(u,v)\,dv\,du
• Sample x_i according to the marginal distribution:
F_{y=d}(x) = \int_a^x \int_c^d f(u,v)\,dv\,du
• Sample y_i according to the conditional distribution:
\frac{F_{x=x_i}(y)}{F_{y=d}(x_i)}
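A worked instance for f(x,y) = x + y on [0,1]², which is a valid density (it integrates to 1). Both the marginal in x and the conditional in y given x invert with the quadratic formula:

```python
import math
import random

def sample_xy():
    """Multi-dimensional transformation method for f(x, y) = x + y on [0,1]^2.
    Marginal CDF in x: x^2/2 + x/2; conditional CDF of y given x:
    (x*y + y^2/2) / (x + 1/2)."""
    z1, z2 = random.random(), random.random()
    # Invert x^2/2 + x/2 = z1 (positive root of x^2 + x - 2*z1 = 0).
    x = (-1.0 + math.sqrt(1.0 + 8.0 * z1)) / 2.0
    # Invert (x*y + y^2/2)/(x + 1/2) = z2, i.e. y^2 + 2xy - z2*(2x+1) = 0.
    y = -x + math.sqrt(x * x + z2 * (2.0 * x + 1.0))
    return x, y
```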
Rejection Sampling
• Say we wish to sample xi according to f(x)
• Find a function g(x) such that g(x)>f(x) for all x in the
domain
• Geometric interpretation: generate a sample uniformly in the
area under g, and accept it if it also lies under f
• Transform a uniform sample according to x_i = G^{-1}(z_i), where G is
the (normalized) distribution function of g, and generate y_i uniform
on [0, g(x_i)]
• Keep the sample if y_i < f(x_i), otherwise reject
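A minimal instance, sampling f(x) = 6x(1−x) on [0,1] under the constant envelope g(x) = 3/2 (its maximum); for a constant g, G⁻¹ is just the uniform generator:

```python
import random

def rejection_sample_beta22():
    """Rejection sampling of f(x) = 6x(1-x) on [0,1] under the constant
    envelope g(x) = 1.5 >= f(x)."""
    while True:
        x = random.random()              # x_i = G^{-1}(z_i) for constant g
        y = random.uniform(0.0, 1.5)     # y_i uniform on [0, g(x_i)]
        if y < 6.0 * x * (1.0 - x):      # keep only if under f
            return x
```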
Important Example
• Consider uniformly sampling a point on a sphere
– Uniformly means that the probability that the point is in a region
depends only on the area of the region, not its location on the sphere
• Generate points inside a cube [-1,1]×[-1,1]×[-1,1]
• Reject if the point lies outside the sphere
• Push accepted point onto surface
• Fraction of points accepted: π/6
• Bad strategy in higher dimensions
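The slide's recipe in a few lines of Python; the loop also rejects the (measure-zero) origin so the projection onto the surface is well defined:

```python
import math
import random

def uniform_on_sphere():
    """Uniform point in the cube, rejected if outside the unit ball
    (acceptance rate pi/6), then pushed out to the sphere's surface."""
    while True:
        p = [random.uniform(-1.0, 1.0) for _ in range(3)]
        r2 = sum(c * c for c in p)
        if 0.0 < r2 <= 1.0:
            r = math.sqrt(r2)
            return [c / r for c in p]
```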
Estimating Integrals
• Say we wish to estimate \int_\Omega h(y)\,dy
• Write h = g f, where f is something you choose
• If we sample x_i according to f, then:
E[g(y)] = \int_\Omega g(y)\, f(y)\, dy \approx \frac{1}{n} \sum_{i=1}^{n} g(x_i)
hence \int_\Omega h(y)\,dy \approx \frac{1}{n} \sum_{i=1}^{n} \frac{h(x_i)}{f(x_i)}
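A concrete instance: estimate ∫₀¹ 3y² dy (exact value 1), choosing f(y) = 2y and sampling it with the transformation method from earlier:

```python
import math
import random

# Estimate the integral of h(y) = 3y^2 over [0,1] by sampling f(y) = 2y.
n = 10_000
total = 0.0
for _ in range(n):
    x = math.sqrt(random.random())       # x ~ f(y) = 2y
    total += (3.0 * x * x) / (2.0 * x)   # accumulate h(x) / f(x)
print(total / n)                         # should approach 1
```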
Standard Deviation of the Estimate
• Expected error in the estimate after n samples is measured by the standard
deviation of the estimate:
\sigma\left[ \frac{1}{n} \sum_{i=1}^{n} \frac{h(x_i)}{f(x_i)} \right] = \frac{1}{\sqrt{n}}\, \sigma\left[ \frac{h}{f} \right]
• Note that error goes down with 1/\sqrt{n}
• This technique is called importance sampling
• f should be as close as possible to h, so that g = h/f is nearly constant
• Same principle for higher dimensional integrals
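A sketch comparing two choices of f for the same integrand h(y) = 3y²: the uniform density versus the better-matched f(y) = 2y. The second gives a much smaller σ[h/f], hence less error for the same n:

```python
import math
import random

def estimate_std(sample_hf, n=50_000):
    """Sample std of h/f; the estimator's error is this divided by sqrt(n)."""
    vals = [sample_hf() for _ in range(n)]
    m = sum(vals) / n
    return math.sqrt(sum((v - m) ** 2 for v in vals) / n)

def h_over_f_uniform():
    x = random.random()                  # f = 1 on [0,1]
    return 3.0 * x * x                   # h/f = h

def h_over_f_tailored():
    x = math.sqrt(random.random())       # f(y) = 2y, transformation method
    return (3.0 * x * x) / (2.0 * x)     # h/f = 1.5 x

print(estimate_std(h_over_f_uniform),    # ~0.89
      estimate_std(h_over_f_tailored))   # ~0.35
```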
Example: Form Factor Integrals
• We wish to estimate
F_{ij} = \frac{1}{A_i} \int_{x \in P_i} \int_{y \in P_j} \frac{\cos\theta_i \cos\theta_j}{\pi r^2}\, V(x,y)\, dy\, dx
• Sample from f by sampling x_k uniformly on P_i and y_k uniformly on P_j
• Define f(x,y) = 1 / (A_i A_j)
• Estimate is:
\tilde{F}_{ij} = \frac{A_j}{n} \sum_{k=1}^{n} \frac{\cos\theta_k \cos\theta'_k}{\pi r_k^2}\, V(x_k, y_k)
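A sketch of this estimator for parallelogram patches, assuming each patch is given as a corner, two edge vectors, and a unit normal (with normals facing each other); `visible` is a hypothetical occlusion test standing in for V(x, y):

```python
import math
import random

def estimate_form_factor(corner_i, u_i, v_i, n_i,
                         corner_j, u_j, v_j, n_j,
                         visible, n=1024):
    """Monte Carlo form factor between two parallelogram patches, using
    the uniform density f = 1/(A_i A_j) from the slide."""
    def area(u, v):  # parallelogram area = |u x v|
        cx = u[1]*v[2] - u[2]*v[1]
        cy = u[2]*v[0] - u[0]*v[2]
        cz = u[0]*v[1] - u[1]*v[0]
        return math.sqrt(cx*cx + cy*cy + cz*cz)

    a_j = area(u_j, v_j)
    total = 0.0
    for _ in range(n):
        s, t = random.random(), random.random()
        x = [corner_i[k] + s*u_i[k] + t*v_i[k] for k in range(3)]
        s, t = random.random(), random.random()
        y = [corner_j[k] + s*u_j[k] + t*v_j[k] for k in range(3)]
        d = [y[k] - x[k] for k in range(3)]          # from x toward y
        r2 = sum(c*c for c in d)
        r = math.sqrt(r2)
        cos_i = sum(n_i[k]*d[k] for k in range(3)) / r
        cos_j = -sum(n_j[k]*d[k] for k in range(3)) / r
        if cos_i > 0.0 and cos_j > 0.0 and visible(x, y):
            total += cos_i * cos_j / (math.pi * r2)  # h/f, up to the A_j factor
    return a_j * total / n
```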