Sampling - IDAV: Institute for Data Analysis and Visualization
Sampling
Attila Gyulassy
Image Synthesis
Overview
• Problem Statement
• Random Number Generators
• Quasi-Random Number Generation
• Uniform Sampling of Disks, Triangles, Spheres
• Stratified Sampling
• Importance Sampling of General Functions
Problem Statement
• What is sampling?
– Want to take a function f and recreate it using only certain values
• e.g. the data points used in interpolation
• where do we pick those points?
– Sometimes we don't know f but can still evaluate it
• we would like to choose the data points used to reconstruct the function in an optimal way
Problem Statement (ctd)
• Monte Carlo Integration
– two ways to improve:
• improve the estimation method
• carefully select the samples
• Use filtering to recreate the original function
– covered next time
– important to know the necessary sampling frequency
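Both improvements plug into the same basic loop. Below is a minimal sketch of plain Monte Carlo integration with uniformly chosen samples; the integrand and the name `mc_integrate` are illustrative, not from the lecture.

```python
import random

def mc_integrate(g, a, b, n=10000):
    """Estimate the integral of g over [a, b] from n uniform random
    samples: the average value of g times the interval length."""
    total = 0.0
    for _ in range(n):
        x = random.uniform(a, b)
        total += g(x)
    return (b - a) * total / n

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
print(mc_integrate(lambda x: x * x, 0.0, 1.0))
```

The techniques that follow all modify this loop: quasi-random or stratified points replace `random.uniform`, and importance sampling reweights each sample.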
Example
Overview of Sampling
• Over some domain
– Sometimes parametrizable
• Some sample density
• Random / Regular
Random Numbers
• Would like to get uniformly distributed
random numbers over a range [a,b]
• Problems
– large open spaces
– slow convergence
– nondeterministic
RNG methods
• Non-linear additive feedback
• Linear congruential methods
• Mersenne Twister algorithm
• many more...
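As a concrete example of the second method, here is a minimal linear congruential generator. The constants are the commonly quoted ANSI C ones; they are an illustrative choice, not a recommendation, and real renderers typically prefer better generators such as the Mersenne Twister.

```python
class LCG:
    """Linear congruential generator: x_{k+1} = (a*x_k + c) mod m."""

    def __init__(self, seed=1):
        # Commonly quoted ANSI C constants; modest statistical quality.
        self.m = 2 ** 31
        self.a = 1103515245
        self.c = 12345
        self.state = seed

    def next_uniform(self):
        """Advance the recurrence and return a float in [0, 1)."""
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m

rng = LCG(seed=42)
print([round(rng.next_uniform(), 4) for _ in range(5)])
```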
Quasi-Monte Carlo
• Use a deterministic, roughly uniform, aperiodic distribution over the domain
– i.e. quasi-random numbers
• Want low discrepancy
– small = evenly distributed
– large = clustering
• causes clumping and sparse regions
• Want high speed
Quasi-Random Generators
Halton Sequence
• N-dimensional points x_m
x_m = (Φ_2(m), Φ_3(m), …, Φ_(P_(N−1))(m), Φ_(P_N)(m))
P_i = the i-th prime number (2, 3, 5, 7, …)
Φ_r(m) is the radical-inverse function of m to the base r. The value is obtained by writing m in base r and then reflecting the digits around the radix point.
Example: 26₁₀ = 11010₂; reflecting gives 0.01011₂ = 11/32₁₀
Halton Sequence
• N-dimensional points x_m
x_m = (Φ_2(m), Φ_3(m), …, Φ_(P_(N−1))(m), Φ_(P_N)(m))
P_i = the i-th prime number (2, 3, 5, 7, …)
m = a_0·r^0 + a_1·r^1 + a_2·r^2 + a_3·r^3 + …
Φ_r(m) = a_0·r^(−1) + a_1·r^(−2) + a_2·r^(−3) + a_3·r^(−4) + …
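The digit reversal above translates directly into a short loop. A minimal sketch, with `radical_inverse` and `halton` as assumed names; starting the index at m = 1 follows the advice on the next slide.

```python
def radical_inverse(m, base):
    """Reflect the base-`base` digits of m about the radix point:
    m = a0 + a1*b + a2*b^2 + ...  ->  a0/b + a1/b^2 + a2/b^3 + ..."""
    inv, scale = 0.0, 1.0 / base
    while m > 0:
        inv += (m % base) * scale
        m //= base
        scale /= base
    return inv

def halton(m, primes=(2, 3)):
    """The m-th Halton point, one prime base per dimension."""
    return tuple(radical_inverse(m, p) for p in primes)

# 26 in base 2 is 11010; reflected this is 0.01011 = 11/32.
print(radical_inverse(26, 2))            # 0.34375
print([halton(m) for m in range(1, 5)])  # first few 2-D points
```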
Halton Sequence
• Starting at (1,1,…,1) is better than starting at (0,0,…,0)
1 = 1₂ => 0.1₂ = 1/2
2 = 10₂ => 0.01₂ = 1/4
3 = 11₂ => 0.11₂ = 3/4
4 = 100₂ => 0.001₂ = 1/8
5 = 101₂ => 0.101₂ = 5/8
6 = 110₂ => 0.011₂ = 3/8
7 = 111₂ => 0.111₂ = 7/8
Notice the even distribution
Hammersley Sequence
• Similar to Halton, but the first coordinate is m/N
x_m = (m/N, Φ_2(m), Φ_3(m), …, Φ_(P_(N−1))(m))
P_i = the i-th prime number (2, 3, 5, 7, …)
m = a_0·r^0 + a_1·r^1 + a_2·r^2 + a_3·r^3 + …
Φ_r(m) = a_0·r^(−1) + a_1·r^(−2) + a_2·r^(−3) + a_3·r^(−4) + …
where N is the total number of samples
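A sketch of the Hammersley construction, under the same assumptions as the Halton code above. Note that the total sample count N must be fixed in advance, which is the practical difference from Halton.

```python
def radical_inverse(m, base):
    """Digit reversal of m about the radix point in the given base."""
    inv, scale = 0.0, 1.0 / base
    while m > 0:
        inv += (m % base) * scale
        m //= base
        scale /= base
    return inv

def hammersley(m, n_samples, primes=(2,)):
    """The m-th point of an n_samples-point Hammersley set:
    first coordinate m/N, remaining coordinates radical inverses."""
    return (m / n_samples,) + tuple(radical_inverse(m, p) for p in primes)

# A 2-D Hammersley point set with N = 8 samples.
print([hammersley(m, 8) for m in range(8)])
```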
Poisson Random Numbers
• Generate random numbers according to the Poisson distribution function
– this turns out to be the same as just "throwing darts"
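A sketch of that equivalence, assuming the standard construction of a Poisson point process: draw the point count from a Poisson distribution, then place the points uniformly at random. All names here are illustrative.

```python
import math
import random

def poisson_count(mean):
    """Draw a Poisson-distributed count (Knuth's multiplication method;
    fine for small means, slow for large ones)."""
    limit = math.exp(-mean)
    count, prod = 0, random.random()
    while prod > limit:
        count += 1
        prod *= random.random()
    return count

def poisson_darts(rate, width=1.0, height=1.0):
    """Poisson point process on a rectangle: Poisson-distributed point
    count with mean rate*area, positions thrown uniformly like darts."""
    n = poisson_count(rate * width * height)
    return [(random.uniform(0.0, width), random.uniform(0.0, height))
            for _ in range(n)]

print(len(poisson_darts(rate=50.0)))  # around 50 points on the unit square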
Result of RNGs
• Basically, we now have random numbers in [0,1]
– what do we do with these?
– how does this relate to sampling?
Uniform Sampling of a Disk
• Want a subdivision into equal-area regions (see the sketch below)
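The standard equal-area construction maps two uniform numbers to polar coordinates, with the radius warped by a square root; a minimal sketch:

```python
import math
import random

def sample_disk(radius=1.0):
    """Uniform point on a disk. Taking r proportional to sqrt(u)
    compensates for annulus area growing with r, so every
    equal-area region receives samples at the same rate."""
    u1, u2 = random.random(), random.random()
    r = radius * math.sqrt(u1)
    theta = 2.0 * math.pi * u2
    return (r * math.cos(theta), r * math.sin(theta))

print(sample_disk())
```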
Uniform Sampling Over a Sphere
• demo
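In place of the live demo, here is the standard uniform sphere construction: the z coordinate is uniform on [−1, 1] (areas between parallel planes are equal, by Archimedes' hat-box theorem) and the azimuth is uniform.

```python
import math
import random

def sample_sphere():
    """Uniform point on the unit sphere: uniform z, uniform azimuth."""
    z = 1.0 - 2.0 * random.random()
    phi = 2.0 * math.pi * random.random()
    s = math.sqrt(max(0.0, 1.0 - z * z))  # radius of the z-slice
    return (s * math.cos(phi), s * math.sin(phi), z)

print(sample_sphere())
```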
Uniform Sampling - Disk vs Sphere
• Sampling the disk uniformly and projecting onto the hemisphere is equivalent to uniform sampling of half the sphere, provided the projection preserves area; a straight vertical projection instead yields a cosine-weighted distribution
Uniform Sampling of Triangles
• Compute a probability density function over the triangle
Uniform Sampling of Triangles
• The barycentric parameters u and v are not independent (see the sketch below)
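A sketch of the standard warping that accounts for that dependence: the square root pushes samples toward the far edge so the barycentric weights cover the triangle uniformly. The vertex arguments are illustrative.

```python
import math
import random

def sample_triangle(a, b, c):
    """Uniform point inside triangle abc via warped barycentric
    coordinates; the sqrt couples the two uniform inputs."""
    u1, u2 = random.random(), random.random()
    su1 = math.sqrt(u1)
    w0 = 1.0 - su1           # weight of vertex a
    w1 = su1 * (1.0 - u2)    # weight of vertex b
    w2 = su1 * u2            # weight of vertex c
    return tuple(w0 * pa + w1 * pb + w2 * pc
                 for pa, pb, pc in zip(a, b, c))

print(sample_triangle((0, 0), (1, 0), (0, 1)))
```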
Stratified Sampling
• An alternative to plain uniform sampling
– break the domain into strata and sample within each
– fills in gaps faster (a sketch follows below)
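A minimal jittered-grid sketch of stratification on the unit square: one uniform sample per grid cell, so no region larger than a cell can stay empty.

```python
import random

def stratified_2d(nx, ny):
    """Jittered (stratified) samples on the unit square:
    one uniform sample inside each cell of an nx-by-ny grid."""
    samples = []
    for i in range(nx):
        for j in range(ny):
            x = (i + random.random()) / nx
            y = (j + random.random()) / ny
            samples.append((x, y))
    return samples

print(len(stratified_2d(4, 4)))  # 16 samples, one per cell
```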
Importance Sampling
• Basic Idea
– sample at important locations to decrease
variance
Importance Sampling ctd.
• As seen last time, use a probability density function f to pick samples
– properties: f(x) ≥ 0 everywhere, and ∫ f(x) dx = 1
Importance Sampling ctd.
• Then our approximation becomes
G ≈ (1/N) Σ_i g(x_i) / f(x_i), with the x_i drawn from f
(here f(x) is the probability density function and g(x) is the function being integrated)
Importance Sampling ctd.
• How do we pick f?
– want to minimize the variance
σ² = ∫ (g(x) / f(x))² f(x) dx − G²
– where G is the integral of the original function g(x)
– …after much math we get
f(x) = |g(x)| / G
– which is great! Except G is exactly what we are trying to find
Importance Sampling ctd.
• If we don't know G, how can we pick f?
– if we apply a filter w to g, so the integral is of the form ∫ w(x) g(x) dx,
then if the filter is clamped to [0,1], the filter itself becomes a reasonable estimate for f
Problems with this method?
Importance Sampling ctd.
• Remember: f(x) = |g(x)| / G gives the least variance
• This is the motivation for adaptive sampling
• build f from the first few samples (see the sketch below)
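A sketch of importance sampling with a hand-picked pdf, standing in for one built from early samples. The integrand g(x) = x² and the pdf f(x) = 2x are illustrative: f roughly follows |g|, which is exactly the property an adaptive scheme tries to achieve.

```python
import math
import random

def importance_estimate(n=100000):
    """Estimate G = integral of g(x) = x^2 over [0, 1] (exactly 1/3)
    using samples from the pdf f(x) = 2x, which roughly follows g.
    Inverse-CDF sampling: F(x) = x^2, so x = sqrt(u)."""
    g = lambda x: x * x
    f = lambda x: 2.0 * x
    total = 0.0
    for _ in range(n):
        u = 1.0 - random.random()  # u in (0, 1], keeps f(x) > 0
        x = math.sqrt(u)           # x distributed with pdf f
        total += g(x) / f(x)
    return total / n

print(importance_estimate())  # close to 1/3
```

Because g/f = x/2 varies far less over [0, 1] than g itself, this estimate converges with much lower variance than uniform sampling.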
Conclusion
• Multiple ways to generate "random" numbers
– have to pick the best method for each application
• Many sampling techniques, each with pros and cons
– uniform
– stratified
– importance
– adaptive
References
• http://www.math.iastate.edu/reu/2001/voronoi/halton_sequence.html
• http://www.cse.cuhk.edu.hk/~ttwong/papers/udpoint/udpoint.pdf
• http://www.fz-juelich.de/nic-series/volume10/janke1.pdf
• http://graphics.stanford.edu/courses/cs348b00/lectures/lecture13/montecarlo.1.pdf
• Principles of Digital Image Synthesis, Glassner