Cadarache - 2 - Royal Institute of Technology


Monte-Carlo Primer
Motto:
Sir,
In your otherwise beautiful poem (The Vision of Sin) there is a verse which reads:
“Every moment dies a man
every moment one is born”
Obviously, this cannot be true and I suggest that in the next edition you have it read
“Every moment dies a man
every moment 1 and 1/16 is born”
Even this value is slightly in error but should be sufficiently accurate for poetry.
- Charles Babbage (in a letter to Lord Tennyson)
PROGRAM
• Introduction
• History of Monte Carlo
• Basics of Monte-Carlo
• Random Number Generators
• Cross sections
• Monte-Carlo vs Deterministic Methods
• Particle Transport Simulations
• Estimators and Statistical Analysis
  – Non-analog Monte Carlo
  – Variance Reduction Techniques
Introduction: Monte Carlo
The Monte Carlo method has been used for almost 60 years to solve
radiation transport problems in high energy physics, nuclear reactor
analysis, radiation shielding, medical imaging, oil well-logging, etc.
Individual particle histories are simulated using random numbers,
highly accurate representations of particle interaction probabilities,
and exact models of 3D problem geometry. Monte Carlo methods are
sometimes the only viable methods for analyzing complex,
demanding particle transport problems.
Modern applications of Monte Carlo
• Nuclear reactor design
• Quantum chromodynamics
• Radiation cancer therapy
• Traffic flow
• Stellar evolution
• Econometrics
• Dow-Jones forecasting
• Oil well exploration
• VLSI design
History of Monte Carlo
John von Neumann invented scientific computing in the 1940's:
• stored programs, "software"
• algorithms & flowcharts
• assisted with hardware design as well; "ordinary" computers are called "von Neumann machines"
John von Neumann invented Monte Carlo particle transport in the 1940's:
• Highly accurate - no essential approximations
• Expensive - typically the "method of last resort"
• Monte Carlo codes for particle transport have been proven to work effectively on all computers: vector, parallel, supercomputers, workstations, clusters of workstations, ...
Prominent Figures in Computing
"If I have seen further it is by standing on the shoulders of Giants"
- Isaac Newton, in a letter to Robert Hooke
• Gottfried Wilhelm Leibniz (1646-1716)
• Charles Babbage (1791-1871)
• Augusta Ada Byron (King), Countess of Lovelace, Lady Lovelace (1815-1852), the first computer programmer; 1979, programming language Ada
The First Digital Computer
Mid-1830’s
John von Neumann, 1903-57, American mathematician, b. Hungary, Ph.D. Univ. of Budapest, 1926.
He came to the United States in 1930 and was naturalized in 1937. He taught (1930-33) at Princeton and after 1933 was associated with the Institute for Advanced Study. In 1954 he was appointed a member of the Atomic Energy Commission. A founder of the mathematical theory of games, he also made fundamental contributions to quantum theory and to the development of the atomic bomb (Stan Ulam was another great mathematician in this environment). He was a leader in the design and development of high-speed electronic computers; his development of MANIAC (an acronym for Mathematical Analyzer, Numerical Integrator, And Computer) enabled the United States to produce and test (1952) the world's first hydrogen bomb. With Oskar Morgenstern he wrote Theory of Games and Economic Behavior. Von Neumann's other writings include Mathematical Foundations of Quantum Mechanics (1932), The Computer and the Brain (1958), and Theory of Self-reproducing Automata (1966).
"Parties and nightlife held a special appeal for von Neumann. While teaching in Germany, von Neumann had been a denizen of the Cabaret-era Berlin nightlife circuit."
Terminology
1) Deterministic (e.g. the time-dependent diffusion equation):

$$\frac{1}{v}\frac{\partial\phi(\mathbf r,E,t)}{\partial t}=D(\mathbf r,E)\,\nabla^2\phi(\mathbf r,E,t)-\Sigma_a(\mathbf r,E)\,\phi(\mathbf r,E,t)+\nu\Sigma_f(\mathbf r,E)\,\phi(\mathbf r,E,t)$$

2) Probabilistic:
Basics of M-C
Two basic ways to approach the use of Monte Carlo methods for solving the transport equation:
• mathematical technique for numerical integration
• computer simulation of a physical process
Each is "correct".
The mathematical approach is useful for:
• importance sampling, convergence, variance reduction, random sampling techniques, ...
The simulation approach is useful for:
• collision physics, tracking, tallying, ...
For the Monte Carlo approach, consider the integral form of the Boltzmann equation.
Most theory on Monte Carlo deals with fixed-source problems.
Eigenvalue problems are needed for reactor physics calculations.
Major Components of a Monte Carlo Algorithm
The primary components of a Monte Carlo simulation method (a minimal sketch of how they fit together follows this list):
• Probability distribution functions (pdf's) - the physical (or mathematical) system must be described by a set of pdf's.
• Random number generator - a source of random numbers uniformly distributed on the unit interval must be available.
• Sampling rule - a prescription for sampling from the specified pdf's, assuming the availability of random numbers on the unit interval, must be given.
• Scoring (or tallying) - the outcomes must be accumulated into overall tallies or scores for the quantities of interest.
• Error estimation - an estimate of the statistical error (variance) as a function of the number of trials and other quantities must be determined.
• Variance reduction techniques - methods for reducing the variance in the estimated solution, to reduce the computational time for the Monte Carlo simulation.
• Parallelization and vectorization - algorithms to allow Monte Carlo methods to be implemented efficiently on advanced computer architectures.
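As a rough illustration of how these components fit together, here is a minimal Python sketch (not taken from the lecture; the slab thickness and cross section are assumed values) that estimates the uncollided transmission through a purely absorbing slab. It uses an exponential pdf for the free flight, a uniform random number generator, a simple tally and a binomial error estimate:

```python
import math, random

SIGMA_T = 1.0     # total macroscopic cross section (1/cm), assumed value
T_SLAB  = 3.0     # slab thickness (cm), assumed value
N_HIST  = 100_000 # number of histories

hits = 0
for _ in range(N_HIST):
    # Sampling rule: distance to first collision from the exponential pdf
    # f(x) = Sigma_t * exp(-Sigma_t * x), sampled as x = -ln(xi)/Sigma_t.
    x = -math.log(1.0 - random.random()) / SIGMA_T
    if x > T_SLAB:          # scoring: particle escaped without colliding
        hits += 1

# Error estimation for a binomial tally: mean and standard error of the mean.
p_hat = hits / N_HIST
std_err = math.sqrt(p_hat * (1.0 - p_hat) / N_HIST)
print(f"transmission = {p_hat:.4f} +/- {std_err:.4f} "
      f"(exact exp(-Sigma*T) = {math.exp(-SIGMA_T * T_SLAB):.4f})")
```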
Major Shortcoming of the Deterministic Approach: Too Many Simplifications
• Physical simplifications:
– Collision model
– Isotropic scattering in Lab-system
– Continuous slowing-down, etc.
• Mathematical simplifications:
– Diffusion model, boundary conditions, etc.
– Multi-group model
– Discretization, …
• Technological simplifications:
– Geometrical simplifications
– Homogenization, …
Diffusion Model
[Figure: the flux φ(x) near a free surface for a source Q0, comparing the transport-equation and diffusion-equation solutions; the extrapolated boundary lies at about 0.66 λtr (diffusion) to 0.71 λtr (transport) beyond the free surface at x = 0.]
Multi-Group Approximation
[Diagram: the energy range (Emin, Emax) is divided into groups 1, ..., G with boundaries E0, E1, E2, ..., EG-1, EG; group g spans (Eg, Eg-1).]
The multigroup constants are flux-weighted averages over each group:

$$\sigma(\mathbf r,E)\;\to\;\sigma_g(\mathbf r)=\frac{\displaystyle\int_{E_g}^{E_{g-1}}\sigma(\mathbf r,E)\,\phi(\mathbf r,\boldsymbol\Omega,E)\,dE}{\displaystyle\int_{E_g}^{E_{g-1}}\phi(\mathbf r,\boldsymbol\Omega,E)\,dE}$$
Geometrical Simplifications
Discretization
• In space
• In angle
[Figures: discretization of the spatial mesh and of the angular variable; annotation: "It completely falls out".]
Advantages of Monte Carlo
• Very complex 3-D configurations
• Continuous space variables
• Continuous angle variables
• Continuous energy variable
The errors in Monte Carlo calculations take the form of stochastic uncertainties.
The Theoretical Foundation of MC
Let x1, x2, ..., xN be iid (independent, identically distributed) random variables with

$$\bar x_N=\frac{1}{N}\sum_{i=1}^{N}x_i,\qquad E[x_i]=\mu,\qquad D[x_i]=\sigma .$$

By the central limit theorem, the sample mean becomes normally distributed about μ as N grows:

$$P\{\bar x_N\le z\}\;\xrightarrow[N\to\infty]{}\;\Phi(z)=\frac{1}{\sqrt{2\pi}\,\sigma_N}\int_{-\infty}^{z}e^{-(z'-\mu)^2/2\sigma_N^2}\,dz',\qquad \sigma_N=D[\bar x_N]=\frac{\sigma}{\sqrt N}.$$
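The 1/√N shrinkage of σN can be checked numerically; the following short Python sketch (illustrative only, using uniform(0,1) variables, for which σ = 1/√12) compares the observed spread of the sample mean with σ/√N:

```python
import math, random

def sample_mean(n):
    # Mean of n iid uniform(0,1) samples.
    return sum(random.random() for _ in range(n)) / n

for n in (10, 100, 1000, 10000):
    means = [sample_mean(n) for _ in range(200)]
    m = sum(means) / len(means)
    s = math.sqrt(sum((x - m) ** 2 for x in means) / (len(means) - 1))
    # sigma of uniform(0,1) is 1/sqrt(12) ~ 0.2887, so expect s ~ 0.2887/sqrt(n)
    print(f"N={n:6d}  spread of mean = {s:.4f}   sigma/sqrt(N) = {1/math.sqrt(12*n):.4f}")
```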
Elements of Random Variables
Let x be a random variable.
P{a ≤ x ≤ b} = probability that x will have values in [a, b]

pdf: $P\{x\le x'\le x+\Delta x\}\to f(x)\,\Delta x$ as $\Delta x\to 0$

$$P\{a\le x\le b\}=\int_a^b f(x)\,dx;\qquad \int_{-\infty}^{\infty}f(x)\,dx=1$$

cdf: $$F(x)=P\{x'\le x\}=\int_{-\infty}^{x}f(x')\,dx',\qquad f(x)=\frac{dF(x)}{dx}$$

Change of variables: if y = y(x), then

$$P\{y\le y'\le y+dy\}=g(y)\,dy=f(x)\,dx\;\Rightarrow\;g(y)=f(x)\left|\frac{dx}{dy}\right|$$
Distribution Sampling
Set y equal to the cdf, y = P(x), so that 0 ≤ y ≤ 1 and g(y) = f(x)|dx/dy| = 1 (uniform).
Hence, with ξ a pseudo-random number uniform on (0,1):

$$\xi=P(x)\;\Rightarrow\;x=P^{-1}(\xi)$$

We want to generate a series of random numbers that are distributed according to the number of mean free paths:

$$f(x)=e^{-x}\;\Rightarrow\;F(x)=\int_0^x e^{-x'}\,dx'=1-e^{-x}$$
$$\xi=1-e^{-x}\;\Rightarrow\;x=-\ln(1-\xi)=-\ln(\xi')$$
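A minimal Python sketch of this inverse-cdf sampling of the exponential free-path distribution (illustrative, not from the lecture):

```python
import math, random

# Sample the number of mean free paths x = -ln(xi) and check two properties of
# f(x) = exp(-x): its mean is 1 and P{x > 1} = exp(-1).
N = 100_000
samples = [-math.log(1.0 - random.random()) for _ in range(N)]  # 1-xi avoids log(0)

mean = sum(samples) / N
print(f"sample mean = {mean:.4f} (expected 1.0)")
print(f"fraction with x > 1 = {sum(s > 1.0 for s in samples)/N:.4f} "
      f"(expected e^-1 = {math.exp(-1):.4f})")
```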
Rejection Technique
Let f(x) be defined on 0 ≤ x ≤ a with 0 ≤ f(x) ≤ fmax.
1. Sample x = ξ1 a, scaling f so that f(x)/fmax ≤ 1.
2. Sample a second random number ξ2 and accept x if ξ2 ≤ f(ξ1 a)/fmax; otherwise reject and repeat.
The number of accepted points in (x, x+Δx) is proportional to f(x)Δx.
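A minimal Python sketch of the rejection technique, using as an assumed example f(x) ∝ √(1-x²) on [0, 1] with a = 1 and fmax = 1:

```python
import math, random

def sample_semicircle():
    # Accepted x values are distributed as f(x)/∫f, with f(x) = sqrt(1 - x^2).
    while True:
        x = random.random()                              # xi1 * a, with a = 1
        if random.random() <= math.sqrt(1.0 - x * x):    # xi2 <= f(x)/fmax
            return x

N = 50_000
xs = [sample_semicircle() for _ in range(N)]
# The mean of this distribution is (1/3)/(pi/4) = 4/(3*pi) ~ 0.4244.
print(f"sample mean = {sum(xs)/N:.4f}  expected = {4/(3*math.pi):.4f}")
```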
Example of Simple Simulation
[Figure: a particle track with successive flight segments x1, x2, x3, ..., xN.]
1) Generate a random number ξ to pick a direction cosine: μ = 2ξ - 1.
2) Generate a random number ξ to pick the flight distance; because f(x) = Σt e^(-Σt x),

$$x=-\frac{1}{\Sigma_t}\ln\xi$$

3) Calculate the average $\bar x=\frac{1}{N}\sum_{i=1}^{N}x_i$.
4) Generate a random number ξ to choose the type of interaction: the partial cross sections divided by the total cross section (Σa/Σt, Σf/Σt, Σs/Σt, ..., Σn/Σt) partition the unit interval (0, 1), and the segment containing ξ selects the interaction.
Principles of a Nuclear Reactor
[Figure: the neutron life cycle over energy. Fission releases ν ≈ 2.5 neutrons per fission at about 2 MeV and about 200 MeV of energy per fission. As the N1 fission neutrons slow down toward 1 eV, they are lost to leakage, fast fission, resonance absorption, non-fissile absorption and non-fuel absorption, leaving N2 neutrons to cause new fissions; the multiplication factor is k = N2/N1.]

The thermal (Maxwellian) spectrum:

$$N(E)\,dE\propto E^{1/2}\,e^{-E/k_BT}\,dE$$
Flux/Current Estimators
Collision estimator: the number of collisions in a cell of volume V is c = Σt V φ, so the average flux is

$$\bar\phi=\frac{1}{V\,\Sigma_t}\;\frac{1}{N}\sum_{i=1}^{N}c_i$$

Track-length estimator: since φ(r, E) = v n(r, E), summing the track lengths l_i gives

$$\bar\phi=\frac{1}{V\,N}\sum_{i=1}^{N}l_i$$

Surface current: with n+ and n- the numbers of crossings in each direction through a surface of area A,

$$J^{+}=\frac{1}{AN}\sum_{i=1}^{N}n_i^{+},\qquad J^{-}=\frac{1}{AN}\sum_{i=1}^{N}n_i^{-},\qquad J=J^{+}-J^{-}$$

$$\mathbf J(\mathbf r)=\int\boldsymbol\Omega\,\phi(\mathbf r,\boldsymbol\Omega)\,d\Omega,\qquad J_n=\mathbf J(\mathbf r)\cdot\mathbf n=\int(\boldsymbol\Omega\cdot\mathbf n)\,\phi(\mathbf r,\boldsymbol\Omega)\,d\Omega$$

Surface flux estimator (μm = Ω·n for each of the M crossings):

$$\bar\phi=\frac{1}{AN}\sum_{i=1}^{N}\sum_{m=1}^{M}\frac{1}{|\mu_m|}$$
The Essence of Monte Carlo
[Diagram: Random numbers + Probability distributions → Monte Carlo → RESULTS]
The Essence of Monte Carlo
• Sampling
• Scoring (tallying)
• Error estimation
• Variance reduction techniques
• Vectorization and parallelization
Simple Monte Carlo Example

Evaluate $G=\int_0^1 g(x)\,dx$ with $g(x)=\sqrt{1-x^2}$.

Mathematical approach:
For k = 1, ..., N: choose $\hat x_k$ randomly in (0,1).

$$G=(1-0)\cdot[\text{average value of }g(x)]\approx\frac{1}{N}\sum_{k=1}^{N}g(\hat x_k)=\frac{1}{N}\sum_{k=1}^{N}\sqrt{1-\hat x_k^{\,2}}$$

Simulation approach ("darts game"):
For k = 1, ..., N: choose $\hat x_k$ and $\hat y_k$ randomly in (0,1).
If $\hat x_k^{\,2}+\hat y_k^{\,2}\le 1$, tally (register) a "hit".

$$G=[\text{area under the curve}]\approx(1\cdot1)\,\frac{\text{number of hits}}{N},\qquad N=\text{total number of shots}$$
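Both estimates are easy to reproduce; the following Python sketch (illustrative only) computes G ≈ π/4 with the mean-value and the darts approaches:

```python
import math, random

N = 100_000

# Mathematical approach: average value of g(x) = sqrt(1 - x^2) over uniform x in (0,1).
g_mean = sum(math.sqrt(1.0 - random.random() ** 2) for _ in range(N)) / N

# Simulation approach ("darts"): fraction of random points (x, y) under the curve.
hits = sum(1 for _ in range(N)
           if random.random() ** 2 + random.random() ** 2 <= 1.0)
g_darts = hits / N

print(f"mean-value estimate = {g_mean:.4f}, darts estimate = {g_darts:.4f}, "
      f"exact = {math.pi/4:.4f}")
```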
M-C as integration tool
Monte Carlo is often the method of choice for applications with integration over many dimensions.
Examples: high-energy physics, particle transport (ADS)

Evaluate

$$G=\int_{a_1}^{b_1}\int_{a_2}^{b_2}\cdots\int_{a_M}^{b_M}g(r_1,r_2,\dots,r_M)\,dr_1dr_2\cdots dr_M$$

where $r_1,r_2,\dots,r_M$ are all independent variables.

For k = 1, ..., N:
  For m = 1, ..., M: choose $R_m^{(k)}$ randomly in $(a_m,b_m)$.

$$G\approx(b_1-a_1)\cdots(b_M-a_M)\,\frac{1}{N}\sum_{k=1}^{N}g\!\left(R_1^{(k)},R_2^{(k)},\dots,R_M^{(k)}\right)$$
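A short Python sketch of this multidimensional estimator, applied to an assumed toy integrand g(r) = r1 r2 ... r5 over the unit hypercube (exact value (1/2)^5):

```python
import random

M, N = 5, 200_000
a = [0.0] * M            # lower integration limits (assumed)
b = [1.0] * M            # upper integration limits (assumed)

# Hypervolume factor (b1-a1)...(bM-aM).
volume = 1.0
for m in range(M):
    volume *= (b[m] - a[m])

total = 0.0
for _ in range(N):
    r = [a[m] + (b[m] - a[m]) * random.random() for m in range(M)]
    g = 1.0
    for x in r:          # toy integrand g(r) = product of the coordinates
        g *= x
    total += g

print(f"estimate = {volume * total / N:.5f}  exact = {0.5**M:.5f}")
```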
Probability Density Functions
Continuous probability density f(x):

$$0\le f(x),\qquad \text{Probability}\{a\le x\le b\}=\int_a^b f(x)\,dx,\qquad \text{Normalization: }\int_{-\infty}^{\infty}f(x)\,dx=1$$

Discrete probability density $f_k$, k = 1, ..., N, where $f_k=f(x_k)$:

$$0\le f_k,\qquad \text{Probability}\{x=x_k\}=f_k,\qquad \text{Normalization: }\sum_{k=1}^{N}f_k=1$$
Basic parameters
Mean, average, expected value (1st statistical moment):

$$\bar x=\langle x\rangle=E[x]=\mu=\int_{-\infty}^{\infty}x\,f(x)\,dx\ \text{(continuous)},\qquad \mu=\sum_{k=1}^{N}x_k f_k\ \text{(discrete)}$$

Variance (2nd statistical moment) and standard deviation:

$$\operatorname{var}(x)=\langle(x-\mu)^2\rangle=\sigma^2=E\big[(x-\mu)^2\big]=\int_{-\infty}^{\infty}(x-\mu)^2 f(x)\,dx=\sum_{k=1}^{N}(x_k-\mu)^2 f_k$$

$$\text{standard deviation: }\sigma=\sqrt{\sigma^2}$$
Basic functions
Function of a random variable: consider g(x), where x is a random variable with density f(x).

$$E[g(x)]=\int_{-\infty}^{\infty}g(x)f(x)\,dx,\qquad E[g(x)]=\sum_{k=1}^{N}g_k f_k$$
The key of MC methods is the notion of RANDOM sampling
The problem can be stated this way:
Given a probability density f(x), produce a sequence of x̂'s. The x̂'s should be distributed in the same manner as f(x).
The use of random sampling distinguishes Monte Carlo from all other methods.
When Monte Carlo is used to solve the integral Boltzmann transport equation:
• Random sampling models the outcome of physical events (e.g., neutron collisions, fission process, source, ...)
• Computational geometry models the arrangement of materials
Transport Equation

$$\psi(\mathbf r,\mathbf v)=\int\!\left[\int\psi(\mathbf r',\mathbf v')\,C(\mathbf v'\!\to\mathbf v,\mathbf r')\,d\mathbf v'+Q(\mathbf r',\mathbf v)\right]T(\mathbf r'\!\to\mathbf r,\mathbf v)\,d\mathbf r'$$

Assumptions
• static, homogeneous medium
• time-independent
• Markovian - the next event depends only on the current (r, v), not on previous events
• particles do not interact with each other
• neglect relativistic effects
• no long-range forces (particles fly in straight lines between events)
• material properties are not affected by particle reactions
• etc., etc.
⟹ can use the superposition principle
Basis for Monte Carlo Solution Method
Let $p=(\mathbf r,\mathbf v)$ and $R(p'\to p)=C(\mathbf v'\to\mathbf v,\mathbf r')\,T(\mathbf r'\to\mathbf r,\mathbf v)$.
Expand ψ into components having 0, 1, 2, ..., k collisions:

$$\psi(p)=\sum_{k=0}^{\infty}\psi_k(p),\qquad \psi_0(p)=\int Q(\mathbf r',\mathbf v)\,T(\mathbf r'\to\mathbf r,\mathbf v)\,d\mathbf r'\ \text{(by definition)}$$

$$\psi_k(p)=\int\psi_{k-1}(p')\,R(p'\to p)\,dp'$$

Note that collision k depends only on the results of collision k-1, and not on any prior collisions k-2, k-3, ...
Statistical approach to determining ψk

$$\psi_k(p)=\int\psi_{k-1}(p')\,R(p'\to p)\,dp'$$

Interpret the terms in the following manner:
• $\psi_{k-1}(p')$ = probability density for the occurrence of the (k-1)st collision at p'
• $R(p'\to p)$ = conditional probability that a (k-1)st collision at p' will result in a (k)th collision at p
Monte Carlo method:
1. Randomly sample p' from ψk-1(p')
2. Given p', randomly sample p from R(p' → p)
3. If p lies within dpi at pi, tally 1 in bin i
Repeat steps 1, 2, 3 N times; then

$$\hat\psi_k(p_i)\,dp_i\approx\text{counts per bin }i\,/\,N$$
Histories (trajectories)
After repeated substitution for ψk-1:

$$\psi_k(p)=\int\psi_{k-1}(p')R(p'\to p)\,dp'=\int\!\cdots\!\int\psi_0(p_0)R(p_0\to p_1)R(p_1\to p_2)\cdots R(p_{k-1}\to p)\,dp_0dp_1\cdots dp_{k-1}$$

A "history" (trajectory) is a sequence of states (p0, p1, p2, p3, ...).
[Figure: two example histories, each a chain of states p0 → p1 → p2 → p3 → p4.]
For estimates in a given region, tally the occurrences for each collision of each "history" within the region.
Transport Equation

$$\psi_k(p)=\int\!\cdots\!\int\psi_0(p_0)R(p_0\to p_1)R(p_1\to p_2)\cdots R(p_{k-1}\to p)\,dp_0dp_1\cdots dp_{k-1}$$

Monte Carlo approach:
Generate a sequence of states (p0, p1, ..., pk), i.e. a history, by:
• Randomly sampling from the PDF for the source: ψ0(p0)
• Randomly sampling from the PDF for the kth transition: R(pk-1 → pk)
Generate estimates of results by averaging over M histories:

$$\langle A\rangle=\int A(p)\,\psi(p)\,dp\approx\frac{1}{M}\sum_{m=1}^{M}\sum_{k}A(p_{k,m})$$
Simulations
Simulation approach to particle transport: faithfully simulate the history of a single particle from birth to death.
During the particle history:
• model collisions using physics equations & cross-section data
• model free flight using computational geometry
• tally the occurrences of events in each region
[Flowchart: Select source r, Ω, E randomly → Track through geometry, select the collision site randomly → Apply collision physics analysis, select new Ω, E randomly → repeat; repeat the simulation for many histories, accumulating the tallies.]
Fundamental rule: think like a neutron (or other projectile, e.g. a proton).
MC rules for simulations
Source
• Random sampling: E, Ω - analytic, discrete, or piecewise-tabulated PDF's
• Computational geometry: r - sample from a region in 3-D space, or from a discrete PDF
Tracking
• Random sampling: dcollide - distance to collision, from the mfp or exponential PDF
• Computational geometry: dgeom - distance to boundary, ray tracing, next region, ...
Collisions
• Random sampling: E', Ω' - analytic, discrete, or piecewise-tabulated PDF's
• Physics: σ, f(μ) - cross-section data, angular PDF's, kinematics, ...
Tallies
• Statistics
Variance Reduction
• Random sampling
MC rules for simulations
Single particle
• random walk for the particle history
• simulate events, from birth to death
• tally events of interest
Batch of histories ("generation")
• random walk for many particle histories
• tally the aggregate behavior
Overall: timesteps
• geometry changes
• material changes
• fuel depletion
• burnable absorbers
• control rods
[Flowchart repeated: Select source r, Ω, E randomly → Track through geometry, select the collision site randomly → Apply collision physics analysis, select new Ω, E randomly.]
Random number generators
• Truly random - defined as exhibiting "true" randomness, such as the time between "tics" from a Geiger counter exposed to a radioactive element.
• Pseudorandom - defined as having the appearance of randomness, but nevertheless exhibiting a specific, repeatable pattern.
• Quasi-random - defined as filling the solution space sequentially (in fact, these sequences are not at all random - they are just comprehensive at a preset level of granularity). For example, consider the integer space [0, 100]. One quasi-random sequence which fills that space is 0, 1, 2, ..., 99, 100. Another is 100, 99, 98, ..., 2, 1, 0. Yet a third is 23, 24, 25, ..., 99, 100, 0, 1, ..., 21, 22. Pseudorandom sequences which would fill the space are pseudorandom permutations of this set (they contain the same numbers, but in a different, "random" order).
Random number generators
Desirable properties
Random numbers are used to determine:
(1) attributes (such as outgoing direction, energy, etc.) for launched particles,
(2) interactions of particles with the medium.
Physically, the following properties are desirable:
• The attributes of particles should not be correlated. The attributes of each particle should be independent of those of any other particle.
• The attributes of particles should be able to fill the entire attribute space in a manner which is consistent with the physics. E.g., if we are launching particles into a hemispherical space above a surface, then we should be able to approach completely filling the hemisphere with outgoing directions as we approach an infinite number of particles launched. At the very least, "holes" or sparseness in the outgoing directions should not affect the answers significantly. Also, if we are sampling from an energy distribution, then with an increasing number of particles we should be able to duplicate the energy distribution better and better, until our simulated distribution is "good enough."
Random number generators
Desirable properties
Mathematically, the sequence of random numbers used to effect a Monte Carlo model should possess the following properties:
• Uncorrelated sequences - The sequences of random numbers should be serially uncorrelated. Any subsequence of random numbers should not be correlated with any other subsequence of random numbers. Most especially, n-tuples of random numbers should be independent of one another. For example, if we are using the random number generator to generate outgoing directions so as to fill the hemispherical space above a point (or area), we should generate no unacceptable geometrical patterns in the distribution of outgoing directions.
• Long period - The generator should be of long period (ideally, the generator should not repeat; practically, the repetition should occur only after the generation of a very large set of random numbers).
• Uniformity - The sequence of random numbers should be uniform and unbiased. That is, equal fractions of random numbers should fall into equal "areas" in space. For example, if random numbers on [0,1) are to be generated, it would be poor practice were more than half to fall into [0, 0.1), presuming the sample size is sufficiently large. Often, when there is a lack of uniformity, there are n-tuples of random numbers which are correlated. In this case, the space might be filled in a definite, easily observable pattern. Thus, the properties of uniformity and uncorrelated sequences are loosely related.
• Efficiency - The generator should be efficient. In particular, a generator used on vector machines should be vectorizable, with low overhead. On massively parallel architectures, the processors should not have to communicate among themselves, except perhaps during initialization. This is not generally a significant issue. With minimal effort, random number generators can be implemented in a high level language such as C or FORTRAN, and be observed to consume well less than 1% of overall CPU time over a large suite of applications.
Multiplicative, linear, congruential generators - the ones most commonly used for generating random integers:

$$\xi_i=(A\,\xi_{i-1}+B)\ \mathrm{mod}\ M\qquad\text{or}\qquad \xi_i=\mathrm{mod}(A\,\xi_{i-1}+B,\ M),\qquad \xi_0=\text{a "seed"}$$

In practice:

$$\xi_i=\mathrm{mod}(5^{19}\,\xi_{i-1},\ 2^{48})$$

i.e. the multiplier is 5^19 and the modulus is M = 2^48.

From integer to real:

$$\eta_i=\frac{\xi_i}{\mathrm{float}(M)}$$
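A Python sketch of such a generator, using the multiplier and modulus quoted above (shown for illustration only; treat the parameters as an example, not a recommendation for production use):

```python
# Multiplicative linear congruential generator: xi_i = mod(A * xi_{i-1}, M).
MULTIPLIER = 5 ** 19      # A
MODULUS = 2 ** 48         # M

def lcg(seed, n):
    """Yield n pseudo-random floats in [0, 1) from the congruential recurrence."""
    xi = seed
    for _ in range(n):
        xi = (MULTIPLIER * xi) % MODULUS     # next integer in the sequence
        yield xi / MODULUS                   # map the integer to a real in [0, 1)

if __name__ == "__main__":
    for r in lcg(seed=12345, n=5):           # odd seed keeps the full period
        print(f"{r:.12f}")
```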
The Essence of Monte Carlo
[Diagram repeated: Random numbers + Probability distributions → Monte Carlo → RESULTS]
[Figure: the free-flight pdf f(x) = Σ e^(-Σx), with mean free path 1/Σ.]
THE HEART OF THE MONTE CARLO: sampling of the mean free path
Probability of a collision in (x, x + dx):

$$p(x)\,dx=\Sigma\,e^{-\Sigma x}\,dx$$

Cumulative probability of a collision within a distance z:

$$\xi=\int_0^z\Sigma\,e^{-\Sigma x}\,dx=\left[-e^{-\Sigma x}\right]_0^z=1-e^{-\Sigma z}$$

thus

$$z=-\frac{1}{\Sigma}\ln(1-\xi)=-\frac{1}{\Sigma}\ln\xi'$$

where ξ, ξ' are random numbers uniform on (0, 1).
A particle is described by its weight and phase-space coordinates, w(x, y, z, vx, vy, vz, t).
ξ1 → choice of the path length:

$$l=-\frac{1}{\Sigma_t}\ln\xi_1$$

After the flight the particle emerges at (x1, y1, z1, v1x, v1y, v1z, t1) with weight reduced by implicit capture:

$$w_1=w\left(1-\frac{\Sigma_a}{\Sigma_T}\right)$$

ξ2 → choice of interaction (capture Σc, fission Σf, elastic scattering Σs,el, inelastic scattering Σs,in)
ξ3 → choice of angle
Cross sections are probability distribution functions!
ξi = random numbers
DATA LIBRARIES ARE OF TWO FORMS
• Pointwise
  - Cross sections are specified at a sufficient number of energy points to faithfully reproduce the evaluated cross section
  - Used with: continuous-energy Monte Carlo codes
• Multigroup
  - All data are collapsed and averaged into energy groups
  - Used with: diffusion codes, discrete ordinates codes, multigroup Monte Carlo codes
CROSS-SECTION LIBRARIES
• Pointwise
– User does not have to worry about:
• Group structure
• Flux spectrum
• Self-shielding
– Small amount of work in processing code
– Large amount of work in transport code
• Multigroup
– User needs to worry about everything
– Large amount of work in processing code
– Small amount of work in transport code
238Np - fission cross section
[Figure: σf (barn) versus E (eV), from about 10^-3 eV to 10^7 eV.]
235U - fission cross section - pointwise JEF2
[Figure: σf (barn) versus E (eV) from about 10^-4 eV to 10^7 eV, with an inset showing the resonance region between roughly 10^1 and 10^3 eV.]
235U - fission and capture cross sections
[Figure: JEF2 multigroup and pointwise σ (barn) versus E (eV), from about 10^-5 eV to 10^7 eV.]
235U - fission cross sections in the resonance region
[Figure: pointwise and multigroup JEF2 data, σ (barn) versus E (eV) between about 10^1 and 10^3 eV.]
Neutron cross section data contain
• Cross sections for reactions:
  - elastic scattering, (n,2n), (n,3n), (n,n'p)
  - inelastic scattering (n,n') continuum, (n,γ), (n,p) and (n,α)
  - total, absorption, proton production and α production
• Angular distributions for different reactions (45 in MCNP)
• Energy distributions for some reactions (4)
• Heating numbers and Q-values
• Photon production cross sections, angular distributions for (n,γ), (n,n'), (n,2n) and (n,3n)
Classes of cross-section data (MCNP)
C - Continuous-energy neutron
D - Discrete reaction neutron
Y - Neutron dosimetry
T - Neutron S(α,β) thermal
P - Continuous-energy photon
M - Multigroup neutron
G - Multigroup photon
E - Continuous-energy electron
Major Neutron Physics Approximations
• (n,xn) sampled independently
• (n,f),(n,xn) happen instantly
• unresolved resonances treated as average cross
section
• no delayed gamma production
• no charged particle production
Experimental Data; Theoretical Codes
[Flowchart: ENDF/B, T-2 and LLNL evaluations are processed (NJOY/TRANSX processing codes at T-2; CLYDE/MCPOINT processing codes at LLNL) into multigroup or continuous-energy files; the data are run through checking codes, processing codes and data analysis, tested in transport codes and included in the library, with feedback returned to evaluators, experimentalists and processors.]
Approximations in generating data
• Evaluation Assumptions:
– choice of experiments
– choice of representations, i.e., discrete lines for continuous
distributions
– interpolation (model codes)
– bin sizes
– thresholds, Q-values, particularly for elements
• Processing Assumptions:
– representation of angular distributions as equiprobable bins
– resonance parameter treatment
Things to consider when choosing neutron cross-section data
• Differences in evaluators' philosophy
• Neutron energy spectrum
• Temperature at which the set was processed
• Availability of photon-production data
• Sensitivity of results to different evaluations
• Use the best data that you can afford
THERMAL EFFECTS
• Target motion (gas, liquid, solid) --> the incident neutron "sees" targets with various velocities, i.e., many different relative velocities
  - σ = f(V_relative), so the average cross section is changed
  - the kinematics are different
• neutron upscattering
• Doppler broadening
THERMAL EFFECTS (cont.)
• Lattice structure (solid):
  - At high energy, the wavelength of the neutron << lattice spacing
  - At low energy, the wavelength of the neutron ≈ lattice spacing
• Cross sections show very jagged behaviour; each peak corresponds to a particular set of crystal planes
• Coherent scattering (interference of scattered waves) adds constructively in some directions and destructively in others
• ==> the angular distribution is changed
• Bragg scattering
THERMAL EFFECTS (cont.)
• Molecular energy levels (liquid, solid): vibrational and rotational levels with ~0.1 eV spacing below a few eV
• The neutron loses or gains energy in discrete amounts
• ==> modifies the DOUBLE-DIFFERENTIAL thermal inelastic scattering cross section σ(E → E', Ω → Ω')
Deterministic
• solves the transport equation (integro-differential) for the average particle behaviour
• gives fairly complete information (e.g. flux) throughout the phase space of the problem

Monte-Carlo
• simulates individual particles and records some aspects of their behaviour
• supplies information only about the specific "sampled" distribution (the "tally")
• no transport equation needs to be written to solve a transport problem by Monte Carlo - but the equation for the probability density of particles is the integral transport equation
Deterministic
• the discrete ordinates method visualises the phase space as divided into many small boxes, with the particles moving from one box to another. As the boxes get progressively smaller, particles take a differential amount of time to move a differential distance in space ----> this approaches the integro-differential transport equation.

Monte-Carlo
• Monte-Carlo transports the particle between events (e.g. collisions) that are separated in space and time. Neither differential space nor time are inherent parameters of Monte-Carlo. The integral equation does not have time or space derivatives.
[Figure: simulation results scattered in phase space about the sample mean Xn, with the TRUE VALUE offset from Xn.]
Accuracy: a measure of how close the mean Xn is to the TRUE PHYSICAL quantity (TRUE VALUE) being estimated.
• Sometimes called "the systematic error"
• Monte Carlo cannot directly estimate this
• Factors related to accuracy:
  - Code accuracy (physics models etc.)
  - Problem modelling (geometry, source, etc.)
  - User errors/abuse
[Figure: the same picture; precision refers to the spread of the simulation results about Xn.]
Precision: the uncertainty in Xn caused by the statistical fluctuations in the sampled xi's.
• Monte-Carlo can estimate this:

$$R\equiv\frac{S_{\bar X_N}}{\bar X_N}=\left[\frac{1}{N}\left(\frac{\overline{x^2}}{\bar X_N^{\,2}}-1\right)\right]^{1/2}$$
PRINCIPLES
• Central Limit Theorem
The distribution of the (normalized) mean of n independent, identically distributed random variables tends to a normal distribution:

$$P\left\{a\le\frac{\bar X_n-E(x)}{\sigma/\sqrt n}\le b\right\}\;\longrightarrow\;\frac{1}{\sqrt{2\pi}}\int_a^b e^{-t^2/2}\,dt$$

Conditions: the mean and variance must exist (i.e. be finite).
PRINCIPLES
• Confidence Intervals

$$P\{\bar X_n-1\sigma\le E(x)\le\bar X_n+1\sigma\}\approx 0.68$$
$$P\{\bar X_n-2\sigma\le E(x)\le\bar X_n+2\sigma\}\approx 0.95$$

• Probability Distributions
Discrete probability density function: n discrete events with probabilities p1, p2, ..., pn (pi = probability per event).
PRINCIPLES
• Probability Distributions
Continuous probability density function: p(x) = probability per unit of x

Normalisation: $$\int_{-\infty}^{\infty}p(x)\,dx=1$$

Cumulative probability function: $$P(x)=\int_{-\infty}^{x}p(t)\,dt$$
DEFINITIONS

$$\text{true mean: }E(x)=\int x\,f(x)\,dx$$

$$\text{estimated mean: }\bar x=\frac{1}{N}\sum_{i=1}^{N}x_i$$

$$\text{variance of the population: }\sigma^2=\int\big(x-E(x)\big)^2 f(x)\,dx=E(x^2)-\big(E(x)\big)^2$$

standard deviation of the population = σ
DEFINITIONS
Estimated variance of the population:

$$S^2=\frac{1}{N-1}\sum_{i=1}^{N}\left(x_i-\bar x\right)^2=\frac{N}{N-1}\left(\overline{x^2}-\bar x^{\,2}\right)\approx\overline{x^2}-\bar x^{\,2},\qquad \overline{x^2}=\frac{1}{N}\sum_{i=1}^{N}x_i^2$$
DEFINITIONS
Estimated variance of the mean:

$$S_{\bar x}^2=\frac{S^2}{N}$$

Estimated relative error:

$$R\equiv\frac{S_{\bar x}}{\bar x}$$

$$R^2=\frac{S_{\bar x}^2}{\bar x^{\,2}}=\frac{1}{\bar x^{\,2}}\,\frac{1}{N}\left[\frac{1}{N-1}\left(\sum_{i=1}^{N}x_i^2-N\bar x^{\,2}\right)\right]\;\approx\;\frac{\sum_{i=1}^{N}x_i^2}{\Big(\sum_{i=1}^{N}x_i\Big)^{2}}-\frac{1}{N}$$
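These estimators translate directly into code; here is a small Python sketch (the history scores are made up, for illustration only):

```python
import math, random

def tally_statistics(scores):
    """Estimated mean, estimated population variance and relative error of the mean."""
    n = len(scores)
    mean = sum(scores) / n
    sum_sq = sum(x * x for x in scores)
    s2 = (sum_sq - n * mean * mean) / (n - 1)              # S^2
    r = math.sqrt(sum_sq / (sum(scores) ** 2) - 1.0 / n)   # R ~ sqrt(sum x^2/(sum x)^2 - 1/N)
    return mean, s2, r

# Example with fake, exponentially distributed history scores.
scores = [-math.log(1.0 - random.random()) for _ in range(10_000)]
mean, s2, r = tally_statistics(scores)
print(f"mean = {mean:.4f}, S^2 = {s2:.4f}, relative error R = {r:.4f}")
```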
Reminder:
f(x) = probability density function (PDF) = history score distribution

If $\int_{-\infty}^{\infty}f(x)\,dx=1$, then

mean (1st moment): $$\mu=\bar x=\int_{-\infty}^{\infty}x\,f(x)\,dx$$

variance (2nd central moment): $$\sigma^2=\int_{-\infty}^{\infty}(x-\bar x)^2 f(x)\,dx$$

vov (variance of the variance), based on the 4th central moment: $$\int_{-\infty}^{\infty}(x-\bar x)^4 f(x)\,dx$$
Key of Symbols in Monte Carlo
W - particle weight; Ws - source weight
E - particle energy (MeV); ED - energy deposited (MeV)
μ - cosine of the angle between the surface normal and the trajectory
A - surface area (cm2); V - cell volume (cm3)
l - track length (cm)
p(μ) - probability density function; μ - cosine of the angle between the particle trajectory and the detector
λs - total mean free path to the detector
R - distance to the detector
σT(E) - microscopic total cross section (barns); σf(E) - fission cross section
H(E) - heating number (MeV/collision)
Q - fission heating Q-value (MeV)
ρa - atom density (atoms/barn-cm); m - cell mass (g)
ESTIMATORS
• Definitions:
  - Particle speed: v (cm/s)
  - Particle density: n (particles/cm3), a function of position, energy, angle, and time
  - Flux (particles/cm2·s): $$\phi(\mathbf r,E,\boldsymbol\Omega,t)=v\,n(\mathbf r,E,\boldsymbol\Omega,t)$$
  - Fluence (particles/cm2): $$\Phi=\int\phi(\mathbf r,E,\boldsymbol\Omega,t)\,dt=\int v\,n(\mathbf r,E,\boldsymbol\Omega,t)\,dt$$
  - Cell fluence estimator (track length): $$\Phi=\frac{W}{V}\,v\,\Delta t=\frac{W\,l}{V}$$
• Total heating: $$F_{TH}=\frac{\rho_a}{\rho_g}\int\Phi\,\sigma_T\,H\,dE$$
• Fission heating: $$F_{fH}=\frac{\rho_a}{\rho_g}\int\Phi\,\sigma_f\,Q\,dE$$
• keff: $$k_{eff}=V\rho_a\int\Phi\,\nu\sigma_f\,dE$$
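A toy Python sketch of the track-length cell tally Φ = Σ W·l / V (the track segments and cell volume below are invented for illustration):

```python
def track_length_fluence(tracks, volume):
    """tracks: iterable of (weight, length_cm) pairs for segments inside the cell."""
    return sum(w * l for w, l in tracks) / volume

# Example with made-up track segments in a 10 cm^3 cell, per source particle.
tracks = [(1.0, 2.5), (0.8, 1.2), (1.0, 0.4)]
print(f"fluence contribution = {track_length_fluence(tracks, volume=10.0):.4f}")
```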
DEFINITIONS
Precision: the uncertainty in XN caused by the statistical fluctuations in the sampled xi's.
• Monte Carlo codes can estimate this (relative error):

$$R\equiv\frac{S_{\bar X_N}}{\bar X_N}=\left[\frac{1}{N}\left(\frac{\overline{x^2}}{\bar X_N^{\,2}}-1\right)\right]^{1/2}$$

User-controlled factors related to precision:
• Forward versus adjoint calculation
• Estimator type (tally)
• Variance reduction techniques
• Number of histories run
DEFINITIONS
Efficiency: a measure of how quickly the desired precision is achieved.
• Monte Carlo codes can estimate this (Figure of Merit - FOM):

$$FOM=\frac{1}{R^2\,T},\qquad R^2\propto\frac{1}{N},\qquad T=\text{calculation time}\propto N$$

The FOM should be constant with N; a larger FOM means a more efficient calculation.
• Factors affecting efficiency:
  - History-scoring efficiency
  - Dispersions in nonzero history scores
  - Computer time per history
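The constancy of the FOM with N can be checked with a toy tally; the following Python sketch (illustrative scores, not a transport calculation) prints R, T and FOM for increasing N:

```python
import time, math, random

def run_tally(n):
    # Toy tally of exponentially distributed history scores.
    t0 = time.perf_counter()
    total, total_sq = 0.0, 0.0
    for _ in range(n):
        x = -math.log(1.0 - random.random())
        total += x
        total_sq += x * x
    elapsed = time.perf_counter() - t0
    rel_err = math.sqrt(total_sq / total ** 2 - 1.0 / n)   # relative error R
    return rel_err, elapsed

for n in (10_000, 40_000, 160_000):
    r, t = run_tally(n)
    print(f"N={n:7d}  R={r:.4f}  T={t:.3f}s  FOM={1.0/(r*r*t):,.0f}")
```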
VARIANCE REDUCTION MOTIVATION
Let q be the fraction of histories with a nonzero score xi. Splitting the squared relative error into the contribution of the nonzero scores and of the zero scores gives

$$R^2=\underbrace{\left[\frac{\sum_{x_i\ne0}x_i^2}{\Big(\sum_{x_i\ne0}x_i\Big)^{2}}-\frac{1}{qN}\right]}_{R^2_{non}}+\underbrace{\frac{1-q}{qN}}_{R^2_{eff}}$$

R²eff can be a significant part of the relative error.
Variance reduction techniques should strive to equalize the xi scores as well as reduce the fraction of zero scores.
VARIANCE REDUCTION TECHNIQUES
Technique                               Type
Time and energy cutoffs                 T
Weight window generator                 P
Energy splitting and Russian roulette   P
Forced collisions                       M
Exponential transform                   D
TYPES OF VARIANCE REDUCTION
(T) Truncation methods - truncate parts of phase space that do not contribute significantly to the solution.
(P) Population control methods - use particle splitting and Russian roulette to control the number of samples taken in various regions of phase space.
(M) Modified sampling methods - alter the statistical sampling of a problem to increase the "amount of information" squeezed from each trajectory. Sampling is done from distributions that send particles in desired directions or into other desired regions of phase space such as time or energy, or that change the location or type of collision.
(D) Partially deterministic methods - circumvent part of the normal random walk process (e.g. the next-event estimator).
VARIANCE REDUCTION TECHNIQUE
Implicit capture - a splitting process where the particle is split into absorbed weight (which is not followed) and surviving weight.
Applied after each collision; the surviving weight is given by

$$W'=W\left(1-\frac{\Sigma_a}{\Sigma_T}\right)$$

Σa = collision nuclide absorption cross section; ΣT = collision nuclide total cross section.
Generally this decreases Reff faster than it increases T.
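A one-line weight update implements this; a minimal Python sketch with assumed cross-section values:

```python
def implicit_capture(weight, sigma_a, sigma_t):
    # Instead of killing the particle with probability Sigma_a/Sigma_T, reduce its
    # weight by that factor; the history always survives the collision.
    return weight * (1.0 - sigma_a / sigma_t)

w = 1.0
for _ in range(3):                       # three successive collisions (toy values)
    w = implicit_capture(w, sigma_a=0.2, sigma_t=1.0)
print(f"surviving weight after 3 collisions = {w:.3f}")   # 0.8**3 = 0.512
```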
VARIANCE REDUCTION TECHNIQUE
Weight cutoff - Russian roulette is played if a particle's weight drops below a user-specified weight cutoff. The particle is either killed or its weight is increased.
Unlike other cutoffs, the weight cutoff is NOT biased.
Applied when W < Rj · WC2; the particle survives with probability W/(WC1 · Rj) and, if it survives, its weight is set to WC1 · Rj
(Rj = ratio of the source cell importance to the cell j importance).
This decreases Rnon faster than it increases T.
VARIANCE REDUCTION TECHNIQUE
Geometry splitting and Russian roulette - particles in important regions are increased in number (or weight) for better sampling, and decreased in number in other regions.
Applied at surface crossings.
If the importance ratio (previous cell to new) is an integer n ≥ 2, the particle is split into n particles with weight W/n.
If the ratio is less than one, Russian roulette is played.
This decreases Reff faster than it increases T.
SPLITTING
[Figure: a particle of weight w crossing from a lower-importance region into a higher-importance region is split into n particles, each of weight]

$$w_i=\frac{w}{n},\qquad n=\frac{R_{new}}{R_{previous}}$$
RUSSIAN ROULETTE
[Figure: a particle crossing from a higher-importance region into a lower-importance region is either killed ("Bang!") or survives with increased weight.]
With wcut = cutoff weight and Rj = ratio of source cell / cell j importance, Russian roulette applies when

$$w<w_{cut}\cdot R_j$$

survival probability:

$$\frac{w}{w_{cut}\cdot R_j}$$

new weight:

$$w'=w_{cut}\cdot R_j$$
VARIANCE REDUCTION TECHNIQUE
General source biasing - allows the production of more source particles, with suitably reduced weights, in the more important regimes of each variable.
Most common variables:
• Angle
• Energy
• Position
This decreases Rnon and Reff faster than it increases T.
The END
EXPONENTIAL TRANSFORM

$$\Sigma_t^{*}=\Sigma_t(1-p\mu),\quad\text{where}$$

Σt* = artificially adjusted total cross section
Σt = true total cross section
p = the stretching parameter
μ = cosine of the angle between the particle direction and the stretching direction

Requirements for preserving the expected weight:

collided weight:
$$w_c\,\Sigma_t^{*}e^{-\Sigma_t^{*}s}=\Sigma_t e^{-\Sigma_t s}\;\Rightarrow\;w_c=\frac{\Sigma_t e^{-\Sigma_t s}}{\Sigma_t^{*}e^{-\Sigma_t^{*}s}}=\frac{e^{-p\mu\Sigma_t s}}{1-p\mu}$$

uncollided weight:
$$w_s\,e^{-\Sigma_t^{*}s}=e^{-\Sigma_t s}\;\Rightarrow\;w_s=\frac{e^{-\Sigma_t s}}{e^{-\Sigma_t^{*}s}}=e^{-p\mu\Sigma_t s}$$
VARIANCE REDUCTION CAUTIONS
"I could have it done in a much more complicated way," said the Red Queen, immensely proud.
- Lewis Carroll
Be cautious:
• the advantage of one variance reduction technique may cancel another
• balance user time with computation time
• analyse the results carefully
• make cells small enough so that the ratio of neighbouring importances does not exceed a factor of 8-10
Monte Carlo and Transport Equation
Boltzmann transport equation - time-independent, linear. A general integral form:

$$\psi(\mathbf r,\mathbf v)=\int\!\left[\int\psi(\mathbf r',\mathbf v')\,C(\mathbf v'\!\to\mathbf v,\mathbf r')\,d\mathbf v'+Q(\mathbf r',\mathbf v)\right]T(\mathbf r'\!\to\mathbf r,\mathbf v)\,d\mathbf r'$$

where
ψ(r, v) = particle density
Q(r', v) = source term
C(v' → v, r') = collision kernel (change of velocity at a fixed position)
T(r' → r, v) = transport kernel (change of position at a fixed velocity)
The angular flux Φ(r, v) and the scalar flux φ(r, v) = ∫ Φ(r, v) dΩ are obtained from ψ(r, v).
Monte Carlo and Transport Equation
Source term for the Boltzmann equation:

$$Q(\mathbf r,\mathbf v)=\begin{cases}S(\mathbf r,\mathbf v) & \text{fixed source}\\[4pt]S(\mathbf r,\mathbf v)+\displaystyle\int\psi(\mathbf r,\mathbf v')\,F(\mathbf v'\to\mathbf v,\mathbf r)\,d\mathbf v' & \text{fixed source + fission}\\[4pt]\dfrac{1}{k}\displaystyle\int\psi(\mathbf r,\mathbf v')\,F(\mathbf v'\to\mathbf v,\mathbf r)\,d\mathbf v' & \text{eigenvalue}\end{cases}$$

where
S(r, v) = fixed source
F(v' → v, r) = creation operator (due to fission)
k = eigenvalue
Photon Physics
• Coherent (Thomson) scattering + form factors
• Incoherent (Compton) scattering + form factors
• Pair production
• Photoelectric absorption and fluorescence
• Thick-target bremsstrahlung
Photon Physics Approximations
Only K,L edges treated for photoelectric absorption
Distance to collision sampled, not distance to scatter
Thick/thin-target bremsstrahlung
No distinction between pair and triplet production
No anomalous scattering factors
No photoneutrons
Surface flux
• Cell flux estimator (track length l in a cell of volume V):

$$\Phi=\frac{W\,l}{V}$$

• Surface flux estimator - the limit of a cell estimator as the cell becomes a thin slab of thickness d about the surface (track length d/|μ|, volume A·d):

$$F_s=\lim_{d\to 0}\frac{W\,d/|\mu|}{A\,d}=\frac{W}{A\,|\mu|}$$