A bin-free Extended Maximum Likelihood Fit + Feldman-Cousins error analysis
Peter Litchfield
A bin-free Extended Maximum Likelihood method of fitting oscillation parameters is described.
A Feldman-Cousins style error analysis has been developed.
Systematic errors are incorporated into the MC experiments comprising the F-C analysis, giving error contours with statistical and systematic components.
Extended Maximum Likelihood
Described by Roger Barlow, NIM A297 (1990) 496
Maximum Likelihood with a normalisation condition
The standard maximum likelihood method maximises the likelihood function

$$ L = \prod_{i=1}^{M} p(x_i;\, a_1, \dots, a_n) $$

where p is the probability density, normalised to 1, M is the number of events, x_i is a measured quantity and the a_i are parameters to be determined.
The fit thus only fits the shape and says nothing about the number of events.
Extended Maximum Likelihood
In Extended Maximum Likelihood p is replaced by the unnormalised quantity P, where

$$ \int P(x;\, a_1, \dots, a_n)\, dx = N(a_1, \dots, a_n) $$
The predicted number of events, N, is a function of the fitted
parameters.
It can then be shown that

$$ L = e^{-N} \prod_{i=1}^{M} P(x_i;\, a_1, \dots, a_n) $$

$$ \ln L = \sum_{i=1}^{M} \ln P(x_i;\, a_1, \dots, a_n) - N $$
It can also be shown that ln L is maximised for N = M.
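One way to see this (a quick derivation, assuming the overall normalisation N is free to vary independently of the shape, so that P = N p with p normalised to 1):

$$ \ln L = M \ln N + \sum_{i=1}^{M} \ln p(x_i;\, a_1, \dots, a_n) - N, \qquad \frac{\partial \ln L}{\partial N} = \frac{M}{N} - 1 = 0 \;\Rightarrow\; N = M $$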
Extended Maximum Likelihood
In our case the function P is just the extrapolated prediction of the measured neutrino energy distribution for the given set of oscillation parameters.
Strictly P should be a continuous function, but with a high-statistics MC we can approximate it by the finely binned MC.
So we just sum over the number of predicted MC events N_i(E_m) in the bin corresponding to the measured energy E_m of each data event:

$$ \ln L = \sum_{i=1}^{M} \ln N_i(E_m) - N $$
In the plots that follow I use 125 200 MeV MC bins between 0 and 50
GeV. The bins can be as narrow as the MC warrants.
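As a concrete illustration, here is a minimal Python sketch of this bin-free log-likelihood evaluated against a finely binned MC prediction; the array names (event_energies, predicted_counts, bin_edges) are illustrative, not the actual analysis code:

```python
import numpy as np

def binfree_lnL(event_energies, predicted_counts, bin_edges):
    """Extended ML log-likelihood: sum of ln(predicted bin content) at each
    data event's measured energy, minus the total predicted event count N."""
    event_energies = np.asarray(event_energies, dtype=float)
    predicted_counts = np.asarray(predicted_counts, dtype=float)

    # Total predicted number of events N for this parameter point
    N = predicted_counts.sum()

    # Fine MC bin containing each measured event energy
    idx = np.digitize(event_energies, bin_edges) - 1
    idx = np.clip(idx, 0, len(predicted_counts) - 1)

    # ln L = sum_i ln N_i(E_m) - N  (empty bins guarded against log(0))
    return np.log(np.maximum(predicted_counts[idx], 1e-300)).sum() - N
```

In a scan, the prediction would be rebuilt for every point on the Δm²-sin²2θ grid and the same data events reused; the point with the largest ln L is the best fit.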
Comparison Binned v Unbinned Likelihood
Binned likelihood has the standard 500 MeV bins below 10 GeV.
Unbinned gains at high Δm² because of the improved resolution on the oscillation dip.
Little gain at low Δm² where there is no data.
Feldman-Cousins error analysis
Following the F-C prescription, for each Δm²-sin²2θ bin I generate fake experiments with numbers of events Poisson-fluctuated about the number predicted by my extrapolation.
For each experiment I select events at random from the full Far detector MC sample, up to the fluctuated number and according to the predicted energy spectrum.
The ln L distribution is calculated on the Δm²-sin²2θ grid for each experiment and the Δχ² between the best-fit point and the generated point determined.
If, say, 1000 experiments are generated and fitted, the Δχ² are sorted and the 900th Δχ² from the minimum gives the 90% Δχ² (Δχ²_90) for that grid point.
If the data Δχ² for that grid point is less than Δχ²_90, that grid point is within the 90% confidence allowed region.
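A minimal sketch of the fake-experiment generation and the 90% critical value for one grid point, assuming the fitted Δχ² values of the fake experiments are already in hand; all names here are illustrative, not the actual analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_fake_experiment(predicted_counts, far_mc_energies, far_mc_weights):
    """One fake experiment: Poisson-fluctuate the predicted total and draw that
    many events from the far-detector MC, weighted so the drawn energies follow
    the predicted spectrum (sampling with replacement, for simplicity)."""
    n_obs = rng.poisson(np.sum(predicted_counts))
    probs = np.asarray(far_mc_weights, dtype=float)
    probs /= probs.sum()
    return rng.choice(np.asarray(far_mc_energies, dtype=float), size=n_obs, p=probs)

def fc_critical_value(fake_dchi2, cl=0.90):
    """The critical Delta-chi^2 for one grid point: with 1000 fitted fake
    experiments and cl=0.90 this is the 900th value counted from the minimum."""
    ordered = np.sort(np.asarray(fake_dchi2, dtype=float))
    return ordered[int(cl * len(ordered)) - 1]

# A grid point is inside the 90% C.L. region if the data's Delta-chi^2 there
# lies below the critical value returned by fc_critical_value.
```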
F-C results
[Figures: data Δχ² surface, Δχ²_90 surface, F-C contours]
Systematics Analysis
For each fake MC experiment the parameters of the experiment are
varied according to a set of systematic errors.
The errors for a given experiment are taken randomly from a uniform
distribution between + and – the estimated systematic error.
Note that CPU time forbids repeating the extrapolation for the more than 2.5 billion F-C experiments required, so all errors are simulated by varying the selected far MC events.
Systematic parameters can be varied individually or all together.
Correlations between systematic parameters are accounted for.
All identified systematics can be included without significant time or
complication penalty
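A sketch of how the per-experiment systematic shifts could be drawn; the error values are those listed on the following slides and the dictionary keys are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimated systematic errors (values from the following slides); each fake
# experiment receives one uniform draw in [-err, +err] per source.
SYST_ERRORS = {
    "normalisation": 0.04,
    "relative_hadronic_energy": 0.033,
    "muon_energy": 0.036,
    "absolute_energy_shift_gev": 0.1,
}

def draw_systematic_shifts(errors=SYST_ERRORS):
    """One set of shifts for a single fake experiment.  Correlated sources
    would share a common draw here instead of independent ones (not shown)."""
    return {name: rng.uniform(-err, err) for name, err in errors.items()}
```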
Systematics Included
1) Normalisation
The generated event distribution is scaled by a factor randomly selected between 1 − 0.04 and 1 + 0.04.
2) Relative hadronic energy scale
The hadronic energy of the selected far detector events is scaled by a number randomly chosen between 1 − 0.033 and 1 + 0.033 for each experiment.
3) Muon energy scale
The muon energy is scaled randomly between 1 − 0.036 and 1 + 0.036.
4) Absolute energy scale
I cannot change the energy in the predicted distribution, but a change in the absolute scale is equivalent to shifting the predicted oscillation dip in the far detector. The far detector truth energy is shifted by a random amount between −100 MeV and +100 MeV in calculating the oscillation probability.
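A sketch of how variations 1-4 might be applied to one selected far-MC event, using the shifts drawn in the sketch above; the field and key names are illustrative:

```python
def apply_energy_systematics(muon_E, had_E, true_E, shifts):
    """Scale the reconstructed muon and hadronic energies, shift the truth
    energy entering the oscillation probability, and return the overall
    normalisation factor for the predicted spectrum."""
    muon_E_var = muon_E * (1.0 + shifts["muon_energy"])
    had_E_var = had_E * (1.0 + shifts["relative_hadronic_energy"])
    reco_E = muon_E_var + had_E_var                              # varied measured energy
    true_E_var = true_E + shifts["absolute_energy_shift_gev"]    # absolute-scale shift
    norm = 1.0 + shifts["normalisation"]                         # overall scaling of the prediction
    return reco_E, true_E_var, norm
```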
Systematics included
5) PID cut
The far MC events available at this point in the program have
been selected by the PID. At the moment I can only make a one-sided cut in the selection. Events with a PID value randomly selected between 0 and 0.05 above the standard cut are removed from the fake experiments.
6) NC background
In the selection of MC events the fraction of true extra NC events selected is varied randomly between −50% and +50%.
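A sketch of the PID and NC variations for one fake experiment; the array names and the baseline cut value are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def apply_pid_and_nc_systematics(pid_values, is_nc, pid_cut=0.0):
    """Return a boolean mask of events kept after the two variations."""
    pid_values = np.asarray(pid_values, dtype=float)
    is_nc = np.asarray(is_nc, dtype=bool)

    # 5) One-sided PID cut: drop events within a random margin (0-0.05)
    #    above the standard cut.
    keep = pid_values > (pid_cut + rng.uniform(0.0, 0.05))

    # 6) NC background: vary the retained NC events by a fraction drawn
    #    uniformly within +/-50% (only the removal direction is sketched;
    #    adding events would need extra MC).
    nc_keep_prob = min(1.0 + rng.uniform(-0.5, 0.5), 1.0)
    keep &= np.where(is_nc, rng.random(pid_values.size) < nc_keep_prob, True)
    return keep
```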
Systematics included
7) Extrapolation error
To try to allow for the extrapolation error I have taken the ratio of the SKZP extrapolation to my extrapolation and scaled the predicted distribution by a random fraction between 0 and 1 of that difference for each experiment.
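A sketch of this variation, assuming the two predictions are available as binned arrays (names illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def vary_extrapolation(my_prediction, skzp_prediction):
    """Move the predicted spectrum by a random fraction (0-1) of the
    bin-by-bin difference between the SKZP extrapolation and this one."""
    frac = rng.uniform(0.0, 1.0)
    my_prediction = np.asarray(my_prediction, dtype=float)
    skzp_prediction = np.asarray(skzp_prediction, dtype=float)
    return my_prediction + frac * (skzp_prediction - my_prediction)
```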
Contours
1D errors
                    Δm² (10⁻³ eV²)       sin²2θ
No systematics      2.43 +0.15 −0.11     1.00 −0.07
All systematics     2.44 +0.16 −0.12     1.00 −0.08
PRL                 2.43 ± 0.13          > 0.95
Systematic                           −Δm² (eV²)   +Δm² (eV²)   −sin²2θ
No systematics                       0.002322     0.002585     0.9315
NC ±50%                              0.002318     0.002595     0.9240
Muon energy ±0.036                   0.002321     0.002585     0.9305
Relative hadronic energy ±0.033      0.002320     0.002587     0.9292
Absolute hadronic energy ±0.1 GeV    0.002321     0.002582     0.9323
PID +0.05                            0.002320     0.002590     0.9288
Normalisation ±0.04                  0.002319     0.002587     0.9290
Extrapolation 1.0                    0.002320     0.002608     0.9220
All systematics                      0.002316     0.002600     0.9228
Unconstrained contours
To do
More fake experiments to smooth the F-C contours
This analysis just fits the Eν distribution. The bin-free analysis will be more advantageous for the Eν vs Eshw analysis, where the binning of the data is a problem.
Extend to the NC and ν̄ data when available.