diffusemtg022206

Efficiency of Cuts in the Inverted Analysis
Ndirc > 13 (number of direct hits)
Ldirb > 170 (track length in meters)
|Smootallphit| < 0.250 (smoothness of hits along track)
Medres < 4 (median resolution in degrees)
Likelihood ratio vs. zenith: horizontal events must be > 27.5 and vertical
events must be > 65.7 (a linear function of zenith in between)
How I calculated the efficiency:
1) Made an N-1 plot of the selected parameter
(applied all cuts at my cut level except for the cut on the parameter I am studying).
2) Counted the number of events that passed and failed each cut.
3) efficiency = # events that pass the cut / total # of events
(Plots will follow the numbers.)
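As an illustration, here is a minimal Python sketch of that N-1 procedure, assuming hypothetical per-event records (dicts keyed by the cut variables); the real analysis runs on ROOT ntuples, so these field names are placeholders:

    # Final quality cuts (the zenith-dependent likelihood-ratio cut is omitted here).
    CUTS = {
        "ndirc":        lambda ev: ev["ndirc"] > 13,
        "ldirb":        lambda ev: ev["ldirb"] > 170.0,              # meters
        "smootallphit": lambda ev: abs(ev["smootallphit"]) < 0.250,
        "medres":       lambda ev: ev["medres"] < 4.0,               # degrees
    }

    def n_minus_one_efficiency(events, studied):
        """Apply every cut except `studied`, then ask how many survivors also pass it."""
        survivors = [ev for ev in events
                     if all(cut(ev) for name, cut in CUTS.items() if name != studied)]
        passed = sum(1 for ev in survivors if CUTS[studied](ev))
        return passed / len(survivors) if survivors else float("nan")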
NDIRC
               # cut        # kept       efficiency = kept/total
Nch < 100
  data         1.06E+007    3.17E+008    .968
  dcors        1.81E+007    2.50E+008    .933
Nch >= 100
  data         3.21E+005    6.93E+007    .995
  dcors        1.63E+006    5.35E+007    .970
LDIRB
               # cut        # kept       efficiency = kept/total
Nch < 100
  data         5.63E+007    3.17E+008    .849
  dcors        3.35E+007    2.50E+008    .882
Nch >= 100
  data         1.64E+007    6.93E+007    .809
  dcors        8.10E+006    5.35E+007    .868
Smootallphit
               # cut        # kept       efficiency = kept/total
Nch < 100
  data         1.42E+007    3.17E+008    .957
  dcors        5.99E+006    2.50E+008    .977
Nch >= 100
  data         2.27E+006    6.93E+007    .968
  dcors        1.05E+006    5.35E+007    .981
Median Resolution
               # cut        # kept       efficiency = kept/total
Nch < 100
  data         1.54E+006    3.17E+008    .995
  dcors        8.84E+005    2.50E+008    .996
Nch >= 100
  data         2.47E+005    6.93E+007    .996
  dcors        1.42E+005    5.35E+007    .997
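The "total" in these tables appears to be cut + kept (that reading reproduces every listed efficiency); for example, for NDIRC, data, Nch < 100:

    cut, kept = 1.06e7, 3.17e8          # NDIRC, data, Nch < 100
    efficiency = kept / (cut + kept)    # = 0.968, matching the table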
The next pages contain the N-1 plots for each parameter. Each
page contains 4 plots:
Nch < 100
Nch < 100 with the dCorsika normalized to have the same number of events as the data*
Nch >= 100
Nch >= 100 with the dCorsika normalized to have the same number of events as the data
*The normalization factor needed is approximately 1.25.
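Presumably that factor is the data/dCorsika ratio for the low-Nch sample; taking the kept counts from the tables above (my assumption, not stated on the slide) gives a number in the same ballpark:

    norm = 3.17e8 / 2.50e8   # ~1.27, consistent with the quoted ~1.25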
[N-1 plots for the INVERTED analysis, one page per parameter: Nch < 100, Normalized Nch < 100, Nch >= 100, Normalized Nch >= 100]
Now, take a look at the comparable plots for the upgoing
analysis.
(Sorry that the histograms don't have identical binning... I
can do them again if critical.)
[UPGOING analysis N-1 plots, one page per parameter: Nch < 100 and Normalized Nch < 100]
What I am working on.....
If we are cutting on distributions that don't agree, then we
are likely to get the normalization for low Nch events
wrong.
What would happen to the normalization if we had gotten
the Monte Carlo distribution correct?
Right now, I see two ways to approach this.
1) We could try to shift the MC to match the data.
* Using different ice models, for instance, could shift the
Ndirc into better agreement. We decided this was a bad idea,
however, because it would send parameters like Nch out of
agreement.
2) We could shift the Monte Carlo cut (but keep the data cut).
Then we could see how this changes the overall normalization.
If the Ndirc peak is off by 20%, you can “shift” it higher (or shift
the cut lower) and see the effect on the normalization.
For MC: 1.2 * Ndirc > 13
This is the same as shifting the cut to Ndirc > 13 / 1.2 = 10.83,
which, since Ndirc is discrete, becomes Ndirc > 11.
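In code, shifting the cut for MC while keeping the data cut fixed might look like the following sketch (the 1.2 factor is the assumed 20% peak offset; names are placeholders):

    NDIRC_CUT = 13
    MC_SHIFT = 1.2      # assumed ~20% offset of the MC Ndirc peak

    def passes_ndirc(ndirc, is_mc):
        # For MC, cutting on 1.2*Ndirc > 13 is the same as Ndirc > 13/1.2 = 10.83
        # (rounded on the slide to Ndirc > 11, since Ndirc is discrete).
        threshold = NDIRC_CUT / MC_SHIFT if is_mc else NDIRC_CUT
        return ndirc > threshold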
We can compare what happens to the normalization at low
Nch if we pretend that we are working at an entirely different
quality cut level.
Ignore the data for a moment and pretend that the Level 7
central Bartol distribution is the truth for atmospheric neutrinos.
Count the number of events above and below the Nch cut for
other quality levels.
Since I work at Level 7, consider Levels 5, 6, 7, 8, and 9.
                     Level 5   Level 6   Level 7   Level 8   Level 9
Bartol Min
  Nch < 100            539.3     463.8     397.2     247.9     153.5
  Nch > 100              6.7       5.2       4.9       3.7       2.6
Bartol Central
  Nch < 100            725.9     623.1     533.8     331.7     204.6
  Nch > 100             12.3       9.7       9.1       6.9       4.8
Bartol Max
  Nch < 100            912.6     782.5     670.3     415.6     255.7
  Nch > 100             17.9      14.3      13.3      10.0       7.0
Signal
  Nch > 100             82.6      72.3      68.4      53.4      39.6
Assuming the Bartol Central Level 7 is the truth, you can find the low
Nch normalization factor for each scenario:
5 levels * 3 fluxes = 15 scenarios
For each, you can then calculate a normalized number of background
and signal events.
Example: Bartol Max, Level 5
normalization = 533.8 / 912.6 = 0.585
normalized background = 0.585 * 17.9 = 10.5
normalized signal = 0.585 * 82.6 = 48.3
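A compact sketch of that bookkeeping for all 15 scenarios, with the values hard-coded from the tables above (Python):

    # "Truth" is the Bartol Central, Level 7, Nch < 100 count.
    TRUTH_LOW_NCH = 533.8

    # {flux: {level: (Nch < 100, Nch > 100)}}, copied from the tables above.
    ATMOS = {
        "min":     {5: (539.3, 6.7),  6: (463.8, 5.2),  7: (397.2, 4.9),
                    8: (247.9, 3.7),  9: (153.5, 2.6)},
        "central": {5: (725.9, 12.3), 6: (623.1, 9.7),  7: (533.8, 9.1),
                    8: (331.7, 6.9),  9: (204.6, 4.8)},
        "max":     {5: (912.6, 17.9), 6: (782.5, 14.3), 7: (670.3, 13.3),
                    8: (415.6, 10.0), 9: (255.7, 7.0)},
    }
    SIGNAL_HIGH_NCH = {5: 82.6, 6: 72.3, 7: 68.4, 8: 53.4, 9: 39.6}

    for flux, levels in ATMOS.items():
        for level, (low, high) in levels.items():
            norm = TRUTH_LOW_NCH / low            # match the low-Nch count to "truth"
            bkg = norm * high                     # normalized background (Nch > 100)
            sig = norm * SIGNAL_HIGH_NCH[level]   # normalized signal
            print(f"{flux:7s} level {level}: bkg = {bkg:5.1f}, sig = {sig:5.1f}")

For Bartol Max at Level 5 this reproduces the worked example above (background 10.5, signal 48.3).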
This may appear somewhat random, but the pattern is evident
on the next slide.
[Scatter plot "Bartol": Normalized Signal (40-140) vs. Normalized Background (5-15) for the 15 scenarios]
Assuming Bartol Central Level 7 is the truth.....
[The same scatter plot with the points labeled by cut level, Lv. 5 through Lv. 9]
Cut levels 5, 6, and 7 (the circled region, with level 7 being the blue line)
show similar behavior. Because of the large gap, it seems that the cuts
tighten dramatically between levels 7 and 8. If I wanted to, I could add a
cut level in that region.
I hope that our distributions (data vs. MC) are not in as large a
disagreement as that between the Level 7 and Level 9 MC. If the data and MC
disagree by an amount similar to the disagreement between the Level 6 and
Level 7 MC (for instance), then it seems that we can constrain the range of
signal and background. I am running ROOT right now so that I can make
plots of each parameter for the different cut levels and compare them.
[The same scatter plot again, with the Level 7 point highlighted]
Albrecht asked me to check the space angle difference between
the True and Reconstructed tracks of the muons near the horizon
in the inverted analysis.
Although my statistics are low (not as good as Newt's), I find that
events that pass my final quality cuts (minus the Nch cut) are well
reconstructed. The difference between the true angle and the
reconstructed angle is usually within 4 to 5 degrees.
*Obviously, my statistics are low. Unweighted, there are 107 events in this plot, but they are weighted up to be
comparable in numbers to the 4-year data.
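For reference, the space angle between a true and a reconstructed track can be computed from the two (zenith, azimuth) pairs; a minimal sketch, assuming angles in radians:

    import math

    def space_angle_deg(zen_true, azi_true, zen_reco, azi_reco):
        """Opening angle in degrees between two directions given as (zenith, azimuth)."""
        cos_psi = (math.sin(zen_true) * math.sin(zen_reco) * math.cos(azi_true - azi_reco)
                   + math.cos(zen_true) * math.cos(zen_reco))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_psi))))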