Transcript Slide 1

Tools for Probing the Universe
from the Smallest to Largest
and All Scales In Between
Jeff Scargle
Space Science and Astrobiology Division
NASA Ames Research Center
SETI Institute Colloquium Series
March 11, 2009
Real Space  Data Space
The Largest Scales in the Universe
The Smallest Scales in the Universe
Data Segmentation: Voronoi tessellation
– Large scale: structure of the Universe
– Medium scale: Extra-solar Planets
– Small scale: Space-Time?
Real Space → Instrument (Fermi γ-Ray Space Telescope) → Measurement → Data Space
The Large: Cosmological Scale
Hubble's original value: ~500 km/s/Mpc
The Large: Cosmological Scale
Concordance Cosmology
Hubble constant = 71 km/s/Mpc
Size of the visible Universe:
R = 14,224,900,000 pc (≈ 14.2 Gpc)
= 4.4 x 10^28 cm
= 2.7 x 10^61 Planck lengths
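As a quick consistency check on these figures, a few lines of Python convert the radius between parsecs, centimeters, and Planck lengths (the conversion constants below are standard values supplied here, not taken from the slides):

```python
# Quick arithmetic check of the quoted size of the visible Universe.
PC_IN_CM = 3.0857e18          # one parsec in centimeters
PLANCK_LENGTH_CM = 1.616e-33  # Planck length in centimeters

R_pc = 14.2249e9              # ~14.2 Gpc
R_cm = R_pc * PC_IN_CM
print(f"R = {R_cm:.2e} cm")                                  # ~4.4e28 cm
print(f"R = {R_cm / PLANCK_LENGTH_CM:.2e} Planck lengths")   # ~2.7e61
```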
QG & the Planck Scale (Ron Adler)
• ℏ = 1.054 x 10^-27 g cm^2 / sec (Quantum Mechanics)
• c = 2.998 x 10^10 cm / sec (Special Relativity)
• G = 6.670 x 10^-8 cm^3 / g sec^2 (Gravity / General Relativity)
• Only one combination of these constants is a length:
L_Planck = (ℏG / c^3)^(1/2) = 1.616 x 10^-35 m (~10^-17 x the electroweak scale)
Determine the distance L between two points: measure the round-trip transit time of a photon of wavelength λ.

Uncertainty in the measured length:
δL(λ) = λ + G m_effective / c^2, with m_effective = E_photon / c^2
(first term: ordinary uncertainty principle; second term: spatial distortion due to the mass/energy of the photon)

This is λ + L_Planck^2 / λ, which has its minimum at λ = L_Planck.
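Making the last step explicit (a one-line minimization that the slide implies but does not spell out):

```latex
\delta L(\lambda) = \lambda + \frac{L_{\rm Planck}^{2}}{\lambda},
\qquad
\frac{d\,\delta L}{d\lambda} = 1 - \frac{L_{\rm Planck}^{2}}{\lambda^{2}} = 0
\;\Longrightarrow\;
\lambda = L_{\rm Planck},
\qquad
\delta L_{\min} = 2\,L_{\rm Planck}.
```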
The Small: Planck Scale
• Planck length √(Gℏ/c^3) = 1.6 x 10^-35 meters
• Planck energy = 1.2 x 10^28 electron volts
• Planck time = 5.4 x 10^-44 seconds
• Planck mass = 2.2 x 10^-8 kilograms
– ℏ from quantum mechanics
– c from special relativity
– G from general relativity
• Generalized Uncertainty Principle: The smallest possible space-time
measurements are at the Planck scales: Adler, R.J., & Santiago, D.J.
1999, Modern Physics Letters A, 14, 1371.
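These values can be reproduced directly from the three constants; a minimal Python check (SI values of ℏ, c, G supplied here, not taken from the talk):

```python
import math

hbar = 1.054571e-34   # J s             (quantum mechanics)
c    = 2.998e8        # m / s           (special relativity)
G    = 6.674e-11      # m^3 / (kg s^2)  (general relativity)
eV   = 1.602e-19      # J per electron volt

l_planck = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
t_planck = l_planck / c                 # ~5.4e-44 s
m_planck = math.sqrt(hbar * c / G)      # ~2.2e-8 kg
E_planck = m_planck * c**2 / eV         # ~1.2e28 eV

print(l_planck, t_planck, m_planck, E_planck)
```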
Scales: small to large
Bayesian Blocks: construct the best-fit piecewise-constant model of the data.
Voronoi tessellation of data in any dimension: construct Voronoi cells to represent the local photon density, with density ~ 1 / cell area.
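One way to realize "density ~ 1 / cell area" in code: a minimal sketch assuming 2-D photon coordinates and using SciPy's Voronoi routines (an illustration, not the talk's actual pipeline); unbounded border cells are simply skipped:

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_density(points):
    """Density estimate per point: 1 / (area of its Voronoi cell); NaN for open cells."""
    vor = Voronoi(points)
    density = np.full(len(points), np.nan)
    for i, region_index in enumerate(vor.point_region):
        region = vor.regions[region_index]
        if -1 in region or len(region) == 0:
            continue                                       # cell extends to infinity: skip
        area = ConvexHull(vor.vertices[region]).volume     # "volume" is area in 2-D
        density[i] = 1.0 / area
    return density

# Example: 500 random "photons" in the unit square
rng = np.random.default_rng(0)
print(voronoi_density(rng.random((500, 2))))
```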
Statistical Interlude
• Clinical studies usually small and expensive
• “Meta-analysis” – Increase significance by
combining statistical summaries of published
studies (not re-analysis of original data)
• Role of publication bias (PB)
• Assess potential for PB with Rosenthal formula
Statistical Interlude
• Publication bias is large!
• Editorial policy: Do not publish a study unless it
achieves a 3-sigma positive result
• Rosenthal formula:
– Completely wrong! (the commonly quoted form is sketched after this slide)
– Used to justify hundreds of "meta-analytic" results in medicine and psychology (real and para-)
– Not a single applied scientist questioned the validity of the formula
• Many medical studies, especially those relevant to
decisions about safety of drugs to be released to
the market, are based on this statistical blunder.
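The slide does not reproduce the formula itself; as commonly quoted, Rosenthal's "fail-safe N" is N_fs = (ΣZ_i / 1.645)^2 - k, the number of unseen null studies that would pull the combined result below 5% significance. A minimal sketch, included only to show what is being criticized:

```python
def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal (1979) 'fail-safe N' as commonly quoted: how many unpublished
    null (Z = 0) studies would drag the combined Z below z_alpha.
    The talk's point is that this reasoning is invalid under publication bias."""
    k = len(z_scores)
    return (sum(z_scores) / z_alpha) ** 2 - k

# Example: five published studies, each just past 2 sigma
print(fail_safe_n([2.1, 2.3, 2.0, 2.2, 2.4]))   # ~40 hypothetical "file-drawer" studies
```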
Statistical Interlude
• Rosenthal, R. (1979). The "file drawer problem" and tolerance for null results. Psychological Bulletin, 86, 638-641.
• Scargle, J. D. (2000). Publication Bias: The "File-Drawer" Problem in Scientific Inference. Journal of Scientific Exploration, 14 (1), 91-106.
• Schonemann, P. H., & Scargle, J. D. (2008). A Generalized Publication Bias Model. Chinese Journal of Psychology, 50 (1), 21-29.
Statistical Interlude
• Pre-election radio interview with the president of
a major political polling organization (“Dr. Z”).
• Caller: “I hang up on polling phone calls – intrusion
of my privacy.”
• Discussion of this as a potential bias.
• Dr. Z: “I don’t worry about such biases. We just
get a larger sample.”
• JS calls the radio show and tries to verify Dr. Z’s
belief that increased sample size can fix a bias.
• Dr. Z does not understand; responds by puffing up
the reliability of his polling organization.
Planetary Detection
Periodograms (Marcy et al.)
• Similar to a power spectrum, or discrete data analog of a
Fourier transform.
• The periodograms used here are closely related to the Lomb-Scargle Periodogram.
• A measure of the improvement of fitting a single sinusoid plus
a constant to the data over fitting only a constant.
• Each peak has a width of ~1/T in frequency space, where T is
the time spanned by the data.
• Periodogram power z(ω) is evaluated for a grid of orbital
frequencies, separated here by 1/(4T).
• The highest peaks are then refined to increase the precision of the corresponding orbital frequency or period (a schematic code sketch follows this list).
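A minimal sketch of the quantity described above (my own illustration, not the Marcy-group code): at each trial frequency, compare a least-squares "sinusoid plus floating constant" fit with a constant-only fit, on a frequency grid spaced by 1/(4T):

```python
import numpy as np

def floating_mean_power(t, v, freqs):
    """Improvement of a 'sinusoid + constant' fit over a constant-only fit."""
    v = np.asarray(v, dtype=float)
    chi2_const = np.sum((v - v.mean()) ** 2)
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t),
                             np.ones_like(t)])            # floating constant
        coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)
        chi2_fit = np.sum((v - A @ coeffs) ** 2)
        power[i] = (chi2_const - chi2_fit) / chi2_const
    return power

# Synthetic example: 60 radial-velocity epochs, one planet at an arbitrary ~40-day period
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 300, 60))
v = 12.0 * np.sin(2 * np.pi * t / 39.855) + rng.normal(0, 3, t.size)
T = t.max() - t.min()
freqs = np.arange(1 / T, 1.0, 1 / (4 * T))   # grid spacing 1/(4T), as in the list above
z = floating_mean_power(t, v, freqs)
print("best period ~", 1 / freqs[np.argmax(z)], "days")
```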
Cumming et al. 1999
Floating-mean Periodograms
• For N = 20, 40, 60, 80,
100, 120, 140, 160,
180, 200 observations,
left to right, top to
bottom.
– Vertical line represents
correct period.
– Horizontal lines mark detection thresholds corresponding to false-alarm probabilities (FAPs) of F = 0.1 (lower) and F = 0.01 (upper).
– Initial decrease in FAP is
probably due to
increase in number of
independent
frequencies.
1. Floating-parabola periodogram
• All peaks within some fraction (we use e-1/2 ≈ 0.607) of the
highest peak are considered.
– A detection threshold can be used as an additional criterion.
– Only one peak qualifies here. It has periodicity P = 39.855 days.
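The peak-selection rule just described can be encoded in a few lines (a sketch; the helper below and its defaults are my choices, not the talk's code):

```python
import numpy as np
from scipy.signal import find_peaks

def qualifying_peaks(power, threshold=None, fraction=np.exp(-0.5)):
    """Indices of periodogram peaks within `fraction` (~0.607) of the highest
    peak, optionally also above a detection threshold."""
    power = np.asarray(power)
    peaks, _ = find_peaks(power)
    keep = power[peaks] >= fraction * power.max()
    if threshold is not None:
        keep &= power[peaks] >= threshold
    return peaks[keep]
```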
1. Single-Keplerian fit
• We evaluate the floating-parabola periodogram and fit
a single Keplerian orbit as in
the single-planet case.
– We don't decide on a trend yet; we keep a parabola for now.
• Peaks higher than both e-1/2 of
the highest peak (dashed line)
and a detection threshold
corresponding to an FAP of F =
0.01 (dotted line) are
considered.
– The 2 qualifying peaks in this
example are marked with
asterisks.
2-dimensional Periodogram
• Measures the improvement in fitting 2 sinusoids plus a floating constant/trend over fitting only a floating constant/trend (a schematic sketch follows below).
– The first periodicity (days) is
plotted against the second.
– Regions where the power is
lowest appear black and
those with the most power
appear red.
– The highest peaks are marked
with x’s.
• Most useful where there are
two planets with similar
velocity amplitudes.
• Has problems with highly
eccentric orbits.
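A minimal sketch of that two-sinusoid comparison (my illustration, using a floating constant only; the trend term is omitted for brevity):

```python
import numpy as np

def two_planet_power(t, v, f1, f2):
    """Improvement of a '2 sinusoids + floating constant' fit over a constant-only fit."""
    A = np.column_stack([np.cos(2 * np.pi * f1 * t), np.sin(2 * np.pi * f1 * t),
                         np.cos(2 * np.pi * f2 * t), np.sin(2 * np.pi * f2 * t),
                         np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)
    chi2_const = np.sum((v - v.mean()) ** 2)
    chi2_fit = np.sum((v - A @ coeffs) ** 2)
    return (chi2_const - chi2_fit) / chi2_const

def power_map(t, v, periods):
    """Power over a (period 1, period 2) grid; the highest cells flag candidate
    two-planet solutions with similar velocity amplitudes."""
    return np.array([[two_planet_power(t, v, 1 / p1, 1 / p2) for p2 in periods]
                     for p1 in periods])
```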
Gamma-ray Large Area Space Telescope (GLAST)
Testing Quantum Gravity Theories with GLAST
Thanks: Jay Norris, Johann Cohen-Tanugi, Paul Gazis, Jerry Bonnell, Ron Adler, GLAST Science Teams
Unification of General Relativity & Quantum Mechanics
             General Relativity       Quantum Mechanics
Scale        Large                    Small
Dynamics     Deterministic            Probabilistic
Space-time   Background independent   Absolute background
Modify: GR to fit with QM? QM to fit with GR? Both GR and QM?
Seek Observable Effects
Is Lorentz symmetry broken? Lorentz Invariance Violation
What about other symmetries (translation and scale invariance,
CPT, supersymmetry, Poincaré, …)?
Is space-time discrete/chunky, affecting photon/particle
propagation?
Is this quantum foam at the Planck scale (10^-35 m; 10^19 GeV)?
Some Approaches to QG Theory
Loop Quantum Gravity – hep-th/0601129
String theory – many
Effective Field Theory – hep-th/0407370
The World as a Hologram – hep-th/9409089
Quantum Computation – quant-ph/0501135
Extra Dimensions – hep-ph/9811291
Statistical Geometry – Myrheim, TH.2538-CERN
Categorical Geometry? – gr-qc/0602120
Self-organized criticality – hep-th/0412307
Random Lattice Field Theory – T. D. Lee
Dynamic Probabilistic Causal Structure? – gr-qc/0509120
Causal Sets – gr-qc/0601069/121
Random Walk – gr-qc/0403085
Regge Calculus – gr-qc/0012035
Quantum State Diffusion – I. Percival
High Energy Astrophysics Tests of
Lorentz Invariance Violation
• Dispersion in γ-rays from GRBs & AGN
• Photon decay (Coleman & Glashow 1999; Stecker & Glashow 2001)
• Vacuum Cherenkov radiation (Coleman & Glashow 1999; Stecker & Glashow 2001)
• Shifted pair-production threshold constraints from AGN γ-rays (Stecker & Glashow 2001)
• Long-baseline vacuum birefringence (GRB polarization)
• Electron velocity (Crab Nebula γ-ray spectrum; Jacobson, Liberati & Mattingly 2003)
• Ultrahigh-energy cosmic ray spectrum GZK effect (Coleman & Glashow 1999; Stecker & Scully 2005)
• Photon phase coherence (diffraction patterns of distant point sources)
• Dispersion in neutrinos from GRBs (Jacob and Piran, hep-ph/0607145)
• Modified dispersion relation
– white dwarf Fermi temperatures
– neutrino oscillations and pulsar kicks
– pulsar rotation periods
Time-of-Flight Measurements (Mattingly, gr-qc/0502097)
Is the speed of light a function of photon energy? Postulate:
particles: E^2 = m^2 + p^2  →  E^2 = F(p, m)
photons:   E^2 = p^2        →  E^2 = F(p)
“Since we live in an almost Lorentz invariant world (and are nearly at rest
with respect to the CMBR), in the preferred frame F(p,m) must reduce to
the Lorentz invariant dispersion at small energies and momenta. It is
therefore natural to expand F(p,m) about p = 0 ...”
E^2 = m^2 + p^2 + E_Planck f^(1) |p| + f^(2) |p|^2 + f^(3) |p|^3 / E_Planck + ...   (particles)
Δt / T = 0.5 (n - 1) f^(n) (ΔE / E_Planck)^(n-2)   (photons)
where n is the order of the first non-zero term in the expansion.
More complete and cogent analysis in "High-energy Tests of Lorentz Invariance", Coleman and Glashow, hep-ph/9812418.
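To give the scale of the effect, here is an order-of-magnitude evaluation of the photon formula (the specific numbers, i.e. the photon energy difference, travel time, and f^(3) = 1, are illustrative choices, not from the talk):

```python
# Order-of-magnitude illustration of the dispersion formula above, for n = 3.
# All numerical choices here are illustrative, not taken from the slides.
E_PLANCK_GEV = 1.22e19   # Planck energy in GeV
T_TRAVEL_S = 3.0e17      # ~10 Gyr light-travel time to a distant GRB, in seconds
dE_GeV = 10.0            # energy difference between the compared photons
n, f_n = 3, 1.0          # first non-zero term cubic, coefficient of order one

dt = T_TRAVEL_S * 0.5 * (n - 1) * f_n * (dE_GeV / E_PLANCK_GEV) ** (n - 2)
print(f"expected lag ~ {dt:.2f} s")   # ~0.25 s, comparable to GRB pulse substructure
```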
Even if there is dispersion, it may be masked by the Pulse Asymmetry /
Energy-shift Paradigm, Norris, Marani, and Bonnell, astro-ph/9903233
[Figure: GRB pulse profiles compared at low energy and high energy]
How best to measure Energy-Dependent Lags?
The data: time- and energy-tagged photons -- (t_i, E_i), i = 1, 2, ..., N

Usual approach: bin the data in both time and energy, then find the peak of the cross-correlation function (across energy bands).

Entropy approach: define a transformation of the times, t'_i = f(t_i) = t_i + L(θ, E_i), where the lag L is a function of a parameter θ.

If θ is other than the correct value, the light curve for the transformed times will be smeared out. Hence the entropy of the light curve will be minimum for the correct value:

θ_optimum = argmin_θ [ Entropy( histogram( t_i + L(θ, E_i) ) ) ]

The lag estimate is then just L(θ_optimum, E).
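A minimal sketch of this entropy recipe (the linear lag law L(θ, E) = θ·E used in the example is my illustrative choice, not something specified on the slide):

```python
import numpy as np

def light_curve_entropy(times, bins=100):
    """Shannon entropy of the binned light curve; sharper pulses give lower entropy."""
    counts, _ = np.histogram(times, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def entropy_lag_fit(t, E, thetas, lag):
    """Shift each photon time by the trial lag law L(theta, E_i) and keep the
    theta that minimizes the entropy, i.e. that best un-smears the light curve."""
    entropies = [light_curve_entropy(t + lag(theta, E)) for theta in thetas]
    return thetas[int(np.argmin(entropies))]

# Hypothetical linear lag law, for illustration only: L(theta, E) = theta * E
lag = lambda theta, E: theta * E
```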
Previous estimate from cross-correlations: Ellis et al. 2002
Wavelet method: from Ellis et al., astro-ph/0510172
Voronoi Tessellations on 3+ Scales
Random space-time lattice (T. D. Lee), scale ~10^-35 meters
Points: micro-partons?
Cells: Planck-length cells
Blocks: Elementary Particles

GLAST Source Detection Algorithm
Points: Photons
Blocks: Point sources

Cluster detection algorithm, scale ~10^+22 meters
Points: Galaxies
Cells: Galaxy Neighborhoods
Blocks: Clusters, filaments, ...

Large Scale Structure
Points: Galaxies
Cells: Voids