My Wild & Crazy Idea: Causal Computational Learning Theory

Scott Aaronson
But first: whatever happened to my WACI from a few years ago, a Web 2.0 mathematics discussion site and conjecture/theorem repository?
There now exists such a site, Mathoverflow.net, which is everything I hoped for and more.
In just ~2 years, it's noticeably changed the practice of mathematics.
I had nothing to do with its creation.
PAC (Probably Approximately Correct) Learning
“Computational complexity theory meets statistics”
Given a collection of labeled examples (x1, f(x1)), …, (xm, f(xm)) drawn independently from some unknown distribution D, the problem is to output a hypothesis h such that h(x) = f(x) for most x ~ D, with high probability.
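As a minimal sketch of this setup (not from the talk): the Python below draws m labeled examples from a distribution D, outputs a hypothesis from a simple class, and estimates its error on fresh draws from D. The target f, the choice of D, and the hypothesis class of thresholds are all illustrative assumptions.

```python
import random

random.seed(0)

def f(x):
    # Unknown target concept: a threshold function on [0, 1].
    return int(x >= 0.37)

def draw(m):
    # Here D is taken to be the uniform distribution on [0, 1].
    xs = [random.random() for _ in range(m)]
    return [(x, f(x)) for x in xs]

def learn_threshold(sample):
    # Hypothesis class: threshold functions; output one consistent with the sample.
    positives = [x for x, y in sample if y == 1]
    t = min(positives) if positives else 1.0
    return lambda x: int(x >= t)

h = learn_threshold(draw(1000))

# Estimate err(h) = Pr_{x ~ D}[h(x) != f(x)] on fresh examples from D.
test = draw(10000)
error = sum(h(x) != y for x, y in test) / len(test)
print("estimated error:", error)  # small, with high probability over the training draw
```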
PAC-learning is a hugely successful model, but like most of statistics, it doesn't care about the distinction between correlation and causation.
The result: PAC explains how banks predict who will repay their loans, but not how Einstein predicted the bending of starlight by the sun.
To predict what will happen in novel situations, you need to know something about causal mechanisms, which often requires controlled experiments (together with prior knowledge about temporal direction, autonomy of subsystems, etc.).
Best theory of causality we currently have: Judea Pearl's "do-calculus"
Pr[rain | wet grass] ≠ Pr[rain | do(wet grass)]
My WACI Challenge for Theory
Traditional statistics : PAC-learning :: Pearl's do-calculus : what?
Potential applications: debugging, reconstructing gene regulatory networks…
Existing work in the direction I’m talking about:
- PAC-learning with membership and equivalence queries
- Angluin, Aspnes, Chen, Wu: "Learning a circuit by injecting values," STOC 2006
- Pearl’s IC algorithm
- Leakage-resilience in cryptography