a-1 - Stefan Dziembowski



Cryptography on Non-Trusted
Machines
Stefan Dziembowski
Idea
Design cryptographic protocols that are secure even on machines that are not fully trusted.
How to construct secure digital systems?
CRYPTO: very secure, with security based on well-defined mathematical problems.
MACHINE (PC, smartcard, etc.): not secure!
The problem
The CRYPTO runs on a MACHINE (PC, smartcard, etc.), and machines cannot be trusted!
1. Information leakage
2. Malicious modifications
Relevant scenarios
MACHINES:
• PCs: malicious software (viruses, trojan horses),
• specialized hardware: side-channel attacks.
Examples of side-channel attacks
• timing attack — measuring how much time
various computations take to perform,
• power monitoring attack — measure the power
consumption by the hardware during
computation,
• attacks based on leaked electromagnetic
radiation,
• acoustic cryptanalysis — exploit sound produced
during a computation,
• differential fault analysis – introduce faults in a
computation.
Type of information that can be learnt
• individual bits (probing attacks)
• more general functions (e.g. in the Hamming
attack the adversary learns the sum of secret
bits)
More on the practical attacks: Side Channel
Cryptanalysis Lounge
The standard view
Practitioners take care of the MACHINE (PC, smartcard, etc.): anti-virus software, intrusion detection, tamper resistance, …
Theoreticians take care of the CRYPTO: definitions, theorems, security reductions, … ("Implementation is not our business!")
Our model
The adversary has the (standard) black-box access to the cryptographic scheme, plus additional access to its internal data.
Plan
1. Private Circuits
   1. stateless circuits
   2. stateful circuits
2. Bounded-Retrieval Model
   1. Introduction
   2. Entity authentication
   3. Forward-Secure Storage
   4. Intrusion-Resilient Secret Sharing
   5. Leakage-Resilient Stream Cipher
3. Open Problems
Private Circuits
This part of the lecture is based on
[Ishai, Sahai, Wagner: Private Circuits: Securing
Hardware against Probing Attacks. CRYPTO
2003]
Motivation:
Cryptographic hardware can be subject to
“probing attacks”.
Probing attacks
The adversary can insert needles into the device and read off the internal values.
We will model the device as a Boolean circuit.
Randomized Boolean circuits
[Figure: a randomized Boolean circuit built from input gates, output gates, conjunction (and) gates, negation (neg) gates, and random-bit (rnd) gates, connected by "wires". The size of the circuit is the number of gates; its depth is the length of a longest input-to-output path.]
A t-limited adversary
Assumption: in each round the adversary can read off up to t wires of the circuit. The adversary doesn't need to be computationally bounded.
An idea
A transformation T maps a circuit C (for simplicity assume that C is deterministic) to a circuit C' = T(C) such that:
1. C and C' compute the same function.
2. The circuit T(C) is as secure as C, even if the adversary can read off t wires.
Problem
We want to require that "no adversary can get any information about the input a".
Problem: the adversary can always read a directly.
Solution
Surround the circuit C with an input encoder I and an output decoder O: I encodes the input a, C computes on the encoding, and O decodes the output b. I and O should not depend on C, and the adversary cannot read the wires from I and O.
The model
Suppose the adversary reads off some t wires of the circuit running on input a. The adversary then outputs some value x.
The security definition
For every C' and every input a: for every adversary that attacks C' there exists a simulator that has no access to C' (it only gets a), such that the distribution of the simulator's output is the same as the distribution of the adversary's output x.
The construction
We are now going to construct (T, I, O). We first present the main idea (which contains some errors), and then repair it.
Main tool: secret sharing
m-out-of-n secret sharing (here n = 5): the dealer's secret S is split into shares S1, …, S5 such that:
1. Every set of at least m players can reconstruct S.
2. Any set of fewer than m players has no information about S.
Secret sharing – more generally
Every secret sharing protocol consists of:
• a sharing procedure,
• a matching reconstruction procedure, and
• a security condition.
This lecture: n-out-of-n secret sharing.
Example
Suppose S ∈ {0,1}. The dealer selects uniformly at random S1, …, Sn ∈ {0,1} such that
S1 + … + Sn = S mod 2.
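This n-out-of-n XOR sharing takes only a few lines of code; a minimal sketch (the function names are mine):

```python
import secrets

def share(s: int, n: int) -> list[int]:
    """n-out-of-n sharing of a bit s: pick n-1 random bits and choose
    the last share so that all n shares XOR to s."""
    shares = [secrets.randbits(1) for _ in range(n - 1)]
    last = s
    for b in shares:
        last ^= b
    return shares + [last]

def reconstruct(shares: list[int]) -> int:
    """XOR all shares together to recover the secret bit."""
    s = 0
    for b in shares:
        s ^= b
    return s
```

Any n-1 of the shares are uniformly random bits, so they reveal nothing about s.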
Idea
Encode every bit of the input using m-out-of-m secret sharing, for m = t + 1.
Example (t = 2): the input encoder I maps each input bit a to shares (a1, a2, a3), random such that a1 + a2 + a3 = a mod 2 (and similarly b to (b1, b2, b3), and c to (c1, c2, c3)). Decoding is trivial: XOR the shares.
The transformation
[Figure: the transformation T replaces every wire a, b, c of the original circuit by a bundle of wires carrying the shares (a1, a2, a3), (b1, b2, b3), (c1, c2, c3), and every gate by a gadget operating on shares.]
How to handle negation?
Just negate the first share. Example (t = 4): if a is shared as (a1, a2, a3, a4, a5), then not a is shared as (not a1, a2, a3, a4, a5).
How to handle multiplication?
Given sharings (a1, a2, a3) of a and (b1, b2, b3) of b, how can the and gadget compute a sharing (c1, c2, c3) of c = ab?
How to handle multiplication?
Observation: since a = Σi ai mod 2 and b = Σj bj mod 2, we have
ab = (Σi ai) · (Σj bj) mod 2 = Σi Σj ai bj mod 2.
An idea
Arrange the products into a grid (here for sharings of length 4): row i contains ai b1, ai b2, ai b3, ai b4. Let each output share be the sum of one row:
ci = Σj ai bj = ai · Σj bj = ai · b (mod 2).
Problem: if the adversary can see that ci = 1, then she knows that b = 1.
Idea: add randomization…
An improved idea
Randomly flip some entries of the grid, and do it symmetrically: for every pair i < j pick a random bit zij and replace ai bj by ai bj xor zij, and aj bi by aj bi xor zij. Again let ci be the XOR of row i. Each zij enters the total sum twice, so the zij's cancel out and (c1, …, cm) is still a sharing of ab.
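The improved gadget can be sketched as follows (a simplified, illustrative version of the Ishai-Sahai-Wagner multiplication gadget; the helper names are mine):

```python
import secrets

def share_bit(s: int, m: int) -> list[int]:
    # m-out-of-m XOR sharing of the bit s
    sh = [secrets.randbits(1) for _ in range(m - 1)]
    acc = s
    for b in sh:
        acc ^= b
    return sh + [acc]

def reconstruct_xor(shares: list[int]) -> int:
    acc = 0
    for b in shares:
        acc ^= b
    return acc

def and_gadget(a_sh: list[int], b_sh: list[int]) -> list[int]:
    """Sharing of a AND b from sharings of a and b: build the grid of
    products a_i * b_j, flip each off-diagonal pair symmetrically with
    a random bit z_ij (each z_ij enters the total XOR twice, so it
    cancels), then XOR every row into one output share."""
    m = len(a_sh)
    grid = [[a_sh[i] & b_sh[j] for j in range(m)] for i in range(m)]
    for i in range(m):
        for j in range(i + 1, m):
            z = secrets.randbits(1)
            grid[i][j] ^= z
            grid[j][i] ^= z
    return [reconstruct_xor(row) for row in grid]
```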
Observation
The sharings (a1,a2,a3) and (b1,b2,b3) that enter an and gadget may not be "independent". Example: both inputs of an and gate may carry the same value a, so the gadget multiplies the sharing (a1,a2,a3) with itself.
Example (t = 2): the grid then contains the products ai aj. Suppose the adversary can observe that a3a1 = 1 and a3a2 = 1. Then she knows that a1 = a2 = a3 = 1. So she knows that a1 + a2 + a3 = 1 mod 2.
What is the reason? Some wires give information about two ai's.
A solution
Set m := 2t + 1. In other words, instead of (t+1)-out-of-(t+1) secret sharing use (2t+1)-out-of-(2t+1) secret sharing.
[Figure: for t = 2, m = 5, the 5×5 grid of products ai bj with the symmetric random flips zij; the rows are XOR-ed into the output shares c1, …, c5.]
The blow-up
The size of the circuit is increased by a factor O(t^2). The depth of the circuit is increased by a factor O(log d).
Stateful circuits
Usually a cryptographic device contains a "state". Example: a cipher with a key K, mapping plaintexts to ciphertexts. The adversary may:
1. observe the input-output behaviour,
2. insert some wires inside the device.
How to model it?
Stateful circuits
Stateful circuits have additional gates called "memory cells" (mem).
• Time is divided into rounds.
• In each round a mem gate outputs its input from the previous round.
• Hence mem gates can be used to store some state.
Notation
A circuit C with initial state (s0, s1, …, sn) is denoted C[s0, s1, …, sn].
A new definition of a t-limited adversary
Time is divided into rounds. In the i-th round the adversary:
1. chooses a set of t wires,
2. invokes C on some input ai,
3. learns the values on the wires chosen by him, and the output bi of the circuit.
Finally, the adversary outputs some value x.
What about the stateful circuits?
The initial state s may also need to be "transformed": the transformation T maps C[s] to C'[s'].
Note: encoding the input and decoding the output does not need to be protected.
How to define security
For every adversary attacking C'[s'] = T(C[s]) there exists a simulator, interacting with C[s] and unable to read its internal wires, such that the outputs x are distributed identically.
The construction (outline)
We use the encoding from the previous construction: the state s is stored as an encoding γ1, and at the end of each round C' produces a fresh encoding γ2 of the new state.
A subsequent paper
Y. Ishai, M. Prabhakaran, A. Sahai, and D. Wagner. Private Circuits II: Keeping Secrets in Tamperable Circuits. EUROCRYPT 2006.
They consider active attacks, i.e. the adversary can modify the circuit.
The problem
Computers can be infected by malware! The adversary installs a virus and retrieves some data. The virus can:
• take control over the machine,
• steal some secrets stored on the machine.
Can we run any crypto on such machines?
Is there any remedy?
If
the virus can download
all the data stored on the machine
then
the situation looks hopeless
(because he can “clone” the machine).
Idea:
Assume that he cannot do it!
Bounded-Retrieval Model
Make secrets so large that the adversary cannot
retrieve them completely.
Practicality? 500 GB ≈ $200.
The general model
The machine alternates between virus-free periods and periods in which the adversary installs a virus and retrieves some data. The total amount of retrieved data is bounded!
Our goal
Try to preserve as much security as possible (assuming
the scenario from the previous slide).
Of course
as long as the virus is controlling the machine nothing
can be done.
Therefore
we care about the periods when the machine is free of
viruses.
Two variants
How does the virus decide what to retrieve?
Variant 1 [D06a,D06b,CDDLLW07,DP07,DP08]: he can compute whatever he wants on the victim's machine.
Variant 2 [CLW06,…]: he can only access some individual bits on the victim's machine ("slow memory") (a bit similar to the "private circuits").
Can we implement anything in this model?
Yes! E.g.: entity authentication. We solve the following problem: how can the bank verify the authenticity of the user?
Entity authentication – the solution
The user and the bank share a huge random key R = (R1,…,Rt). The bank sends a challenge Y = {y1,…,ym}, a random set of indices in R. The user replies with f(Y,R) := (Ry1,…,Rym), and the bank verifies the answer.
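A toy sketch of the challenge-response protocol (bit lists instead of a gigabyte-sized R; the function names are mine):

```python
import secrets

def make_key(t: int) -> list[int]:
    """The huge shared key R = (R1, ..., Rt), a list of random bits."""
    return [secrets.randbits(1) for _ in range(t)]

def challenge(t: int, m: int) -> list[int]:
    """Y = {y1, ..., ym}: a random set of indices into R."""
    return [secrets.randbelow(t) for _ in range(m)]

def respond(Y: list[int], R: list[int]) -> list[int]:
    """f(Y, R) := (R_y1, ..., R_ym)."""
    return [R[y] for y in Y]

def verify(Y: list[int], R: list[int], answer: list[int]) -> bool:
    return answer == respond(Y, R)
```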
Security of the authentication protocol
Theorem [D06a,CDDLLW07]
An adversary that "retrieved" a constant fraction of R is not able to impersonate the user. (This of course holds in the periods when the virus is not on the machine.)
A related concept: the Bounded Storage
Model
This is related to the Bounded Storage Model (BSM)
[Maurer 1992]
In the BSM the security of the protocols is based on the
assumption that one can broadcast more bits than
the adversary can store.
In the BSM the computing power of the adversary may
be unlimited.
The Bounded-Storage Model (BSM)
The users share a short initial key K. A long random string, the randomizer R, is broadcast. The adversary can perform any computation on R, but the result U = h(R) that he stores has to be much smaller than R; then the randomizer disappears. The users derive X = f(K,R), and the adversary, knowing U = h(R), shouldn't be able to distinguish X from random.
Formally, f : {0,1}^M × {0,1}^T → {0,1}^N is a family {fK : {0,1}^T → {0,1}^N} for K ∈ {0,1}^M, and:
1. By choosing h : {0,1}^T → {0,1}^V the adversary divides the set {0,1}^T of all randomizers into 2^V classes.
2. Then a random R ∈ {0,1}^T is chosen.
3. The adversary learns h(R).
4. A random K ∈ {0,1}^M is chosen. The adversary learns K.
5. The value of fK(R) should "look almost random" to the adversary.
BSM functions
A function f : {0,1}^M × {0,1}^T → {0,1}^N is (ε,V)-BSM secure if for every function h : {0,1}^T → {0,1}^V we have that (for uniformly random K and R) f(K,R) is (expected) ε-close to uniform, even if one knows h(R) and K, i.e.:
d(f(K,R) | h(R),K) ≤ ε.
Later X can be used as a one-time pad
Compute X = f(K,R) and output the ciphertext Y = X xor M. If the adversary has negligible information about X, then this is secure!
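As a sketch of this pad step, with a hash-based `derive` as a toy stand-in for a BSM-secure f (a real BSM function reads only a small part of the huge R):

```python
from hashlib import sha256

def derive(K: bytes, R: bytes, n: int) -> bytes:
    # toy stand-in for X = f(K, R); NOT a real BSM-secure function
    out, ctr = b"", 0
    while len(out) < n:
        out += sha256(K + R + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(K: bytes, R: bytes, M: bytes) -> bytes:
    X = derive(K, R, len(M))
    return bytes(x ^ m for x, m in zip(X, M))  # Y = X xor M

def decrypt(K: bytes, R: bytes, Y: bytes) -> bytes:
    X = derive(K, R, len(Y))
    return bytes(x ^ y for x, y in zip(X, Y))
```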
Good BSM functions exist
Every randomness extractor is a BSM function. However, usually we require that f(K,R) can be computed by looking at just a small part of R. Such (ε,V)-BSM secure functions were constructed e.g. in [ADR 2002, Vadhan 2004, Lu 2004, DM04] (with V being a constant fraction of |R|).
The scheme of [DM04]
[Figure: the initial key K = (K1,…,Km) determines which bits of the long randomizer are read; the selected bits are XOR-ed to produce the derived key X.]
How is BSM related to our model?
It seems that the assumptions are opposite:

        transmission   storage
BSM     cheap          expensive
BRM     expensive      cheap
BSM vs. BRM
Bounded-Storage Model: R comes from a satellite; the adversary keeps a stored value U.
Bounded-Retrieval Model: R is stored on a computer; the adversary keeps a retrieved value U.
Consider again the authentication protocol: the bank sends a random K, the user replies with X = f(K,R), and the bank verifies.
Observation: in the authentication protocol one could use a BSM-secure function f.
Plan
1. Private Circuits
2. Bounded-Retrieval Model
   1. Introduction
   2. Entity authentication
   3. Forward-Secure Storage
      1. IT-secure
      2. computationally-secure
      3. a scheme with a conjectured hybrid security
      4. connections with the theory of Harnik and Naor
   4. Intrusion-Resilient Secret Sharing
   5. Leakage-Resilient Stream Cipher
3. Open Problems
Forward Secure Storage (FSS) - the motivation
Let (E,D) be a "standard" encryption scheme, and let C = E(K,M) for a key K and a message M. The adversary installs a virus and retrieves C. Then one of the following happens:
• the key K leaks to the adversary, or
• the adversary breaks the scheme.
In both cases the adversary can compute M.
The idea: use a BRM!
Design an encryption scheme such that
the ciphertext C is so large that the
adversary cannot retrieve it completely
message M
ciphertext C=Encr(K,M)
Forward-Secure Storage – a more detailed view
The adversary can compute an arbitrary function h of the ciphertext C = Encr(K,M) (of length t), retrieving the value U = h(C) with |U| << t. Later she learns K: can she compute M? This can be formalized in a standard way (e.g. game-based).
Computational power of the adversary
We consider the following variants:
• computational: the adversary is limited to poly-time,
• information-theoretic: the adversary is infinitely-powerful,
• hybrid: the adversary gains infinite power after he has computed the function h.
The hybrid variant models the fact that in the future the current cryptosystems may be broken!
Information-theoretic solution – a wrong idea
Take f secure in the BSM and encrypt as before: X = f(K,R), ciphertext (R, Y) with Y = X xor M. But by the Shannon theorem this encryption cannot work: the key is much shorter than the message!
What exactly goes wrong?
Suppose the adversary has some information about M; denote it W. He can see (R, f(K,R) xor M). So he can solve (for K) the equation W = f(K,R) xor M. If he has enough information about M, and K is short, he will succeed!
Idea: "blind" the message M!
A better idea
The key is a pair (K,Z). Compute X = f(K,R) and set Y = X xor Z xor M. The ciphertext is (R,Y).
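A sketch of this scheme (again with a hash as a toy stand-in for the BSM-secure f, and a small R; in the real scheme R is huge and f reads only a few of its bits):

```python
import secrets
from hashlib import sha256

def f(K: bytes, R: bytes, n: int) -> bytes:
    # toy stand-in for a BSM-secure function
    out, ctr = b"", 0
    while len(out) < n:
        out += sha256(K + R + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def fss_encrypt(K: bytes, Z: bytes, M: bytes):
    """Ciphertext (R, Y) with Y = f(K,R) xor Z xor M: the pad f(K,R)
    is blinded by the second key component Z (here |Z| = |M|)."""
    R = secrets.token_bytes(4096)  # huge in the real scheme
    Y = xor(xor(f(K, R, len(M)), Z), M)
    return R, Y

def fss_decrypt(K: bytes, Z: bytes, R: bytes, Y: bytes) -> bytes:
    return xor(xor(f(K, R, len(Y)), Z), Y)
```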
Why does it work?
Intuition: the adversary can compute any function h of R and Y = f(K,R) xor M xor Z. But Y is of no use to him, since it is xor-ed with a random string Z! So if this FSS scheme can be broken, then also the BSM function f can be broken (by an adversary that uses the same amount of memory).
Problem with the information-theoretic
scheme
The secret key needs to be larger than the
message!
What if we want the key to be shorter?
We need to switch to the computational
setting...
Computational FSS (with a short key)
Let (Encr,Decr) be an IT-secure FSS and (E,D) a standard encryption scheme. Define
Encr1(K, M) = ( Encr(K, K'), E(K', M) ),
where K' is a random key for the standard encryption scheme. The first component is large, the second one is small.
Intuition: when the adversary learns K, he has no idea about K', and therefore no idea about M.
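A sketch of Encr1, with the IT-secure FSS Encr and the standard scheme E passed in as abstract functions (the lecture leaves both abstract; the toy `xor` here is just an illustrative one-time pad):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encr1(K: bytes, M: bytes, fss_encrypt, E):
    """Encr1(K, M) = (Encr(K, K'), E(K', M)): a fresh short key K' is
    wrapped in the large IT-secure FSS component, and M is encrypted
    under K' with the standard scheme E."""
    K_prime = secrets.token_bytes(16)
    big = fss_encrypt(K, K_prime)   # large ciphertext component
    small = E(K_prime, M)           # small ciphertext component
    return big, small
```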
Hybrid security
What about the hybrid security? Recall the scenario: the adversary retrieves U = h(C) from the ciphertext C = Encr(K,M), and later learns K and gains infinite computing power. Can she compute M?
Is the scheme Encr1(K,M) = ( Encr(K,K'), E(K',M) ) secure in the hybrid model? No: the adversary retrieves only the (small) second part E(K',M). Later, when she gets infinite computing power, she can recover the message M! Thus, the scheme is not secure in the hybrid model!
A scheme (Encr2,Decr2)
Does there exist an FSS scheme with hybrid security (and a short key)? Idea: generate K pseudorandomly!
Let (Encr,Decr) be an IT-secure FSS and G a cryptographic PRG. Define
Encr2(S, M) = Encr(G(S), M),
i.e. the key is a short seed S and the FSS key is K = G(S).
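A sketch of Encr2 (a SHA-256 counter-mode expansion is a toy stand-in for the PRG G, and the IT-secure FSS encryption is passed in as a function):

```python
from hashlib import sha256

def G(seed: bytes, n: int) -> bytes:
    """Toy PRG stand-in: expand the short seed S into an n-byte key."""
    out, ctr = b"", 0
    while len(out) < n:
        out += sha256(seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encr2(S: bytes, M: bytes, fss_encrypt, key_len: int = 32):
    """Encr2(S, M) = Encr(G(S), M): the FSS key K is generated
    pseudorandomly from the short seed S."""
    return fss_encrypt(G(S, key_len), M)
```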
Is the scheme from the previous slide secure?
It cannot be IT-secure, but is it
• computationally-secure?
• secure in the hybrid model?
Intuition: it should be secure, by some standard argument like: suppose it is not secure; then we can distinguish the output of G(S) from random. But this argument doesn't work, because at the end the adversary learns S.
(Encr2,Decr2) - problems with the proof
Proving the security of (Encr2,Decr2) may be
hard (both in the computational and in the
hybrid model).
Why?
Because
if we generalize the notion of PRG a bit
then
there is a counterexample!
Summary

                    IT security   hybrid security   comp. security
the first scheme    secure        secure            secure
the second scheme   not secure    not secure        secure
the third scheme    not secure    maybe secure      maybe secure
A complexity-theoretic view
Suppose the adversary wants to know if a given C is a ciphertext of some message M. Consider the NP-language
L = {C : there exists K such that C = Encr(K,M)}.
Standard encryption: is C in L?
FSS: can we compress C to some U with |U| << |C|, so that later we can decide if C is in L based only on U, using infinite computing power?
The theory of Harnik and Naor
This question was recently studied in:
Danny Harnik, Moni Naor
On the Compressibility of NP Instances and
Cryptographic Applications
FOCS 2006
See also:
Bella Dubrov, Yuval Ishai
On the Randomness Complexity of Efficient Sampling
STOC 2006
Compressibility of NP Instances
Informally, an NP language L is compressible if there exists an efficient algorithm that compresses every string X to a shorter string U, in such a way that an infinitely-powerful solver can decide if X is in L based only on U.
Proving that some language is incompressible (from standard assumptions) is an open problem. This is why showing an FSS scheme provably secure in the hybrid model may be hard!
a-out-of-a secret sharing
The dealer's secret S is split into shares Q0, …, Q4 (here a = 5) such that:
1. All a players can reconstruct S.
2. Any set of fewer than a players has no information about S.
Why is secret sharing useful?
Suppose the users store the shares Q0, …, Q4 on their PCs. An adversary that got access to some proper subset of these machines (by e.g. installing a virus) learns nothing about S.
Question
What if the adversary can access all of the machines (but not all at the same time)? Assumption: one corruption at a time!
Problem: corrupting the machines one after another, the adversary eventually knows all the shares and reconstructs S.
How to deal with this problem?
• Proactive security
[Ostrovsky and Yung, PODC’91]:
add refreshment rounds.
• Our approach:
use
the Bounded-Retrieval Model.
Intrusion-Resilient Secret Sharing (IRSS) = Secret Sharing secure in the BRM
A short secret S is shared into shares Q0, …, Q4 that are large! (e.g. 10 GB)
Does it make sense?
• How to define security?
• What about the reconstruction?
These questions are related.
The two-party case
The adversary may "hop" between the parties: Alice (holding Q0) and Bob (holding Q1), learning h0(Q0), h1(Q1), h2(Q0), h3(Q1), …
L-admissible, V-bounded adversaries
We say that an adversary is L-admissible if he makes at most 2L corruptions. We say that an adversary is V-bounded if he retrieves at most V bits from each player. So in total the adversary learns h0(Q0), h1(Q1), h2(Q0), h3(Q1), …, h2L-2(Q0), h2L-1(Q1), each value of length at most V.
How to define security?
Intuition: every L-admissible, V-bounded adversary should learn almost no information about the shared secret S. How to formalize this?
A naive idea:
1. the dealer chooses S and shares it,
2. the adversary chooses functions h0,…,h2L-1 and learns the corresponding values.
The adversary should have almost no information about S, i.e.: even given h0(Q0),…,h2L-1(Q1), the distribution of S should be close to uniform.
How to define security?
A random S is shared; the adversary learns h0(Q0),…,h2L-1(Q1). Then a random c ∈ {0,1} is chosen, and the adversary receives S' := S if c = 0, and a random string otherwise. He has to guess c.
We say that the scheme is (ε, L, V)-secure if every adversary guesses c correctly with probability at most 0.5 + ε.
Statistical distance
Let α, α' be probability distributions over some set A. The statistical distance between α and α' is defined as
δ(α; α') := ½ Σa | α(a) - α'(a) |.
If β is the uniform distribution over A, then
d(α) := δ(α; β)
is the distance of α from uniform. This notation can be extended to random variables.
Conditional statistical distance
Let A, B be random variables and let U be a random variable distributed uniformly over the same set as A. Then
d(A | B) := δ((A,B) ; (U,B))
is the distance of A from uniformity, given B.
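For finite distributions these quantities are easy to compute; a small sketch (dictionaries map outcomes to probabilities):

```python
def stat_dist(p: dict, q: dict) -> float:
    """Statistical distance: delta(p; q) = 1/2 * sum_a |p(a) - q(a)|."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in support)

def dist_from_uniform(p: dict) -> float:
    """d(p) := delta(p; uniform distribution over p's support)."""
    n = len(p)
    return stat_dist(p, {a: 1.0 / n for a in p})
```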
An equivalent definition
We say that the scheme is (ε, L, V) – secure if for
every L-admissible V-bounded adversary
d(S | h0(Q0),...,h2L-1(Q2L-1)) ≤ ε.
Intuition:
“even given h0(Q0),...,h2L-1(Q2L-1), the distribution
of S should be close to uniform.”
But is it the right approach?
Not really:
we shouldn’t assume that S is random...
This is because the adversary may have some a priori
knowledge about S.
But this notion will be useful as a building-block.
Call it:
Intrusion-Resilient Random Secret Sharing
What is the right definition?
Indistinguishability game!
the dealer chooses
random bit b
and shares Sb
S0,S1
h0(Q0)
h1(Q1)
...
...
the adversary
chooses functions
h0,...,h2L-1
and learns the
corresponding
values
h2L-1(Q1)
the adversary
has to guess b
Security definition
We say that a secret sharing scheme is an (ε,L,V)-secure IRSS if any L-admissible, V-bounded adversary (computationally bounded or unbounded, depending on the settings) wins the indistinguishability game with probability at most ½ + ε.
Reconstruction procedure
Requirement: small communication complexity.
Trivial observation: the reconstruction procedure cannot take fewer than 2L - 1 messages (intuitively, an L-admissible adversary could otherwise carry it out himself). We require that reconstruction takes exactly 2L messages. It should also be efficient!
Lemma
Every (ε,L,V)-secure IRRSS scheme Θ can be transformed into a (2^N · ε, L, V)-secure IRSS scheme Θ' (where N is the length of the shared secret).
This is OK because we can construct an (ε,L,V)-secure IRRSS scheme Θ for ε << 2^-N.
How to construct Θ’?
To share S:
1. Share a random secret S’ using Θ (let the
shares be Q0 and Q1) and
2. Let one of the users (say: Alice) store S’  S.
I.e. the shares are:
(Q0, S’  S) and Q1
In other words:
Encrypt S’ with S using the “one-time pad”
encryption.
We have to show that:
an L-admissible, V-bounded adversary A that breaks the IRSS Θ' with probability ½ + δ gives an L-admissible, V-bounded adversary E that breaks the IRRSS Θ with probability at least ½ + 2^-N · δ. (Setting ε := 2^-N · δ we will be done.)
How to do it? Simulate!
Intuition
E will try to guess the shared secret S. He succeeds with probability 2^-N.
The simulation: E attacks the IRRSS Θ, in which a random secret S is shared into Q0 and Q1. E chooses a random Z and stores it, and runs A (which attacks the IRSS Θ' and chooses S0, S1). Whenever A submits a leakage function h0 for Alice's share, E submits h0(Z, ·) and returns h0(Z, Q0) to A; A's queries h1 for Bob's share are forwarded as h1(Q1); and so on. Finally A guesses b, and E receives S' (equal to S if c = 0, and random otherwise, for the challenger's random c ∈ {0,1}). E checks whether Z = S' ⊕ Si for some i ∈ {0,1}:
• yes and b = i: output c = 0,
• yes and b ≠ i: output c = 1,
• no: output a random bit.
Since Z and S' ⊕ Si are independent, the probability that Z = S' ⊕ Si for some i is equal to 2 · 2^-N = 2^(1-N); moreover, each of the cases i = 0 and i = 1 then has probability 0.5.
So, suppose that Z = S' ⊕ Si.
First suppose also that c = 0 (so S' = S). Then we know that Z = S ⊕ Si for some i ∈ {0,1}, and E outputs c = 0 exactly when A's guess b equals i; so the probability that E outputs c = 0 is equal to the probability that A guesses b correctly.
Now suppose that c = 1 (so S' is random). Then the cases "b = i" (output c = 0) and "b ≠ i" (output c = 1) both have probability 0.5.
Now we need to calculate the probabilities.
Let ½ + δ := the probability that A guesses b correctly. We want the probability that E guesses c correctly.
• With probability 2^(1-N), Z = S' ⊕ Si for some i ∈ {0,1}:
  – c = 0 (prob. 0.5): P(correct guess of c) = ½ + δ; contribution 2^(1-N) · 0.5 · (½ + δ).
  – c = 1 (prob. 0.5): P(correct guess of c) = 0.5; contribution 2^(1-N) · 0.5 · 0.5.
• With probability 1 - 2^(1-N), E outputs a random bit: P(correct guess of c) = 0.5; contribution (1 - 2^(1-N)) · 0.5 = 0.5 - 2^-N.
Sum: ½ + 2^-N · δ.
Finalizing the proof
Clearly, if A is L-admissible and V-bounded, then so is E. Hence we are done!
IRRSS – the construction
Idea: we need to construct a two-party function {0,1}^T × {0,1}^T → {0,1}^N that
• can be computed by exchanging 2L short messages, but
• cannot be computed (in a very strong sense) by exchanging 2L - 1 messages of length V.
An observation
Let f : {0,1}^N × {0,1}^T → {0,1}^N be (ε,V)-BSM secure. Alice holds a uniformly random K0, Bob holds R1, and K1 := f(K0, R1). A V-bounded adversary that wants to learn K1 needs to corrupt Alice before he corrupts Bob!
How to force an adversary to "hop"
Let f : {0,1}^N × {0,1}^T → {0,1}^N be (ε,V)-BSM secure. Alice holds (K0, R0), Bob holds R1; set K1 := f(K0, R1) and K2 := f(K1, R0). A V-bounded adversary has to alternate between the parties to follow the chain.
Sharing a random secret S
The dealer picks a uniformly random K0 and internally calculates the chain
K1 := f(K0, R1), K2 := f(K1, R0), K3 := f(K2, R1), …, K2L-1 := f(K2L-2, R1),
and sets the shared secret to S = f(K2L-1, R0). Alice's share is (K0, R0); Bob's share is R1.
Reconstruction
Alice sends K0; Bob replies with K1 := f(K0, R1); Alice sends K2 := f(K1, R0); Bob replies with K3 := f(K2, R1); …; after 2L messages, using K2L-1 := f(K2L-2, R1), Alice outputs S := f(K2L-1, R0).
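The chain itself is straightforward to express; a sketch with a hash standing in for the BSM-secure f (a real f reads only a few positions of the huge R0 and R1, which is the point of the model):

```python
from hashlib import sha256

def f(K: bytes, R: bytes) -> bytes:
    # toy stand-in for an (eps,V)-BSM secure function
    return sha256(K + R).digest()

def derive_secret(K0: bytes, R0: bytes, R1: bytes, L: int) -> bytes:
    """Run the chain K1 = f(K0,R1), K2 = f(K1,R0), ... for 2L steps;
    the last value is the shared secret S = f(K_{2L-1}, R0)."""
    K = K0
    for i in range(1, 2 * L + 1):
        K = f(K, R1 if i % 2 == 1 else R0)  # odd steps use R1, even R0
    return K
```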
Theorem 1
Suppose f : {0,1}^N × {0,1}^T → {0,1}^N is (ε,V)-BSM secure. Then, for every L, the 2-out-of-2 IRRSS scheme constructed on the previous slides is (2 · L · ε, L, V)-secure.
Main proof idea: induction along the chain K1 := f(K0,R1), K2 := f(K1,R0), …, K2L-1 := f(K2L-2,R1): "each hop adds one ε to the distance from uniformity."
Proof by induction (over i): for every i,
d(Ki | (h0(R0),K0), …, (hi-1(R(i-1) mod 2), Ki-1)) ≤ i · ε;
for the last i this bounds the distance of S from uniformity.
Note: we strengthen the induction hypothesis by adding the Ki's. This can be done safely because we always have
d(A | B,C) ≥ d(A | B).
The basis of the induction
For i = 0 the hypothesis is equivalent to d(K0) ≤ 0, which holds trivially.
The inductive step
To simplify the presentation we show the inductive step for i = 3 (recall K1 := f(K0,R1), K2 := f(K1,R0), K3 := f(K2,R1)). We have to show that
d(K2 | (h0(R0),K0), (h1(R1),K1)) ≤ 2 · ε
IMPLIES
d(K3 | (h0(R0),K0), (h1(R1),K1), (h2(R0),K2)) ≤ 3 · ε.
First observation
d(K3 | (h0(R0),K0), (h1(R1),K1), (h2(R0),K2)) equals
d(K3 | (h0(R0),K0), (h1(R1),K1), K2).
Intuition: h2(R0) "doesn't matter" once you know K2. (The proof is given in the paper.)
Abbreviate T := h(R0,R1,K0,K1) for the retrieved values ((h0(R0),K0),(h1(R1),K1)). We have to show:
d(K2 | T) ≤ 2 · ε IMPLIES d(K3 | T, K2) ≤ 3 · ε.
From the BSM security of f: if K is fresh and uniform, then d(f(K,R1) | T, K) ≤ ε.
Fact (cf. Ex. 3, Homework): d(f(K2,R1) | T, K2) ≤ d(f(K,R1) | T, K) + d(K2 | T).
Hence d(K3 | T, K2) = d(f(K2,R1) | T, K2) ≤ ε + 2 · ε = 3 · ε. QED
IRSS
A (2 · L · ε, L, V)-secure 2-out-of-2 IRRSS scheme exists, hence a (2^N · 2 · L · ε, L, V)-secure 2-out-of-2 IRSS scheme exists (where N is the length of the shared secret). Since there exist BSM-secure functions with ε << 2^-N, we are done!
The multi-party case
How to extend the two-party case to the multi-party case? The "hop" becomes a "loop" through all the players: to learn the secret the adversary needs to make more than L loops.
Theorem
Suppose f is (ε,V)-BSM secure. Then, for every a and L, there exists a (2^N · a · L · ε, L, V)-secure a-out-of-a IRSS scheme (where N is the length of the shared secret; the previous theorem was the case a = 2).
Computationally-secure IRSS
Note: a (2^N · a · L · ε, L, V)-secure IRSS scheme is useful only if N < -log ε.
Problem: for the BSM functions log(1/ε) < |K|, and hence each message sent in the reconstruction procedure is longer than the shared secret!
Solution: share a short key for symmetric encryption and encrypt S with it (let one of the users store the ciphertext).
The communication complexity can be reduced even further, assuming that some problem is incompressible [Harnik and Naor 2006].
Simultaneous corruptions
Q: What if the adversary corrupts more than one player at once?
Answer 1: if the amount of data that he can exchange between the corrupted parties is limited, then our scenario also covers this case.
Answer 2: if the shares "leak completely" then we are also OK.
In general: an open problem.
Leakage-resilient stream ciphers
We construct a stream cipher that is secure against a very large and well-defined class of leakages.
Stream ciphers ≈ pseudorandom generators: the cipher S expands a short key X into a long stream K. A computationally bounded adversary should not be able to distinguish K from random.
How do stream ciphers work in practice?
The stream K is generated in rounds (one block per round): from the short key X the cipher outputs K1, K2, K3, K4, … over time.
An equivalent security definition
For every round: given the previous blocks K1, …, Ki (which the adversary knows), the next block Ki+1 should look random.
Leakage-resilient stream cipher - the model
Our assumption: there is a leakage each time a block Ki is generated (i.e. leakage occurs in every round). The details follow…
Examples of the "leakage functions" from the literature:
• Y. Ishai, A. Sahai, and D. Wagner. Private Circuits: Securing Hardware against Probing Attacks. The adversary can learn the value of some wires of a circuit that computes the cryptographic scheme.
• Another example (a "Hamming attack"): the adversary can learn the sum of the secret bits.
We consider a very general class of leakages
In every ith round the adversary chooses a poly-time computable
“bounded-output function”
h : {0,1}^n → {0,1}^m, for m < n,
and learns h(X).
We say that the adversary “retrieved m bits” (in a given round).
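For instance, the “Hamming attack” mentioned earlier fits this class: it is a poly-time function whose output, roughly log2(n) bits for an n-bit secret, is much shorter than the input. A minimal sketch:

```python
def hamming_leakage(x_bits):
    """A 'Hamming attack' leakage function h: the adversary learns only
    the number of 1-bits of the secret -- a poly-time computable function
    with a short output (about ceil(log2(n+1)) bits for n input bits)."""
    return sum(x_bits)

secret = [1, 0, 1, 1, 0, 0, 1, 0]
leak = hamming_leakage(secret)  # the adversary retrieves 3 bits of
                                # information (the value 4), not the
                                # 8 secret bits themselves
```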
How much leakage can we tolerate?
In our construction the total number of retrieved bits will be
larger than the length of the secret key X (but in every round the
number of retrieved bits, a parameter of the construction, will be
much less than |X|).
How can we achieve it?
by key evolution!
Key evolution
In each round the secret key X gets refreshed: X0 → X1 → X2 → X3 → ...,
and the block Ki is computed from Xi-1 (K1 from X0, K2 from X1,
and so on).
Assumptions:
• key evolution has to be deterministic (no refreshing with
external randomness),
• also the refreshing procedure may cause leakage.
How to define security?
Is “indistinguishability” possible?
Problem
If the adversary can “retrieve” just one bit of Ki then he can
distinguish it from random...
Solution
Indistinguishability will concern the “future” keys Ki.
Security “without leakage”
The keys evolve X0 → X1 → X2 → ...; given the blocks the adversary
already knows (K1, ..., Ki), the next block Ki+1 should look random.
Security “with leakage”
In every round i the adversary additionally chooses a leakage
function hi and learns hi(Xi-1). Given K1, ..., Ki and
h1(X0), ..., hi(Xi-1), the next block Ki+1 should still look
random.
Key evolution – a problem
Recall that:
1. the key evolution is deterministic,
2. the “leakage function hi” can be any poly-time function.
Therefore: the function hi can always compute the “future” keys.
What to do?
We use the principle introduced in:
S. Micali and L. Reyzin. Physically Observable Cryptography.
TCC 2004:
“only computation leaks information”,
in other words:
“untouched memory cells do not leak information”.
Divide the memory into three parts: L, C and R
• L is accessed only in the even rounds,
• C is accessed always,
• R is accessed only in the odd rounds.
Starting from (L0, C0, R0): in round 1 the parts C and R get
modified while L stays unmodified (L1 = L0); in round 2 the parts
C and L get modified while R stays unmodified; in round 3 again C
and R get modified; and so on.
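The even/odd access schedule can be made explicit in a few lines of Python (a hypothetical helper, just to spell out which parts are touched, and hence may leak, under "only computation leaks information"):

```python
def touched_parts(round_no: int):
    """Which memory parts are accessed (and hence may leak) in a
    given round: C always, R in odd rounds, L in even rounds."""
    return ("C", "R") if round_no % 2 == 1 else ("C", "L")

schedule = [touched_parts(i) for i in range(1, 5)]
# rounds 1..4 touch: (C, R), (C, L), (C, R), (C, L)
```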
Our cipher – the outline
the key of the cipher = “the initial memory contents (L0, C0, R0)”
In every round the cipher S updates the state, leaving L unmodified
in the odd rounds and R unmodified in the even rounds:
(L0, C0, R0) → (L1, C1, R1) → (L2, C2, R2) → (L3, C3, R3) → ...
The output
The output is the contents of the “central” part of the memory:
C → K, so the state becomes (L0, K0, R0). In every round the
cipher outputs the current key Ki, and all the keys Ki will be
given “for free” to the adversary.
The details of the model
The initial state is (L0, K0, R0). In round 1 the adversary
chooses h1 and learns h1(R0) and K1; in round 2 he chooses h2 and
learns h2(L1) and K2; in round 3 he learns h3(R2) and K3; and so
on. Given everything the adversary knows, the next key should
look random.
Leakage-resilient stream cipher - the construction
A tool: IRRSS!
The memory holds (L, K0, R), and the keys are computed by
alternating between the two halves:
K1 = f(K0, R)
K2 = f(K1, L)
K3 = f(K2, R)
...
A fact from [DP07]
Even if a constant fraction of L and R leaks, the keys K1, K2, ...
look “almost uniform”.
Idea: “add key evolution to [DP07]”
What to do?
Use a pseudorandom generator (prg) in the following way: instead
of computing K1 = f(K0, R), compute
(K1, Y1) = f(K0, R0)
and refresh the part that was just read:
R1 = prg(Y1).
Our scheme
Starting from (L0, K0, R0):
round 1: (K1, Y1) = f(K0, R0),  R1 = prg(Y1),  L1 = L0
round 2: (K2, Y2) = f(K1, L1),  L2 = prg(Y2),  R2 = R1
round 3: (K3, Y3) = f(K2, R2),  R3 = prg(Y3),  L3 = L2
...
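A minimal executable sketch of the alternating scheme, with SHA-512 standing in for the function f and a hash-counter prg (both are illustrative stand-ins, not the primitives the actual security proof requires):

```python
import hashlib

def f(k: bytes, part: bytes):
    """Stand-in for f: derive the next key K_i and a prg seed Y_i
    from the current key and the accessed memory half."""
    d = hashlib.sha512(k + part).digest()
    return d[:32], d[32:]  # (K_i, Y_i)

def prg(seed: bytes, n: int) -> bytes:
    """Stand-in pseudorandom generator: expand seed to n bytes."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

class LeakageResilientStream:
    """Sketch of the alternating-memory stream cipher: each round
    touches only K and one of the halves L/R, and refreshes the
    half that was read with prg(Y_i)."""
    def __init__(self, L0: bytes, K0: bytes, R0: bytes):
        self.L, self.K, self.R = L0, K0, R0
        self.round = 0

    def next_block(self) -> bytes:
        self.round += 1
        if self.round % 2 == 1:           # odd round: K and R accessed
            self.K, y = f(self.K, self.R)
            self.R = prg(y, len(self.R))  # refresh the part just read
        else:                             # even round: K and L accessed
            self.K, y = f(self.K, self.L)
            self.L = prg(y, len(self.L))
        return self.K                     # output block K_i
```

Key evolution is deterministic, so two instances initialized with the same (L0, K0, R0) produce the same stream.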
Our results (1/2)
Assume the existence of pseudorandom generators. Then the cipher
constructed on the previous slides is secure against an adversary
that in every round retrieves
λ = ω(log(length of the key))
bits. This covers many real-life attacks (e.g. the “Hamming
attack”).
Our results (2/2)
Assume the existence of pseudorandom generators secure against
exponential-size circuits. Then the cipher constructed on the
previous slides is secure against an adversary that in every round
retrieves
λ = Θ(length of the key)
bits.
Plan
1. Private Circuits
2. Bounded-Retrieval Model
   1. Introduction
   2. Entity authentication
   3. Forward-Secure Storage
   4. Intrusion-Resilient Secret Sharing
   5. Leakage-Resilient Stream Cipher
3. Open Problems
Extend (and unify) the existing models
“Private circuits”:
• strong results
• weaker model
Bounded-Retrieval Model:
• weaker results
• strong model
Is there anything in between?
Key evolution
S0 → S1 → S2 → …
[figure: information vs. time]
Are there some alternatives to the “only computation leaks
information” paradigm?
Active attacks?
In the BRM we considered only passive attacks.
Can we have some interesting results when the adversary can modify
the circuit?
Bibliography
[ADR02] Y. Aumann, Y. Z. Ding, M. O. Rabin. Everlasting Security in the Bounded Storage Model. IEEE
Transactions on Information Theory, 2002.
[CDDLLW07] D. Cash, Y. Z. Ding, Y. Dodis, W. Lee, R. J. Lipton, and S. Walfish. Intrusion-Resilient Key Exchange in
the Bounded Retrieval Model. TCC 2007.
[D06a] S. Dziembowski. Intrusion-Resilience via the Bounded-Storage Model. TCC 2006.
[D06b] S. Dziembowski. On Forward-Secure Storage. CRYPTO 2006.
[DLW06] G. Di Crescenzo, R. J. Lipton, and S. Walfish. Perfectly Secure Password Protocols in the Bounded
Retrieval Model. TCC 2006.
[DM04] S. Dziembowski and U. Maurer. Optimal Randomizer Efficiency in the Bounded-Storage Model. Journal
of Cryptology, 2004 (preliminary version in STOC 2002).
[DP07] S. Dziembowski and K. Pietrzak. Intrusion-Resilient Secret Sharing. FOCS 2007.
[DP08] S. Dziembowski and K. Pietrzak. Leakage-Resilient Cryptography. FOCS 2008.
[IPSW06] Y. Ishai, M. Prabhakaran, A. Sahai, and D. Wagner. Private Circuits II: Keeping Secrets in Tamperable
Circuits. EUROCRYPT 2006.
[ISW03] Y. Ishai, A. Sahai, and D. Wagner. Private Circuits: Securing Hardware against Probing Attacks. CRYPTO
2003.
[L04] C. J. Lu. Encryption against Storage-Bounded Adversaries from On-Line Strong Extractors. Journal of
Cryptology, 2004.
[M92] U. Maurer. Conditionally-Perfect Secrecy and a Provably-Secure Randomized Cipher. Journal of
Cryptology, 1992.
[P09] K. Pietrzak. A Leakage-Resilient Mode of Operation. EUROCRYPT 2009, to appear.
[V04] S. Vadhan. Constructing Locally Computable Extractors and Cryptosystems in the Bounded-Storage
Model. CRYPTO 2003; Journal of Cryptology, 2004.