Implementation using Combinators
CS321 Functional Programming 2
This approach builds upon the λ-calculus model.
The aim is to translate programs into graphs of combinators and
constant identifiers.
Combinators are simply functions which have no free (global)
variables.
By translating into combinators we can get rid of all variables.
© JAS 2005
5-1
Example Combinators
Identity                 I   λx.x
Permutator               C   λfxy.fyx
Compositor               B   λfxy.f(xy)
Cancellator              K   λxy.x
Distributor              S   λxyz.xz(yz)
Fixed point combinator   Y   λh.((λx.h(xx))(λx.h(xx)))
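These combinators can be sketched as curried Python functions, one lambda per argument (an illustration, not part of the original handout). Y as written loops forever under Python's eager evaluation, so it is left out here; the others behave exactly as their λ-definitions say, and we can check the later claim that I = SKK.

```python
# Each combinator as a curried function: one argument per lambda.
I = lambda x: x                               # identity:    I x       = x
C = lambda f: lambda x: lambda y: f(y)(x)     # permutator:  C f x y   = f y x
B = lambda f: lambda x: lambda y: f(x(y))     # compositor:  B f x y   = f (x y)
K = lambda x: lambda y: x                     # cancellator: K x y     = x
S = lambda x: lambda y: lambda z: x(z)(y(z))  # distributor: S x y z   = x z (y z)
# (Y = λh.(λx.h(xx))(λx.h(xx)) diverges under Python's eager evaluation
#  and so is omitted from this sketch.)

# S K K behaves as the identity: S K K a = K a (K a) = a
assert S(K)(K)(42) == 42
# C flips argument order: C (-) 1 10 = 10 - 1
assert C(lambda a: lambda b: a - b)(1)(10) == 9
# B composes: B len str n = len(str(n))
assert B(len)(str)(1234) == 4
```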
It can be shown that all the combinators can be expressed in
terms of S and K.
(We have already seen that I = SKK).
S and K can thus be seen as “primitive” combinators.
If everything can be expressed in terms of S and K then we
can define an SK-reduction machine which works by only
evaluating S’s and K’s. To use such a machine we have to
translate our code into combinators.
An expression is
• an identifier
• a λ-expression consisting of
– a bound variable part which is a variable name
– a body which is an expression
• an application consisting of
– an operator which is an expression
– an operand which is an expression
Now to compile an expression we have to consider how to compile
each possible alternative.
cmp(E) is defined as follows:

if E is an identifier
    cmp(E) => E
if E is the λ-exp λx.x
    cmp(E) => I
if E is the λ-exp λx.y
    cmp(E) => K y
if E is the λ-exp λx.λy.E'
    cmp(E) => cmp(λx.cmp(λy.E'))
if E is the application F A
    cmp(E) => cmp(F) cmp(A)
if E is the λ-exp λx.F A
    cmp(E) => S (cmp(λx.F)) (cmp(λx.A))
otherwise
    "error in expression"
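The case analysis above translates almost directly into code. The following is an illustrative Python version (the names `Var`, `Lam`, `App`, `cmp` and `show` are my own, and constants such as 1 are treated as identifiers); it reproduces the unoptimised successor code (S(S(K+)I)(K1)) that appears later in the handout.

```python
from dataclasses import dataclass

# A minimal expression representation: identifiers, λ-abstractions, applications.
@dataclass
class Var:
    name: str

@dataclass
class Lam:
    var: str
    body: object

@dataclass
class App:
    fun: object
    arg: object

def cmp(e):
    if isinstance(e, Var):                    # identifier: unchanged
        return e
    if isinstance(e, App):                    # cmp(F A) => cmp(F) cmp(A)
        return App(cmp(e.fun), cmp(e.arg))
    x, b = e.var, e.body                      # e is λx.b
    if isinstance(b, Var):                    # λx.x => I ;  λx.y => K y
        return Var("I") if b.name == x else App(Var("K"), b)
    if isinstance(b, Lam):                    # λx.λy.E' => cmp(λx.cmp(λy.E'))
        return cmp(Lam(x, cmp(b)))
    # b is an application F A:  λx.F A => S (cmp(λx.F)) (cmp(λx.A))
    return App(App(Var("S"), cmp(Lam(x, b.fun))), cmp(Lam(x, b.arg)))

def show(e):
    if isinstance(e, Var):
        return e.name
    f, a = show(e.fun), show(e.arg)
    return f + (f"({a})" if isinstance(e.arg, App) else a)

# successor: λn.(+ n) 1 compiles to S(S(K+)I)(K1)
succ = Lam("n", App(App(Var("+"), Var("n")), Var("1")))
assert show(cmp(succ)) == "S(S(K+)I)(K1)"
```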
This simple-minded compilation yields some rather long
pieces of code, but fortunately some optimisations are
possible. These optimisations were suggested by Dave
Turner (Kent) in his implementation of the language SASL,
which begat KRC, which begat Miranda.
S (K E1) (K E2)  =  K (E1 E2)
S (K E1) I       =  E1
S (K E1) E2      =  B E1 E2
S E1 (K E2)      =  C E1 E2
It should be noted that the third and fourth optimisations are
only done if the first two are not applicable.
To include these optimisations in our compilation function we
substitute the following for the compilation of expressions of the
form λx.FA:

cmp(E) = opt(cmp(λx.F), cmp(λx.A))

where opt(E1,E2) is defined:

if E1 = (K E3) and E2 = (K E4)
    opt(E1,E2) => K (E3 E4)
if E1 = (K E3) and E2 = I
    opt(E1,E2) => E3
if E1 = (K E3)
    opt(E1,E2) => B E3 E2
if E2 = (K E4)
    opt(E1,E2) => C E1 E4
otherwise
    opt(E1,E2) => S E1 E2
Note that when E does not contain any free occurrences of x,
cmp(λx.E) = K E.
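Folding the optimisations into the compiler is then a small change to the application case. A hedged sketch, using the same illustrative `Var`/`Lam`/`App` representation as before (restated so this fragment stands alone); with it the successor compiles to the much shorter C + 1.

```python
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    var: str
    body: object

@dataclass
class App:
    fun: object
    arg: object

def is_K(e):
    """If e matches (K E3), return E3, otherwise None."""
    return e.arg if isinstance(e, App) and e.fun == Var("K") else None

def opt(e1, e2):
    a, b = is_K(e1), is_K(e2)
    if a is not None and b is not None:        # S (K E3) (K E4) => K (E3 E4)
        return App(Var("K"), App(a, b))
    if a is not None and e2 == Var("I"):       # S (K E3) I      => E3
        return a
    if a is not None:                          # S (K E3) E2     => B E3 E2
        return App(App(Var("B"), a), e2)
    if b is not None:                          # S E1 (K E4)     => C E1 E4
        return App(App(Var("C"), e1), b)
    return App(App(Var("S"), e1), e2)          # otherwise       => S E1 E2

def cmp(e):
    if isinstance(e, Var):
        return e
    if isinstance(e, App):
        return App(cmp(e.fun), cmp(e.arg))
    x, b = e.var, e.body
    if isinstance(b, Var):
        return Var("I") if b.name == x else App(Var("K"), b)
    if isinstance(b, Lam):
        return cmp(Lam(x, cmp(b)))
    return opt(cmp(Lam(x, b.fun)), cmp(Lam(x, b.arg)))  # was S (...) (...)

# λn.(+ n) 1 now compiles to C + 1 instead of S(S(K+)I)(K1)
succ = Lam("n", App(App(Var("+"), Var("n")), Var("1")))
assert cmp(succ) == App(App(Var("C"), Var("+")), Var("1"))
```

Note how the second and fourth rules fire here: cmp(λn.+ n) collapses to + via the S(K E3)I rule, and pairing it with K 1 triggers the C rule.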
Most of the features of any functional programming language,
such as Haskell, can now be translated into the λ-calculus
and hence into combinator form.
Auxiliary definitions such as the where or let
clauses in Haskell can be translated into the λ-calculus as
follows:
f x = g x 2 where g z y = z * y
λx.g x 2 where g = λz.λy. * z y
(λg.(λx.g x 2))(λz.(λy.* z y))
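The two-step translation can be checked directly with Python lambdas (a sketch of my own; `*` stands in for the prefix multiplication):

```python
# f x = g x 2 where g z y = z * y
# ==> (λg.(λx.g x 2)) (λz.(λy.* z y))
f = (lambda g: lambda x: g(x)(2))(lambda z: lambda y: z * y)
assert f(7) == 14          # g 7 2 = 7 * 2
```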
We also need to define a prefix function for conditionals:
cond b-exp then-exp else-exp
So we have:

sum n = if n == 1 then 1 else n + sum (n-1)

translates as

sum = λn.cond (== 1 n) 1 (+ n (sum (- n 1)))

which becomes in combinator code

sum = S (C (B cond (== 1)) 1) (S + (B sum (C - 1)))
The cyclic/recursive nature of this could be resolved using the Y
combinator
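The Y of slide 5-2 diverges in an eager host language, so the usual trick is its η-expanded variant (often called Z); with it the recursion in sum can be tied without a circular definition. A hedged Python sketch:

```python
# Z = λh.(λx.h(λv.x x v)) (λx.h(λv.x x v))  -- Y eta-expanded for eager evaluation
Z = lambda h: (lambda x: h(lambda v: x(x)(v)))(lambda x: h(lambda v: x(x)(v)))

# sum n = if n == 1 then 1 else n + sum (n-1), with the recursion abstracted out
sum_ = Z(lambda s: lambda n: 1 if n == 1 else n + s(n - 1))
assert sum_(5) == 15       # 5+4+3+2+1
```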
[Slide 5-10: the combinator code for sum drawn as a graph of binary
application nodes.]

S ( C ( B cond ( == 1 ) ) 1 ) ( S + ( B sum ( C - 1 ) ) )
Evaluating an S-K graph
A function application is represented by a node whose left
subgraph is the function graph and whose right subgraph is
the operand value.
The graph is “collapsed” by reducing combinator redexes and
by replacing the parent node of a redex by a pointer to the
reduced expression, or by a primitive value.
To carry out normal order reduction we need to reduce the
leftmost redex first.
To find the leftmost redex we carry out the following steps:
• start at the root and move left until a reducible operator or
function is found
• when a primitive operator whose operand is not a primitive
value is encountered, the search proceeds down the
operand branch.
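The strategy above can be sketched as a small interpreter. This is my own illustrative Python, not the course's machine: application nodes are mutable objects, the spine is unwound leftwards to find the head, the S rule rewrites its redex root in place (so the operand z is shared, not copied), and integers with a strict primitive + are included so that the successor example on the following slides can actually be run.

```python
class App:
    """A binary application node. fun and arg are App nodes,
    combinator names ('S', 'K', 'I', '+') or integer constants."""
    def __init__(self, fun, arg):
        self.fun, self.arg = fun, arg

def whnf(node):
    args = []                                   # spine stack; args[-1].arg is the head's 1st argument
    while True:
        while isinstance(node, App):            # unwind the spine leftwards
            args.append(node)
            node = node.fun
        if node == "I" and len(args) >= 1:      # I x     => x
            node = args.pop().arg
        elif node == "K" and len(args) >= 2:    # K x y   => x
            x = args.pop().arg
            args.pop()
            node = x
        elif node == "S" and len(args) >= 3:    # S x y z => (x z)(y z)
            x = args.pop().arg
            y = args.pop().arg
            top = args.pop()                    # redex root, overwritten in place
            z = top.arg                         # z is shared between the two new nodes
            top.fun, top.arg = App(x, z), App(y, z)
            node = top
        elif node == "+" and len(args) >= 2:    # strict primitive addition
            a = whnf(args.pop().arg)
            b = whnf(args.pop().arg)
            node = a + b
        else:
            return node                         # no redex left at the head

# (S (S (K +) I) (K 1)) 2  --  the unoptimised successor applied to 2
succ2 = App(App(App("S", App(App("S", App("K", "+")), "I")), App("K", 1)), 2)
assert whnf(succ2) == 3
```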
[Slides 5-12 to 5-14: graph-rewrite diagrams for the combinator
reduction rules, each drawn as application nodes:

I x     => x                    I = λx.x
K x y   => x                    K = λxy.x
S x y z => x z (y z)            S = λxyz.xz(yz)
C f x y => f y x                C = λfxy.fyx
B f x y => f (x y)              B = λfxy.f(xy)

Y = λh.((λx.h(xx))(λx.h(xx))) is drawn as a cyclic graph that feeds
its own result back into h, i.e. Y h => h (Y h).]
[Slides 5-15 to 5-18: step-by-step graph reduction of the
unoptimised successor of 2:

(S (S (K +) I) (K 1)) 2
=> (S (K +) I 2) (K 1 2)        -- S rule, the node for 2 shared
=> ((K + 2) (I 2)) (K 1 2)      -- inner S rule
=> (+ 2) (K 1 2)                -- K and I rules
=> + 2 1
=> 3]
Altering the evaluation order

The SK reduction machine naturally uses normal order
evaluation.

The S combinator can force duplication of work:

S = λxyz.xz(yz)

For lazy evaluation we share the z part of the graph rather than
building two copies of it.

[Slide 5-19: graph of S x y z after reduction, with both x z and
y z pointing at a single shared z node.]
For eager or applicative order evaluation the obvious strategy
is to search down the right subgraph first, as this is the
argument.

The problem is the K combinator, which discards its second
argument.

K = λxy.x

[Slide 5-20: graph of K x y; the argument y is discarded.]
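The danger can be seen directly: under applicative order the discarded operand is evaluated anyway, while normal order never touches it. A small Python illustration of my own, modelling the normal-order case with a thunk:

```python
K = lambda x: lambda y: x
evaluated = []

def expensive():
    evaluated.append("ran")    # stands in for a long (or non-terminating) computation
    return 42

# applicative order: the argument is reduced before K can discard it
assert K(1)(expensive()) == 1
assert evaluated == ["ran"]

# normal order (modelled with a thunk): K throws the thunk away unevaluated
evaluated.clear()
assert K(1)(expensive) == 1
assert evaluated == []
```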