Think like a goose.

Programmers love rules of thumb. I tend to the view that this is a product of living in a pre-paradigm era (with respect to computer science). Nonetheless, these three principles come up often:

  • DRY — Don’t repeat yourself.
  • KISS — Keep It Simple, Stupid!
  • YAGNI — You Ain’t Gonna Need It.

There really is just one principle at play here: Omit Needless Code. So, if you struggle to remember the three rules of thumb, just think of the sound geese make: ONC! ONC!

A cute piece of logic

Can you implement all logical operations with the operator AND-NOT? A AND-NOT B is defined as A AND (NOT B). If you interpret 1 and 0 as TRUE and FALSE, respectively, AND-NOT can also be thought of as the greater-than operator, >.
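
With 0 and 1 as truth values, this is easy to sanity-check in Python (a throwaway illustration of my own, not part of the original post):

for a in (0, 1):
    for b in (0, 1):
        # a > b agrees with a AND (NOT b) for every combination of inputs
        assert (a > b) == (a and not b)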

It turns out that you can implement all logical operators with this one operator. The proof is as simple as implementing NAND, which is well known to have the same property:

    A NAND B = 1 > (A > (1 > B))

It then naturally follows that all logical operators can be converted into their NAND-equivalent and then further translated into AND-NOT, followed by some reductions:

    NOT A    = 1 > A
    A AND B  = A > (1 > B)
    A OR B   = NOT A NAND NOT B
             = (1 > A) NAND (1 > B)
             = 1 > ((1 > A) > (1 > (1 > B)))
             = 1 > ((1 > A) > B)
    A NOR B  = (1 > A) > B
    A NAND B = 1 > (A > (1 > B))
    A XOR B  = (A > B) OR (B > A)
             = 1 > ((1 > (A > B)) > (B > A))
    A XNOR B = NOT (A XOR B)
             = 1 > (1 > ((1 > (A > B)) > (B > A)))
             = (1 > (A > B)) > (B > A)
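
These reductions are easy to verify by brute force. Here's a throwaway Python check (mine, not part of the original derivation) that compares each AND-NOT form against the usual operators over all 0/1 inputs:

gt = lambda x, y: int(x > y)   # AND-NOT, written as >

for A in (0, 1):
    for B in (0, 1):
        assert gt(1, A) == int(not A)                              # NOT
        assert gt(A, gt(1, B)) == (A & B)                          # AND
        assert gt(1, gt(gt(1, A), B)) == (A | B)                   # OR
        assert gt(gt(1, A), B) == int(not (A | B))                 # NOR
        assert gt(1, gt(A, gt(1, B))) == int(not (A & B))          # NAND
        assert gt(1, gt(gt(1, gt(A, B)), gt(B, A))) == (A ^ B)     # XOR
        assert gt(gt(1, gt(A, B)), gt(B, A)) == int(not (A ^ B))   # XNOR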

For good measure, here’s a comprehensive list of truth tables. Each row lists the output for the four (A, B) input columns shown in the header, followed by the equivalent AND-NOT expression:

A: 0 0 1 1   Operation      AND-NOT logic
B: 0 1 0 1
   0 0 0 0   FALSE          0
   0 0 0 1   A AND B        B > (B > A)
   0 0 1 0   A AND-NOT B    A > B
   0 0 1 1   A              A
   0 1 0 0   B AND-NOT A    B > A
   0 1 0 1   B              B
   0 1 1 0   A XOR B        (1 > (B > (B > A))) > ((1 > A) > B)
   0 1 1 1   A OR B         1 > ((1 > A) > B)
   1 0 0 0   A NOR B        (1 > A) > B
   1 0 0 1   A XNOR B       (1 > (B > A)) > (A > B)
   1 0 1 0   NOT B          1 > B
   1 0 1 1   B IMPLIES A    1 > (B > A)
   1 1 0 0   NOT A          1 > A
   1 1 0 1   A IMPLIES B    1 > (A > B)
   1 1 1 0   A NAND B       1 > (B > (B > A))
   1 1 1 1   TRUE           1

Aside: Can you remove 0 and 1 from the above expressions? 0 is easy enough: A > A. But I can’t think of a way to synthesise 1. There may even be a proof that it’s impossible. Since the top-level expression will be x > y, x must evaluate to 1 for the result to be 1 (a requirement that doesn’t arise if you want a result of 0). We thus have to solve the same problem to formulate x, and we end up in an infinite regress. (A simple induction makes the same point: when A = B = 0, every expression built from A, B and > evaluates to 0, so no such expression can be constantly 1.) So, while we can do away with the single 0 in the above table, I guess we’re stuck with all those 1’s.
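
For what it’s worth, here’s a small brute-force check of my own (a sanity check of the argument, not a proof): enumerate every >-expression over A and B up to a fixed depth and confirm that none of them is constantly TRUE.

from itertools import product

def exprs(depth):
    """Yield functions (a, b) -> 0/1 for every >-expression up to the given depth."""
    if depth == 0:
        yield lambda a, b: a
        yield lambda a, b: b
        return
    subs = list(exprs(depth - 1))
    for s in subs:
        yield s
    for x, y in product(subs, repeat=2):
        yield lambda a, b, x=x, y=y: int(x(a, b) > y(a, b))

# None of these is TRUE on all four inputs; in particular, every one is 0 at A = B = 0.
assert not any(all(e(a, b) for a, b in product((0, 1), repeat=2))
               for e in exprs(3))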

What conceivable practical value could an AND-NOT algebra have? Well, it turns out that this system of logic has a property that makes it handy for implementing operations on sets, where NOT would otherwise produce infinite results: if A is finite, then A > B is finite. The interpretation of the operators is slightly different when dealing with sets. The term A AND-NOT B (henceforth “A > B”) returns the set of all tuples that are in A and in the inversion of B (NOT B). The net effect is to return all the elements of A that are not in B, i.e. the set difference.

Thus, any AND-NOT expression with a finite LHS is also finite. You can therefore determine the finiteness of an expression tree simply by inspecting its left-most leaf. If it’s a finite set, the whole expression is finite. If it’s an infinite set (including 1, which here represents the universal set), the expression is possibly infinite.

In the latter case, it isn’t always trivial to tell whether the final result is infinite. For instance, PRIMES > EVENS (“non-even primes”) is infinite, but PRIMES > ODDS (“non-odd primes”) is finite, even though both expressions have infinite relations on both sides. Since membership in some sets is undecidable (“the set of programs that complete in finite time”), this well has no bottom.

All is not hopeless, however. Finite left-most nodes are actually a fairly common and important case.
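
To make the left-most-leaf test concrete, here’s a minimal sketch of my own (class and function names are made up for illustration) of an AND-NOT expression tree over sets:

class Finite:
    """A plain, materialised finite set."""
    def __init__(self, elements):
        self.elements = set(elements)

class Infinite:
    """An infinite set described only by a membership predicate."""
    def __init__(self, contains):
        self.contains = contains

class AndNot:
    """lhs > rhs: everything in lhs that is not in rhs."""
    def __init__(self, lhs, rhs):
        self.lhs, self.rhs = lhs, rhs

def definitely_finite(expr):
    """Walk the left spine: a finite left-most leaf makes the whole tree finite."""
    while isinstance(expr, AndNot):
        expr = expr.lhs
    return isinstance(expr, Finite)

def member(expr, x):
    """Membership can be tested on any expression, finite or not."""
    if isinstance(expr, Finite):
        return x in expr.elements
    if isinstance(expr, Infinite):
        return expr.contains(x)
    return member(expr.lhs, x) and not member(expr.rhs, x)

def materialise(expr):
    """Only legal when the left-most leaf is finite; the RHS stays lazy."""
    if isinstance(expr, Finite):
        return set(expr.elements)
    assert definitely_finite(expr)
    return set(x for x in materialise(expr.lhs) if not member(expr.rhs, x))

primes = Finite([2, 3, 5, 7, 11])
evens = Infinite(lambda n: n % 2 == 0)
print(materialise(AndNot(primes, evens)))   # non-even primes: {3, 5, 7, 11}

The point is that the right-hand side never has to be enumerated; only its membership test is consulted.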

Hello world!

The fixed-point combinator drives me nuts. I’ve figured out the principle several times now, only to have it escape out the other ear in under a minute. So I’m going to document it here to help myself understand it better.

All code is in Python.

So, the basic problem is how to implement a recursive algorithm with a non-recursive function. The Fibonacci sequence is a good example:

def fib(n):
    return n if n < 2 else fib(n - 2) + fib(n - 1)

print(fib(10))

So, how do we turn this function into a non-recursive one? A first attempt might look like this:

def fib(f, n):
    return n if n < 2 else f(f, n - 2) + f(f, n - 1)

print(fib(fib, 10))

So the basic idea is to pass fib to itself, so that the body doesn’t have to know its own name in order to call itself. Note, however, that we have to pass f along to itself on every call, even inside the kernel of the Fibonacci algorithm. Ignoring this for now, the proof that we have eliminated direct recursion is that we can now define fib as a lambda:

fib = lambda f, n: n if n < 2 else f(f, n - 2) + f(f, n - 1)

print(fib(fib, 10))

There is an obvious problem here: it’s rather annoying to have to pass fib to itself in order to recurse. A wrapper function fixes this:

def fib(n):
    f = lambda f, n: n if n < 2 else f(f, n - 2) + f(f, n - 1)
    return f(f, n)

print(fib(10))

We don’t want to have to write a wrapper for every recursive function. Can we write a generalised function that takes our Fibonacci kernel, or any other recursive algorithm kernel, and converts it into a recursive function? As a stepping stone, let’s change our latest fib into a function creator rather than a simple wrapper:

def fib_maker():
    f = lambda f, n: n if n < 2 else f(f, n - 2) + f(f, n - 1)
    return lambda n: f(f, n)

fib = fib_maker()

We can see that the Fibonacci kernel is just a function defined inside fib_maker. So let’s replace fib_maker with a general-purpose recursive-function generator:

def recursive(f):
    return lambda n: f(f, n)

fib = recursive(lambda f, n: n if n < 2 else f(f, n - 2) + f(f, n - 1))

It seems that we are almost there. We must now tackle the hard part that we’ve been putting off for a while: the kernel lambda should not have to forward f to itself on every call. We want to be able to write this:

fib = recursive(lambda f, n: n if n < 2 else f(n - 2) + f(n - 1))

Clearly, the function being passed in has to be a full copy of the created recursive function. Allowing recursive to use recursion itself makes this quite simple:

def recursive(f):
    return lambda n: f(recursive(f), n)

fib = recursive(lambda f, n: n if n < 2 else f(n - 2) + f(n - 1))

print(fib(10))

(Note carefully that the nested call to recursive(f) occurs inside the lambda n. If you try to compute it in advance, you’ll trigger an immediate infinite recursion and blow the stack.)
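
To illustrate, here’s the deliberately broken variant (a sketch of my own):

def recursive_broken(f):
    inner = recursive_broken(f)   # evaluated eagerly: unconditional self-call, RecursionError
    return lambda n: f(inner, n)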

But can we go one step further, and define ‘recursive’ without directly using recursion? Loaded question. It turns out that our original trick of passing a function to itself to help it recurse is just the ticket.

def recursive(f):
    def recurse(r):
        return lambda n: f(r(r), n)
    return recurse(recurse)

Further reductions are possible:

def recursive(f):
    recurse = lambda r: lambda n: f(r(r), n)
    return recurse(recurse)

def recursive(f):
    recurse = lambda r: lambda n: f(r(r), n)
    return (lambda R: R(R))(recurse)

recursive = lambda f: (lambda R: R(R))(lambda r: lambda n: f(r(r), n))

Finally, we have arrived at a non-recursive function that converts other non-recursive functions into recursive algorithms! This is what’s commonly known as a fixed-point combinator, renamed here, Pythonified, and generalised to take arbitrary arguments:

fix = lambda f: (lambda R: R(R))(lambda r: lambda *args, **kwargs: f(r(r), *args, **kwargs))

fib = fix(lambda f, n: n if n < 2 else f(n - 2) + f(n - 1))

In case you haven’t grasped the full implications of this approach, consider that I can mash the above lines together into a single expression that performs a recursive computation without defining any named functions:

>>> ((lambda f: (lambda R: R(R))(lambda r: lambda *args, **kwargs: f(r(r), *args, **kwargs)))(
...      lambda f, n: n if n < 2 else f(n - 2) + f(n - 1)
...      ))(10)
55

What conceivable benefit can such a construct have? Well, in Haskell the question probably doesn’t come up as much since the fixed-point combinator looks like this…

fix :: (a -> a) -> a
fix f = f (fix f)

…possibly leading many Haskell programmers to wonder what all the fuss is about. Even for more mundane languages, there are some practical ideas in the fixed-point combinator. I’ll explore these in a future post.
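
To give just one taste now (a rough sketch; the details may well differ when I get to it): because every recursive call is routed through the combinator, the recursion can be decorated from the outside, e.g. memoised, without touching the kernel.

def fix_memo(f):
    """Like fix, but caches results so the kernel gets memoisation for free."""
    cache = {}
    def call(n):
        if n not in cache:
            cache[n] = f(call, n)
        return cache[n]
    return call

fib = fix_memo(lambda f, n: n if n < 2 else f(n - 2) + f(n - 1))
print(fib(100))   # fast, because every recursive call hits the cache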

Footnote: This is not quite the fixed-point combinator you’ll see written up elsewhere. Usually, the kernel is written as a nested lambda and passed to a Z-combinator, thus:

fib = Z(lambda f: lambda n: n if n < 2 else f(n - 2) + f(n - 1))

This is a minor detail, however. The principles are the same. Also, the (lambda R: R(R))(lambda r:…) is my special touch. Z-combinators usually pass the lambda r:… to a full copy of itself.
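
For reference, a common Python rendering of that style, sketched from memory, looks like this (note how the lambda x: … term really is handed to a verbatim copy of itself):

Z = lambda f: (lambda x: f(lambda *a, **kw: x(x)(*a, **kw)))(
              lambda x: f(lambda *a, **kw: x(x)(*a, **kw)))

fib = Z(lambda f: lambda n: n if n < 2 else f(n - 2) + f(n - 1))
print(fib(10))   # 55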

Correction: The Z-combinator is not quite the same thing. This warrants further investigation.