The Kleene recursion theorems are two basic (and often confused) results in computability theory. The first theorem guarantees that recursive definitions make sense, while the second one shows (among other things) the existence of quines. This post will explain the first recursion theorem.
Recursion vs. Computability
There are many equivalent definitions of the class of “computable functions” (cf. the Church–Turing thesis). The definition used in the Kleene recursion theorems, and in recursion theory in general, is built on the μ (minimization) operator. In contrast, a “programmer-friendly” definition says that the computable functions are exactly the functions computed by programs in one’s favorite programming language. Though they define the same class, “recursive” functions and “computable” functions have different definitions, and, coupled with the fact that “recursion” is a common tool in computer programming, this has led to some confusion (see this excellent Stack Exchange answer for some discussion). Kleene’s first recursion theorem is a statement from the recursion theory perspective; on the way to it we’ll state it from the computability perspective, and we’ll start by talking about it from the mathematical perspective. I think the discussion of this theorem emphasizes the clarity that can be gained from keeping real-world programming languages in mind even when thinking about “mathematically friendly” objects like the μ-recursive functions.
Say we want a function $f$ that satisfies some recursive equalities, for example the Fibonacci recurrence

$$f(0) = 0, \qquad f(1) = 1, \qquad f(n) = f(n-1) + f(n-2).$$

It’s not a priori clear that such an $f$ exists. There is a slightly more primitive mathematical object that always exists, though, which is the recursive operator $\Phi$ obtained by replacing recursive calls with calls to $g$,

$$\Phi(g)(0) = 0, \qquad \Phi(g)(1) = 1, \qquad \Phi(g)(n) = g(n-1) + g(n-2).$$

Given $g$, we can always define a new function $\Phi(g)$ in this way. Now, notice that the $f$ satisfying our recurrence are exactly the fixed points of our recursive operator, i.e. the $f$ with $\Phi(f) = f$. Kleene’s first recursion theorem says that recursive operators always have fixed points. In this way, the theorem says that our recurrence equations do, in fact, always have solutions.
Theorem (Kleene’s First Recursion Theorem, mathematical version)
Every (mathematical) recursive operator has a fixed point.
In the mathematical case, we require that every recursive operator $\Phi$ only evaluate $g$ on smaller natural numbers: computing $\Phi(g)(n)$ may only use values $g(m)$ for $m < n$. What this condition says is that our recurrences for a desired fixed point $f$ only use previous values of $f$. In this case we can define $f$ by induction, like a normal recursive definition.
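As a concrete illustration (a Python sketch; the names `phi` and `fixed_point_by_induction` are my own, and the Fibonacci recurrence stands in for a general recursion): when the operator only consults smaller inputs, the fixed point can be built bottom-up.

```python
def phi(g):
    """A recursive operator for the Fibonacci recurrence: recursive
    calls have been replaced by calls to the argument g."""
    def f(n):
        if n < 2:
            return n
        return g(n - 1) + g(n - 2)  # only consults g on smaller inputs
    return f

def fixed_point_by_induction(phi, n):
    """Build the fixed point f on inputs 0..n by plain induction,
    as in the mathematical version of the theorem."""
    table = {}
    for k in range(n + 1):
        table[k] = phi(table.__getitem__)(k)
    return table[n]
```

Because `phi` only ever consults `g` at inputs below `k`, the entry for `k` is fully determined by earlier table entries, which is exactly the induction in the mathematical version.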
Kleene’s Recursion Theorem in Any Programming Language
Let’s now talk about recursive operators from the computability/programming language perspective. Say we’re writing a recursive function in our favorite language,
```
f(0) = 0
f(1) = 1
f(n) = f(n - 1) + f(n - 2)
```
As before we have a natural notion of “recursive operator” $\Phi$, which is just a higher-order function,
```
f(0) = 0
f(1) = 1
f(n) = g(n - 1) + g(n - 2)
```
Theorem (Kleene’s First Recursion Theorem, programming language version)
Every (programming language) recursive operator has a fixed point.
This proof is rather different from the mathematical case, and it will be different again from the proof of the full Kleene recursion theorem, but it is philosophically interesting. Compile the block of code
```
f(0) = 0
f(1) = 1
f(n) = f(n - 1) + f(n - 2)
```
The compiler automatically makes this a valid definition of the function $f$. Note that we didn’t need the restriction we had in the mathematical case, that recursive calls are only made on smaller natural numbers. Consequently the output of compilation might be a partial function (undefined, written $f(n)\uparrow$, on some inputs), but it’s a well-defined mathematical object.
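Here is a minimal Python sketch of this knot-tying (the names `phi` and `fix` are my own; real compilers implement recursion differently, but the mathematical effect is the same):

```python
def phi(g):
    """The Fibonacci recursive operator as a higher-order function:
    recursive calls are replaced by calls to g."""
    def f(n):
        if n == 0:
            return 0
        if n == 1:
            return 1
        return g(n - 1) + g(n - 2)
    return f

def fix(phi):
    """Tie the recursive knot: the returned f routes its recursive
    calls back into phi(f), making f a fixed point of phi."""
    def f(n):
        return phi(f)(n)
    return f

fib = fix(phi)  # fib satisfies fib == phi(fib) pointwise
```

If the recursive calls of `phi` never bottom out on some input, then `fix(phi)` simply fails to terminate there; that is the partiality described above.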
Kleene’s First Recursion Theorem
In recursion theory we have a different definition of recursive functions via the μ minimization operator. Let $\mathcal{R}$ be the set of μ-recursive functions. This calls for a new definition of a recursive operator, which seems like a hard task: we don’t have any “code” to speak of, and a recursive operator should be a special case of a function $\mathcal{P} \to \mathcal{P}$ on the set $\mathcal{P}$ of partial functions $\mathbb{N} \rightharpoonup \mathbb{N}$. We can extract one key mathematical property of programming language recursive operators to be our guide: we require that a recursive operator $\Phi$ allow each value $\Phi(g)(n)$ to be computed from finitely many values of $g$ (notice that every programming language recursive operator, i.e. higher-order function, has this property). More discussion of recursive operators is at the end of this post; the intuition here suffices to understand the proof of Kleene’s first recursion theorem,
Theorem (Kleene’s First Recursion Theorem)
Every (recursion theory) recursive operator $\Phi$ has a fixed point. Furthermore, there is a least fixed point, i.e. a function $f$ such that $\Phi(f) = f$, and for any function $g$ satisfying $\Phi(g) = g$, $f \sqsubseteq g$.
The notation $f \sqsubseteq g$ (pronounced “$f$ is less defined than $g$”) says that $f(n) = m$ implies $g(n) = m$. Notice that the function defined nowhere (denoted $\emptyset$) satisfies $\emptyset \sqsubseteq g$ for every $g$.
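With partial functions encoded as finite Python dicts, the order $\sqsubseteq$ can be sketched as follows (the encoding and the name `less_defined` are mine):

```python
def less_defined(f, g):
    """f ⊑ g: wherever f is defined, g is defined and agrees."""
    return all(n in g and g[n] == f[n] for n in f)

# The nowhere-defined function {} is less defined than every g.
```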
The proof can be thought of as “making recursive calls until termination”. Say we want to know what value we should assign to $f(n)$. If we want $f = \Phi(f)$, this means we want to know the value of $\Phi(f)(n)$. $\Phi$ tells us a finite list of values of $f$ that, if we define those, will define $\Phi(f)(n)$; this is our recurrence for computing $f(n)$. By repeatedly asking for and evaluating those values, either all recursive calls for $f(n)$ will terminate at base cases, or they won’t; if it ever terminates, we have $f(n)$’s value, otherwise $f(n)$ is undefined. To use our running example of the Fibonacci recurrence, $\Phi$ tells us that in order to define $f(n)$, it suffices to define $f(n-1)$ and $f(n-2)$, which in turn require $f(n-2)$ and $f(n-3)$, and so on until we meet the base cases $f(1)$ and $f(0)$, which don’t require any other values of $f$.
A bit more formally, start with the function that loops on every input, $f_0 = \emptyset$. Define the functions that terminate within $n$ rounds of recursive calls by $f_{n+1} = \Phi(f_n)$. Finally, let $f = \bigcup_n f_n$, i.e. an input is defined exactly when it terminates in some finite number of recursive calls.
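The stages $f_0, f_1, f_2, \ldots$ can be computed directly if we encode partial functions as finite Python dicts (this encoding, and the particular bound inside `phi`, are my own illustration for the Fibonacci recurrence):

```python
def phi(g):
    """One unfolding of the Fibonacci recurrence on a finite partial
    function g (a dict {input: output}): define the base cases, plus
    every value whose recursive calls land inside g."""
    out = {0: 0, 1: 1}
    for n in range(2, max(g, default=0) + 2):
        if n - 1 in g and n - 2 in g:
            out[n] = g[n - 1] + g[n - 2]
    return out

f = {}                  # f_0 = the nowhere-defined function
for _ in range(10):
    f = phi(f)          # f_{n+1} = phi(f_n)
# Each stage defines one more value; the union over all stages is the
# total Fibonacci function, the least fixed point.
```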
It’s not too hard to see that this $f$ is least: any $g$ satisfying the recursion $g = \Phi(g)$ must have carried out every finite number of recursive calls, so $f_n \sqsubseteq g$ for every $n$, and hence $f \sqsubseteq g$.
Ultimately, the proof is nothing more than “compiling our recursive operator,” in the same way the compiler in the programming language perspective “automatically” defines a function that terminates if all recursive calls terminate, or is undefined if there’s an infinite loop.
Finally, we should mention the idea behind the definition of recursive operators. As seen in the proof, and as holds true when thinking of programming language recursion, the key idea is that computing some value $\Phi(g)(n)$ should only require finitely many values of $g$. This can be formalized by saying that $\Phi$ is a continuous function on the space of partial functions, with the topology defined below.
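This finiteness property is easy to observe experimentally: wrap $g$ in a spy that records which inputs the operator consults (a Python sketch; `phi` and `consulted` are hypothetical names, with the Fibonacci operator as the example):

```python
def phi(g):
    """The Fibonacci recursive operator."""
    def f(n):
        if n < 2:
            return n
        return g(n - 1) + g(n - 2)
    return f

def consulted(phi, g, n):
    """Return the list of inputs at which g is consulted while
    computing phi(g)(n)."""
    seen = []
    def spy(k):
        seen.append(k)
        return g(k)
    phi(spy)(n)
    return seen
```

For this operator, computing `phi(g)(7)` consults `g` only at `6` and `5`, a finite list, no matter what `g` is.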
We define the positive information topology on the set $\mathcal{P}$ of partial functions $\mathbb{N} \rightharpoonup \mathbb{N}$. Let $F \subseteq \mathcal{P}$ be those functions that are defined on only finitely many inputs, and are undefined elsewhere. We think of functions in $F$ as specifying finitely many values of functions in $\mathcal{P}$. A basis of open sets of the positive information topology is given by the sets $U_\theta$, for $\theta \in F$, defined by

$$U_\theta = \{ g \in \mathcal{P} : \theta \sqsubseteq g \}.$$
That is, $U_\theta$ is exactly those functions that meet the (finitely many) output values specified by $\theta$. For a continuous function $\Phi$, the preimage of every basis set $U_\theta$ is open, so it contains a basis set $U_\eta$ around each of its points. Taking $\theta$ to be a singleton function (defined on exactly one input), so that $U_\theta$ is exactly those functions with one specified output, continuity says that defining the finitely many values specified by $\eta$ ensures that this single output of $\Phi(g)$ is fixed.
A recursive operator is necessarily a continuous function on this space. A “coherence condition” is also necessary to pin down the proper definition, but continuity suffices for intuition. For full definitions, see Cutland’s book on computability theory, Chapter 10.