One of my associates recently remarked, “I can write down the Ackermann recursion, but I have zero intuition where it comes from”. This expository post explains the answer I gave him, that the Ackermann function arises via hyperoperation.

The Ackermann function is variously defined, but the most popular these days is the Ackermann–Péter function:

$$
\begin{aligned}
A(0, n) &= n + 1 \\
A(m+1, 0) &= A(m, 1) \\
A(m+1, n+1) &= A(m, A(m+1, n)).
\end{aligned}
$$
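For concreteness, this recursion transcribes directly into Python (a minimal sketch; the name `ackermann` is my own):

```python
def ackermann(m: int, n: int) -> int:
    """Ackermann-Peter function, by direct recursion on the definition."""
    if m == 0:
        return n + 1                              # A(0, n) = n + 1
    if n == 0:
        return ackermann(m - 1, 1)                # A(m+1, 0) = A(m, 1)
    return ackermann(m - 1, ackermann(m, n - 1))  # A(m+1, n+1) = A(m, A(m+1, n))
```

Even small inputs blow up quickly: `ackermann(3, 3)` is `61`, while `ackermann(4, 2)` is already a number with 19,729 digits.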

One way to build up to the Ackermann function is through hyperoperation. We'll show that $A(m, n)$ is pretty much $H_m(2, n)$ for the $m$-th hyperoperator $H_m$: in fact $A(m, n) = H_m(2, n + 3) - 3$.

Fixing the first argument of $A$ yields a function based on the $m$-th hyperoperator, which is why we often think of the Ackermann function as a family of functions $A(0, \cdot), A(1, \cdot), A(2, \cdot), \ldots$ parameterized by the first argument. In retrospect, maybe the better definition for the Ackermann function is the simpler $A(m, n) = H_m(2, n)$, so that the ubiquitous exercise to compute the first few functions becomes just a little bit less of an exercise.

## Hyperoperators

Interpret multiplication as repeated addition: add $a$ to itself, $b$ times. Interpret exponentiation as repeated multiplication: multiply $a$ with itself, $b$ times. Going one step further, interpret tetration as repeated exponentiation: exponentiate $a$ with itself, $b$ times. For example,

$$
2 \uparrow\uparrow 3 = 2^{2^2} = 16.
$$

Going infinitely many steps further, we can define the $n$-th hyperoperation $H_n$ to "recursively apply the previous hyperoperation on $a$ with itself, $b$ times" (this of course stops being commutative at and beyond exponentiation, though the first hyperoperators constructed were actually commutative). To actually implement this we should use the recursion "compute $a$ hyperoperated with itself $b - 1$ times, then apply $a$ one more time":

$$
H_n(a, b) = H_{n-1}(a, H_n(a, b - 1)).
$$

Notice we fold right here, i.e. compute $a^{(a^a)}$ rather than $(a^a)^a$.
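The distinction is easy to see in code: folding the same list of exponents from the left and from the right builds different towers (a small sketch):

```python
from functools import reduce

xs = [3, 3, 3]

# Left fold: (3 ** 3) ** 3 = 3 ** 9
left = reduce(lambda acc, x: acc ** x, xs)

# Right fold: 3 ** (3 ** 3) = 3 ** 27 -- the convention tetration uses
right = reduce(lambda acc, x: x ** acc, xs)

print(left, right)  # 19683 7625597484987
```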

We also need a trio of base cases for what happens when you "apply $a$ to itself $0$ times" in order to align with the normal arithmetic operations:

$$
H_1(a, 0) = a, \qquad H_2(a, 0) = 0, \qquad H_n(a, 0) = 1 \quad \text{for } n \geq 3.
$$

Finally, we set $H_0(a, b) = b + 1$, to capture that addition, $H_1$, arises as the iterated successor function, our $H_0$. After making all these definitions, we have our hyperoperators. These extend the normal arithmetic operators in the following way:

$$
H_1(a, b) = a + b, \qquad H_2(a, b) = a \cdot b, \qquad H_3(a, b) = a^b, \qquad H_4(a, b) = a \uparrow\uparrow b.
$$
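Putting the recursion and the base cases together gives a compact implementation (a sketch; the name `hyper` is my own):

```python
def hyper(n: int, a: int, b: int) -> int:
    """n-th hyperoperation H_n(a, b)."""
    if n == 0:
        return b + 1       # successor: H_0(a, b) = b + 1, ignoring a
    if b == 0:
        if n == 1:
            return a       # a + 0 = a
        if n == 2:
            return 0       # a * 0 = 0
        return 1           # a ^ 0 = 1, and likewise for all n >= 3
    # H_n(a, b) = H_{n-1}(a, H_n(a, b - 1)), folding to the right
    return hyper(n - 1, a, hyper(n, a, b - 1))

# H_1 .. H_4 recover addition, multiplication, exponentiation, tetration:
assert hyper(1, 2, 3) == 5
assert hyper(2, 2, 3) == 6
assert hyper(3, 2, 3) == 8
assert hyper(4, 2, 3) == 16   # 2^(2^2)
```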

## Proof of Ackermann/Hyperoperator Identity

The point of defining these hyperoperations is to note that the Ackermann function is itself a specific case: $A(m, n) = H_m(2, n + 3) - 3$. The proof of this isn't too exciting, but let's prove it by induction. Check the base case $m = 0$:

$$
A(0, n) = n + 1 = \big((n + 3) + 1\big) - 3 = H_0(2, n + 3) - 3.
$$

For the base case $n = 0$, we need key properties of using $2$ instead of a different constant:

$$
H_n(2, 1) = 2 \quad \text{for } n \geq 2,
$$

$$
H_n(2, 2) = 4 \quad \text{for } n \geq 1.
$$

Now we have

$$
A(m + 1, 0) = A(m, 1) = H_m(2, 4) - 3 = H_m\big(2, H_{m+1}(2, 2)\big) - 3 = H_{m+1}(2, 3) - 3.
$$

And the primary recurrence for $A$ and $H$:

$$
A(m + 1, n + 1) = A(m, A(m + 1, n)), \qquad H_{m+1}(2, n + 4) = H_m\big(2, H_{m+1}(2, n + 3)\big).
$$

Substituting $A(m + 1, n) = H_{m+1}(2, n + 3) - 3$ by induction (the $-3$ inside the inner argument cancels against the shift by $3$), we see they are in fact the same recursion.
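As a sanity check, the identity can be spot-checked numerically on small arguments (a self-contained sketch; `ackermann` and `hyper` are my own names for $A$ and $H$):

```python
def ackermann(m: int, n: int) -> int:
    """Ackermann-Peter function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

def hyper(n: int, a: int, b: int) -> int:
    """n-th hyperoperation H_n(a, b)."""
    if n == 0:
        return b + 1
    if b == 0:
        return a if n == 1 else (0 if n == 2 else 1)
    return hyper(n - 1, a, hyper(n, a, b - 1))

# Verify A(m, n) = H_m(2, n + 3) - 3 on a small grid
for m in range(4):
    for n in range(5):
        assert ackermann(m, n) == hyper(m, 2, n + 3) - 3
print("identity holds on the grid")
```

Larger values of $m$ or $n$ quickly exceed both Python's recursion limit and any reasonable runtime, which is rather the point of the Ackermann function.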