## Fun with Ackermann's function

An undergraduate course on recursion theory typically introduces Turing machines, register machines, general recursive functions and possibly the λ-calculus. Students learn about the principle of primitive recursion, which is easy to grasp, and the minimisation operator, which is less so. Ackermann’s function is invariably mentioned as an example of a function that is obviously computable but not computable by primitive recursion alone. Unfortunately, it is not easily expressible in the familiar models of computation, although its definition is simplicity itself.

### Ackermann’s function

In its modern form (simplified by Rózsa Péter and Raphael Robinson), Ackermann’s function is defined as follows:

\[\begin{align*} A(0,n) & = n+1\\ A(m+1,0) & = A(m,1)\\ A(m+1,n+1) & = A(m,A(m+1,n)). \end{align*}\]

It resembles the calculation of the upper bound of the Ramsey number from the proof of Ramsey’s theorem. Both can easily express unimaginably large numbers. I was taught this material at Caltech by Frederick B Thompson, who as a homework exercise asked his students to write out in full the calculation of $A(4,3)$. Nobody did, but one of the students managed to estimate that

\[\begin{gather*} A(4,3)\approx 10^{10^{20000}}. \end{gather*}\]

(Precisely, it’s $2^{2^{65536}}-3$.) The first argument determines how rapidly it rises: note that $A(3,4)=125$, and don’t even think about $A(125,4)$. Oddly enough, physicists regard $10^{80}$ as a big number, greater than the number of neutrinos in the universe. Greater than the diameter of the universe. (In nanometres. Squared.) An astronomical number is, mathematically speaking, tiny.
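To get a feel for how the recursion unfolds, here is a quick Python transcription of the definition (my own sketch, not part of the text). Memoisation keeps the small cases fast; nothing keeps the large ones feasible.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ack(m: int, n: int) -> int:
    """Ackermann's function in the Péter–Robinson form shown above."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(3, 4))  # 125
```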

### Expressing Ackermann’s function

Ackermann’s function is syntactically recursive, but that does not help us express it using recursive function theory. Cutland, in *Computability* (pages 46–47), devotes an entire page to sketching a laborious construction of a register machine, before remarking that “a sophisticated proof” is available using more advanced results.

One model of computation that can easily express Ackermann’s function is the λ-calculus, through the glory of Church numerals (for details, see my lecture notes, page 17):
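The encoding displayed below can even be run in Python, whose lambdas support it directly. The following is my own toy illustration, not from the notes: `church` and `unchurch` convert between machine integers and Church numerals, while `F` and `A` transcribe the two λ-terms.

```python
# Church numerals: the number k is the function that iterates f k times.
zero = lambda f: lambda x: x
suc = lambda n: lambda f: lambda x: f(n(f)(x))   # successor of a Church numeral

def church(k):
    """Convert a Python int to a Church numeral."""
    c = zero
    for _ in range(k):
        c = suc(c)
    return c

def unchurch(c):
    """Convert a Church numeral back to a Python int."""
    return c(lambda k: k + 1)(0)

one = church(1)

# F = λf n. n f (f 1)   and   A = λm. m F suc
F = lambda f: lambda n: n(f)(f(one))
A = lambda m: m(F)(suc)

print(unchurch(A(church(2))(church(3))))  # 9, i.e. A(2,3)
```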

\[\DeclareMathOperator{\Suc}{Suc} \DeclareMathOperator{\ack}{ack} \newcommand{\cons}{\mathbin{\#}} \begin{align*} F &\equiv \lambda f n. n f (f \underline{1}) \\ A &\equiv \lambda m. m F \mathbf{suc} \end{align*}\]

### Ackermann’s function as a rewrite system

What’s fiendish about the recursion in Ackermann’s function is that it is nested. Nevertheless, reducing recursion to iteration is second nature to a computer scientist. The recursion corresponds fairly straightforwardly to a stack computation, or again to a term rewriting system:

\[\begin{align*} \Box \cons n\cons 0\cons L &\longrightarrow \Box \cons \Suc n \cons L\\ \Box \cons 0\cons \Suc m\cons L &\longrightarrow \Box \cons 1\cons m \cons L\\ \Box \cons \Suc n\cons \Suc m\cons L &\longrightarrow \Box \cons n\cons \Suc m\cons m \cons L \end{align*}\]

(The boxes anchor the rewrite rules to the start of the list. The # operator separates list elements.)
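To see the rules in action, here is an iterative Python rendering of the rewrite system (my own sketch; the list plays the role of the stack, with its head leftmost):

```python
def ackloop(stack):
    """Apply the three rewrite rules until at most one element remains."""
    while len(stack) >= 2:
        n, m, rest = stack[0], stack[1], stack[2:]
        if m == 0:                       # n # 0 # L       -->  Suc n # L
            stack = [n + 1] + rest
        elif n == 0:                     # 0 # Suc m # L   -->  1 # m # L
            stack = [1, m - 1] + rest
        else:                            # Suc n # Suc m # L --> n # Suc m # m # L
            stack = [n - 1, m, m - 1] + rest
    return stack[0] if stack else 0

print(ackloop([3, 2]))  # 9 = A(2,3)
```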

The point of this version is that it corresponds naturally to a register or Turing machine program. The problem is, how can we be sure that it corresponds to Ackermann’s function? How do we even know that it terminates? The termination of Ackermann’s function itself is trivial, but that proof doesn’t work here.

This question is an instance of the halting problem. Although it is undecidable in general, sophisticated techniques and tools exist for proving the termination of a rewrite system. For this one, all existing tools fail. Fortunately, there’s a remarkably simple termination proof using Isabelle/HOL, and it illustrates the treatment of tricky recursive functions.

### Defining the functions in Isabelle/HOL

The definition of Ackermann’s function can be typed into Isabelle/HOL more or less verbatim.
That it’s well-defined and terminating is detected automatically, and the recursion equations shown are *proved* from a low-level, non-recursive definition. All of this happens without user intervention.

```isabelle
fun ack :: "[nat,nat] ⇒ nat" where
  "ack 0 n = Suc n"
| "ack (Suc m) 0 = ack m 1"
| "ack (Suc m) (Suc n) = ack m (ack (Suc m) n)"
```

The rewrite rule system shown above can also be typed in verbatim. The box symbols are unnecessary, since pattern matching inherently targets the start of the list. But how does Isabelle/HOL handle the issue of termination?

```isabelle
function (domintros) ackloop :: "nat list ⇒ nat" where
  "ackloop (n # 0 # l) = ackloop (Suc n # l)"
| "ackloop (0 # Suc m # l) = ackloop (1 # m # l)"
| "ackloop (Suc n # Suc m # l) = ackloop (n # Suc m # m # l)"
| "ackloop [m] = m"
| "ackloop [] = 0"
  by pat_completeness auto
```

Alex Krauss’s wonderful function definition package anticipates such difficult cases. It allows us to define a function `f` first and deal with its termination later. When prompted by the keyword `domintros`, it defines the predicate `f_dom`, expressing the termination of `f` for a given argument. Arbitrary recursion equations for `f` can then be accepted, but the package makes them conditional: they hold only if `f` terminates on the given argument. Our task is then to prove that `f_dom` holds on the arguments we are interested in; in this way, even partial recursive functions can be dealt with. Here we shall prove that `ackloop` is a total function by proving that `ackloop_dom` is always satisfied.

### Proving termination

It’s obvious that `ackloop` terminates if the length of the list is less than two. Close examination of the three recursive `ackloop` equations suggests the following lemma, which turns out to be the key to proving termination for all lists.

```isabelle
lemma ackloop_dom_longer:
  "ackloop_dom (ack m n # l) ⟹ ackloop_dom (n # m # l)"
  by (induction m n arbitrary: l rule: ack.induct) auto
```

The proof of the lemma above is only one line long! The trick is the reference to `ack.induct`, the special induction rule tied to the definition of Ackermann’s function. It generates a system of base cases and induction steps corresponding exactly to the recursion in `ack`. Without it, the proof would still be easy enough (a couple of nested mathematical inductions) but nowhere near as slick.
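The lemma also reflects a computational fact: a stack beginning `n # m # l` rewrites to the same result as one beginning `ack m n # l`. This Python sketch (my own illustration, not part of the Isabelle development) checks that fact for small values:

```python
def ack(m, n):
    # Ackermann's function, for checking small cases only
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

def ackloop(stack):
    # the three rewrite rules, applied until at most one element remains
    while len(stack) >= 2:
        n, m, rest = stack[0], stack[1], stack[2:]
        if m == 0:
            stack = [n + 1] + rest
        elif n == 0:
            stack = [1, m - 1] + rest
        else:
            stack = [n - 1, m, m - 1] + rest
    return stack[0] if stack else 0

for m in range(3):
    for n in range(4):
        assert ackloop([n, m, 1, 1]) == ackloop([ack(m, n), 1, 1])
print("lemma checked for m < 3, n < 4")
```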

Our next step is aimed at proving the equivalence of the two definitions of Ackermann’s function, but it also allows a super-simple proof of termination. The lemma just proved establishes termination of the rewrite system (as expressed by the predicate `ackloop_dom`) for a list of length two or longer, provided we already know termination for a list whose head has the form `ack m n`.

Consider that the very purpose of `ackloop` is to compute values of Ackermann’s function on a stack (represented by the list), with these values at the head, exactly as required for the termination proof. Now we can define a function expressing the idea of `ackloop` except that, rather than performing a stack-based computation, it calls Ackermann’s function explicitly. So in particular, its termination is trivial.

```isabelle
fun acklist :: "nat list ⇒ nat" where
  "acklist (n#m#l) = acklist (ack m n # l)"
| "acklist [m] = m"
| "acklist [] = 0"
```
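A Python rendering of the same idea (my own sketch, not part of the Isabelle development) makes the trivial termination visible: every step shortens the list by one.

```python
def ack(m, n):
    # Ackermann's function (direct recursive definition)
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

def acklist(stack):
    # replace the top two elements n, m by ack(m, n); the list shrinks each time
    while len(stack) >= 2:
        n, m, *rest = stack
        stack = [ack(m, n)] + rest
    return stack[0] if stack else 0

print(acklist([3, 2]))  # 9 = ack(2, 3)
```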

The induction rule generated by this definition, `acklist.induct`, makes separate cases for lists of length 0, 1, and 2 or more. It is exactly what we need to prove that the termination predicate holds for an arbitrary given list, `l`.

```isabelle
lemma ackloop_dom: "ackloop_dom l"
  by (induction l rule: acklist.induct) (auto simp: ackloop_dom_longer)
```

Now that termination has been proved for all possible arguments, we can point out that fact to Isabelle/HOL:

```isabelle
termination ackloop
  by (simp add: ackloop_dom)
```

The effect is to make the recursion equations for `ackloop` unconditional.

### Proving the two definitions equivalent

We are nearly there. Knowing that `ackloop` is a total function, and with the help of its own induction rule, we trivially prove its equivalence to `acklist`.

```isabelle
lemma ackloop_acklist: "ackloop l = acklist l"
  by (induction l rule: ackloop.induct) auto
```

It follows directly that Ackermann’s function can be computed with the help of `ackloop`.

```isabelle
theorem ack: "ack m n = ackloop [n,m]"
  by (simp add: ackloop_acklist)
```
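As an empirical footnote (a Python sketch of my own, not part of the Isabelle development), the theorem can be spot-checked for small arguments by running the direct recursion against the stack-based loop:

```python
def ack(m, n):
    # direct recursive definition of Ackermann's function
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

def ackloop(stack):
    # iterative rendering of the rewrite system
    while len(stack) >= 2:
        n, m, rest = stack[0], stack[1], stack[2:]
        if m == 0:
            stack = [n + 1] + rest
        elif n == 0:
            stack = [1, m - 1] + rest
        else:
            stack = [n - 1, m, m - 1] + rest
    return stack[0] if stack else 0

# the theorem: ack m n = ackloop [n, m]
print(all(ack(m, n) == ackloop([n, m]) for m in range(4) for n in range(4)))  # True
```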

This example is unusual in that the formal proofs are all one-liners. More commonly, formal proofs are horrible. And yet there is nothing trivial about the termination of `ackloop`. The system of rewrite rules for Ackermann’s function has been added to the Termination Problems Data Base (TPDB) as `Paulson_20/ackermann_iterative.xml`. As of this writing, no termination checker can handle it.

### Odds and ends

I have published a more thorough treatment of this example in the *Bulletin of Symbolic Logic*.

You’ll find the Isabelle proof text at `src/HOL/Examples/Ackermann.thy` in your downloaded copy of Isabelle, and can also browse it online.

The Isabelle/HOL libraries also contain a formal proof that Ackermann’s function is not primitive recursive. The set of primitive recursive functions is defined inductively and the development is straightforward.

Finally, see Rosetta code for implementations of Ackermann’s function in hundreds of programming languages, such as SNOBOL4.

### Postscript added 28-09-2022

Nachum Dershowitz and Zohar Manna treat this stack-based implementation of Ackermann’s function in their seminal paper, *Proving Termination with Multiset Orderings*.