Ackermann's function is not primitive recursive, I

31 Aug 2022

[ examples  Isar  recursion  Ackermann's function  ]

A university-level course on computation theory will invariably cover recursive functions: the model of computation developed by Gödel and Kleene. Students learn about the primitive recursive (PR) functions, a fairly natural idea, but also learn that the PR functions are insufficient. True, these functions include all the familiar arithmetic operations, as well as all the obvious syntactic operations on expressions that have been “Gödel-numbered” (coded in terms of arithmetic formulas). Among the PR functions are many that cannot be regarded as feasibly computable because they grow at an utterly unimaginable rate. (Do not say “exponential”: in this context, exponential growth is negligible.) The PR functions are insufficient because, in three simple lines, we can define an obviously computable function that grows faster than any of them. It is, of course, Ackermann’s function.

The Ackermann-Péter function

As mentioned in a previous post, Ackermann’s “generalised exponential” (simplified by Rózsa Péter) is as follows:

\[\begin{align*} A(0,n) & = n+1\\ A(m+1,0) & = A(m,1)\\ A(m+1,n+1) & = A(m,A(m+1,n)). \end{align*}\]

Its first argument determines the rate of growth. Arguments of 0 and 1 yield trivial functions, but things get nasty from 4 onwards. The proof that Ackermann’s function is not primitive recursive begins with a series of results about its growth for various argument patterns. We’ll cover those in today’s post. In a future post, we’ll see how to define the primitive recursive functions themselves, inductively, and prove by induction (on the construction of any given PR function $f$) that we can always find an argument to dominate $f$.
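To get a feel for the definition, here is $A(2,1)$ unfolded step by step using the three equations:

\[\begin{align*} A(2,1) & = A(1,A(2,0)) = A(1,A(1,1))\\ & = A(1,A(0,A(1,0))) = A(1,A(0,A(0,1)))\\ & = A(1,A(0,2)) = A(1,3)\\ & = A(0,A(1,2)) = A(1,2)+1 = \cdots = 5. \end{align*}\]

Even this tiny computation hints at the explosive nesting of recursive calls.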

Expressing Ackermann’s function

As mentioned in that previous post, Isabelle/HOL provides a package that accepts a wide variety of recursive function definitions, in most cases dealing automatically with issues such as pattern matching and termination. We can specify Ackermann’s function to the machine as follows:

fun ack :: "[nat,nat] ⇒ nat" where
  "ack 0 n             = Suc n"
| "ack (Suc m) 0       = ack m 1"
| "ack (Suc m) (Suc n) = ack m (ack (Suc m) n)"

Recall that such a specification is automatically transformed to eliminate the recursion, and the desired recursion equations are generated from the hidden, low-level definition. Isabelle’s recursion package also returns an induction rule tailored to this specific recursion.
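Incidentally, the recursion equations are executable, so we can evaluate small instances with the value command. As a quick sanity check of my own (not part of the formal development):

value "ack 2 3"  (* "9" :: nat *)
value "ack 3 3"  (* "61" :: nat *)

These agree with the closed forms proved below: $A(2,j) = 2j+3$ and $A(3,j) = 2^{j+3}-3$.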

Properties involving the second argument

The formal development follows a 1993 paper by Nora Szasz, and the names of the properties are hers. She proved that Ackermann’s function was not PR using ALF, an early implementation of Martin-Löf type theory and a predecessor of Agda.

The Isabelle development doesn’t strictly follow Szasz, and in particular the Ackermann recursion equations are already available to us, so the first property to be proved is A4. It’s an elementary inequality, proved by Ackermann induction (ack.induct) followed by simplification.

lemma less_ack2 [iff]: "j < ack i j"
  by (induct i j rule: ack.induct) simp_all
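Proving a goal by Ackermann induction means discharging one subgoal per defining equation, with an induction hypothesis for each recursive call. Assuming the usual output of the fun package, ack.induct has roughly this form:

⟦⋀n. P 0 n;
 ⋀m. P m 1 ⟹ P (Suc m) 0;
 ⋀m n. ⟦P (Suc m) n; P m (ack (Suc m) n)⟧ ⟹ P (Suc m) (Suc n)⟧
⟹ P a b

For less_ack2, simplification disposes of all three subgoals.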

The main theorem uses currying to freeze the first argument, regarding Ackermann’s function as a unary function $A(i,{-})$ for some suitable $i$. We need to know that this function is strictly monotonic, i.e. that Ackermann’s function is monotonic in its second argument. This can be proved first for the successor case, again by Ackermann induction.

lemma ack_less_ack_Suc2 [iff]: "ack i j < ack i (Suc j)"
  by (induct i j rule: ack.induct) simp_all

Szasz calls this more general monotonicity result A5.

lemma ack_less_mono2: "j < k ⟹ ack i j < ack i k"
  by (simp add: lift_Suc_mono_less)
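The one-line proof works because the library already knows how to lift single-step monotonicity to the general case. The lemma lift_Suc_mono_less, quoted approximately from Isabelle’s Nat theory, states

lift_Suc_mono_less: "⟦⋀n. f n < f (Suc n); n < n'⟧ ⟹ f n < f n'"

and here it is applied with f instantiated to ack i, the single-step premise being ack_less_ack_Suc2 above.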

Here’s the non-strict version.

lemma ack_le_mono2: "j ≤ k ⟹ ack i j ≤ ack i k"
  by (simp add: ack_less_mono2 less_mono_imp_le_mono)

Properties involving the first argument

Next come a variety of inequalities involving both arguments, and concerned with what happens when we increase the first argument. The first property, A6, holds by a simple mathematical induction. Note that sledgehammer has generated a proof of the induction step.

lemma ack2_le_ack1 [iff]: "ack i (Suc j) ≤ ack (Suc i) j"
proof (induct j)
  case 0 show ?case by simp
next
  case (Suc j) show ?case
    by (metis Suc ack.simps(3) ack_le_mono2 le_trans less_ack2 less_eq_Suc_le)
qed

From this, we get a basic monotonicity result for the first argument, via the chain $A(i,j) < A(i,j+1) \le A(i+1,j)$:

lemma ack_less_ack_Suc1 [iff]: "ack i j < ack (Suc i) j"
  by (blast intro: ack_less_mono2 less_le_trans)

And then a result reminiscent of A4:

lemma less_ack1 [iff]: "i < ack i j"
proof (induct i)
  case 0
  then show ?case
    by simp
next
  case (Suc i)
  then show ?case
    using less_trans_Suc by blast
qed

Curiously enough, Szasz did not mention requiring any of these results.

First arguments of 1, 2 and 3

We already know that Ackermann’s function, given a first argument of 0, denotes the successor. Its behaviour for 1 is Szasz’ A8, namely $A(1,j) = j+2$:

lemma ack_1 [simp]: "ack (Suc 0) j = j + 2"
  by (induct j) simp_all

Its behaviour for a first argument of 2 (essentially doubling) is A9:

lemma ack_2 [simp]: "ack (Suc (Suc 0)) j = 2 * j + 3"
  by (induct j) simp_all
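The inductive step is worth unfolding on paper, because it uses A8:

\[A(2,j+1) = A(1,A(2,j)) = A(2,j)+2 = (2j+3)+2 = 2(j+1)+3.\]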

The Ackermann development doesn’t need the following lemma about an argument of 3, but it’s instructive to note what happens. Instead of $2j+3$ it returns $2^{j+3}-3$. We’ve already reached exponential growth, and we’re just getting started. You can imagine what happens with an argument of 4. And you can’t imagine what happens with an argument of 11.

lemma ack_3: "ack (Suc (Suc (Suc 0))) j = 2 ^ (j+3) - 3"
proof (induct j)
  case 0
  then show ?case by simp
next
  case (Suc j)
  with less_le_trans show ?case
    by (fastforce simp add: power_add algebra_simps)
qed
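For the record, a first argument of 4 is known to yield a tower of exponentials:

\[A(4,j) = \underbrace{2^{2^{\cdot^{\cdot^{\cdot^{2}}}}}}_{j+3\ \text{twos}} - 3.\]

Already $A(4,1) = 2^{16}-3 = 65533$, and $A(4,2) = 2^{65536}-3$ has nearly 20,000 decimal digits.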

I’d like to mention a couple of fine points.

  1. The proofs aren’t using Ackermann induction any more, but mostly good old mathematical induction. How do you know which to choose? Mostly it’s trial and error. It’s often possible to get a proof using the “wrong” induction rule, but with a lot of needless pain.

  2. Sometimes we write natural number constants as decimal integers and sometimes as strings of successors (Suc). The successor form is necessary for rewriting by the recursion equations, and you can’t count on Isabelle magically converting from one form to the other as needed. Simplifying with eval_nat_numeral translates decimal integers into unary (Suc) form, and 1 is always simplified to Suc 0; see the example below.
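To illustrate the second point, here is a throwaway lemma of my own (not part of the development), where the numerals must be unfolded before the recursion equations can fire:

lemma "ack 2 2 = 7"
  by (simp add: eval_nat_numeral)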

Monotonicity in the first argument

The following three lemmas are all related to property A7. And I confess to a touch of blindness: monotonicity in Ackermann’s first argument should follow easily from ack_less_ack_Suc1, but it isn’t easy enough for sledgehammer, so we prove it another way.

lemma ack_less_mono1_aux: "ack i k < ack (Suc (i+j)) k"
proof (induct i k rule: ack.induct)
  case (1 n) show ?case
    using less_le_trans by auto
next
  case (2 m) thus ?case by simp
next
  case (3 m n) thus ?case
    using ack_less_mono2 less_trans by fastforce
qed

lemma ack_less_mono1: "i < j ⟹ ack i k < ack j k"
  using ack_less_mono1_aux less_iff_Suc_add by auto

And once again, we need a non-strict version:

lemma ack_le_mono1: "i ≤ j ⟹ ack i k ≤ ack j k"
  using ack_less_mono1 le_eq_less_or_eq by auto

Building up the first argument

As mentioned above, the proof that Ackermann’s function is not PR involves choosing a suitable first argument. Each of the following results exhibits some $i$ such that $A(i,{-})$ grows faster than some other expression involving Ackermann’s function. These will deal with various inductive cases connected with the construction of primitive recursive functions.

A10 deals with nested function calls:

lemma ack_nest_bound: "ack i1 (ack i2 j) < ack (2 + (i1 + i2)) j"
proof -
  have "ack i1 (ack i2 j) < ack (i1 + i2) (ack (Suc (i1 + i2)) j)"
    by (meson ack_le_mono1 ack_less_mono1 ack_less_mono2 le_add1 le_trans less_add_Suc2 not_less)
  also have "… = ack (Suc (i1 + i2)) (Suc j)"
    by simp
  also have "… ≤ ack (2 + (i1 + i2)) j"
    using ack2_le_ack1 add_2_eq_Suc by presburger
  finally show ?thesis .
qed

A11 deals with the sum of two function calls:

lemma ack_add_bound: "ack i1 j + ack i2 j < ack (4 + (i1 + i2)) j"
proof -
  have "ack i1 j  ack (i1 + i2) j" "ack i2 j  ack (i1 + i2) j"
    by (simp_all add: ack_le_mono1)
  then have "ack i1 j + ack i2 j < ack (Suc (Suc 0)) (ack (i1 + i2) j)"
    by simp
  also have "… < ack (4 + (i1 + i2)) j"
    by (metis ack_nest_bound add.assoc numeral_2_eq_2 numeral_Bit0)
  finally show ?thesis .
qed

I find this last result (A12) rather curious. Adding 4 to the first argument is a super gigantic leap and doesn’t seem necessary. On the other hand, it can be as big as we wish, so who cares? And as stated, the theorem follows quickly from the previous one.

lemma ack_add_bound2:
  assumes "i < ack k j" shows "i + j < ack (4 + k) j"
proof -
  have "i + j < ack k j + ack 0 j"
    using assms by auto
  also have "… < ack (4 + k) j"
    by (metis ack_add_bound add.right_neutral)
  finally show ?thesis .
qed

The full development can be downloaded from Isabelle’s Archive of Formal Proofs. You can confirm that the proofs really are as simple as they appear in this post. We’ll examine the rest of the proof next week!