Ackermann function
Based on: Successor function
Growth rate: \(f_{\omega}(n)\)

A page from Sushi Kokuu Hen demonstrating the Ackermann function.

The Ackermann function \(A(x,y)\) is a recursive function which was originally invented by Wilhelm Ackermann and later simplified by Rózsa Péter and then by Raphael M. Robinson. The exact definition of the Ackermann function varies slightly between authors.

Definition

Original definition

The original definition of the Ackermann function is as follows.[1] \begin{align*} A(0,x,y)&:=x+y\\ A(n+1,x,y)&:=\underbrace{A(n,x,A(n,x,\cdots A(n,x,}_{y\text{ times}}x)\cdots)) \end{align*} It can be expressed with Arrow notation, which had not yet been devised when this function was defined, as \(A(n+2,x,y)=x\uparrow^n y\). It was defined in terms of higher-order primitive recursion (i.e., primitive recursion on functionals).[2][3]
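
A minimal Python sketch of this three-argument recursion, reading the nested expression as \(y\) applications of \(A(n,x,-)\) starting from \(x\) (the function name ack3 is ad hoc, and other presentations use slightly different base cases):

```python
def ack3(n, x, y):
    """Three-argument Ackermann function as reconstructed above:
    A(0, x, y) = x + y, and A(n+1, x, y) applies A(n, x, -) to x a total of y times."""
    if n == 0:
        return x + y
    result = x
    for _ in range(y):              # y nested applications of A(n-1, x, -)
        result = ack3(n - 1, x, result)
    return result

print(ack3(1, 3, 4))  # 15: four additions of 3 on top of 3
print(ack3(2, 2, 3))  # 30
```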

Robinson's definition

Robinson's version is the most commonly referred-to rendition of the Ackermann function and is defined as follows:

  • \(A(x,y) = y + 1\) if \(x = 0\), else
  • \(A(x,y) = A(x - 1,1)\) if \(y = 0\), otherwise,
  • \(A(x,y) = A(x - 1,A(x,y - 1))\).[4]

This function grows at a rate comparable to the lesser-known Sudan function.

The expansion of \(A(2,2)\) using Robinson's definition is given below as an example:

\[ \begin{array}{cclclcl} A(2,2) &=& A(1,A(2,1))\\ &=& A(1,A(1,A(2,0)))&&\\ &=& A(1,A(1,A(1,1)))\\ &=& A(1,A(1,A(0,A(1,0))))\\ &=& A(1,A(1,A(0,A(0,1))))\\ &=& A(1,A(1,A(0,2)))\\&=& A(1, A(1, 3))\\ &=& A(1, A(0, A(1, 2)))\\ &=& A(1, A(0, A(0, A(1, 1))))\\ &=& A(1, A(0, A(0, A(0, A(1, 0)))))\\ &=& A(1, A(0, A(0, A(0, A(0, 1)))))\\ &=& A(1, A(0, A(0, A(0, 2))))\\ &=& A(1, A(0, A(0, 3)))\\ &=& A(1, A(0, 4))\\ &=& A(1, 5)\\ &=& A(0, A(1, 4))\\ &=& A(0, A(0, A(1, 3)))\\ &=& A(0, A(0, A(0, A(1, 2))))\\ &=& A(0, A(0, A(0, A(0, A(1, 1)))))\\ &=& A(0, A(0, A(0, A(0, A(0, A(1, 0))))))\\ &=& A(0, A(0, A(0, A(0, A(0, A(0, 1))))))\\ &=& A(0, A(0, A(0, A(0, A(0, 2)))))\\ &=& A(0, A(0, A(0, A(0, 3))))\\ &=& A(0, A(0, A(0, 4)))\\ &=& A(0, A(0, 5))\\ &=& A(0, 6)\\ &=& 7 \end{array} \]
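
The recursion translates directly into code. A minimal Python sketch (practical only for very small arguments, since both the values and the recursion depth grow quickly):

```python
def A(x, y):
    """Ackermann function, Robinson's two-argument version."""
    if x == 0:
        return y + 1
    if y == 0:
        return A(x - 1, 1)
    return A(x - 1, A(x, y - 1))

print(A(2, 2))  # 7, matching the expansion above
print(A(3, 3))  # 61
```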

The Ackermann function is related to Knuth's up-arrow notation and Hyper operators in the following way:

\(A(x,y) = 2\uparrow^{x-2}(y+3)-3 = 2[x](y+3)-3\).

It is thus capable of extremely fast growth. For example, \(A(4,2)\) computes to the following value:

\(A(4,2) = (2 \uparrow^{2}5) -3 = 2 \uparrow\uparrow 5 -3 = 2 \uparrow 2 \uparrow 2 \uparrow 2 \uparrow 2 -3 = 2^{2^{2^{2^{2}}}}-3 \approx 2.003529930 \times 10^{19,728}\)
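
The identity can be spot-checked numerically for small arguments. The sketch below defines Knuth's up-arrow operator for \(n \ge 0\) (taking \(a\uparrow^0 b = ab\)) alongside the same recursive \(A(x,y)\) as above; the helper name arrow is ad hoc, and only \(x \ge 2\) is checked so that the arrow count stays non-negative:

```python
def A(x, y):
    """Robinson's Ackermann function (same recursion as the sketch above)."""
    if x == 0:
        return y + 1
    if y == 0:
        return A(x - 1, 1)
    return A(x - 1, A(x, y - 1))

def arrow(a, n, b):
    """Knuth's a ↑^n b for n >= 1, with a ↑^0 b taken to be a * b."""
    if n == 0:
        return a * b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

# Spot-check A(x, y) = 2 ↑^(x-2) (y + 3) - 3 on a small range.
for x in range(2, 4):
    for y in range(4):
        assert A(x, y) == arrow(2, x - 2, y + 3) - 3
print("identity holds on the sampled range")
```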

Friedman's definition

Harvey Friedman defined the Ackermann function \(A(x,y)\) (\(x,y > 0\)) like so:[5]

  • \(A(x,y) = 2\) (if \(y=1\))
  • \(A(x,y) = 2y\) (if \(x=1\) and \(y > 1\))
  • \(A(x,y) = A(x-1,A(x,y-1))\) (if \(x > 1\) and \(y > 1\))

According to this definition, \(A(x,y)\) is defined for all positive integers \(x\) and \(y\), and is equal to \(2 \uparrow^{x-1} y\).
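
A minimal Python sketch of this definition (the function name A_friedman is ad hoc), together with a few small values illustrating \(A(x,y) = 2 \uparrow^{x-1} y\):

```python
def A_friedman(x, y):
    """Ackermann function in Friedman's formulation, defined for x, y >= 1."""
    if y == 1:
        return 2
    if x == 1:
        return 2 * y
    return A_friedman(x - 1, A_friedman(x, y - 1))

print(A_friedman(1, 5))  # 10    = 2 * 5
print(A_friedman(2, 5))  # 32    = 2 ↑ 5
print(A_friedman(3, 4))  # 65536 = 2 ↑↑ 4
```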

Goucher's definition

A.P. Goucher proposed the following definition of the Ackermann function in his blog post:[6]

  • \(f_1(n)=n+2\)
  • \(f_{m+1}(n) = f_m^n(2)\)
  • \(A(n) = f_n(n)\)
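
A short Python sketch of this definition, where the helper f(m, n) below plays the role of \(f_m(n)\); already \(A(4)\) is far too large to print:

```python
def f(m, n):
    """f_m(n) in Goucher's definition: f_1(n) = n + 2, f_{m+1}(n) = f_m^n(2)."""
    if m == 1:
        return n + 2
    result = 2
    for _ in range(n):              # apply f_{m-1} a total of n times, starting from 2
        result = f(m - 1, result)
    return result

def A(n):
    """Goucher's single-argument Ackermann function."""
    return f(n, n)

print(A(1), A(2), A(3))  # 3 6 30
```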

The post describes yet another variant of this function, related to the following problem:

Given a row of boxes with some number of coins in each, we can pick one box, say box n, and apply one of the following rules:

  • Remove one coin from box n and add two coins to box n+1.
  • Remove one coin from box n and swap the contents of boxes n+1 and n+2.

We can choose any strategy of picking boxes and applying rules to them. Given a row of n boxes with one coin in each of them, f(n) is the largest number of coins that can end up in the rightmost box at a moment when all other boxes are empty. Computing exact values of this function can be tricky, but it is trivial to see that \(f(1) = 1\), \(f(2) = 3\) and \(f(3) = 7\). Proving that \(f(4) = 28\) is slightly harder.
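
The small values quoted above can be checked by an exhaustive search over all reachable configurations; a brute-force Python sketch (the function name max_final_coins and the search strategy are illustrative only, and the search is feasible only for very small n):

```python
def max_final_coins(n):
    """Brute-force f(n): start with one coin in each of n boxes, explore every
    sequence of moves, and record the rightmost box whenever all others are empty."""
    best = 0
    seen = set()

    def explore(state):
        nonlocal best
        if state in seen:
            return
        seen.add(state)
        if all(c == 0 for c in state[:-1]):
            best = max(best, state[-1])
        for i, coins in enumerate(state):
            if coins == 0:
                continue
            if i + 1 < n:                       # rule 1: -1 here, +2 in the next box
                s = list(state)
                s[i] -= 1
                s[i + 1] += 2
                explore(tuple(s))
            if i + 2 < n:                       # rule 2: -1 here, swap the next two boxes
                s = list(state)
                s[i] -= 1
                s[i + 1], s[i + 2] = s[i + 2], s[i + 1]
                explore(tuple(s))

    explore(tuple([1] * n))
    return best

print([max_final_coins(n) for n in (1, 2, 3)])  # [1, 3, 7]
```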

Other trivia

Ackermann numbers are defined with the original definition of the Ackermann function.

In googology, the Ackermann function may alternatively refer to the single-argument function \(A(n) = A(n, n)\) which diagonalizes over the two-variable counterpart. \(A(n)\) is also known as the gag function.

The inverse of the single-argument Ackermann function, \(\alpha(n)\), is called the inverse Ackermann function.[7] Since \(A(n)\) grows extremely rapidly, the inverse Ackermann function consequently grows very slowly. Interestingly, it has seen applications in time complexity theory, for example in the running-time analysis of the union-find data structure.

Ackermann's original function was devised to show the existence of computable functions that are not primitive recursive, which had been an open question at the time.


Computerphile description of the Ackermann function

Sources
