In order to avoid the repetition of the same explanations, I list several common mistakes in googology and questions about them.


Introduction[]

Why can't we regard ∞ as a large number?[]

Because the common rule of googology is to define a finite number. You can coin ∞ as your number, but few people following the common rule will pay attention to it.


Why should we define everything?[]

Because the size of an ill-defined number does not make sense. You can post your number without a fixed precise definition, but it will be meaningless to ask whether it is larger than a given well-defined number, e.g. 1.


Why do mathematical notions in googology look difficult?[]

Because they are actually difficult. You might discover some great simple explanations on some web pages, but those are usually incorrect. Why? It is simply because "too simplified" explanations are frequently written by beginners who do not understand what they are writing. Beginners who cannot understand the precise definition tend to search for "too simplified" explanations, and believe them to be cool explanations without checking their correctness by themselves. After that, such beginners will think that they sufficiently understand what they read, and start writing new wrong explanations on their own websites. In this way, fake articles are spread around the world.

In particular, many of the explanations of Graham's number, ordinals, FGH, OCFs, ordinal notations, \(\textrm{TREE}\), and \(\omega_1^{\textrm{CK}}\) are wrong. Even many articles in this wiki were wrong as far as I know. Therefore it is meaningless to state something like "my statement is obviously correct because it is written so in this wiki".

People frequently use a difficult mathematical notion in googology before they learn its precise definition. In that case, they tend to make the same mistakes again and again, because they cannot even understand the issues pointed out by others. They just believe their intuition based on wrong explanations on websites, and ignore correct explanations. As a result, they create something meaningless like "\(f_{\psi_0(\omega_{\alpha}^{\textrm{CK}})}(G_{\textrm{TREE}})\), where \(\alpha\) is the greatest ordinal definable in \(1000000000\) letters in some ordinal notation definable in some mathematical system" by ignoring all corrections. In order to avoid such an incident, it is better for us to honestly use only what we understand. We do not have to behave as if we knew many difficult notions about which we have no idea.


Formal Theory[]

Why is formal logic important in googology?[]

Because it helps us to argue in a precise manner. For example, it enables us to reject Berry's-paradox-style claims like "My large number \(N \in \mathbb{N}\) is the greatest natural number definable by \(10^{100}\) or less letters! I win!" and similar naive intuition-based explanations.


What does "definition" mean in mathematics?[]

A definition of a term, e.g. a large number, in first order logic is simply a formula \(D\) with a free variable \(x\) such that \(\exists ! x(D)\) is provable under the axioms on which you are working. This is just one of the formulations of a definition explained in Sections 8 and 13 of [Kun][1], but other formulations are essentially equivalent to this. In particular, you can never define a non-trivial term without axioms unless you are working in a powerful formal system other than first order logic.
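
For illustration (this particular instance is mine and is not taken from [Kun]), suppose that we are working in \(\textrm{ZFC}\) set theory, and let \(D\) be the formula \(x = \{\emptyset\}\) with the free variable \(x\). Since \(\exists ! x(x = \{\emptyset\})\) is provable under the \(\textrm{ZFC}\) axioms, \(D\) is a definition of a term, namely of the ordinal \(1\). On the other hand, the formula \(x = x\) with the free variable \(x\) is not a definition, because \(\exists ! x(x = x)\) is refutable under the \(\textrm{ZFC}\) axioms.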

You may say "My large number \(N \in \mathbb{N}\) is the greatest one definable by \(10^{100}\) or less letters in \(\textrm{FOST}\)! I win!" But it is not a valid definition in any theory written in \(\textrm{FOST}\). You should declare and specify the axiom on which the definability by letters in \(\textrm{FOST}\) is based, and also the meta theory in which \(N\) is actually definable. For details on meta theory, see the explanation here.

Also you may name a large number relative to a model \(M\) of set theory by a formula. A formula \(D\) with a free variable \(x\) in set theory names a natural number relative to \(M\) if \(M \models \exists ! x((x \in \mathbb{N}) \wedge D)\). I note that a natural number relative to \(M\) is just an element \(n\) of \(M\) which satisfies \(n \in^M \mathbb{N}^M\), where \(\in^M\) and \(\mathbb{N}^M\) denote the interpretations of \(\in\) and \(\mathbb{N}\) in \(M\) respectively, and hence is not necessarily a natural number, i.e. an element of \(\mathbb{N}\). For details on a model, see the explanation here.
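
For illustration (again an example of mine), let \(D\) be the formula \(x = \emptyset\) with the free variable \(x\). Then \(D\) names a natural number relative to every model \(M\) of \(\textrm{ZFC}\) set theory, because \(M \models \exists ! x((x \in \mathbb{N}) \wedge (x = \emptyset))\) holds, and the element named by \(D\) is the interpretation \(0^M\) of \(0\) in \(M\), which is an actual natural number when \(M\) is transitive but is not necessarily so in general.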


What does the computability of a large number mean?[]

The computability of a large number is not defined in mathematics, and is usually referred to without a complete definition. For example, people sometimes say that a computable large number means an output of a computable large function applied to an input which is itself a computable large number, but this criterion does not work, because every natural number is the output of a constant map, which is computable by definition. Instead, I give a candidate for the definition of the notion of "a defining formula of a computable natural number".

  1. The formula \(n = 0\) is a defining formula of a computable natural number \(n\).
  2. The formula \(n = 1\) is a defining formula of a computable natural number \(n\).
  3. The formula \(n = 2\) is a defining formula of a computable natural number \(n\).
  4. The formula \(n = 3\) is a defining formula of a computable natural number \(n\).
  5. The formula \(n = 4\) is a defining formula of a computable natural number \(n\).
  6. The formula \(n = 5\) is a defining formula of a computable natural number \(n\).
  7. The formula \(n = 6\) is a defining formula of a computable natural number \(n\).
  8. The formula \(n = 7\) is a defining formula of a computable natural number \(n\).
  9. The formula \(n = 8\) is a defining formula of a computable natural number \(n\).
  10. The formula \(n = 9\) is a defining formula of a computable natural number \(n\).
  11. The formula \(n = 10\) is a defining formula of a computable natural number \(n\).
  12. For any defining formula \(F(x)\) of a computable natural number \(x\), the formula \(\textrm{Code}(F)(f)\) given as "for any \(x \in \mathbb{N}\), \(F(x)\) implies that \(f\) is the computable function coded by \(x\)" is a defining formula of a computable function \(f\). (Here, we fix an encoding of computable functions into natural numbers.)
  13. For any defining formula \(F(f)\) of a computable function \(f\) and any defining formula \(G(x)\) of a computable natural number \(x\), the formula \(\forall f(\forall x((F(f) \land G(x)) \to (n = f(x))))\) is a defining formula of a computable natural number \(n\).

A well-formed expression \(E\) of a natural number \(n\) naturally gives a defining formula \(F(n)\) of \(n\), and hence we can define the validity of \(E\) in computable googology by the statement that \(F(n)\) is a defining formula of a computable natural number.
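
For illustration (a worked instance of mine following the clauses above), consider the expression \(f(3)\), where \(f\) denotes the computable function coded by \(7\) with respect to the fixed encoding. By clause 8, the formula \(x = 7\) is a defining formula of a computable natural number \(x\); by clause 12, \(\textrm{Code}(x = 7)(f)\) is a defining formula of a computable function \(f\); by clause 4, \(x = 3\) is a defining formula of a computable natural number \(x\); and hence by clause 13, the formula \begin{eqnarray*} \forall f(\forall x((\textrm{Code}(x = 7)(f) \land (x = 3)) \to (n = f(x)))) \end{eqnarray*} is a defining formula of a computable natural number \(n\), which is the value of \(f\) at \(3\) as long as the computation of \(f(3)\) actually terminates.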

For example, the expression \(f(0)\) with an uncomputable function \(f\) is not valid in computable googology in this sense, although there can be a defining formula \(F(n)\) of a computable natural number \(n\) such that the formula \(F(f(0))\) is provable in \(\textrm{ZFC}\) set theory. (For example, consider \(\textrm{BB}(0)\) or \(\textrm{Rayo}(0)\).) I think that almost all "computable large numbers" in this wiki are defined by defining formulae of computable natural numbers, and hence this formulation is not so bad.


Is every consistent first order theory incomplete?[]

No. You need to read the conditions of the incompleteness theorem carefully. For example, it is well-known that \(\textrm{RCF}\) does not satisfy the condition. I am afraid that you are confounding the incompleteness theorem with the completeness theorem.

Also, there is a consistent first order theory which is not incomplete. Let \(\Sigma\) denote the set of consistent systems of axioms written in a fixed formal language, ordered by inclusion. Since a proof uses only finitely many axioms, the union of any chain in \(\Sigma\) again belongs to \(\Sigma\), and hence by Zorn's lemma there is a maximal element \(A\) in \(\Sigma\) with respect to the inclusion relation. If there were a closed formula \(F\) independent of \(A\), then \(A \cup \{F\}\) would be consistent, contradicting the maximality of \(A\). Therefore \(A\) is complete.


Is a second order theory stronger than a first order theory?[]

Partially yes, but partially no. It depends on the meaning of the strength, and on what theories you are considering.

  • Second order logic is a generalisation of first order logic. (strong for you in this sense?)
  • Every second order theory is described in a formal language, which is a term of a first order theory. (weak for you in this sense?)
  • It is impossible to characterise the structure of the standard model of natural numbers by the first order arithmetic \(\textrm{PA}\) because of the existence of a non-standard model, while it is possible to characterise it by second order arithmetic under the full semantics, which is the standard semantics for second order logic. (strong for you in this sense?)
  • It is possible to use a natural number \(\infty\) relative to a specific model \(N\) of the first order arithmetic \(\textrm{PA}\) which is greater than any natural number relative to \(N\) derived from the meta theory, while it is impossible in the case of the second order interpretation. (weak for you in this sense?)
  • If a first order formula in a second order set theory with axioms in which second order variables appear only to describe an axiom schema is provable, then it is also provable in a \(2\)-sorted first order set theory. (weak for you in this sense?)
  • The consistencies of the second order interpretations of the \(2\)-sorted first order set theories \(\textrm{NBG}\) and \(\textrm{MK}\) are provable under the consistency of the first order set theory \(\textrm{ZFC} + \textrm{Inaccessible}\). (weak for you in this sense?)
  • The consistency of the first order arithmetic \(\textrm{PA}\) is provable under the second order arithmetic \(\textrm{ACA}_0 + \textrm{Con}(\textrm{PA})\). (strong for you in this sense?)
  • The consistency of the second order arithmetic \(\textrm{ACA}_0\) is provable under the first order arithmetic \(\textrm{PA} + \textrm{Con}(\textrm{ACA}_0)\). (weak for you in this sense?)
  • The proof-theoretic ordinal of the first order arithmetic \(\textrm{PA}\) coincides with the proof-theoretic ordinal of its second order extension \(\textrm{ACA}_0\). (equivalent for you in this sense?)
  • The proof-theoretic ordinal of the first order arithmetic \(\textrm{PA}\) coincides with the proof-theoretic ordinal of the second order arithmetic given as \(\textrm{PA}\) augmented by the second order induction schema by definition. (equivalent for you in this sense?)
  • The proof-theoretic ordinal of the first order arithmetic \(\textrm{PA}\) is greater than the proof-theoretic ordinal of the second order arithmetic given by the same axioms as \(\textrm{PA}\). (weak for you in this sense?)

Therefore you need to declare precisely what you mean. The same problem occurs when you refer to a higher order theory.


Meta Theory[]

What is the meta theory?[]

Roughly speaking, the meta theory of a given theory means the theory in which the given theory is defined. Let me give a more precise explanation.

Let \(T\) be a set theory such as \(\textrm{ZFC}\) set theory, \(L\) a formal language defined in \(T\), and \(A\) a set of axioms described in \(L\). For example, \((L,A)\) is the pair of a formal language and the axioms of group theory, or the pair of the formal language \(\textrm{FOST}\) and the \(\textrm{ZFC}\) axioms written in \(\textrm{FOST}\). Then we call \(T\) the meta theory of the coded theory described by \((L,A)\). You can state a meta theoretic proposition on \((L,A)\) as a formula in \(T\). For example, the provability of a formula in \(L\) under \(A\) is expressed by a formula in \(T\).

Although I restricted it to the case where \(T\) is a set theory, we can also define a coded theory in an arithmetic by using the pair function and the Goedel correspondence. Then we call the arithmetic the meta theory of the coded theory.

Of course, \(T\) itself might also be a coded theory in another theory, say, the meta meta theory \(MT\) of \(T\). Otherwise, \(T\) might be described as a naive finite collection of symbols with purely syntax theoretic comparison of sequences.

When we do not consider the meta theory of \(T\), a formula in \(T\) often means a Goedel number in \(T\) itself, so that meta-theoretic arguments like provability and consistency can be formalised in \(T\). In this sense \(T\) can be the meta theory of \(T\) itself. Therefore we usually do not have to care about what the meta theory is, as long as we do not use the meta theory in the definition of a large number. For more details, see this.

I note that \(MT\) is fixed before you consider \(T\). Therefore when you fix \(T\), the maximum length of the tower of theories \(T,MT,MMT,\ldots\) is fixed. In particular, when you want to define a large number in \(T\), you are not allowed to arrange the length of the tower of meta theories by saying something like "I take the meta theory, the meta meta theory, and the meta \(\times \infty\) theory! Using them all, I define a great number! I win!" If you want to do so, then declare the rule which allows you to do so. Then other googologists can also enjoy your rule fairly with you.


What is a meta natural number?[]

Roughly speaking, it is a natural number syntax-theoretically described as a repetition of the successor applied to \(0\), i.e. \(0, 0+1, 0+1+1, 0+1+1+1, \ldots\). It is a natural number, which can be regarded as a term in any arithmetic. You may ask "Isn't every natural number \(n\) presented as \(0 + \underbrace{1+ \cdots + 1}_n\)?" It is correct, but the repetition of \(+1\) is not given in a syntax-theoretic way. In order to distinguish them, I need to clarify the meta theory.

Let \(T\) denote the base theory in which we are working, and \(MT\) denote the meta theory of \(T\). Namely, every formula of \(T\) is coded in \(MT\) into formal strings or a natural number. Moreover, we have a definable \(1\)-ary function \(S(n)\) in \(T\) interpreting the successor function in an arithmetic. If \(T\) itself is an arithmetic, then \(S(n)\) is the usual successor. If \(T\) is a sufficiently strong set theory, then \(S(n)\) is defined as \(n \cup \{n\}\). More precisely, \(S(n)\) is defined by the \(2\)-ary function formula \(F(n,m)\) given as \(m = n \cup \{n\}\), which is a formula in a Henkin extension of \(T\) interpreted as the formula \(\forall x((x \in m) \leftrightarrow ((x \in n) \lor (x = n)))\) in \(T\).

Let \(a\) be a natural number in \(MT\). Then we can define its formalisation in \(T\) as the natural number \(\textrm{Formalise}(a)\) in \(T\) defined by the defining formula \(F_a(n)\) defined as a code in \(MT\) in the following recursive way:

  1. If \(a = 0\), then \(F_a(n)\) is the formula \(n = 0\) in \(T\).
  2. If \(a > 0\), then \(F_a(n)\) is the formula \(\forall x(F_{a-1}(x) \to (n = S(x)))\) in \(T\).

Namely, \(\textrm{Formalise}(a)\) is the unique natural number \(n\) characterised by the formula \(F_a(n)\). In this way, every natural number in \(MT\) corresponds to (a defining formula of) a natural number in \(T\). Roughly speaking, \(\textrm{Formalise}(a)\) is given as \(0 + \underbrace{1 + \cdots + 1}_a\) in \(T\). The point is that \(a\) is a natural number in \(MT\), and the defining formula \(F_a(n)\) of \(\textrm{Formalise}(a)\) is syntax-theoretically defined in \(MT\). A meta natural number is a natural number in \(T\) defined by \(F_a(n)\) for some natural number \(a\) in \(MT\). The intuitive predicate "\(n\) is a meta natural number" itself is not formalised in \(T\), while the statement "\(F(n)\) is a defining formula in \(T\) of a meta natural number" is formalised in \(MT\). Namely, it means the formula "There exists a natural number \(a\) such that the formula \(\forall n(F(n) \to F_a(n))\) in \(T\) is provable in \(T\)" in \(MT\).
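
The recursion above is simple enough to carry out mechanically. The following sketch (the function name defining_formula and the ASCII syntax are mine, chosen only for illustration) produces \(F_a(n)\) as a plain string for a given meta natural number \(a\); fresh bound variables (x0, x1, and so on) are introduced at each step in order to avoid variable capture, a detail which is suppressed in the prose above.

    # A minimal sketch (hypothetical helper): build the defining formula F_a(n)
    # of Formalise(a) as a string, following clauses 1 and 2 above.
    def defining_formula(a, var="n", depth=0):
        """Return F_a(var) as a plain ASCII string."""
        if a == 0:
            return f"{var} = 0"
        fresh = f"x{depth}"                                # bound variable for this step
        inner = defining_formula(a - 1, fresh, depth + 1)  # F_{a-1}(fresh)
        return f"forall {fresh}(({inner}) -> ({var} = S({fresh})))"

    # defining_formula(2) returns
    # "forall x0((forall x1((x1 = 0) -> (x0 = S(x1)))) -> (n = S(x0)))"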

For example, consider the case where \(T\) is \(\textrm{ZFC}\) set theory. How about the natural number \(n\) defined by the defining formula \(F(n)\) given as \((\textrm{CH} \to (n = 0)) \wedge ((\neg \textrm{CH}) \to (n=1))\)? As long as \(\textrm{Con}(\textrm{ZFC})\) holds in \(MT\), neither \(\forall n(F(n) \to F_0(n))\) nor \(\forall n(F(n) \to F_1(n))\) is provable in \(T\), while \(\forall n(F(n) \to (F_0(n) \lor F_1(n)))\) is provable in \(T\). In this sense, \(F(n)\) is not a defining formula of a meta natural number, while \(F(n)\) defines a natural number which coincides with either one of the meta natural numbers \(0\) or \(1\).


What is a standard natural number?[]

In mathematics, a standard natural number means an element of \(\mathbb{N}\). You may ask "Isn't every natural number a standard natural number?" It is correct, as long as we are not considering coded theories. Let \(T\) denote the base theory in which we are working, and \(FT\) a coded theory in \(T\). In other words, \(T\) is the meta theory of \(FT\). Suppose that \(T\) is a sufficiently strong set theory so that \(\mathbb{N}\) makes sense, and that \(FT\) includes an arithmetic so that it admits a predicate \(R(n)\) in \(FT\) meaning "\(n\) is a natural number". Let \(M\) be a model of \(FT\). Relativising \(R(n)\) to \(M\), the set \(M^{\mathbb{N}}\) of natural numbers in \(M\) makes sense. Then given an \(n \in M^{\mathbb{N}}\), we may ask whether \(n\) is a standard natural number or not. It is non-trivial, because \(M^{\mathbb{N}}\) does not necessarily coincide with \(\mathbb{N}\).

For example, suppose that \(FT\) is a coded \(\textrm{PA}\), and \(M\) is a model satisfying \(\{n \cup \{n\} \mid n \in M\} \subset M\) in which the interpretation of \(0\) is \(\emptyset\) and the interpretation of the successor is given by the \(1\)-ary map \(M \to M, \ n \mapsto n \cup \{n\}\). In this case, we have \(\mathbb{N} \subset M^{\mathbb{N}}\) by induction, while we do not know whether there is an \(n \in M^{\mathbb{N}}\) which does not belong to \(\mathbb{N}\). Such an \(n\) is called a non-standard natural number. A model admitting a non-standard natural number is called a non-standard model of \(\textrm{PA}\), and is known to exist.

Since \(T\) is the meta theory of \(FT\), for a natural number \(a\) in \(T\), we can consider a meta natural number in \(FT\) given as the formalisation \(\textrm{Formalise}(a)\) in \(FT\) of \(a\). Then by induction on \(a\) in \(T\), the relativisation of \(\textrm{Formalise}(a)\) in \(M\) coincides with \(a\), and hence is a standard natural number. In this sense, a meta natural number is naturally regarded as a standard natural number.


Is every computable large number a meta natural number?[]

The answer depends on the meaning of "computable", which is quite ambiguous. Instead, given a defining formula \(F(n)\) of a computable natural number \(n\) in the sense here, we can ask whether \(F(n)\) is a defining formula of a meta natural number or not in the sense here. Then the answer is "usually" yes, but is not necessarily so. In order to argue on this issue in a precise manner, we need to fix the base theory.

Let \(T\) denote the base theory in which we are working, and \(MT\) denote the meta theory of \(T\). Let \(y_i = f_i(x_i)\) denote the enumeration of atomic subformulae of \(F(n)\) including a reference to a computable function. If there is an \(f_i\) which is not coded by a natural number with a defining formula of a meta natural number, then there exists a defining formula of a computable natural number which is not a defining formula of a meta natural number, because \(f_i\) is coded by a natural number defined by a defining formula of a computable natural number. Therefore we may assume that all of the \(f_i\)'s are coded by meta natural numbers. Similarly, we may assume that all of the \(y_i\)'s and \(x_i\)'s are defined by defining formulae of meta natural numbers. In this case, the atomic formula \(y_i = f_i(x_i)\) can be obtained as the formalisation of an equality in \(MT\), which I also denote by \(y_i = f_i(x_i)\). The issue is that even if \(y_i = f_i(x_i)\) is provable in \(T\), it is not necessarily provable in \(MT\). If it is unprovable in \(MT\), then the defining formula \(F(n)\) is not a formalisation of a defining formula in \(MT\).

For example, if \(MT\) is \(\textrm{ZFC}\) set theory augmented by \(\textrm{Con}(T)\) and \(T\) is a coded \(\textrm{ZFC}\) set theory augmented by \(\neg \textrm{Con}(\textrm{ZFC})\), then we can define a natural number in \(T\) as the least natural number which codes a proof of \(\perp\) under the coded \(\textrm{ZFC}\) set theory, but it is not defined by a defining formula of a meta natural number, because there is no natural number in \(MT\) which codes a proof of \(\perp\) under \(\textrm{ZFC}\) set theory.

Usually, such a problem does not occur because we only use \(f(x)\) for a computable function \(f\) and an input \(x\) such that the termination of the computation of \(f(x)\) is "expected" to be provable in a weak arithmetic. (Unlike the totality of \(f\), the pointwise well-definedness of \(f\) is often provable in a weak arithmetic even if \(f\) is fast-growing.) However, the pointwise well-definedness is usually quite difficult to verify, and the "expectation" is usually not based on any theoretical background.


Is there a large number greater than any meta natural number?[]

Partially yes. LittlePeng9's \(N\) explained here is an example of such a large number in an arithmetic. Also, if you are working in \(\textrm{ZFC} + \neg \textrm{Con}(\textrm{ZFC})\), which is consistent as long as \(\textrm{ZFC}\) is consistent by Goedel's incompleteness theorem, then the least Goedel number \(p\) of the proof of the contradiction under the coded \(\textrm{ZFC}\) set theory is a well-defined natural number, which is greater than any meta natural number in the sense that the inequality \(p > n\) is provable for any meta natural number \(n\). It does not mean \(p > p\), because \(p\) is not given as a meta natural number. Similarly, \(S(1919)\) is known to be greater than any meta natural number if you are working in the same theory.


Formula[]

Are natural numbers irrelevant to formal strings?[]

No. Formal strings in proof theory are encoded into natural numbers so that theories including arithmetic can refer to them. However, googologists who know little about formal logic but believe that they deeply understand googological notions using formal logic frequently state that formal strings are irrelevant to natural numbers, and spread wrong information about Rayo's number, the least transcendental integer, TR function, and so on.

For example, a typical claim cast by them is something like "Why are you talking about natural numbers instead of actual formal strings? Since natural numbers are irrelevant to them, the whole discussion is pointless!" Although this sounds quite illogical to people who know the precise definitions of googological notions using formal logic, they tend not to understand the incorrectness of their claims even when we explain it, because they do not know the precise definitions of the notions which they are referring to as if they fully knew about them. See also #I fully understand it although I do not know the precise definition!.

Some of them even claim that the use of natural numbers just makes the argument too complicated, and hence is useless. If it were actually useless, then why would the professional mathematicians working in logic still be using natural numbers in that way? If they tried once to imagine the reason, they would notice the incorrectness of their illogical claim, unless they unreasonably believe that they understand formal logic better than the professionals.

Indeed, there are unfortunate people who somehow believe that anything is "obvious" to them even when it is unproved. One tragedy is that they even refer to propositions which are known to be false in the same way. For example, I have seen people insisting on the following illogical claims as if they were "trivial" facts for them:

  1. "The least transcendental integer is ill-defined in ZFC set theory!" (See transcendental integer#well-definedness.)
  2. "Formal proofs can never be natural numbers!" (Learn the precise definition of the notion of formal proofs.)
  3. "There is no difference between natural numbers in meta theory and those in the base theory!" (See #Is every formula in a coded theory a formula in the base theory?.)
  4. "If \(\textrm{ZFC}\) is consistent, then \(\textrm{ZFC} + \neg \textrm{Con}(\textrm{ZFC})\) is inconsistent!" (Learn incompleteness theorem.)

Of course, it is very natural for beginners to make these mistakes, because they are difficult topics. However, it is quite illogical to insist that they are trivially correct even after it is pointed out that their understanding of formal strings is wrong. Since it has turned out that their understanding of formal strings is incorrect, they should doubt their proofless belief. Also, some of them claimed that requiring proofs for uncertain statements is nonsense because they are "clear" for them. I hope they learn the importance of actual definitions and proofs from their failures.

When people do not know the precise definition of a notion, then they can prove essentially nothing non-trivial about it. Therefore if they understood the basic knowledge that whether a given mathematical statement is correct or not should be determined by actual proofs rather than their belief, they would not spread incorrect information. However, people who ignore the precise definitions of what they are talking about unfortunately tend to also ignore actual proofs. Since the notion of formal strings "looks" quite easy to intuitively understand, it frequently becomes the target of their proofless statements which are claimed to be "obvious" for them. See also #It is obvious!.


Can't we deal with properties of formulae in the base theory?[]

Partially no, but partially yes. Formulae in the base theory are not terms of the base theory, and hence you are not allowed to deal with properties of them in the base theory. On the other hand, there is a well-known method using Goedel numbers called the Goedel correspondence, which enables us to deal with formalisations of properties of formulae such as provability.
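
As a toy illustration of such an encoding (the scheme below is mine and is much cruder than the encodings actually used in proof theory, but the principle is the same), a formal string can be turned into a single natural number by reading its characters as digits in a sufficiently large base, so that syntactic properties of formulae become arithmetic properties of natural numbers:

    # A minimal sketch (hypothetical encoding): turn a formal string into a
    # natural number and back, so that statements about formulae can be
    # re-read as statements about natural numbers.
    BASE = 0x110001          # strictly larger than any ord(ch) + 1

    def goedel_number(formula: str) -> int:
        n = 0
        for ch in formula:
            n = n * BASE + (ord(ch) + 1)   # +1 keeps leading characters recoverable
        return n

    def decode(n: int) -> str:
        chars = []
        while n > 0:
            n, d = divmod(n, BASE)
            chars.append(chr(d - 1))
        return "".join(reversed(chars))

    # decode(goedel_number("forall x (x = x)")) == "forall x (x = x)"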


Is it possible to deal with natural numbers without axioms?[]

No. First of all, if you do not know the definition of a definition in formal logic, see the explanation here. Then you will notice that if you could define the notion of natural numbers in an appropriate way (e.g. so that \(0 \neq 1\)) in first order logic without axioms, then you could define it in the same way in any first order theory such as group theory or category theory. This contradicts the obvious fact that those theories admit a model whose underlying set is a singleton.

I note that there are several ways to translate axioms into rules of inference, and there are several formal systems other than first order set theory. Therefore logicians can avoid using axioms explicitly. On the other hand, if you were actually a logician, then you would not ask the question in the title of this section.


Is it possible to deal with computable functions without axioms?[]

No. To begin with, if you do not know the reason why we need axioms to define large numbers, see the explanation here. Although googologists sometimes say "Since we are dealing with computable functions, the choice of axioms is irrelevant to this context!", it is completely incorrect.

Usually, we are working in \(\textrm{ZFC}\) set theory. Let \(T\) be a stronger set theory than \(\textrm{ZFC}\) set theory, and \(f\) a recursive function defined in a sufficiently weak arithmetic so that each step of the computing process of \(f\) replaces meta natural numbers by meta natural numbers. Suppose that the termination of the computing process of \(f(n)\) for the case \(n = 10^{100}\) is provable under \(T\). Then \(f(10^{100})\) is well-defined under \(T\).

The validity of each step of the computing process of \(f\) is provable under a sufficiently weak arithmetic, and hence under \(\textrm{ZFC}\) set theory. Then does the concatenation of the proofs in a suitable sense form a proof of the termination of the computing process of \(f(n)\) for the case \(n = 10^{100}\) under \(\textrm{ZFC}\) set theory? It is not necessarily true. In order to justify this construction of a proof, we need to know the meta-theoretic finiteness of the number of the steps, i.e. the termination of the computing process in the meta theory.

Remember that we have not used the assumption of the termination in \(T\) in the argument in the previous paragraph. Does the termination in \(T\) ensure the termination in the meta theory? Such a property is called the \(\Sigma_1\)-soundness of \(T\), which does not necessarily hold. If \(T\) is \(\omega\)-consistent, then \(T\) is \(\Sigma_1\)-sound. On the other hand, the \(\omega\)-consistency is stronger than the usual consistency, and its formalisation is not provable under the meta theory if the meta theory is weaker than or equivalent to \(T\) and \(T\) is actually \(\omega\)-consistent.
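
For reference, the \(\Sigma_1\)-soundness of \(T\) used above can be stated roughly as follows (a standard formulation, written here in the terminology of this article): for any \(\Delta_0\) formula \(\theta(n)\), \begin{eqnarray*} T \vdash \exists n(\theta(n)) \ \Longrightarrow \ \theta(a) \textrm{ actually holds for some meta natural number } a. \end{eqnarray*} Applied to a formula \(\theta(n)\) expressing "\(n\) codes a halting computation history of \(f(10^{100})\)", it is precisely this property that transfers the termination provable in \(T\) to the termination in the meta theory.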

In conclusion, googologists who believe that the choice of axioms is irrelevant to this context might be confounding the following distinct notions:

  • the finiteness in \(\textrm{ZFC}\) set theory
  • the finiteness in \(T\)
  • the finiteness in the meta theory
  • the intuition-based finiteness in the real world


Is every formula in a coded theory a formula in the base theory?[]

No, even if both theories are set theories with the same axioms. In order to explain the reason, we assume the consistency of \(\textrm{ZFC} + \neg \textrm{Con}(\textrm{ZFC})\) in the meta theory. Is it a reasonable assumption? Well, we often assume the consistency of \(\textrm{ZFC}\) in the usual mathematics in a non-formalised sense. If it is actually consistent, i.e. there is no meta natural number which is the Goedel number of a formal proof of \(\perp\) under \(\textrm{ZFC}\), then the consistency in the formalised sense is not provable under \(\textrm{ZFC}\) itself by the incompleteness theorem. Therefore we can only believe the consistency of \(\textrm{ZFC}\) in the non-formalised sense, and then it is natural to also believe the consistency of \(\textrm{ZFC} + \neg \textrm{Con}(\textrm{ZFC})\), which follows from it by the incompleteness theorem, in the non-formalised sense. Admittedly, this is quite complicated, and hence we often make mistakes in these arguments.

Now we come back to the original question. Suppose that every formula in a coded theory were a formula in the base theory. Under the assumption in the meta theory, there is a model \(M\) of \(\textrm{ZFC} + \neg \textrm{Con}(\textrm{ZFC})\) by the completeness theorem, and the sequence of formulae forming the proof of \(\perp\) in \(M\) would then be derived from a sequence of formulae in the base theory, and hence also from a sequence of formulae in the meta theory. This contradicts the assumption of the consistency of \(\textrm{ZFC}\) in the meta theory.

What occurs here? So, consider the following formula \begin{eqnarray*} \textrm{ZFC} + \textrm{Con}(\textrm{ZFC} + \neg \textrm{Con}(\textrm{ZFC})) \vdash \left(\exists M(M \models \textrm{ZFC} + \neg \textrm{Con}(\textrm{ZFC})) \right) \end{eqnarray*} I call the theory in which \(M\) lives the base theory. Then it is a provable formula in the meta theory of the base theory. In order to make the formula clearer, I put a suffix on each occurrence of \(\textrm{ZFC}\). \begin{eqnarray*} \textrm{ZFC}_1 + \textrm{Con}(\textrm{ZFC}_2 + \neg \textrm{Con}(\textrm{ZFC}_3)) \vdash \left(\exists M(M \models \textrm{ZFC}_4 + \neg \textrm{Con}(\textrm{ZFC}_5)) \right) \end{eqnarray*} Then \(\textrm{ZFC}_1\) is a set of formulae in the meta theory, \(\textrm{ZFC}_2 = \textrm{ZFC}_4\) is a set of formulae in the base theory, and \(\textrm{ZFC}_3 = \textrm{ZFC}_5\) is a set of formulae in the coded theory.

In particular, \(\textrm{Con}(\textrm{ZFC}_2)\) is not the same as \(\textrm{Con}(\textrm{ZFC}_5)\). Such a problem occurs for the same reason that there is a natural number which is not derived from a meta natural number.


Model[]

What is a model?[]

You may say "I use the truth in the model! It is stronger than the provability! I win!" It might be true, if you understand the definition of the notion of a model well. But sometimes googologists refer to models in mathematically incorrect ways.

There are several definitions of the notion of a model in first order logic. Let \(T\) be a set theory.

  1. Let \(FT\) be a coded theory in \(T\) based on a formal language \(L\) in \(T\), and a set \(A\) of formulae described in \(L\).
    1. The formula \(M \models A\) with a free variable \(M\) in \(T\) is defined as the formula in \(T\) given as the satisfaction property of \(A\) presented by using the formal language \(L_M\) associated to \(M\) with a suitable embedding \(L \hookrightarrow L_M\).
    2. Suppose that \(FT\) is also a set theory. Then the formula \(X \models A\) with a definable class \(X\) in \(T\) is defined as the provability of each formula in \(A\) relative to the formalisation of \(X\) in \(FT\).
  2. Let \(MA\) be a set of formulae in \(T\), which is a meta set in the meta theory, or is identified with the set of the corresponding Goedel numbers in \(T\) in the sense here.
    1. Suppose that \(MA\) is a finite set. I denote by \(F\) the formula in \(T\) given by connecting all formulae in \(MA\) by \(\wedge\). The formula \(M \models MA\) with a free variable \(M\) in \(T\) is defined as the formula \(F^M\) in \(T\).
    2. The formula \(X \models MA\) with a definable class \(X\) in \(T\) is defined as the provability of each formula in \(MA\) relative to \(X\).
      1. When we consider the meta theory of \(T\), then the formula \(X \models MA\) is a meta formula.
      2. When we consider the formalised provability in \(T\) by using Goedel numbers, then the formula \(X \models MA\) is a formula in \(T\).
    3. When \(T\) is also a coded set theory in the meta theory, the declaration "if \(M\) is a model of \(MA\)" with a free variable \(M\) in \(T\) sometimes means "under the additional axiom schema \(\{ F^M \mid F \in MA\}\)".

Also, there are notions of a model in second or higher order logics. Unlike first order logic, you need to specify semantics in order to consider a model, and the corresponding arguments become much more difficult. Therefore the terminology of a model is quite ambiguous.

In the usual mathematics, such ambiguity of the terminology does not cause so many troubles, because other precise arguments in the same context helps us to understand which terminology we are using. On the other hand, in googology, arguments with uncomputable functions often contain intuition-based rough sketch of idea, and hence the ambiguity of the terminology possibly becomes serious.


Is the satisfaction equivalent to the relativisation?[]

Partially yes, and partially no. In order to explain the reason in a formal way, we denote by \(MT\) the meta theory, and by \(L\) a language of first order set theory formalised in \(MT\).

Let \(\varphi\) be an \(L\)-formula, and \(S\) a term in \(L\), i.e. a set as \(L\) is a language of first order set theory. Then the relativisation \(\varphi^S\) is defined in \(MT\) as an \(L\)-formula.

On the other hand, in order to consider the satisfaction \(S \models \varphi\), we need to clarify an additional assumption. We fix a theory \(T\) given as \(L\)-formulae so strong that the predicate \(\models\) is formalisable in \(T\) in a standard way. By the definition, \(\models\) is applicable only to a pair of a term in \(L\) and a Goedel number in \(T\), which is also a term in \(L\). As \(\varphi\) is an \(L\)-formula, which is not literally a term in \(L\), we need to justify the convention \(S \models \varphi\).

For this purpose, we should assume that \(L\) is formalised in \(MT\) in terms of natural numbers, i.e. Goedel numbering, instead of other sorts of implementations of formal strings. In that case, every \(L\)-formula \(\varphi\) is identified with its Goedel number in \(MT\), corresponding to a Goedel number \(\ulcorner \varphi \urcorner\) in \(T\) through the standard formalisation of natural numbers, and hence gives a formula in the formalisation of \(L\) in \(T\). Then \(S \models \varphi\) makes sense as a shorthand of \(S \models \ulcorner \varphi \urcorner\). Moreover, the meta theorem that the equivalence of \(\varphi^S\) and \(S \models \ulcorner \varphi \urcorner\) is provable under \(T\) for any term \(S\) in \(L\) and any \(L\)-formula \(\varphi\) is provable in \(MT\), as long as \(T\) is sufficiently strong. In that sense, \(\varphi^S\) and \(S \models \varphi\) are equivalent.

The situation drastically changes when we replace \(\varphi\) by a schema \(\Gamma\), i.e. a possibly infinite set of \(L\)-formulae. We denote by \(\Gamma^S\) the relativised schema \(\{\varphi^S \mid \varphi \in \Gamma\}\). On the other hand, \(S \models \Gamma\) does not directly make sense, because the construction \(\varphi \mapsto \ulcorner \varphi \urcorner\) is just given as a map in \(MT\) from an \(L\)-formula to a defining \(L\)-formula of a Goedel number in \(T\) instead of a map in \(T\). It does not cause a problem when \(\Gamma\) is a finite set, as we can replace \(\Gamma\) by the logical conjunction of the formulae in \(\Gamma\) in that case. When \(\Gamma\) is an infinite set, we should fix a Turing machine \(M\) computing a recursive enumeration of \(\Gamma\), and formalise \(S \models \Gamma\) as the \(L\)-formula \(\forall n \in \mathbb{N}, S \models \ulcorner M \urcorner(n)\), where \(\ulcorner M \urcorner\) is the formalisation of \(M\), i.e. the Turing machine in \(T\) given by the formalisation of the code of \(M\).
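
As a toy illustration of such a recursive enumeration (the function names and the thin family of formulae below are choices of mine; an actual enumeration would range over all \(L\)-formulae with the free variable \(x\)), the role of the Turing machine \(M\) can be played by a simple generator:

    # A minimal sketch: a recursive enumeration of an infinite axiom schema,
    # here instances of a separation-style schema indexed by formulae phi_n(x).
    # Only a thin, syntactically distinct family of phi_n is enumerated to keep
    # the sketch short.
    from itertools import count, islice

    def candidate_formulas():
        for n in count(0):
            yield "~~(" * n + "(x = x)" + ")" * n          # phi_n(x)

    def schema():
        """The n-th output plays the role of the coded formula M(n) in the text."""
        for phi in candidate_formulas():
            yield f"forall a exists b forall x((x in b) <-> ((x in a) & {phi}))"

    for axiom in islice(schema(), 3):
        print(axiom)

The only point is that the map sending \(n\) to the \(n\)-th axiom is computable, which is what allows \(S \models \Gamma\) to be written as the single formula \(\forall n \in \mathbb{N}, S \models \ulcorner M \urcorner(n)\).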

The schema \(\{\varphi^S \mid \varphi \in \Gamma\}\) is not a single \(L\)-formula, while the formalised satisfaction \(S \models \Gamma\) is a single \(L\)-formula depending on a non-unique choice of \(M\). Unfortunately, they are not necessarily equivalent to each other. See also the issue that a formula in the formalisation of \(L\) in \(T\) is not necessarily given as the formalisation of an \(L\)-formula.


Does a model of the base set theory form a model of a coded set theory with the same axioms?[]

Partially yes, but partially no. First of all, if you do not know the definition of a model, see the explanation here. Let \(MA\) denote the set of axioms of the base set theory \(T\), and \(A\) the set of axioms of a coded set theory given as the formalisation of \(MA\).

When \(MA\) is a finite set, then the satisfaction property \(M \models MA\) for a set \(M\) in \(T\) is a formula in \(T\) which is equivalent to the satisfaction property \(M \models A\) by Lemma 9.11 in [Dev][2]. So in this context, a set model of the base set theory is a set model of the coded set theory.

When \(MA\) is not necessarily a finite set and we regard \(MA\) as a set of Goedel numbers, the satisfaction property \(M \models MA\) for a set \(M\) in \(T\) is a formalised formula in \(T\) which is equivalent to the satisfaction property \(M \models A\) by definition. So in this context, a set model of the base set theory is a set model of the coded set theory.

On the other hand, when \(MA\) is not necessarily a finite set and \(T\) is also a coded set theory in the meta theory, the declaration "if \(M\) is a model of \(MA\)" with a free variable \(M\) in \(T\) sometimes means "under the additional axiom schema \(\{ F^M \mid F \in MA\}\) added to \(T\)". Then the additional axiom schema consists of formulae in \(T\) whose Goedel numbers are meta natural numbers. Therefore the assumption does not imply the formalised satisfaction property \(M \models MA\), because the formalised set of axioms can contain formulae whose Goedel numbers are not meta natural numbers. So in this context, a set model of the base set theory is not a set model of the coded set theory.

For the last case, it is better to explain an example. Suppose that \(T\) is \(\textrm{ZFC}\) set theory. Let \(\textrm{FOST}\) be a formal language of set theory defined in \(T\). Then the formalised axiom \(A\) of \(\textrm{ZFC}\) set theory described in \(\textrm{FOST}\) might be "different" from the original axiom \(MA\). Therefore the declaration "if \(M\) is a model of \(MA\)" does not imply \(M \models A\) if the meaning of a model is the last one in the context.

You might say "Huh? But \(\textrm{FOST}\) is the language of set theory. So every formula written in \(\textrm{FOST}\) is derived from \(T\)!" However, it is wrong. See the example of a formula in the coded theory which is not derived from a formula in the base theory here.

When you are considering a class model \(X\) of the base set theory, there are two distinct satisfaction properties. One is the meta theoretic satisfaction, and the other one is the formalised satisfaction.

If you are considering the meta theoretic satisfaction, there is no way to compare the meta theoretic formula \(X \models MA\) with the formula \(X \models A\) in \(T\). So in this context, a class model of the base set theory is not a class model of the coded set theory.

If you are considering the formalised satisfaction, then the formulae \(X \models MA\) and \(X \models A\) in \(T\) are equivalent to each other. So in this context, a class model of the base set theory is a class model of the coded set theory.


Is it possible to use a model in the definition of a large number?[]

Partially yes, but partially no. You can include a model \(M\) in the definition of a large number, as long as either one of the following holds:

  • You are working on set theory \(T\) and your formula using \(M\) is actually a definition in \(T\) in the sense of this subsection.
  • You are working on \(M\) itself and your formula actually names a natural number relative to \(M\) in the sense of this subsection.

For example, imagine that you are working on a set theory \(T\). A sentence like "I denote by \(N\) the cardinality of a finite model \(G\) of group theory" is not a closed formula in \(T\) defining \(N\) unless the definition of \(G\) is specified in \(T\).

Similarly, even if \(T\) is \(\textrm{ZFC}\) set theory, you are not allowed to use a model \(M\) of the \(\textrm{ZFC}\) axioms for the same reason, as long as you are participating in a large number contest without a rule specifying the use of such an \(M\). A sentence like "I denote by \(N \in \mathbb{N}\) (caution: it is not \(\mathbb{N}^M\)) the minimum of the set of natural numbers \(k\) such that there is a formula \(D\) in \(T\) with \(M \models (\exists ! x(D)) \wedge D[k/x]\)" is not a definition in \(T\) as long as the definition of \(M\) is not specified.

Of course, if you are working on a model \(M\) of \(T\) but not on \(T\) itself, then you can use \(M\). It depends on the rule of the googology which you are enjoying. In this case, the sentence like "I denote by \(N \in \mathbb{N}^M\) the minimum of the set of natural numbers greater than any natural number \(k\) named by some formula \(D\) in \(T\) with length smaller than or equal to \(10^{100}\)" can be easily interpreted to be a definition in \(T\), using the meta theory or the formalisation of the satisfaction.

Indeed, let \(\Sigma\) denote the finite set of (complete representatives of) formulae in \(T\) with length smaller than or equal to \(10^{100}\) with a free variable \(x\), and \(D\) the formula with a free variable \(y\) given by connecting \(\forall x \in \mathbb{N}, ((\exists ! x(\delta)) \to (\delta \to (x < y)))\) for each \(\delta \in \Sigma\) by \(\wedge\). They are formulae in \(T\), and \(N\) can be defined as the minimum of the subset of \(\mathbb{N}\) defined by \(D^M\) by the separation axiom.

Also, if you are working on \(T\) and if you are allowed to use a model \(M\) to define a natural number \(N \in \mathbb{N}\) (not \(\mathbb{N}^M\)), then you can interpret your definition in a similar way.

Recall that in the usual mathematics, we are allowed to use a model \(M\) on \(T\) even if \(M\) is not specified. It is simply because we are implicitly quantifying \(M\). This does not mean that you can freely use an unquantified unspecified model \(M\). For example, we deal with statements like "if \(M\) satisfies \(\textrm{CH}\), then there is an extension \(M'\) of \(M\) which does not satisfy \(\textrm{CH}\)", which are appropriately quantified.

Also, the sentence like "I denote by \(N \in \mathbb{N}\) the minimum of the set of natural numbers greater than any natural number \(k\) admitting a formula \(D\) in \(T\) with length smaller than or equal to \(10^{100}\) such that \(D^M\) names \(k\) for any transitive model \(M\) of \(T\)" can be easily interpreted to be a definition in \(T\). I note that such a \(D\) actually defines \(k \in \mathbb{N}\) in \(T\) by completeness theorem, and hence the use of the quantified model \(M\) can be essentially removed.

On the other hand, if you define a function \(D_{\bullet}\) sending a model \(M\) to (the Goedel number of) a formula \(D_M\) in \(T\) such that \(D_M\) names a natural number relative to \(M\), then \(D_{\bullet}\) is not a definition of a natural number in \(T\). You can define a model-dependent natural number in \(T\) only by a single (model-independent) formula in \(T\).


Can a formula in FOST define a natural number?[]

Partially yes, but partially no. First of all, if you do not know the reason why we need axioms to define large numbers, see the explanation here. Then you will notice that you need a specific axiom in order to define a natural number. On the other hand, \(\textrm{FOST}\) itself is just a formal language, and hence no specific axiom is attached to it. Usually, we use the axioms of the usual set theory called \(\textrm{ZFC}\) set theory, but the choice of axioms should be clearly declared if there is ambiguity.


Is the satisfaction at a segment of \(V\) equivalent to the truth in \(V\)?[]

Not necessarily. One typical mistake is to confound the satisfaction at \(V_I\), where \(I\) denotes the least inaccessible cardinal, with the truth at \(V\), when one tries to explain the definability of Rayo's function in first order set theory. For example, the existence of the least inaccessible cardinal is an example of a sentence for which the satisfaction and the truth are not equivalent, as long as \(\textrm{ZFC}\) set theory is consistent.
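
Concretely (with \(\textrm{IC}\) as an ad hoc abbreviation of mine for the sentence asserting the existence of an inaccessible cardinal): under the assumption that the least inaccessible cardinal \(I\) exists, \(\textrm{IC}\) is true at \(V\) in the informal sense, while \(V_I \not\models \textrm{IC}\), because inaccessibility is absolute between \(V_I\) and \(V\) for cardinals below \(I\) and there is no inaccessible cardinal below the least one. Hence the satisfaction at \(V_I\) and the truth at \(V\) already disagree at this single sentence.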

Unfortunately, the satisfaction at \(V_I\), which is formalised in \(\textrm{ZFC}\) augmented by the existence of the least inaccessible cardinal, is much less "powerful" than the truth predicate for \(V\), which is unformalisable as long as the set theory is consistent, in the formulation of a large function. Indeed, Rayo's function refers to the truth of a sentence at both \(V\) and \(V_I\) if the least inaccessible cardinal exists, because the satisfaction at \(V_I\) is formalised in \(V\), but a variant of Rayo's function formulated in first order set theory by using the satisfaction at \(V_I\) cannot refer to the truth at \(V\).


Uncomputable Function[]

Is every function defined by finite steps computable?[]

No. A function returning a transfinite ordinal cannot be computable. In addition, a function returning natural numbers whose definition refers to transfinite ordinals, comparison of growth rates, or the result of a search through an infinite set does not naturally admit an algorithm to compute it. It might happen to be computable, but you should not assume that as an obvious fact unless you write down an actual algorithm, which cannot refer to ordinals or comparison of growth rates. See also #Is FGH computable?, #Is an OCF computable?, and User blog:P進大好きbot/List of Common Failures in Googology#Infinite Searching.


Does any uncomputable function dominate all computable functions?[]

No. For example, there are many uncomputable functions whose values are either \(0\) or \(1\), e.g. the characteristic function of the image of Busy Beaver function. They do not dominate the identity function, which is a computable total function.


Is Busy Beaver function ill-defined?[]

No. It is just uncomputable, and is well-defined in a sufficiently strong theory such as \(\textrm{ZFC}\) set theory. See also the explanation of an uncomputable number here.


Is Busy Beaver function comparable to ω_1^CK in FGH?[]

It does not make sense, because FGH along an ordinal is ill-defined without a specific choice of a system of fundamental sequences. Even if you consider the system of fundamental sequences associated to Kleene's \(\mathcal{O}\) notation, it heavily depends on a choice of an enumeration of Turing machines. Even if you fix an enumeration of Turing machines, it is known that \(\omega_1^{\textrm{CK}}\) in FGH does not necessarily outgrow \(\omega+3\) in the Wainer hierarchy[3]. See also the issue on the order-preserving property of FGH.


Is n-th order Busy Beaver function comparable to ω_n^CK in FGH?[]

It does not make sense, because FGH along an ordinal is ill-defined without a specific choice of a system of fundamental sequences. It is known that there is a counterexample of a system of fundamental sequences for the case \(n = 1\), as explained above.


Is every function definable in Peano arithmetic computable?[]

No. For example, Busy Beaver function is known to be definable in Peano arithmetic. In particular, a function definable in Peano arithmetic is not necessarily bounded by \(\varepsilon_0\) in the FGH with respect to the system of fundamental sequences for the Veblen hierarchy.


Uncomputable Number[]

What is an uncomputable number?[]

First of all, you need to understand the definition of the computability of a function. A formula is said to define a computable function if it defines a partial map \(f\) on \(\mathbb{N}\) for which there exists a Turing machine \(M\) such that \(f(n)\) coincides with the output of \(M(n)\) for any \(n \in \mathbb{N}\) in the domain of \(f\). Therefore the computability of a function is just a restriction, and there are many well-defined uncomputable functions.

A computable number (in googology, not in the sense of computability theory, which deals with computable real numbers) roughly means "a number described as \(f(n)\) for some computable function \(f\) and a meta natural number \(n\)". Since every natural number can be expressed as the output of a constant function, which is a computable function by definition, the rough explanation above is not so meaningful. Actually, this terminology is not a mathematical one, but an intuitive one. Therefore there is no precise definition of the notion of a computable number. But it is often used, because the ambiguity does not cause many troubles. Then an uncomputable number just means a natural number which is not computable.

One easy and fair solution to reject such a constant function is to require the creator of a computable number to explicitly write down an algorithm to compute it. Conversely, if you allow any computable function without an explicitly written algorithm to compute it, then you need to allow such a constant function.
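
To make the point about constant maps concrete, here is a sketch (the value and the function name are arbitrary choices of mine): the program below outputs a fixed natural number without performing any computation specific to that number, and is nevertheless a computable function by definition, which is exactly why "being the output of a computable function" does not restrict anything.

    # A minimal sketch: every natural number is the output of a constant map,
    # which is computable by definition.
    def my_large_number():
        return 10 ** 100 + 7      # the value is simply hard-coded

    print(my_large_number())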


Can we compare two uncomputable large numbers?[]

Partially yes, but partially no. To begin with, we need to fix a formal theory in order to define a large number by the reason here. Let \(N_0\) be a large number defined in a formal theory \(T_0\), and \(N_1\) be a large number defined in a formal theory \(T_1\).

First, suppose that \(T_0\) and \(T_1\) share an arithmetic \(A\). If there is a theory \(T_2\) extending both \(T_0\) and \(T_1\) in a compatible way respecting \(A\), then the comparison of \(N_0\) and \(N_1\) makes sense in \(T_2\). Such a \(T_2\) is not unique, and hence the result of the comparison depends on the choice of \(T_2\).

For example, we can compare \(BB(10^{100})\) defined in \(\textrm{ZFC}\) set theory and \(\textrm{Rayo}(10^{100})\) defined in sufficiently strong second order set theory such as second order \(\textrm{NBG}\) set theory, because \(\textrm{ZFC}\) set theory can be embedded in that second order set theory.

However, if \(T_0\) and \(T_1\) are inconsistent with each other over \(A\), then a contradiction is provable under \(T_2\), and hence the comparison is eventually nonsense. Namely, both inequalities \(N_0 < N_1\) and \(N_0 > N_1\) are provable under \(T_2\).

Next, suppose that \(T_0\) and \(T_1\) do not share an arithmetic. The situation becomes worse. There is no way to compare \(N_0\) and \(N_1\). For example, we cannot compare \(BB(10^{100})\) and the non-standard number \(N\) defined in an extension of Robinson arithmetic here. The set \(\mathbb{N}\) of natural numbers in \(\textrm{ZFC}\) set theory satisfies Peano arithmetic, but Peano arithmetic and the extended Robinson arithmetic are inconsistent with each other. The two theories do not share the notion of a natural number.

At least, the comparison between an uncomputable large number and a meta natural number makes sense. It is a formula in the base theory in which the uncomputable large number is defined, and is provable, disprovable, or independent.

By the way, such a problem rarely occurs when we are dealing with computable large numbers, because they are usually meta natural numbers. (Since the notion of the computability of a natural number is not strictly defined in terms of mathematics, sometimes one might refer to a non-meta natural number as a computable number, though.) That is why I restricted the topic in the title to uncomputable large numbers. Of course, there is a similar (but less serious) problem for computable large numbers. For more details, see this.


Can we add two uncomputable large numbers?[]

Partially yes, but partially no. The addition, the multiplication and the power of uncomputable large numbers (resp. combinations, e.g. the composite, of uncomputable large functions) are not necessarily well-defined, for the same reason that the comparison of two uncomputable large numbers does not necessarily make sense, as explained in the previous section. In particular, Sam's number, Croutonillion, and many other similar "naive" combinations of known uncomputable large functions created by beginners are completely ill-defined.

At least, the sum \(N + n\) of an uncomputable large number \(N\) and a meta natural number \(n\) makes sense. It is an uncomputable large number defined in the base theory in which \(N\) is defined, and is larger than \(N\) as long as \(n \neq 0\). In particular, you can always add \(1\) to any known large number. For example, you can state "My large number is Rayo's number plus \(1\)! It is bigger than Rayo's number! I win!" Unfortunately, such a "new" large number is traditionally not regarded as your own work in the googology community, because you need no effort to do so and the resulting stuff yields no contribution to valid googology.

For the same reason that we can compare computable large numbers, explained in the last paragraph of the previous section, you can add computable large numbers freely (as long as "computable large numbers" mean specific meta natural numbers in the context). That is why I restricted the topic in the title to uncomputable large numbers.


Is Rayo's number formalisable in a first order set theory?[]

Partially yes, but partially no. Since the original definition uses truth assignments for \(\textrm{FOST}\) formulae, which are second order objects, we need to fix an alternative formulation. If we want to formalise a truth predicate, then the most common first order set theory, i.e. \(\textrm{ZFC}\) set theory, is insufficient, as I explained here. In other words, the first order alternative Rayo's number using a truth predicate is ill-defined in \(\textrm{ZFC}\) set theory. On the other hand, a stronger set theory called \(\textrm{MK}\) set theory can formalise a truth predicate for \(V\), because it can directly deal with classes, which are not objects in \(V\). Since such a truth predicate is not unique, it is not natural to regard it as a standard alternative in a first order set theory. There is another alternative based on the syntax-theoretic definability instead of a truth predicate. Although it is formalisable in \(\textrm{ZFC}\) set theory, the definability in terms of the existence of a defining formula can be determined by an oracle Turing machine, and hence this alternative is not significantly stronger than Busy Beaver function. Therefore it is not natural to regard it as a standard alternative in a first order set theory. See also #Is the satisfaction at a segment of \(V\) equivalent to the truth in \(V\)?.


Is Rayo's number defined without axioms?[]

No. First of all, see the argument here. Then you will notice that you need a specific axiom of the base theory in order to define Rayo's number. If you consider Rayo's original description using a second order set theory, then you need to fix the precise axiom of the second order set theory so that the truth assignment for FOST formulae makes sense. If you consider a first order weak alternative by the syntax-theoretic definability, then you need to in addition fix a specific axiom of the coded set theory written in \(\textrm{FOST}\). For more detail, see the argument here.


Does Rayo's number depend on axioms?[]

Partially yes, but partially no. Replacing the base theory by a stronger theory does not change the values of Rayo's function, as it does not change the defining formula. In this sense, the value of Rayo's number is independent of the axioms of the base theory. On the other hand, we actually need to fix axioms in order to define Rayo's number, as I explained here. In this sense, the formalisability (and hence the well-definedness) of Rayo's number heavily depends on axioms. Moreover, if you are considering a first order weak alternative by syntax-theoretic definability, then replacing the coded set theory might change the value of the alternative Rayo's number, because it changes the syntax-theoretic definability. In that sense, the value of Rayo's number depends on the axioms of the coded set theory, but we do not usually consider such a first order alternative, because it is not significantly stronger than the Busy Beaver function.


Is Rayo's number well-defined?[]

Partially yes, but partially no. First of all, if you do not know the reason why we need axioms to define large numbers, see the explanation here. Then you will notice that you need a specific axiom of the base theory. Since no such axiom is clearly declared, the original definition is incomplete.

On the other hand, the declaration of an axiom is traditionally omitted if it is the usual one, i.e. the axiom of (first order) \(\textrm{ZFC}\) set theory. Therefore you might say "The base theory must be the usual one because the declaration is omitted! Then Rayo's number is well-defined!" Unfortunately, that is false. According to the creator, Rayo's number is defined in second order logic, which is unspecified and is not the usual set theory. In addition, a naive interpretation into \(\textrm{ZFC}\) set theory does not work, because the required truth predicate is not definable under \(\textrm{ZFC}\) set theory for the reason explained here. So you actually need to specify an axiom of a second order set theory, or at least a specific alternative definition in \(\textrm{ZFC}\) set theory. After that, Rayo's number becomes well-defined. For first order alternatives, see the first order weak alternative by syntax-theoretic definability.


Is BIG FOOT well-defined?[]

No. First of all, if you do not know how axioms are used in the definition of an uncomputable large number using a truth predicate, e.g. Rayo's number, see the explanation here. Similarly to Rayo's number, you need a specific axiom of the base theory and another specific axiom of the coded set theory written in \(\textrm{FOOT}\). On the other hand, its creator just wrote "We also assume that oodles satisfy all the properties which are considered 'natural' properties, e.g. extensionality or existence of full power set" in the definition of BIG FOOT. It is awfully ambiguous, because any natural choice of axioms conflicts with the definition of \(\textrm{BIG FOOT}\). For more details, see the comment here. Although I asked the creator twice on which axioms \(\textrm{BIG FOOT}\) is defined, he has never answered. Therefore the creator might have no idea about how to fix the definition. It means that \(\textrm{BIG FOOT}\) is completely ill-defined. A deeply related argument is available here.


FGH[]

Is FGH computable?[]

Partially yes, but partially no. The usual FGH is a map \(\alpha \times \mathbb{N} \to \mathbb{N}\) for a countable ordinal \(\alpha\) equipped with a system of fundamental sequences, and hence is uncomputable if \(\alpha > \omega\) by the definition. Remember that a computable map is defined as a map \(\mathbb{N}^n \to \mathbb{N}\), and hence its domain is not of the form \(\alpha \times \mathbb{N}\). If we fix a \(\beta \in \alpha\), then the corresponding segment \(\mathbb{N} \to \mathbb{N}\) of the usual FGH \(\alpha \times \mathbb{N} \to \mathbb{N}\) can be computable. However, this does not mean that you have defined a way to compute it, because the segment of the FGH corresponding to \(\beta\) is not necessarily computable even if \(\beta\) is recursive. An obviously uncomputable example is the FGH associated to the fundamental sequence \(\omega[n] = \textrm{BB}(n)\).
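
For reference, with respect to a fixed system of fundamental sequences \(\lambda \mapsto \lambda[n]\), the FGH is given by the usual recursion \begin{eqnarray*} f_0(n) = n+1, \quad f_{\beta+1}(n) = f_{\beta}^n(n), \quad f_{\lambda}(n) = f_{\lambda[n]}(n) \ (\lambda \textrm{ a limit ordinal}), \end{eqnarray*} so the hierarchy refers to the fundamental sequences \(\lambda[n]\) at every limit stage.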

Stating "it is computable for each ordinal!" without verifying the computability is something like "My number is computable, but I do not tell you the definition!". If you want to fairly enjoy the computable googology, please specify how to compute it. Also, stating like "it is computable, because it is defined by a recursion" does not make sense, or just tells us that you do not even understand the definition of the computability. What the usual FGH is doing is not a recursive definition in arithmetic, but an inductive definition in set thoery. It does not imply the computability. You might say "Your counterexample using \(\textrm{BB}\) is cheating! Since my fundamental sequences are recursive, my functions are obviously computable!", but what does the "recursiveness" of fundamental sequences for ordinals mean? It does not make sense, because a system of fundamental sequences for ordinals below \(\alpha\) is a map \((\alpha \cap \textrm{Lim}) \times \mathbb{N} \to \mathbb{N}\), which is uncomputable if \(\alpha > \omega\) by the definition again.

On the other hand, if you use a fixed ordinal notation \(OT\) whose order type is \(\alpha\) and a recursive system of fundamental sequences with respect to a fixed enumeration of \(OT\), so that the recursiveness makes sense, then the resulting FGH \(OT \times \mathbb{N} \to \mathbb{N}\) is computable in the sense that the composite \(\mathbb{N} \times \mathbb{N} \to \mathbb{N}\) with the fixed enumeration \(\mathbb{N} \to OT\) is computable. Usually, the recursiveness of a system of fundamental sequences does not depend on the choice of the enumeration as long as we choose it in a natural way, and hence we often omit the enumeration. Of course, we are requiring recursiveness of the structure of the ordinal notation itself. For example, Kleene's \(\mathcal{O}\) is not an ordinal notation in this context. If you use such a non-recursive notation, then the resulting FGH is not necessarily computable.
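
As a concrete illustration of this paragraph, here is a minimal sketch (my own toy example; the coding and the names fgh, fund_seq, and normalize are not taken from any standard source) of a computable FGH below \(\omega^{\omega}\): an ordinal is coded by the list of its Cantor normal form coefficients, and the standard fundamental sequences for this coding are recursive.

```python
# A minimal sketch, assuming the coding described below; the names fgh, fund_seq,
# and normalize are chosen only for this illustration and are not standard.
# An ordinal below omega^omega is coded as a list [c_k, ..., c_1, c_0] standing for
# omega^k * c_k + ... + omega * c_1 + c_0.  The standard fundamental sequences for
# this coding are recursive, so the resulting FGH is computable.

def normalize(a):
    """Drop leading zero coefficients; [] codes the ordinal 0."""
    i = 0
    while i < len(a) and a[i] == 0:
        i += 1
    return a[i:]

def fund_seq(a, n):
    """The n-th term of the standard fundamental sequence of the limit ordinal coded by a."""
    a = list(a)
    i = max(j for j in range(len(a)) if a[j] != 0)  # position of the last nonzero coefficient
    a[i] -= 1       # omega^e * c  becomes  omega^e * (c - 1) + omega^(e - 1) * n
    a[i + 1] = n    # every entry after position i is 0, because a codes a limit ordinal
    return normalize(a)

def fgh(a, n):
    """f_a(n) for the ordinal coded by a, with the fundamental sequences above."""
    a = normalize(a)
    if not a:                          # f_0(n) = n + 1
        return n + 1
    if a[-1] > 0:                      # successor: f_{b+1}(n) = f_b^n(n)
        pred = a[:-1] + [a[-1] - 1]
        for _ in range(n):
            n = fgh(pred, n)
        return n
    return fgh(fund_seq(a, n), n)      # limit: f_l(n) = f_{l[n]}(n)

print(fgh([1, 0], 2))  # f_omega(2) = f_2(2) = 8
```

Even \(f_{\omega}(3)\) is already astronomically large, so only tiny inputs such as the one in the final line actually finish.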


Is FGH well-defined for any countable ordinal?[]

No. The FGH is defined for a pair of an ordinal and a system of fundamental sequences up to it, not for an ordinal alone. Since the resulting function heavily depends on the choice of a system of fundamental sequences, arguments without a specific choice of fundamental sequences are meaningless. In particular, for a given function \(f\), something like "the ordinal in FGH approximating \(f\)" or "the ordinal in FGH corresponding to \(f\)" is ill-defined. If you want to consider a correspondence to an ordinal, you need to use the order type of an ordinal notation, a map compatible with an expansion rule, and so on. See also the issue on the dependency of the growth rate of FGH on a system of fundamental sequences.


Does a greater ordinal correspond to a faster function in FGH?[]

It is not necessarily true. See this for an example of a system of fundamental sequences below \(\omega^{\omega}+1\) for which \(f_{\omega^{\omega}}\) is weaker than \(f_{\omega+1}\). See this for a proof of the desired property under a mild condition on a system of fundamental sequences.


Is a function in FGH strictly increasing?[]

It is not necessarily true. See this for the issue.


Is a function in FGH corresponding to PTO(ZFC) computable?[]

It does not make sense, because \(\textrm{PTO}(\textrm{ZFC})\) is not naturally equipped with a system of fundamental sequences. See the issue on the dependency of FGH on fundamental sequences and the issue on \(\textrm{PTO}\) in FGH. You might say "Since \(\textrm{PTO}(\textrm{ZFC})\) is recursive, the FGH should be obviously computable for any choice of fundamental sequences!", but this is a wrong deduction for two reasons.

First, the FGH corresponding to a recursive ordinal with a fixed system of fundamental sequences is not necessarily computable, as I explained here. Second, \(\textrm{PTO}\) is not necessarily a well-defined recursive ordinal as I explained here.


Doesn't the difference of fundamental sequences affect the growth rate of FGH?[]

Yes, the difference of fundamental sequences actually affects the growth rate of FGH. For example, if we define \(\omega[n] = f(n)\) for a specific function \(f \colon \mathbb{N} \to \mathbb{N}\), then the resulting FGH at \(\omega\) goes beyond \(f\). Therefore "the growth rate of \(f_{\alpha}\)" does not make sense for an infinite ordinal \(\alpha\) unless you specify a system of fundamental sequences up to \(\alpha\).
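
As a concrete illustration (standard values, recalled here only for this example): with the fundamental sequence \(\omega[n] = n\) we get \(f_{\omega}(2) = f_2(2) = 8\), whereas with \(\omega[n] = n + 2\) we get \begin{eqnarray*} f_{\omega}(2) = f_4(2) = f_3(f_3(2)) = f_3(2048) > 2^{2048}, \end{eqnarray*} so changing the fundamental sequence of \(\omega\) alone already changes \(f_{\omega}\) drastically.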


Is the least catching ordinal well-defined?[]

No. First, see the issue on the ill-definedness of FGH for ordinals without specific fundamental sequences and the issue on the ill-definedness of the growth rate of FGH for ordinals without specific fundamental sequences. Therefore you need to fix a system of fundamental sequences. Since there is no known system of fundamental sequences for all countable ordinals, the notion of Catching function is also known to be ill-defined, as Vel! and LittlePeng9 soon pointed out.

Even if we fix a system of fundamental sequences, we do not have an agreed-upon definition of the comparability applicable to the definition of the least catching ordinal. Sure, there are many candidates, but none of them fits the purpose, because nobody has succeeded in proving that the least catching ordinal is \(\psi_0(\Omega_{\omega})\) with respect to Buchholz's OCF and the canonical system of fundamental sequences. If one roughly states "\(\psi_0(\Omega_n)\) in FGH is comparable to \(\psi_0(\Omega_{n+1})\) in SGH for any \(n \in \omega\)", there is no actual proof of the comparability, and even if we verify it, the statement does not ensure the minimality of \(\psi_0(\Omega_{\omega})\).


Is every notation expressing ordinals below a given ordinal approximated to the ordinal in FGH?[]

No. First of all, given an ordinal \(\alpha\), "\(\alpha\) in FGH" is ill-defined unless we fix a system of fundamental sequences up to \(\alpha\). See more details here. Even if we fix a system \([]\) of fundamental sequences up to \(\alpha\), the statement is not necessarily true, as a notation implementing the FGH based on another system of fundamental sequences up to \(\alpha\) is not necessarily approximated to the FGH with respect to \([]\). See more details here. The problem is much more serious when we deal with a notation which does not admit a natural structure of a similar function hierarchy, e.g. notations for tree, TREE, and SCG. Moreover, even if a given notation admits a function hierarchy based on a system of fundamental sequences equivalent to \([]\), the statement is not necessarily true, because \(\alpha\) in the Hardy hierarchy can be strictly weaker than \(\alpha\) in FGH with respect to those fundamental sequences. Further, if you assume that \(\alpha\) is "catching", the statement is meaningless unless you fully formulate the notion of the catching property. See more details here. People should be very careful, because many googological statements are based on these wrong deductions.


PTO[]

Does PTO(T) in FGH make sense?[]

No. FGH is not a hierarchy of functions indexed by ordinals, but a hierarchy of functions indexed by ordinals equipped with systems of fundamental sequences. Since \(\textrm{PTO}(T)\) itself is not canonically equipped with a system of fundamental sequences, \(\textrm{PTO}(T)\) in FGH does not make sense unless you specify a system of fundamental sequences. See also the issue on the dependency of the growth rate of FGH on a system of fundamental sequences and the issue on the computability of \(\textrm{PTO}(\textrm{ZFC})\) in FGH.


Is PTO(T) defined as the supremum of ordinals whose well-foundedness are provable in T?[]

No. Since ordinals in the meta theory do not give terms in the coded theory \(T\), the ordinal in your question should mean an ordinal in \(T\), which should be a set theory in this case. On the other hand, the sentence "every ordinal is well-founded" is provable in the coded theory, and hence the formulation is meaningless. Also, there is no method to take, in the meta theory, the supremum of a class of ordinals in \(T\), and hence the "supremum" is also meaningless.


Is PTO(T) defined as the minimum of ordinals whose well-foundedness are unprovable in T?[]

No. It is essentially the same mistake as the confusion of \(\textrm{PTO}(T)\) and the supremum of ordinals whose well-foundedness are provable in \(T\).

The reader should be careful that an article in Wikipedia explains "The existence of a recursive ordinal that the theory fails to prove is well-ordered follows from the \(\Sigma_1^1\) bounding theorem", but this statement is incorrect for two reasons: (1) The provability of the well-foundedness of an ordinal does not make sense, as I explained in #Is PTO(T) defined as the supremum of ordinals whose well-foundedness are provable in T?, and hence the statement should be formalised in terms of ordinal notations instead of recursive ordinals, as other sections in the article do. (2) Even if we formalise the statement by using ordinal notations, the statement is false unless we add an appropriate condition on the theory, as we explain with a counterexample in the second paragraph of #Is PTO(T) a recursive ordinal?.


Is PTO(T) the supremum of ordinals for which the totality of FGH is provable in T?[]

No. By Kreisel's theorem[4] Proposition 2.24, \(\textrm{PA} + \textrm{Th}(\Sigma^1_1)\) proves the totality of \(f_{\varepsilon_0+1}(n)\) with respect to the system of fundamental sequences for Cantor normal forms, while \(\textrm{PTO}(\textrm{PA} + \textrm{Th}(\Sigma^1_1))\) coincides with \(\textrm{PTO}(\textrm{PA}) = \varepsilon_0\).


Is PTO(T) defined as the supremum of order types of recursive well-orderings which is a provably well-founded provably total ordering in T?[]

Partially yes, but partially no. There are several distinct notions of the well-foundedness of a binary relation:

  1. The non-existence of a primitive recursive infinite descending sequence.
  2. The non-existence of an infinite descending sequence.
  3. The axiom schema of transfinite induction on first order sentences.
  4. The axiom of transfinite induction on a fixed free set variable \(X\). (This makes sense only when \(T\) is a sufficiently strong arithmetic whose language is the language of \(\textrm{Z}_2\).)
  5. The existence of a minimal element in any non-empty subset. (This makes sense only when \(T\) is a sufficiently strong set theory.)

If you are referring to the fourth or the fifth formulation, then the answer is yes. Otherwise, the answer is no. For more details, see the article on proof theoretic ordinal in Japanese Googology Wiki.
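
For concreteness, the transfinite induction in the fourth formulation can be written as the following single formula with the free set variable \(X\) (a standard rendering, added here only as an illustration): \begin{eqnarray*} \forall x \big( \forall y (y \prec x \rightarrow y \in X) \rightarrow x \in X \big) \rightarrow \forall x (x \in X), \end{eqnarray*} where \(\prec\) denotes the recursive well-ordering in question.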


Is PTO(T) well-defined for any theory T?[]

No. First, please read the definition of \(\textrm{PTO}\) above. Then you will soon understand that \(\textrm{PTO}(T)\) is defined only for subsystems of \(\textrm{Z}_2\) and sufficiently strong set theories. Then how about \(\textrm{PA}\)? When we deal with an arithmetic which is not a subsystem of \(\textrm{Z}_2\), e.g. an arithmetic whose language is not the language of \(\textrm{Z}_2\), we artificially extend it to a subsystem of \(\textrm{Z}_2\) by rearranging its axiom schemata using the fixed free set variable \(X\).

In particular, a sentence like "Denote by \(\alpha[n]\) the supremum of recursive ordinals given as the \(\textrm{PTO}\) of a formal theory whose language consists of \(n\) or less symbols and whose set of axioms consists of \(n\) or less formulae of length \(\leq n\)" is ill-defined.


Is PTO(T) a recursive ordinal?[]

Partially yes, but partially no. Although \(\textrm{PTO}\) is a recursive ordinal for many famous theories, it can be \(\omega_1^{\textrm{CK}}\), which is not recursive. We do not even know whether \(\textrm{PTO}(\textrm{ZFC})\) is a recursive ordinal or not. If one rearranges the definition of \(\textrm{PTO}\) so that it should be automatically recursive, then \(\textrm{PTO}\) will be just ill-defined for a theory \(T\) such that \(\textrm{PTO}(T)\) with respect to the original definition is \(\omega_1^{\textrm{CK}}\).

A simple example of an arithmetic whose \(\textrm{PTO}\) is non-recursive is \(\textrm{PA}\) augmented by the disprovable sentence \(0 = 1\). If you prefer a consistent example, take \(\textrm{PA}\) augmented by the negation of \(\textrm{Con}(\textrm{PA})\) (as long as \(\textrm{PA}\) is consistent in the meta theory). In general, there is a criterion for the recursiveness of \(\textrm{PTO}\) using the notion of \(\Pi^1_1\)-soundness[4] Theorem 2.21.

The reader should be careful that an article in Wikipedia conflicts with the explanation in the last paragraph. See the second paragraph of #Is PTO(T) defined as the minimum of ordinals whose well-foundedness are unprovable in T? for details.


Is the PTO of a stronger theory larger?[]

No, it is not necessarily true. For example, \(\textrm{PA}+\textrm{Con}(\textrm{PA})\) is strictly stronger (with respect to both proof-strength and consistency-strength) than \(\textrm{PA}\) (as long as \(\textrm{PA}\) is consistent in the meta theory) by Goedel's incompleteness theorem, but its \(\textrm{PTO}\) coincides with \(\textrm{PTO}(\textrm{PA})\)[4] Remark 2.25.


Cardinal[]

Is a cardinal an ordinal?[]

Yes by the definition. A beginner should be very careful that people sometimes state something wrong like "a cardinal is not an ordinal". It just happens when they talk about a cardinal without checking the definition. Since set theory is difficult, googologists tend to skip the precise definitions of basic notions, and unintentionally believe their intuition rather than the precise definitions.

Ordinals and cardinals are very difficult notions, while googologists tend to regard them as simple ones. They are the actual mathematical formulations of finite and transfinite numbers, and infinities are far beyond our intuition. Therefore, when we talk about infinite ordinals, we should rely on the precise definitions and proofs. Otherwise, we will fall into common failures such as using undefined operations on ordinals, e.g. unspecified transfinite hyperoperators, or confounding different functions, e.g. mixing two distinct OCFs, mixing two distinct transfinite extensions of hyperoperators, and mixing ordinal arithmetic and cardinal arithmetic.
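
For example, the two kinds of arithmetic already differ at the first infinite level (standard facts, recalled here only as an illustration): \begin{eqnarray*} 1 + \omega = \omega < \omega + 1 < \omega \cdot 2 < \omega^2 & & (\textrm{ordinal arithmetic}), \\ \aleph_0 + 1 = \aleph_0 + \aleph_0 = \aleph_0 \cdot \aleph_0 = \aleph_0 < 2^{\aleph_0} & & (\textrm{cardinal arithmetic}), \end{eqnarray*} so silently substituting one kind of arithmetic for the other changes the values.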


Is a cardinal just a formal string?[]

No. A beginner should be very careful that people sometimes state something wrong like "a cardinal is just a symbol, which can be used to describe collapsing". It just happens when they confound an ordinal with an ordinal notation encoded into formal strings. See also the issue on the uncomputability of an OCF and the issue on confounding of an ordinal notation and an OCF. As I clarified above, every cardinal is an ordinal. Unfortunately, googologists who misunderstood a cardinal as a formal string irrelevant to an ordinal tend to insist that cardinals and OCFs are quite simple, and to spread wrong information to other beginners in order to share the fantasy.

One typical mistake associated to this misconception is defining an ordinal function \(\psi(\alpha)\) as \(\omega^{\alpha}\) but stating \(\psi(\Omega) = \psi(\cdot \psi(0) \cdot) = \varepsilon_0\). If you intend that \(\psi(\alpha)\) is defined as \(\omega^{\alpha}\) for any ordinal \(\alpha\), then \(\psi(\Omega) = \omega^{\Omega} = \Omega \neq \varepsilon_0\) by definition (indeed, \(\omega^{\Omega} = \sup_{\alpha < \Omega} \omega^{\alpha} = \Omega\), because \(\omega^{\alpha}\) is countable for every \(\alpha < \Omega\)). Please remember that every cardinal is an ordinal, instead of a formal symbol. If you intend that \(\psi(\alpha)\) is defined as \(\omega^{\alpha}\) for any \(\alpha\) in an unspecified set of ordinals to which \(\Omega\) does not belong, then \(\psi(\Omega)\) is just ill-defined, because you have not given any characterisation of the value. In order to define a function, you need to define both the domain and the correspondence for all inputs in the domain, and hence giving the correspondence for restricted inputs does not define a function. For more details on how to define a function, see the "General Setting" section in the guideline to create a recursive notation. See also the issue that \(\Omega\) does not mean "nesting", User_blog:P進大好きbot/List of Common Failures in Googology#Non-Unique Expression, and User_blog:P進大好きbot/List of Common Failures in Googology#Intuitive Use of Expressions.


Does Ω in an OCF mean "nesting"?[]

No. First of all, see the issue on the misconception that \(\Omega\) is a formal string. Then you will understand that \(\Omega\) is an ordinal, and the value of an OCF \(\psi\) for an input including \(\Omega\) in a fixed expression should be characterised by the definition of \(\psi\). It sometimes admits a fundamental sequence given by replacing an occurrence of \(\Omega\) in the expression by a nesting. However, it sometimes does not admit such a fundamental sequence.

For example, \(\psi_0(\varphi_{\Omega}(1))\) in Buchholz's function is expanded as \(\psi_0(\psi_1(\psi_1(\cdots \psi_1(0) \cdots)))\) instead of the "natural" nesting \(\psi_0(\varphi_{\psi_0(\varphi_{\cdot_{\cdot_{\cdot_{\psi_0(\varphi_{\psi_0(0)}(1))}\cdot}\cdot}\cdot}(1))}(1))\), and \(\psi_{\Omega}(\varphi_{\Omega}(1))\) in Rathjen's psi function is expanded as \(\psi_{\Omega}(\varphi_{\psi_{\Omega}(\varphi_{\cdot_{\cdot_{\cdot_{\psi_{\Omega}(\varphi_{\psi_{\Omega}(0)}(\Omega+1))}\cdot}\cdot}\cdot}(\Omega+1))}(\Omega+1))\) instead of the "natural" nesting \(\psi_{\Omega}(\varphi_{\psi_{\Omega}(\varphi_{\cdot_{\cdot_{\cdot_{\psi_{\Omega}(\varphi_{\psi_{\Omega}(0)}(1))}\cdot}\cdot}\cdot}(1))}(1))\).

Beginners tend to make this mistake, because the notions of a cardinal and an OCF are far beyond their comprehension. In that case, they ignore the precise definitions, and try to search for web pages which explain those notions in a "simpler" way. However, there is so much wrong information written by other beginners, and hence they catch wrong but "simpler" descriptions. Then they get happy, and think that they completely understand the notions. "Wow, how simple they actually are! This web page is great, while other mathematical articles explaining the annoying unreadable definitions are awfully useless." No, no. Unfortunately, they do not go back to the precise definition, and do not doubt the wrong description. After that, they start to "teach" other beginners such a wrong description, and spread it by writing it in a new web page.


Can we avoid using large cardinals by regarding I, M, K, and so on as formal constant term symbols?[]

Partially yes, but partially no. It might be possible for you to "describe" the rules of your OCF without using large cardinals. However, if you want to prove its well-definedness, then you often need to assume the existence of large cardinals as an additional axiom. In order to determine whether you actually need large cardinals or not, you have to complete a proof of the well-definedness. Also see the argument on their recursive analogues here.

I note that there is a common mistake in the analysis of large functions. The original OCFs using such cardinals (\(I\), \(M\), \(K\)) were defined by Rathjen. Since the original definitions are quite complicated, googologists here just imitated them without any reasonable comparison. Therefore, even though they state "Our OCFs are also of weakly Mahlo/compact level!", it does not mean that their OCFs are as strong as Rathjen's. It just means that they look similar to Rathjen's OCFs, because they use similar symbols and similar patterns of expansions. Further, such OCFs are not necessarily well-defined. For example, UNOCF is a famous example which looks like an OCF with symbols like \(I\), \(M\), and \(K\), but has not been completely defined yet. I guess that any OCFs here which look very simple but contain similar symbols are much weaker than Rathjen's OCFs or do not yield simple ordinal notation systems. Without associated ordinal notation systems, OCFs are by definition useless in computable googology, although many googologists create such OCFs anyway. For example, they are useless in analysis for the reason explained here.


Does a recursive analogue of a large cardinal play a similar role to the original one?[]

Partially yes, but partially no. You need to be very careful of the use of a recursive analogue of a large cardinal. See the guideline of the use of large cardinals in the definition of an ordinal notation here.


Is a value of a computable function independent of large cardinal axioms?[]

Usually yes. If its computation process can be presented by explicit syntax-theoretic operators sending a defining formula of a meta natural number to a defining formula of a meta natural number, the output of the computable function with standard input is completely independent of large cardinal axioms, as long as they are consistent. It is simply because of the provability of the behaviour of each computation step. In particular, the same holds for the termination starting from each meta natural number.

On the other hand, it does not imply that a proof of the termination under large cardinal axioms can be reduced to a proof without them. The provability of "the termination of the computation starting from \(n\)" for each meta natural number \(n\) does not imply the provability of "the termination of the computation starting from \(n\) for any \(n \in \mathbb{N}\)". If the latter is unprovable, then it is not a logical consequence of the axioms, i.e. by the completeness theorem there is a model of \(\textrm{ZFC}\) in which the termination actually fails.

In this case, the growth rate of the computable function is not well-defined, because it is given by the partial order defined by eventual domination. You can just compare each value. Of course, you can replace the definition of the growth rate so that it always makes sense whenever you hope so, but then you should explicitly declare the alternative definition. A candidate for an alternative definition of the eventual domination \(f < g\) is the meta theoretic statement that there is a defining formula of a meta natural number \(N\) such that for any defining formula of a meta natural number \(n\) with \(N < n\), the inequality \(f(n) < g(n)\) is provable under \(\textrm{ZFC}\).

If you do not appropriately replace the definition of the growth rate, your analysis is incorrect, because you are using an ill-defined notion or implicit axioms stronger than \(\textrm{ZFC}\). Everyone might agree that any analysis heavily based on the strongest axiom \(0 = 1\) is incorrect.

Maybe it is better to give an example of a computable function whose computation process cannot be presented by explicit syntax-theoretic operators sending a defining formula of a meta natural number to a defining formula of a meta natural number. Remember that every constant function is computable. Then you can understand that the function \begin{eqnarray*} \textrm{CH}(n) := \left\{ \begin{array}{ll} 0 & (\textrm{if } \forall m \in \mathbb{N}, \aleph_m \neq 2^{\aleph_0}) \\ m & (\textrm{if } \exists ! m \in \mathbb{N}, \aleph_m = 2^{\aleph_0}) \end{array}\right. \end{eqnarray*} is actually a computable function, whose value is not described by a defining formula of a meta natural number. If you say "But nobody can compute it!", then you need to look up the precise definitions of the notions of a Turing machine and a computable function. "Computability" in the sense of your natural language has nothing to do with the computability of a function.
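
To make the point about constant functions concrete, here is a minimal sketch in the spirit of the above (my own illustration; the name constant_program is not standard): for each fixed \(m\), the constant function \(n \mapsto m\) is computed by a trivial program, and \(\textrm{CH}\) is one of these constant functions, so it is computable even though \(\textrm{ZFC}\) does not decide which of the trivial programs computes it.

```python
# A minimal sketch, not a formal argument: for every FIXED m, the constant
# function n -> m is computed by a trivial program (here, a Python closure).
# The name constant_program is chosen only for this illustration.

def constant_program(m):
    """Return a program computing the constant function n -> m."""
    return lambda n: m

# CH (as defined above) coincides, in each model of ZFC, with constant_program(0)
# or with constant_program(m) for the unique m such that aleph_m = 2^aleph_0.
# Each candidate is trivially computable, so CH is computable in the precise sense,
# even though ZFC does not decide which candidate is the right one and no defining
# formula of a meta natural number describes its value.
candidates = [constant_program(m) for m in range(5)]
print([f(42) for f in candidates])  # prints [0, 1, 2, 3, 4]
```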

This is a trivial example, but it always appears when we construct a computable large number through the diagonalisation of provably terminating computable functions or provably well-founded recursive ordinals through formal languages of set theory in order to approach the PTO of \(\textrm{ZFC}\). Please do not say that it is easy to avoid such trivial examples in the diagonalisation before reading my effort to define a computable large number using the PTO of \(\textrm{ZFC}\). It was not so easy for me. It might be easy for you, if you can define a really high-end computable large number using a computable large function corresponding to the PTO of \(\textrm{ZFC}\).

I guess that when googologists refer to a computable function, they implicitly consider only the case where the corresponding natural number through the universal Turing machine is also described by a model-independent defining formula of a meta natural number. Then the computation process is described in a purely syntax-theoretic way, and its output is described by a defining formula of a meta natural number whenever the input is. So it is a natural assumption.

For example, the natural number corresponding to \(\textrm{CH}(n)\) is not described by a defining formula of a meta natural number. On the other hand, the system defining the notion of transcendental integers diagonalises computable functions so that it corresponds to a natural number described by a defining formula of a meta natural number through the universal Turing machine, and hence \(\textrm{CH}(n)\) never appears in the diagonalisation as long as the input is described by a defining formula of a meta natural number.

I also explicitly use similar syntax-theoretic restriction in the construction of a computable large number using the PTO of \(\textrm{ZFC}\) in order to avoid the uncomputability derived from the undecidability of the order between ordinals defined by recursive well-orders on \(\mathbb{N}\). As a result, its computation process can be presented by explicit syntax-theoretic operators sending a defining formula of a meta natural number to a defining formula of a meta natural number.

Summary:

  1. The value of a computable function is not necessarily independent of large cardinal axioms.
  2. The value of a computable function is independent of large cardinal axioms when its computation process is described by syntax-theoretic operators sending a defining formula of a meta natural number to a defining formula of a meta natural number.
  3. The termination of a computable function in the case 2 starting from a fixed meta natural number is independent of large cardinal axioms.
  4. The provability of the termination of a computable function in the case 2 (starting from any natural number) might heavily depend on large cardinal axioms. If it is not provable without large cardinal axioms, then it is false on a model.
  5. The usual definition of the growth rate assumes the totality. Therefore analysis based on a computable function in the case 4 is incorrect unless you explicitly replace the definition of the growth rate.


Is Stage Cardinal well-defined?[]

No. It was originally introduced by Hyp cos here, but turned out not to work as desired. After that, other googologists used the terminology without a fixed definition. It is not a large cardinal, a recursive analogue of a large cardinal, or a formal string which plays an alternative role of a large cardinal.


OCF[]

Is an OCF computable?[]

No. An OCF is a map whose domain is uncountable. By definition, a computable map is a map whose domain is countable. Therefore an OCF is uncomputable in any sense.


Is an OCF an ordinal notation?[]

No. See this for the difference. It explains that many things called "ordinal notations" in this community are not actually ordinal notations, and hence are useless for defining computable large numbers.


Is my OCF cooler than existing ones?[]

Usually no. When people explain how beautiful their OCFs are, they usually have not created ordinal notations associated to them. We need ordinal notations for the purpose of analysis, and hence if we do not have them, then the OCFs are not as useful as their creators expect. They might not even understand what an ordinal notation is. They often say that the existing OCFs are meaninglessly complicated or that several components are useless, as long as they are not interested in proofs, and "simplify" them. However, most of the complexity is actually used to create ordinal notations. Therefore, if people state that their OCFs are cool because of their simplicity, then it is doubtful that they understand the importance of an ordinal notation. See also User blog:p進大好きbot/List of common misconceptions about Rathjen's psi#Is my simplification cooler than the original one? for the specific issue on simplifications of Rathjen's \(\psi\).


Is UNOCF well-defined?[]

No. Its idea was originally introduced by Username5243 here, but UNOCF itself has not been completely defined yet. There is no known explicit expansion rule applicable to all ordinals (not just specific finite examples) even below \(\textrm{TFB}\) (not large cardinal level). On the other hand, several googologists understand a rough pattern of expansions for sufficiently large ordinals, and hence it is used as a shorthand in their context for convenience.

At least, the definition is not formalised even beyond \(\psi_0(\Omega_{\omega})\) with respect to Buchholz's OCF, and hence UNOCF is useless for precise arguments on computable large functions associated to OCFs with large cardinals. For more details, see this.


What is ψ_0(ψ_I(0))?[]

The question is meaningless unless you specify the definition of the \(\psi\). When you hear that, you might say that it is Buchholz's OCF, extended Buchholz's OCF, or Rathjen's OCF. But then it conflicts with your expectation that \(\psi_I(0)\) is the least omega fixed point. The issue is well-known in this community, but a beginner tends to fall into this trap. For more details, see the main article, User blog:p進大好きbot/List of common misconceptions about Rathjen's psi#Does \(\psi_I(0)\) coincide with the least omega fixed point?, and User blog:p進大好きbot/List of common misconceptions about Rathjen's psi#Does \(\psi_{\Omega}(\psi_I(0))\) coincide with the countable limit of Extended Buchholz's function?.


Cheating[]

It is obvious![]

People tend to regard a desired statement as an obvious fact, even when they have no actual proof. We have so many examples:

  1. People frequently state that their numbers are the biggest in the world or bigger than specific numbers as if it were an obvious fact even when there is no actual proof, but those numbers usually later turn out to be ill-defined or much smaller than expected.
  2. BM1 was said to be obviously terminating because someone said so, but later turned out to have an infinite loop.
  3. BM2 was said to be obviously terminating because it looks "natural", but later turned out to have an infinite loop.
  4. BIG FOOT was said to be obviously well-defined because the article said so, but later turned out to be ill-defined.
  5. UNOCF is said to be obviously stronger than any other OCFs because it uses a powerful cardinal \(T\), but is known to be ill-defined.
  6. BEAF is said to be obviously stronger than many other computable notations because someone says so, but is known to be ill-defined.
  7. R-function is said to be obviously well-defined because its definition is written in cool formal-looking sentences, but is known to be ill-defined.

Also, there is much wrong information about OCFs regarded as obvious facts, e.g. "\(\psi_0(\psi_1(\psi_2(\psi_3(0)))) = \psi_0(\psi_3(0))\) with respect to Buchholz's function" and "\(\psi_I(0)\) is the least omega fixed point with respect to Buchholz's or Rathjen's OCF".

Why do people regard wrong information as obvious? It is simply because they state it before proving it, or do not even understand what a proof is. In their convention of a proof with intuitive arguments, they can prove a statement together with its negation. For example, they state a fact such as "the number \(X\) is bigger than another well-defined number \(Y\)", which makes sense only when \(X\) is well-defined, and they can also check that \(X\) is ill-defined after others point out problems in the definition. Namely, they are contradicting themselves.

We can say "obvious" only when we have an actual proof instead of an expectation. Otherwise, it is kind of cheating. See also the cheating on proofs.


It is easy to show the statement![]

Please write down the proof. Then you will notice that you are not able to show what you think is trivial. One typical example is the termination of pair sequence system. People often say that it is easy to prove without writing down the proof, but once they are required to write down a proof, they show something poor like "Since \(\pi\) looks similar to \(3.14\), \(\pi\) is a rational number."

We can say "easy to show" only when we have actually written down a precise proof. Otherwise, it is kind of cheating. The attitude is worse than doing nothing, because it reduces others' motivation to actually show a given statement, or even insults the effort to show it.


It is easy to outgrow the function![]

Please write down the precise definition. Then you will notice that you are not even able to write down the definition of what you think is easily defined.

We can say "easy to define" or "easy to outgrow" only when we have actually defined the stuff. Otherwise, it is kind of cheating. The attitude is worse than doing nothing, because it reduces others' motivation to actually create a new function, or even insults the effort to define it.


It works in my mind![]

Please notice that if you understand that you cannot write down the precise definition, then it is just ill-defined. There are so many googologists who insist that they have cool ideas of new functions which obviously outgrow any other well-known functions, but it does not mean that their thoughts are correct. Everyone, including a beginner, is able to just say "My function in my mind is the fastest in the world" without showing any effort. Therefore what really matters is what is actually written, instead of what googologists have in their minds.

There are typical excuses for this claim:

  1. "It actually works because we can understand how it is intended to work!"
  2. "It actually works because we can implement it someday by a programming language!"
  3. "It actually works because we can formalise it in the future!"

Unfortunately, as long as you have never fixed the precise definition, whether a given implementation or formalisation is "compatible" with your intention or not does not make sense. Say, you can insist that a given one is correct even when it is incompatible with your hidden intention.

In addition, even if you have written down a precise definition, whether it actually works in a way compatible with your explanation or not is still open unless it is proved or disproved. There are also typical excuses for this claim:

  1. "It actually works because we have never found errors!"
  2. "It actually works because we can easily correct all errors which you have pointed out!"
  3. "It actually works because all of my explanations whose incorrectness you have pointed out are just intended to be rough descriptions!"

The non-existence of a "known" counterexample does not ensure the non-existence of a counterexample. The possibility that you can correct errors in the future does not mean that there are currently no errors. If you insist that your explanation is just a rough one, then whether a given implementation or formalisation is "compatible" with your intention or not does not make sense again. Say, you can reject any of them after the incompatibility is pointed out.

We can say "it works" only when we have actually written down how the stuff precisely works. Otherwise, it is kind of cheating. The attitude is worse than doing nothing, because it reduces others' motivation to complete what they have in their mind, or even insults the effort to complete them.


I have already done it![]

Please share your great work with us, instead of just stating that you have done it. There are so many people who just state that they have done something great, e.g.

  1. extended known OCFs,
  2. created their own notation stronger than known ones, and
  3. verified the termination of a known computable function,

without showing the result. Then who knows whether they actually have done it or not?

We can say "have done" only when we have actually done something. Otherwise, it is kind of cheating. The attitude is worse than doing nothing, because it reduces others' motivation to actually complete what you state that you have done. See also User blog:p進大好きbot/List of common misconceptions about Rathjen's psi#Does my extension/simplification in my mind work?.


I fully understand it although I do not know the precise definition![]

Please keep in mind that you cannot check whether you understand a notion correctly or not, if you do not know the precise definition. Indeed, many members in this community stated that they fully understood OCFs even though many of them did not even know what a cardinal means. What happened then? They spread wrong information, and deceived many beginners. When I explained how they were wrong by citing precise theorem numbers in a mathematical paper, they just denied my explanations by believing their intuitions. Without the knowledge of the precise definition, it might be impossible even to judge whether explanations based on the precise definition are correct or not.

We can say "understand" only when we know the definition. Otherwise, it is kind of cheating. When you are talking about notions whose definitions you do not know, then you might not be unable even to understand errors pointed out by others. The attitude is worse doing nothing, because it causes pollution by spreading wrong information.


I conjecture this![]

Please keep in mind that a conjecture is not an arbitrary statement which you do not know to be true or false. Googologists express a statement as a conjecture even when they do not know what the statement precisely means, for example, when the statement refers to very difficult notions, e.g. large cardinals, provability, well-foundedness, PTO, actual OCFs, \(\models\), and so on, whose precise definitions they do not know. Why? They might believe that expressing a conjecture including "professional" terminology makes them look cool.

Unfortunately, if they use notions whose precise definitions they do not know, then others easily notice this cheating. Please imagine that a beginner says "An OCF is a simple notation. Since \(\Omega\) is a formal symbol expressing nesting, we can just expand ordinals into nested ordinals." You will soon understand that the beginner is talking about what he or she does not know. Likewise, other people immediately notice when you are talking about notions whose precise definitions you do not know.

We can say "conjecture" only when we know what we are talking about and we have some theoretic evidence to justify the expectation. Otherwise, it is kind of cheating. The attitude is worse doing nothing, because it makes other beginners to believe such reasonless statements to some extent. For example, people who did not know the definitions of Rathjen's OCFs had spread conjectures on the comparisons which indecate how they were supposed to be weak. After all, many beginners believed that actual OCFs were really weak, and what those people admire were great, even though they were ill-defined. This crisis perhaps delayed the study of OCF-related googology for two or three years.

The same holds for BMS. Many people have conjectured that their notations are stronger than BM2.3, but none of them are currently known to be actually stronger than BM2.3, or even than its subsystem called TSS. Nevertheless, the existence of such conjectures makes other members believe that their own notations are perhaps stronger than BMS, and they start to conjecture comparisons.


I have never learned well-defined OCFs, because they are too weak![]

Please keep in mind that it is quite unreasonable to estimate what you know nothing about. Why could you conclude that well-defined OCFs, which you have never learned, are too weak? Perhaps you have innocently believed analyses by users of ill-defined OCFs, but those analyses are also known to be based on massive common mistakes. So you might be a victim of their harmful fake. However, if you also try to spread the fake even though you have no evidence for it, then there will be more victims due to you. The problem is essentially the same as the other issue #I use ill-defined functions, because well-defined function are much weaker!.

We can say "too weak" only when we have actually understood the stuff. Otherwise, it is kind of cheating. The attitude is worse than doing nothing, because it reduces others' motivation to learn actual OCFs, or even insults the effort to learn them. Indeed, this fake seriously delayed the study of OCFs in googology, and hence is awfully harmful.


I use ill-defined functions, because well-defined function are much weaker![]

Please keep in mind that analysis using ill-defined functions is meaningless. Indeed, there is a very typical fake "well-defined OCFs are much weaker than ill-defined OCFs", but it was spread by people who did not even know the definitions of actual OCFs, in order to justify their use of ill-defined OCFs, or by beginners who just believed other beginners. The logic is based on common misconceptions in this list, such as "\(M\) in Rathjen's \(\psi\) works in the same way as \(M\) in this notation because it looks like the same symbol!", and hence their underestimations of actual OCFs have been corrected so many times. This shows how unreliable their intuitive analyses, which were not based on the actual definitions, were. At any rate, there is no reason why we should believe statements on OCFs given by people who did not even know their definitions.

Ill-defined functions might be very attractive for you, because the lack of a precise definition prevents others from denying your desired analyses using them, and beginners might believe that your notations are much stronger than actual OCFs. However, such analyses just make your notations look weak to experts, because the use of ill-defined OCFs implies that your notations are created by a googologist who does not know the definition of an OCF and does not even understand how meaningless the comparison to ill-defined OCFs is. People sometimes refer to famous unformalised objects such as BEAF, UNOCF, Catching function, SAN, and so on in order to explain that OCFs are weaker than them, but the logic is still meaningless, because the comparison between such objects without fixed definitions and actual OCFs itself does not make sense. Here are examples of such an intentional use of ill-defined functions:

  1. "I know that UNOCF is ill-defined, but I need to use it because it is the strongest OCF in the world." (cf. #Is UNOCF well-defined?)
  2. "I know that Stage cardinal is ill-defined, but I need to use it because it is stronger than anything well-defined." (cf. #Is Stage Cardinal well-defined?)
  3. "I know that the catching function is ill-defined, but I need to use it because it surpasses other googological notations." (cf. #Is the least catching ordinal well-defined?)
  4. "I know that SAN is ill-defined, but I need to use it because computable notations are much weaker than it." (cf. SAN#Common misconceptions)

Also, OCFs have the benefit that they usually admit ordinal notations, which help precise analyses, and precise comparison with them ensures the termination of the targets of analyses. Comparison by other ordinal functions without associated ordinal notations is quite unreliable because of the lack of an algorithm to check correctness, and comparison by notations whose well-foundedness is unknown does not ensure the termination of the targets of analyses.

We can say "much weaker" only when we have actually evidences such as meaningful analyses, where ill-defined functions can never appear. Otherwise, it is kind of cheating. The attitude is worse than doing nothing, because it reduces others' motivation to learn actual OCFs, or even insults the effort to learn them. Indeed, this fake wasted precious time for people to study OCFs, and hence is awfully harmful.


It is a strong notation, although somebody says that it is ill-defined![]

Please keep in mind that the strength (resp. correctness) of a given googological notation (resp. algorithm) makes sense only when it is well-defined. Therefore such an estimation should be based on the precise definition rather than an intuitive interpretation. However, many members in this community state the strength (resp. correctness) of an ill-defined googological notation (resp. algorithm) as if it were the truth. This means that their statements are not based on the precise definition.

There are two cases.

  1. People stated the strength or the correctness as a certain fact, even after other people pointed out that the work was ill-defined.
  2. People stated the strength or the correctness as a certain fact, but later other people pointed out that the work was ill-defined.

For the first mistake, see #I use ill-defined functions, because well-defined function are much weaker!. The second mistake might be due to neglect of the precise definition.

"I analysed this notation. It reaches the countable limit of Extended Buchholz's function."
"But it is ill-defined due to the fact that there is no rule applicable to \(X\)."
"Oops, but I obviously assumed that the last entry of \(X\) should be deleted before applying rules."
"Even in that case, it is ill-defined due to the fact that there are two distinct rules which are applicable to \(Y\), and the resulting value heavily depends on the choice of a rule."
"Oops, but I obviously assumed that the rule with a smaller numbering should be applied."
"Even in that case, it is ill-defined due to the fact that there are two distinct ways to apply the rule 1 to \(Z\), and the resulting value heavily depends on the choice of a way."
"Oops, but I obviously assumed that we could choose the strongest choice."
"How do you choose the strongest one for each case...? Anyway, even in that case, it is ill-defined due to the fact that the expansion of \(W\) causes an infinite loop."
"Oops, but I obviously assumed that we could freely resolve an infinite loop into a well-define value."

For example, BM1 and BM2 were regarded as strong notations, but later found to be ill-defined. See User blog:p進大好きbot/Infinite Loop in BMS.

We can state the strength (resp. correctness) of a given googological notation (resp. algorithm) as a truth only when we refer to the precise definition. If we are just expressing an uncertain expectation which is not based on the actual definition, then we should simply say so instead of stating it as if it were a certain fact. Otherwise, it is kind of cheating. The attitude is worse than doing nothing, because the statement makes others believe that the work is well-defined and hence prevents them from checking the precise definition.


References[]

  1. K. Kunen, Set Theory: An Introduction to Independence Proofs, Studies in Logic and the Foundations of Mathematics, Volume 102, North Holland, 1983.
  2. Keith J. Devlin, Constructibility, Perspectives in Mathematical Logic, Volume 6, Springer-Verlag, 1984.
  3. T. Kihara, omega-1-ck.pdf.
  4. M. Rathjen, The Realm of Ordinal Analysis, in: S. B. Cooper and J. K. Truss (eds.), Sets and Proofs, Cambridge University Press, 1999, pp. 219--279.