User:Edwin Shade 2/Forum Archives

Fandom's handling of forums
Forums are going to be deleted soon. See here and here under FOUR FEATURES BEING RETIRED/REPLACED.

True there's not that much to save here, but this is worth mentioning. If there's anything you felt was helpful on this namespace, save it now. When i think about you i touch my elf (talk) 18:15, November 1, 2019 (UTC)

Should pages be removed for numbers not included in sources, even though the notation within makes the number name valid?
I mean, when the creator of the numbers (in the source) makes a consistent naming system, and somebody makes a page for a number that is not in the source itself but is valid within the notation. uber sketch 📞  23:29, June 14, 2019 (UTC)

Error with MathJax
When I look at the blog posts, a bunch of errors are raised and the $$\LaTeX$$ doesn't render correctly. What do I do to fix it?



TheKing44 (talk) 05:33, June 11, 2019 (UTC)


 * If you are using the $$...$$ tags, please try the \( ... \) tags instead. The $$...$$ tags do not work anymore.
 * p-adic 07:11, June 11, 2019 (UTC)
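A minimal before/after illustration of the fix (assuming the wiki's MathJax setup behaves as p-adic describes):

```latex
% No longer renders on the wiki:
% $$ \LaTeX $$

% Renders correctly as inline math:
\( \LaTeX \)
```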

Help Needed In Finding A Video Joiner
This is of course unrelated to the study of large numbers, but it will help me in creating more interesting videos for my YouTube channel. I would like to know if there are any video joiners (services which will allow me to take two or more videos and join them together into one continuous video) which are compatible with Chrome, allow you to make videos of unlimited length (essentially, videos whose length is limited only by your computer), and which are free.

Seeing as both an expensive laptop and a Chromebook are Turing complete, there should be no technical restriction preventing me from finding such a service, even if it is rare. I would like to know if anyone can think of something; in the meantime I'll be looking as well. Edwin Shade (talk) 01:10, January 5, 2018 (UTC)


 * How about looking for a regular offline video editor? I'm pretty sure there are some free ones, and the thing you describe is a really basic feature to have.


 * Of course, since your laptop is Turing complete, you could also program it to do the job yourself ;-) PsiCubed2 (talk) 08:12, January 5, 2018 (UTC)


 * True, but I don't yet have the expertise required to program such a thing. Edwin Shade (talk) 19:52, January 6, 2018 (UTC)

What does it mean that "PA has strength ε₀"?
I often see it said that PA has the strength of $$\epsilon_0$$. What exactly does that mean? PA is a set of axioms about the natural numbers; it seems to have nothing to do with ordinals. How can it be $$\epsilon_0$$? --Nayuta Ito (talk) 09:25, April 8, 2018 (UTC)


 * The "strength" referred to here is the proof-theoretic ordinal. I don't have time to write out everything right now, but I recommend reading this paper by Rathjen. If you have any questions afterwards, feel free to ask. LittlePeng9 (talk) 11:32, April 8, 2018 (UTC)

Trying to understand the binary Veblen function
I understand that, say, φ(ω,0) is the nth ordinal in the set [ω, ε0, ζ0...], but I don't know how to calculate anything past that. What is φ(ω+1,0), for example? Just a DB fan trying his best (talk) 00:32, February 4, 2018 (UTC)

\(\varphi(\omega+1,0)\) is the limit of \(\varphi(\omega,0)\), \(\varphi(\omega,\varphi(\omega,0))\), \(\varphi(\omega,\varphi(\omega,\varphi(\omega,0)))\), etc. {hyp/^,cos} (talk) 01:18, February 4, 2018 (UTC)


 * Firstly, \(\phi(\omega,0)\) is not the \(n\)th ordinal in the set \(\{\omega,\varepsilon_0,\zeta_0,\ldots\}\); rather, the \(n\)th element of its fundamental sequence, denoted \(\phi(\omega,0)[n]\), is.


 * Secondly, \(\phi(\omega+1,0)\) is equal to the first fixed point of \(\xi\mapsto\phi(\omega,\xi)\), or the limit of \(\phi(\omega,\phi(\omega,\phi(\omega,\ldots\phi(\omega,0)\ldots)))\), possessing the fundamental sequence \(\phi(\omega,0),\phi(\omega,\phi(\omega,0)),\phi(\omega,\phi(\omega,\phi(\omega,0))),\ldots\).


 * Lastly, this question has been asked before, by myself, and therefore it would be helpful in your case to refer to the comments section of the blog post in which I asked the question. I had trouble understanding it at first, but now I know Veblen notation well, so I'm sure you'll catch on too.
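The replies above follow one common convention for fundamental sequences of the binary Veblen function, which can be stated in general as:

```latex
\varphi(\alpha+1,0)[0] = \varphi(\alpha,0), \qquad
\varphi(\alpha+1,0)[n+1] = \varphi\bigl(\alpha,\ \varphi(\alpha+1,0)[n]\bigr)
```

For \(\alpha = \omega\) this reproduces the sequence \(\varphi(\omega,0), \varphi(\omega,\varphi(\omega,0)), \varphi(\omega,\varphi(\omega,\varphi(\omega,0))), \ldots\) given in the replies.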

Trying to Calculate a Number Outside of my Comfortable Range
So, I decided to try figuring out the following: assuming there's a multiverse, that the Many Worlds Interpretation is true, and that the number of both parallel universes and alternate timelines doubles every Planck time, what would the minimum number of universes be?

If I'm not mistaken, this is the formula for the above: 4^8.01384475841e+60

Would anyone mind solving for it? My limit is a million digits, and this is, well... ...a lot more.

Is it even possible to solve, or is it one of THOSE numbers?

Is the formula even correct? —Preceding unsigned comment added by Zeifyl (talk • contribs) 21:13, January 11, 2018 (UTC)
 * This is 2^x if x Planck times have passed. 80.98.179.160 13:30, January 25, 2018 (UTC)
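Whether or not the formula is right, the size of the quoted value can be estimated with logarithms rather than computed outright. A minimal sketch in Python (the exponent is copied from the question as-is):

```python
import math

# The number of decimal digits of 4**x is floor(x * log10(4)) + 1,
# so we only need the logarithm, never the full number.
exponent = 8.01384475841e60
digits = exponent * math.log10(4)
print(f"4^{exponent:g} has roughly {digits:.4e} decimal digits")
```

So the result has on the order of 4.8×10^60 decimal digits, vastly past the million-digit limit mentioned in the question.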

Does this ordinal exist??
Is there an \(\alpha\) satisfying the equation \(\alpha=\varepsilon_{\alpha+1}\)? 80.98.179.160 14:49, December 26, 2017 (UTC)


 * No. Rpakr (talk) 14:53, December 26, 2017 (UTC)
 * No, because we always have \(\varepsilon_{\alpha+1}>\varepsilon_\alpha\geq\alpha\). LittlePeng9 (talk) 15:37, December 26, 2017 (UTC)
 * The ordinal is in the form \(\varepsilon_{\varepsilon_\cdots+1}\), with \(\omega\,\varepsilon\)s and \(+1\)s hidden by cdots. But thanks! 80.98.179.160 13:20, December 28, 2017 (UTC)
 * The limit of \(0,\varepsilon_1,\varepsilon_{\varepsilon_1+1},\varepsilon_{\varepsilon_{\varepsilon_1+1}+1},\dots\) is \(\zeta_0\) and it doesn't satisfy \(\alpha=\varepsilon_{\alpha+1}\). LittlePeng9 (talk) 13:54, December 28, 2017 (UTC)
"the limit of 0,ε1,ε(ε1+1),ε(ε(ε1+1)+1),... is ζ0" Shouldn't this lead to a paradox? 80.98.179.160 12:59, January 5, 2018 (UTC)
 * ...no, why should it? LittlePeng9 (talk) 13:29, January 5, 2018 (UTC)

A Problem With The Blog Posts
When I try to access the recent blog posts I am confronted with the following, which is not an error per se, but rather an absence of anything. How do I remedy the situation?



If there was a way to purge the page I would like to know. Edwin Shade (talk) 15:23, December 29, 2017 (UTC)


 * It works fine for me, so it might be some problem on your side. Did you try using a different browser or even a different device? LittlePeng9 (talk) 15:26, December 29, 2017 (UTC)


 * No, I am using Chrome, so I am unable to use different browsers, and I do not have another device I could easily turn to. Edwin Shade (talk) 15:32, December 29, 2017 (UTC)


 * How does using Chrome make you unable to use different browsers? LittlePeng9 (talk) 15:34, December 29, 2017 (UTC)


 * Because when I try to download programs that purport to offer access to different browsers I get messages like this one:




 * Works fine for me, too. Rpakr (talk) 15:28, December 29, 2017 (UTC)

You can add "?action=purge" at the end of the URL to refresh the server cache of the page. If there's already a question mark in the URL, add "&action=purge" instead. -- ☁ I want more clouds! ⛅ 17:31, December 29, 2017 (UTC)
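The rule above is easy to mechanize; a minimal sketch (the helper name is made up for illustration):

```python
def purge_url(url: str) -> str:
    """Append MediaWiki's purge action to a URL, using '&' when the
    URL already has a query string and '?' otherwise."""
    separator = "&" if "?" in url else "?"
    return url + separator + "action=purge"

print(purge_url("http://googology.wikia.com/wiki/Special:Recent_posts"))
print(purge_url("http://googology.wikia.com/index.php?title=Foo"))
```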


 * It worked, thank you ! Edwin Shade (talk) 23:23, December 30, 2017 (UTC)

I'm using chrome too. Rpakr (talk) 20:03, December 29, 2017 (UTC)


 * But that's not all. Edwin Shade is using Chrome OS (an operating system made by Google), and is thus unable to use other browsers or run Windows executable files. -- ☁ I want more clouds! ⛅ 05:59, December 30, 2017 (UTC)

I can't understand ω₁CK :(
Please give me some concrete $$\omega_1^{CK}[n]$$'s and O's. The abstract explanation did not make me understand. --Nayuta Ito (talk) 23:30, February 13, 2016 (UTC)
 * There is no such thing as \(\omega_1^{CK}[n]\), because nobody has developed such a fundamental sequence. You could come up with one, but it wouldn't be very useful on its own. A useful definition of \(f_{\omega_1^{CK}}\) would require a built-up system of fundamental sequences for all recursive ordinals, and nobody has ever done that. That's why LittlePeng9 and I strongly advise against writing "comparable to \(f_{\omega_1^{CK}}\)": it's an unsolved problem!
 * Kleene's O is an ordinal notation that maps a subset of the nonnegative integers to all recursive ordinals. For every recursive ordinal \(\alpha\), there exists a nonnegative integer \(n\) such that \(\mathcal{O}(n) = \alpha\). So if you list out \(\mathcal{O}(0),\mathcal{O}(1),\mathcal{O}(2),\ldots\), your list will contain all recursive ordinals. (Most ordinals will be represented multiple times in the list, and not all \(\mathcal{O}(n)\) are defined -- but what's important is that you have a complete list.) If you have a more specific question on the definition of Kleene's O, feel free to ask. -- vel! 01:03, February 14, 2016 (UTC)
 * So are the specific values of O like this: \(\mathcal{O}(0)=0, \mathcal{O}(1)=1, \mathcal{O}(2)=2, \mathcal{O}(4)=3, \mathcal{O}(16)=4,\cdots\)? And I don't understand the $$3*5^i$$ part and how Turing machines are related to it. --Nayuta Ito (talk) 06:02, February 14, 2016 (UTC)
 * And, can you make a list with some specific numbers? Are examples for 2^^i above correct?--Nayuta Ito (talk) 08:33, February 14, 2016 (UTC)


 * Your examples are correct. However, every example beyond that point depends strongly on the choice of the ordering of Turing machines. Let's fix some ordering for now. No matter what this ordering is, there is a number \(n\) such that the \(n\)th Turing machine in this ordering computes the function \(f(i)=2\uparrow\uparrow(i+1)\). But recall that \(\mathcal O(2\uparrow\uparrow(i+1))=i+2\), and so the limit of \(\mathcal O(f(i))\) is \(\omega\). This means precisely that \(\mathcal O(3\cdot 5^n)=\omega\). Next, \(\mathcal O(2^{3\cdot 5^n})=\omega+1,\mathcal O(2^{2^{3\cdot 5^n}})=\omega+2\), and so on (I hope this is clear). Then, there is a number \(m\) such that the \(m\)th Turing machine computes the function \(g(i)=2^{\dots^{2^{3\cdot 5^n}}}\) (a tower of \(i\) twos on top of \(3\cdot 5^n\)). Now, the limit of \(\mathcal O(g(i))\) is \(\omega+\omega\), so this tells us that \(\mathcal O(3\cdot 5^m)=\omega+\omega\).
 * Hopefully this example makes some things clearer. LittlePeng9 (talk) 08:56, February 14, 2016 (UTC)
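The finite notations discussed above (1, 2, 4, 16, 65536, ...) are just the values of 2↑↑n; a quick sketch to generate them:

```python
def tetration(base: int, height: int) -> int:
    """Compute base↑↑height, i.e. a power tower of `height` copies of base."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

# Under the convention above, O(2↑↑n) = n + 1, so these integers
# are notations for the finite ordinals 1, 2, 3, 4, 5:
print([tetration(2, n) for n in range(5)])  # [1, 2, 4, 16, 65536]
```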


 * I understand. O requires the map from a program to a number, and O itself goes faster than any computable functions. --Nayuta Ito (talk) 10:35, February 14, 2016 (UTC)
 * \(\mathcal O\) isn't a function from natural numbers to natural numbers; in fact, it isn't even defined for all natural numbers. Hence it doesn't make any sense to speak of it being faster than computable functions. LittlePeng9 (talk) 10:41, February 14, 2016 (UTC)


 * You could say that for any computable function \(g(n)\), there exists \(m\) such that \(f_{\mathcal{O}(m)}(m) > g(m)\). You can't talk about "eventually dominating", since not only is \(\mathcal O(n)\) not defined for all \(n\), it also "jumps" up and down: \(\mathcal O(2\uparrow \uparrow \uparrow \uparrow 5)\) is a natural number (it is equal to 2^^n for some n), but it's quite clear that it takes way less than 2^^^^5 states to make a Turing machine that computes 2^^i+1, so we have \(\mathcal{O}(3\cdot 5^n)\) equal to \(\omega\) for some \(n\) much smaller than 2^^^^4. Maybe called Googology Noob (talk) 17:02, February 14, 2016 (UTC)
 * Due to the lack of specification of fundamental sequences, \(f_{\mathcal{O}(m)}(m)\) is not well-defined for the great majority of \(m\). Even if you did specify them, it's far from clear why your claim should be true. LittlePeng9 (talk) 19:15, February 14, 2016 (UTC)
 * Given a fundamental sequence, I do not see why this is not true. Do you agree with me that for any computable function \(g(n)\), we have a recursive ordinal \(\alpha\) such that \(f_{\alpha}(n) >^*g(n)\) (because otherwise \(g(n)\) would dominate all computable functions and be a computable function itself)? Because Kleene's \(\mathcal O\) gives all recursive ordinals, we know that there will be an \(m\) such that \(\mathcal O (m)\) produces a recursive ordinal \(\alpha\) such that \(f_{\alpha}(n) >^* g(n)\). In fact, we have an infinite number of such \(m\): \(\mathcal O (2\uparrow\uparrow m)\) is trivially larger. Maybe called Googology Noob (talk) 13:06, February 15, 2016 (UTC)
 * No, I don't agree with your first claim, because I don't see why your parenthetical explanation is true. The scenario in which there are fundamental sequences for all recursive ordinals and yet functions \(f_\alpha\) don't exhaust all recursive growth rates is conceivable for me, and it's certainly a nontrivial thing to prove that some choice of FSs makes FGH exhaust growth rates. LittlePeng9 (talk) 15:26, February 15, 2016 (UTC)

To save time I will try to clear up as many possible misconceptions about O as possible:


 * \(\mathcal O\) is a partial, nonmonotonic function. For instance, \(\mathcal O(n)\) is undefined if n is a nonzero multiple of 7.
 * If \(\alpha\) is a recursive ordinal, there exists a nonnegative integer \(m\) such that \(\mathcal O(m) = \alpha\). For all transfinite \(\alpha\), \(m\) is not unique and there are infinitely many \(m\) that satisfy this property.
 * \(\mathcal O\) requires us to define an ordering of all partial recursive functions \(f_0, f_1, f_2, \ldots\). The exact ordering doesn't matter, but it must contain all partial recursive functions. (EDIT: LittlePeng pointed out to me that the ordering has to be recursive. That is, there has to be a Turing machine that computes \(f_m(n)\). If that seems impossible to you, look up "universal Turing machine.") You can construct such an ordering by lexicographically ordering all Turing machines. Set theorists are generally not concerned with the details of how this ordering is done.
 * \(\mathcal O(0) = 0\). For finite \(n\), \(\mathcal O(2 \uparrow\uparrow n) = n + 1\).
 * If you want to find a value of \(m\) such that \(\mathcal O(m) = \omega\), construct a Turing machine with associated partial function \(f\) such that \(f(0) = 0\) and \(f(n + 1) = 2 \uparrow\uparrow n\) for nonnegative \(n\). Then find the index \(i\) of that Turing machine in the selected ordering. Then \(\mathcal O(i) = \omega\).
 * Don't bother asking what the smallest value of \(m\) is such that \(\mathcal O(m) = \alpha\). There is no algorithm that does that for arbitrary \(\alpha\). You might be able to solve that problem for small \(\alpha\), but it will get increasingly harder. Even finding the minimal solution to \(\mathcal O(m) = \omega\) seems very difficult, and I don't see the value in solving that problem anyway.
 * In general the specific values of O(m) aren't that important! What's more important is that there IS an ordinal notation that describes all recursive ordinals.

-- vel! 19:45, February 14, 2016 (UTC)
 * I've written a blog post that describes a system analogous to Kleene's O. It might be easier to understand for people experienced with programming. -- vel! 20:42, February 14, 2016 (UTC)
 * By the way, could we not define the fundamental sequence for \(\omega_1^{CK}\) like this: \(\omega_1^{CK} [n] = \text{max} \lbrace \mathcal{O}(m) : 2 \uparrow \uparrow n \geq m \rbrace\) Maybe called Googology Noob (talk) 11:57, February 27, 2016 (UTC)


 * We could. We also could define it in 23387614307834 different ways. LittlePeng9 (talk) 13:24, February 27, 2016 (UTC)


 * What I meant was, why don't we adopt that as the fundamental sequence for \(\omega_1^{CK}\)? Maybe called Googology Noob (talk) 17:50, February 27, 2016 (UTC)


 * I have at least two reasons why I wouldn't want to adopt it as a "standard" fundamental sequence.
 * Kleene's \(\mathcal O\), and hence also your suggested definition of FS, depends on the enumeration of partial recursive functions, and there is no standard enumeration. Because of that, many different enumerations would lead to many different Kleene's \(\mathcal O\)s and fundamental sequences for \(\omega_1^\text{CK}\), without any specific indication as to which ones are "better" than the others.
 * Since the primary goal (at least for us as googologists) of specifying fundamental sequences is to use them with FGH, a major problem is that we don't have even the slightest clue how FGH behaves with this definition of FSs for \(\omega_1^\text{CK}\) (a similar definition of FSs for smaller limit ordinals could be devised, so I don't consider that problem here). Of course, one could argue that we don't know how FGH would behave for most other definitions of FSs, but my reply to this would be that I take it as an argument not to adopt any definition of the fundamental sequence of \(\omega_1^\text{CK}\).


 * One could also argue that using Kleene's \(\mathcal O\) is not a natural way of constructing fundamental sequences, but I've decided not to make it into a third reason because "naturalness" is too subjective a matter. LittlePeng9 (talk) 19:41, February 27, 2016 (UTC)


 * A single fundamental sequence for \(\omega_1^{CK}\) is not useful. You need fundamental sequences for all ordinals less than \(\omega_1^{CK}\) to use it in an ordinal hierarchy. Not only that, but the FS system must not be degenerate. -- vel! 07:16, February 29, 2016 (UTC)


 * We can define fundamental sequences for all limit ordinals less than \(\omega_1^{CK}\) by \(\alpha [n] = \text{max} \lbrace \mathcal{O}(m) < \alpha : 2 \uparrow \uparrow n \geq m \rbrace\). We can ensure that this gives an increasing FGH hierarchy by modifying the limit rule of FGH to \(F_\alpha (n) = \sup_{m \le n} \lbrace F_{\alpha[m]}(n) \rbrace\). So we can define an explicit increasing hierarchy of FGH up to \(\omega_1^{CK}\). Deedlit11 (talk) 20:46, February 29, 2016 (UTC)
 * Nice, that seems to work. But now we encounter a different problem -- what do we do with this? I don't see how we can prove anything about the growth rate of \(f_\alpha\) for most \(\alpha\). -- vel! 06:14, March 1, 2016 (UTC)

Does this ordinal exist ?
Today I learned more about Madore's Psi function and wondered if there exists an ordinal $$\alpha$$ satisfying $$\alpha=\psi_{\alpha}(0)$$, that is, the ordinal $$\psi_{\psi_{\psi_{\psi_{._{._{._{\psi_0(0)}.}.}.}(0)}(0)}(0)}(0)$$. Is it even valid to speak of such an ordinal? If so, what comes after this in ordinal notations?

Help would be appreciated in answering this question. Thank you.Edwin Shade (talk) 04:25, November 20, 2017 (UTC)


 * I suppose that "Madore's Psi function" generally refers to the OCF/OCFs found in the "Ordinal Collapsing Function" page on Wikipedia, which Madore has edited. The main one found there uses just one \(\psi\), and goes up to the Bachmann-Howard ordinal. There is a stronger variant described in a later section, that goes up to \(\Omega_\omega\). So neither of these go as far as you talk about. However, we can indeed define an ordinal collapsing function that can nest $$\alpha\mapsto\psi_{\alpha}(0)$$; the trick is to add \(\alpha \mapsto \Omega_\alpha\) to the closure functions in the Skolem hull \(C(\alpha,\beta)\). With this more powerful notation we can indeed define the ordinal as you do; this ordinal is commonly called the "Omega Fixed Point", as it is also the smallest ordinal \(\Lambda\) such that \(\Lambda = \Omega_\Lambda\). The largest countable ordinal described by this notation will be \(\psi_0 (\Lambda)\).


 * The next step after this is to add weakly inaccessible cardinals. We denote the smallest weakly inaccessible cardinal by I. You can see the ordinal collapsing function with I in a multitude of places here, such as at http://googology.wikia.com/wiki/User_blog:Deedlit11/Ordinal_Notations_IV:_Up_to_a_weakly_inaccessible_cardinal . Anyway, \(\psi_I(0)\) is the Omega Fixed Point described above, and in general, \(\psi_I(\alpha)\) will be the \(1 + \alpha\)th fixed point of the function \(\beta \mapsto \Omega_\beta\). Then, I works as a diagonalizer for the \(\psi_I(\alpha)\) function, so that \(\psi_I(I+\alpha)\) is the \(1 + \alpha\)th fixed point of the function \(\beta \mapsto \psi_I(\beta)\), \(\psi_I(I2+\alpha)\) is the \(1 + \alpha\)th fixed point of the function \(\beta \mapsto \psi_I(I+\beta)\), \(\psi_I(I^2+\alpha)\) is the \(1 + \alpha\)th fixed point of the function \(\beta \mapsto \psi_I(I\beta)\), and so on.


 * Thus, we can make very large countable ordinals by making very large multiples of I, and we can make larger and larger such multiples by using collapsing functions such as \(\psi_{\Omega_{I+1}}\), \(\psi_{\Omega_{I+2}}\), and so on. Then, we can add larger weakly inaccessible cardinals to collapse down to cardinals larger than I, and the process continues. Deedlit11 (talk) 07:07, November 20, 2017 (UTC)
 * Is \(\psi(\psi_{\Omega_{I+1}}(0))\) equal to \(\psi(\psi_I(\varepsilon_{I+1}))\)? Rpakr (talk) 08:58, November 20, 2017 (UTC)


 * Why are cardinals being used instead of ordinals ? Edwin Shade (talk) 00:07, November 23, 2017 (UTC)


 * Cardinals are just a specific kind of ordinal. (More precisely, they are the smallest ordinals of a particular cardinality.) The reason for using them is to make certain facts easier to prove; for instance, using \(\Omega = \aleph_1\) as our initial diagonalizer is convenient, since we know \(\aleph_1\) will be closed under our closure operations, and that the union of the countable parts of \(C_n\) will still be countable, so they will stay below \(\aleph_1\). Deedlit11 (talk) 06:11, November 29, 2017 (UTC)

No, \(\psi(\psi_{\Omega_{I+1}}(0))\ge\psi(\Omega_{\psi_{I+1}(\Omega_{I+1})})=\psi(\Omega_{\zeta_{I+1}})\gg\psi(\psi_I(\varepsilon_{I+1}))\), to my knowledge. User:Simply Beautiful Art (talk) 11:55, November 20, 2017 (UTC)

How do you represent the LVO? / Is There a Non-OCF Notation Surpassing the LVO?
At 8:50 in the below video, the person explaining $$\psi(\Omega^{\Omega^{\Omega}})$$ stated that it is the supremum of Veblen notation: not necessarily $$\varphi(1,0,0,0,\ldots)$$, but rather Veblen functions nested within each other infinitely.



Unfortunately he never explained how this could be done, so I took an educated guess. First, take $$\varphi(1,0,0,0,\ldots)$$ and represent it by a $$v_0$$, next make the rule that $$v_{n+1}=\varphi(v_n,0,0,0,\ldots)$$. I feel the Large Veblen Ordinal is probably equal to the first fixed point of $$\xi\mapsto v_{\xi}$$; if I am mistaken please correct me and explain a procedure involving a nesting of $$\varphi(1,0,0,0,\ldots)$$ that can create the Large Veblen Ordinal.

In addition, is there an ordinal notation that can surpass $$\psi(\Omega^{\Omega^{\Omega}})$$ without resorting to infinite collapsing functions, and which reaches higher ordinals such as $$\psi(\Omega^{\Omega^{\Omega^{\Omega}}})$$, or $$\psi(\psi_I(0))$$ ? Transfinitary-argument Veblen functions vex me, and so I want to find a simpler alternative.

Comments are appreciated.Edwin Shade (talk) 03:05, November 27, 2017 (UTC)


 * Well, there's the pair-sequence notation, which follows a pretty intuitive lexicographic ordering and gets you up to ψ(Ω_ω). It's technically a notation for large numbers rather than ordinals, but it can easily be used as a notation for ordinals by snipping the final "[n]" in the expression. In this notation, the LVO will be written as:


 * LVO = (0,0)(1,1)(2,1)(3,1)(4,1)


 * You can also use nested trees of ordinals to reach ψ(ψɪ(0)) but these are incredibly confusing so they probably defeat the entire purpose of your question.


 * As for Transfinitary-argument Veblen functions, they are actually easy to understand. Instead of writing things like φ(1,0,0,0,...) (which get confusing very quickly), it is better to write the positions of the arguments like this:


 * φ(2,0,0,7,3) = φ( 2@4, 0@3, 0@2, 7@1, 3@0) (read as "phi of 2 at 4, 0 at 3, 0 at 2, 7 at 1, 3 at 0")


 * The key here is that we're allowed to omit the zeros, and that even in the transfinitary-argument Veblen functions we can only have a finite number of nonzero arguments.


 * So we can write:


 * φ(2,0,0,7,3) = φ( 2@4, 7@1, 3@0)


 * And what you wrote as φ(1,0,0,0,...) can be written as:


 * φ(1,0,0,0,0, ...) = φ( 1 @ ω ).


 * And this, really, works in exactly the same way as ordinary Veblen functions. So just like we have (say):

The first fixed point of ξ→φ(ξ,0,0,0) is φ(1,0,0,0,0)


 * We also have:

The first fixed point of ξ→φ(ξ @ ω ) is φ(1 @ ω+1 )


 * This is the ordinal you've mentioned in your question, and it is far smaller than the LVO. Remember that you can have any ordinal as a "position", so we can have φ(1 @ ε₀ ) or φ(1 @ Γ₀ )... The limit of this system would be:


 * LVO = The first fixed point of ξ→φ( 1 @ ξ )


 * You can actually extend this further by starting a new Veblen-like hierarchy with the separator "@". You've got to be careful about how you do that, but this can get you up to the BHO at the very least. PsiCubed2 (talk) 09:08, November 27, 2017 (UTC)


 * Thank you PsiCubed2, I think I understand the LVO now. Edwin Shade (talk) 01:00, November 28, 2017 (UTC)


 * What psi said. Regarding your request for a way to represent the LVO using phi-style notation instead of an OCF, check this article at cantorsattic. Also, one last thing: I remember from a while ago that that video uses some non-standard (perhaps wrong) stuff beyond the Bachmann-Howard ordinal, so beware. (Already up way too late to look up the exact specifics.)


 * Chronolegends (talk) 09:50, November 27, 2017 (UTC)


 * I second Chrono's warning regarding those videos. If I remember correctly, some serious errors started to creep in well below the BHO level. PsiCubed2 (talk) 14:36, November 27, 2017 (UTC)

Why does Madore's Psi function get stuck at zeta 0 ?
I don't fully understand the reason why $$\psi(\alpha)=\zeta_0$$ when $$\zeta_0 \leq\alpha\leq\Omega$$, and though I asked, it still seems arbitrary to me. Consider that if $$\psi(0)=\epsilon_0$$, $$\psi(1)=\epsilon_1$$, $$\psi(2)=\epsilon_2$$, and so on, then in general, $$\psi(\alpha)=\epsilon_{\alpha}$$. So why should $$\psi(\zeta_0 +1)$$ be equal to $$\zeta_0$$ instead of $$\epsilon_{\zeta_0 +1}$$ ?

Also, if $$\psi(\zeta_0)=\zeta_0$$, then logically $$\psi(\zeta_0 +1)$$ could be seen to be equal to $${\zeta_0}^{{\zeta_0}^{{\zeta_0}^{.^{.^{.}}}}}$$, according to the definition of the psi function. Therefore, would $$\epsilon_{\zeta_0 +1}={\zeta_0}^{{\zeta_0}^{{\zeta_0}^{.^{.^{.}}}}}$$ ?

Clearly this cannot be so, but I would like to know where I went wrong, and how, in complete and understandable terms, the Psi function gets stuck at $$\zeta_0$$, and why $$\psi(\zeta_0 +1)$$ should behave any differently than $$\psi(\Omega +1)$$. Edwin Shade (talk) 01:24, November 13, 2017 (UTC)


 * The reason is actually quite simple. ψ(α) is defined as the smallest ordinal which can't be constructed by using:


 * (1) The ordinals 0, 1, ω and Ω


 * (2) Ordinal arithmetic (addition, multiplication and exponentiation)


 * (3) The function ψ(β), which can only be used for previously constructed ordinals β<α


 * The bolded section is the important part here. We are not allowed to use the function ψ(β) on every ordinal less than α. We are required to build the ordinal β from the above building blocks as well, in order to use it.


 * For example, when analyzing ψ(ε₀+1), we find that it is equal to:


 *           ψ(ε₀+1) = ψ(ε₀)^ψ(ε₀)^ψ(ε₀)^... = ε_{ε₀+1}


 * We are only allowed to do this because ε₀ itself can be written with the basic building blocks (0,1,ω,Ω,ψ):


 *           ψ(ε₀+1) = ψ(ψ(0))^ψ(ψ(0))^ψ(ψ(0))^... = ε_{ε₀+1}


 * It is quite easy to show that this can be done with any ordinal below ζ₀, which is why ψ(a) = ε_a for such ordinals.


 * But at ζ₀ itself things get interesting: Since ζ₀ is a fixed point of the ε numbers, the smallest ordinal for which ψ(x)=ζ₀ is ζ₀ itself. So we have a catch 22: We can't build ζ₀ with the ψ function unless we already have ζ₀ itself at our disposal. So if you try to write:


 * ψ(ζ₀+1) = ψ(ζ₀)^ψ(ζ₀)^ψ(ζ₀)^... (this is wrong )


 * You can't justify this equation, because you aren't allowed to use "ζ₀" in your construction.
 * Indeed, since ζ₀ cannot be constructed from 0's and 1's and ω's and ψ's, it will be forever unreachable. This is why we have:


 *           ψ(x)=ζ₀ for ζ₀ ≤ x ≤ Ω.


 * So, how does the function get "unstuck" at Ω?


 * Well, remember that we are allowed to use Ω itself in our constructions. Therefore, we can actually construct the ordinal ζ₀ by using Ω:
 *            ζ₀ = ψ(Ω)


 * This is a perfectly valid construction. The reason it didn't help us before is that we aren't allowed to use ψ(a) when calculating ψ(b) for b ≤ a; in particular, ψ(Ω) cannot be used when calculating ψ(b) for b ≤ Ω.


 * And so we have:


 *           ψ(Ω+1) = ψ(Ω)^ψ(Ω)^ψ(Ω)^... = ζ₀^ζ₀^ζ₀^... = ε_{ζ₀+1}


 * And from here we continue normally.


 * Note that from now on we can use the ordinal ζ₀=ψ(Ω) freely. So when we get to:
 *           ψ(Ω+ζ₀) = ε_{ζ₀×2}


 * We don't have a similar problem. Indeed, it can be shown that:
 *          ψ(Ω+a) = ε_{ζ₀+a} for any a < ζ₁


 * At ζ₁ we get stuck again (for very similar reasons):
 *          ψ(Ω+ζ₁) = ζ₁


 * Until Ω comes to the rescue once again, with the representation:
 *          ζ₁ = ψ(Ω+Ω) = ψ(Ω×2) 
 *  PsiCubed2 (talk) 03:40, November 13, 2017 (UTC)
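Compactly, and under the conventions used in this explanation, the definition being applied is:

```latex
C(\alpha) = \text{closure of } \{0,\ 1,\ \omega,\ \Omega\}
\text{ under } +,\ \times,\ \text{exponentiation, and } \beta \mapsto \psi(\beta) \text{ for } \beta < \alpha,
\qquad
\psi(\alpha) = \min\{\gamma : \gamma \notin C(\alpha)\}
```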

\(C(\zeta_0+1)\) doesn't contain \(\zeta_0\). From \(0,\ 1,\ \omega\), we can apply addition, multiplication, exponentiation and \(\psi\), then we can have \(\psi(0)=\varepsilon_0\), \(\psi(\psi(0))=\varepsilon_{\varepsilon_0}\), and so on. But we can't get \(\zeta_0\) from those things. {hyp/^,cos} (talk) 02:19, November 13, 2017 (UTC)

So, if I understand correctly, $$\Omega$$ is really a symbol telling us to find the first ordinal not accessible through the system of functions we've developed so far, rather than being equal to a real ordinal? Edwin Shade (talk) 03:36, November 20, 2017 (UTC)


 * It's both.


 * The purpose of Ω is exactly what you said.


 * The mechanism by which this works is that Ω is a very large ordinal (usually we set Ω=ω₁) that we are free to "fetch" without constructing it first.


 * And yes, it is an ordinal just like any other. That's why we can have things like Ω+1 or Ω×2 or even Ω^Ω. The fact that we can do ordinal arithmetic with Ω's is a crucial part of the system. PsiCubed2 (talk) 05:06, November 20, 2017 (UTC)



Where is Rayo in the fast growing hierarchy?
Where is Rayo's function in the fast growing hierarchy? I think it might be at Δ (as defined here ), but I'm not sure. 128.187.116.16 05:20, November 15, 2017 (UTC)

Rayo is not in the fast growing hierarchy Chronolegends (talk) 18:27, November 15, 2017 (UTC)

Although Rayo's number is almost certainly not exactly representable in the fast-growing hierarchy, there are educated approximations of it, one of which is $$f_{v(\Omega^{\omega}+\omega )+1}(3)$$, where the 'v' stands for a 'ϑ', which in turn refers to an ordinal collapsing function. (Which one I do not know, as they tend to use the same symbols.) Edwin Shade (talk) 03:11, November 17, 2017 (UTC)


 * Nah, Rayo's number is much larger than that. Any described computable function, including the fast-growing hierarchy using any described recursive ordinal notation and a computable system of fundamental sequences, can very likely be programmed into a Turing machine of fewer than a million states, so I believe that any number that has been or will be described algorithmically (including using a recursive fast-growing hierarchy) is going to be beaten by BB(1,000,000). And Rayo's function is much, much faster-growing than the Busy Beaver function.


 * It may be interesting to ask what ordinal could be associated with Rayo's function and FOST. Certainly, if FOST can define fundamental sequences up to some ordinal \(\alpha\), then it can evaluate \(f_\alpha(n)\), so Rayo(n) will beat \(f_\alpha(n)\). Is the other direction true? Meaning, if we let \(\alpha\) be the smallest ordinal that FOST cannot define a system of fundamental sequences for, and then choose some suitable system of fundamental sequences for \(\alpha\) (say, \(\beta[n]\) = the largest ordinal \(\gamma < \beta\) such that FOST can define an ordinal notation up to \(\gamma\) in at most 1000n symbols), how does Rayo(n) compare to \(f_\alpha(n)\)? Can it be significantly larger?


 * Another question is how this ordinal compares to other large countable ordinals, like in David Madore's zoo of ordinals. How would it compare to the ITTM ordinals for example? These may be very hard questions. Deedlit11 (talk) 04:08, November 17, 2017 (UTC)

Questions Regarding Cardinal Infinities
Just yesterday I got a new book to add to my collection of mathematical books. This time it was Martin Gardner's Wheels, Life, And Other Mathematical Amusements, which I've almost read the entirety of. I found chapter 4, which was entitled "Alephs And Supertasks" particularly interesting. However, there are a few questions I have.

The first question is based on the following passage from page 35 of the book:

"Is there a set in mathematics that corresponds to $$2^c$$? Of course we know it is the number of all subsets of the real numbers, but does it apply to any familiar set in mathematics? Yes, it is the set of all real functions of x, even the set of all real one-valued functions. This is the same as the number of all possible permutations of the points on a line. Geometrically it is all the curves (including discontinuous ones) that can be drawn on a plane or even a small finite portion of a plane the size, say, of a postage stamp. As for 2 to the power of $$2^c$$, no one has yet found a set, aside from the subsets of $$2^c$$, equal to it. Only aleph-null, c, and $$2^c$$ seem to have an application outside the higher reaches of set theory."

So, if I understand correctly, does this mean the cardinality of the number of possible drawings is $$2^c$$?

If so, would a drawing of a single point count as a 'discontinuous curve'? How about a smooth line and 1,000 random points? I'm wondering what counts as a 'real function of x', according to the passage above.

Lastly, what is an example of a meaningful statement you can make about infinities greater than $$2^c$$? If you are not able to correspond such an infinity with a visual set or something easily graspable, then how can you be sure the statements you're making about that infinity are meaningful? Edwin Shade (talk) 21:25, November 3, 2017 (UTC)


 * I have not encountered anyone allowing curves to be discontinuous, and because of that I am unsure of what exactly Gardner considers to be a "curve". Usually, a "curve" is defined either as a continuous function \(f:[0,1]\rightarrow\mathbb R^2\) or the image of such a function (think of it this way: the latter is just some doodle on the plane which we can draw with a pen, whereas the former also encodes how we drew the doodle, for example how fast we were moving the pen). If we throw away the continuity assumption, then under the latter definition a "curve" can be literally any nonempty subset of the plane - for example, a point is the result of keeping the pen in one spot (so it is in fact a continuous curve); I'm going to leave it to your imagination to think of how we might draw your line and points (this one will be discontinuous). Using the former definition of a curve just adds another complication to that.
 * As for "real function of x", I believe this simply means all functions from \(\mathbb R\) to itself (x denotes the variable, but it could be any other letter or symbol).
 * I am not sure what you mean here by "meaningful statements". Even if we agree that there are no sets of those higher cardinalities which are easy to visualize, such sets still exist - for example, the set of all sets of functions \(\mathbb R\to\mathbb R\). An example of a statement about such larger cardinalities is Cantor's theorem: the power set of such a set has an even larger cardinality (sure, it holds for smaller sets too, but that doesn't make it invalid for large sets). We don't really care whether the statement is "meaningful" or not, whatever you mean by that. We care whether it is true or not. LittlePeng9 (talk) 22:24, November 3, 2017 (UTC)
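One way to verify Gardner's claim that the set of all real functions has cardinality \(2^c\) is standard cardinal exponentiation, with \(c=2^{\aleph_0}\):

$$|\mathbb R^{\mathbb R}| = c^c = (2^{\aleph_0})^{2^{\aleph_0}} = 2^{\aleph_0 \cdot 2^{\aleph_0}} = 2^{2^{\aleph_0}} = 2^c$$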


 * Thank you for your answers. By "meaningful" I was wondering whether there is anything useful you can say about a large cardinal infinity other than that it's bigger than the last. Edwin Shade (talk) 00:50, November 4, 2017 (UTC)

Navigation
I noticed something wrong with the navigation. Under the tab called "All numbers", there are two Googols. Also, those 6 numbers aren't all of the numbers in googology. I noticed I couldn't change it, so I'm just bringing it to attention here.

Simon Weston 01:54, June 21, 2017 (UTC)


 * Actually, that's the correct behavior. The tab is named "All numbers" because you can click on it to access the category containing all numbers on this wiki, and seven popular pages in that category are shown under it. Also, both Goggol and Googol are currently listed there, and you thought that these two are the same when they are different (one is a number much larger than the other). -- ☁ I want more clouds! ⛅ 03:18, June 21, 2017 (UTC)


 * I see! Sorry for the misreading. I thought they both said [Googol]. That makes sense.
 * Nathan Richardson "Simon Weston"    03:23, June 21, 2017 (UTC)

A question about tree(3)
After looking over the page I still have one question: why the sudden increase from tree(2) to tree(3)?

It's for almost the same reason that

2^^^^^^^^^^^2 = 4 (corrected thx LittlePeng9)

while

3^^^^^^^^^^^2 = some crazy huge number Chronolegends (talk) 21:41, November 21, 2016 (UTC)


 * I think you mean 4. Also, I believe Schoolglue's question is exactly what this "same reason" is. LittlePeng9 (talk) 21:48, November 21, 2016 (UTC)


 * I'm not exactly certain what a good enough way to phrase it would be.
 * Let's see, how about: "Some fast-growing functions don't explode in growth below a minimal input, which depends on the way the function works. Regarding the weak tree(n), the length of the initial tree is very important; for example, if you start with a 1-vertex tree then that's the end of your sequence (because it would embed into all the subsequent ones). tree(3) gives you a maximum starting length of 3+1 instead of tree(2)'s 2+1, so the larger initial trees give you enough headroom for the sequence length to really explode." Chronolegends (talk) 22:41, November 21, 2016 (UTC)


Schoolglue (talk) 01:06, November 22, 2016 (UTC) Ahh, now I understand why it grew so big. It's like the massive jump between g1 and Graham's number, only much, much larger.

MathJax difficulties on mobile/tablet devices
I am trying to make fancy math expressions with MathJax for my blog, but the \( \text{Nevermind, thank you!} \) etc. scripts aren't producing anything. The built-in wikiamath scripts are working, but are quite low-quality. I can only edit on a tablet; I don't know if there's anything I have to type beyond the expression syntax, or download otherwise. QuasarBooster (talk) 02:19, June 15, 2015 (UTC)

Anyone else having problems accessing Recent Changes on Safari?
I've been having problems accessing Special:RecentChanges on Safari for the past few days. When I try to go to the page, either by entering the URL or following the link on the side bar, it just hangs there and never finishes loading. I don't think it's a server issue, because the Recent Changes pages for other Wikia wikis work fine, and I can still access ours using a different browser. Downloading it to my computer via Safari also works. I suspect there is some sort of script that makes the page incompatible with Safari. Any admins want to take a look? --Ixfd64 (talk) 17:21, May 25, 2014 (UTC)

http://lmgtfy.com/?q=download+google+chrome WikiRigbyDude (talk) 17:34, May 25, 2014 (UTC)

I don't get this problem on Safari. However, when I tried accessing that page on mobile, it just froze my browser, both on G Chrome and Safari. LittlePeng9 (talk) 18:10, May 25, 2014 (UTC)

Uploading PDF files
I've recently been contacted by Chris Bird, and he is looking for a place to host his papers for his BAN notation. They are written in PDF format. Is that possible to do here, or would they need to be converted to HTML?

Also, Bird has recently completed Beyond Nested Arrays V. :) Deedlit11 (talk) 21:02, April 2, 2014 (UTC)

then host them on sbiis saibian's site if he approves 65.26.80.144 21:36, April 2, 2014 (UTC)


 * Wikia supports PDF hosting by Special:Upload, if I'm not mistaken. Note that all content here is licensed under CC-BY-SA, so please ensure that he is okay with this. you're.so.pretty! 22:01, April 2, 2014 (UTC)


 * Thanks! But what is CC-BY-SA? Deedlit11 (talk) 22:16, April 2, 2014 (UTC)
 * http://creativecommons.org/licenses/by-sa/3.0/ you're.so.pretty! 06:27, April 3, 2014 (UTC)
 * According to http://en.wikipedia.org/wiki/Creative_Commons_license cc-by-sa= Creative Commons+Attribution+ShareAlike 65.26.80.144 22:28, April 2, 2014 (UTC)

Technical problem
When I try to publish my edit, often I get "Preview mode: no changes saved yet! Scroll down to continue editing". What's the problem?
 * This happens when I leave the edit window open for too long and the edit token expires. It's a Wikia bug, not something I can fix :(
 * I have briefly considered moving GWiki off Wikia. (Wikia thinks the universe can be categorized into Video Games, Entertainment, and Lifestyle. We're Lifestyle...) But I don't know a lot about how to set up a web server, and we're just too darn small for it to be worth it. FB100Z • talk • contribs 16:07, May 28, 2013 (UTC)


 * I get this too sometimes. Just pressing publish again fixes it. DrCeasium (talk) 17:40, May 28, 2013 (UTC)

Rules
Here are the rules in these here forums:


 * 1) Be nice.
 * 2) Don't swear.
 * 3) Stay on topic.
 * 4) Don't spam.
 * 5) Don't edit other people's posts.

It's that simple. Have fun! Followed by 100 zeroes (talk | contribs) 00:39, 7 January 2009 (UTC)

Welcome to the help desk
Welcome to the help desk. This is the place to ask for help with anything related to the wiki. There are more help pages in Category:Help or you can also ask questions on the talk pages of any of the site admins.

See Help:Forums for more on how forums work and how to add new forums to the index.

MathJax
Hello, what's wrong with MathJax on talk pages? It is not loading! Ikosarakt1 (talk ^ contribs) 15:01, May 4, 2019 (UTC)

Welcome to the watercooler
Welcome to the watercooler. This is a place to discuss anything about this wiki - how you use it is up to this community! You can discuss the subject of the wiki, or just the wiki itself, or even add an off-topic area. See Help:Forums for more on how forums work and how to add new forums to the index.

Some Silly Advertisements
This is off-topic, so I felt it belonged in the Watercooler section.

Around this time of the year people want savings on various items, but it quickly gets out of hand. Multiple advertisements pop up claiming deals on electronics, clothing, and more. The quickness of swindlers to cheat others, and the sheer number of false ads have prompted me to take screenshots of them every time one pops up. You'll agree these are quite ridiculous.

1.) The Chosen One



I've had this sort of ad pop up dozens of times throughout my time online. You know they're baloney when you're the 1,000,000th visitor to a site 3 times in a row! Note the '15,534 likes this', which at a quick glance you could misinterpret as meaning the message has had 15,534 likes. What it actually means is that a user named 15,534 liked it, and the existence of such a user is in severe doubt, because although the pop-up deliberately tried to look like Facebook, it wasn't.

Note the blatant lie in the pop-up tab, saying that if I don't update something bad will happen.

2.) "Your security is being compromised, click here to have your security compromised."



Of course my computer is being tracked. It is monitored by whoever my internet provider is, and I'm okay with that. The thought of an indiscriminate computer network spending hours of server time just to feed me ads that I will not click on doesn't bother me in the least, because I pay it no mind. (It seems to bother some people who feel it is an invasion of privacy, but if you feel that way, then why do you post confidential information online?) If I had clicked the 'Disable' button, I would have ironically enabled my computer to be sent a virus.

3.) The Pandora Ploy





An ad that will only go away once you purchase an advertised service that removes ads. Pandora uses this when you use their services, interrupting your music every five minutes with a saccharine voice asking you to purchase the premium service so you can stop hearing the voice! I find it so ridiculous that it's actually pretty funny, which reminds me of this comic by Gary Larson, creator of The Far Side:



Lastly, ever notice how Black Friday happens to fall right after Thanksgiving? Edwin Shade (talk) 03:02, November 28, 2017 (UTC)


 * I remember when I tried to download something and 3 new tabs opened at the same time. One said something like "You have won the competition on Seznam.cz (a Czech search engine, something like Google)", blah blah blah, whatever it said. That was the 1st tab. The 2nd and 3rd said something that I can't remember. When I looked in my history to see what those websites were, one of them (the 2nd tab) said "webcam.pl", or so - I can't remember anymore. I'm pretty lucky, because I haven't got a webcam.


 * Or do you know that Geminus Audience or whatever? It pops up for me sometimes, and when I read the information about it on some forum, people were saying that it's a virus.
 * Okay, that's all. I haven't got any screenshots. Unknown95387 (talk) 15:50, November 28, 2017 (UTC)

Proposal: Ban counting blogs
I propose to ban all blogs and forums of the following kinds:


 * Counting games.
 * Any competition blogs where creating new entries is a relatively trivial task.

They are littering the wiki with pointless and irritating edits. Those interested in continuing such competitions should move to a new wiki.

Who agrees? it's vel time 20:04, October 19, 2014 (UTC)


 * yes, i agree 100%. counting games are fucking stupid. Cookiefonster (talk) 20:06, October 19, 2014 (UTC)


 * Yes, i agree 100%. Counting games are freaking stupid. LittlePeng9 (talk) 20:09, October 19, 2014 (UTC)
 * Agreed. Wythagoras (talk) 18:49, October 22, 2014 (UTC)

(ec) I should probably explain the rationale behind this. This wiki is intended as a discussion forum and encyclopedia project about the mathematics of large numbers. "Count to infinity" games may be fun, but they don't align with our purpose. Note that this proposal does allow competitions such as Deedlit's Bakeoff appetizer, where a lot of thought and discussion has to go into each entry. it's vel time 20:09, October 19, 2014 (UTC)


 * I agree. AarexTiaokhiao 20:11, October 19, 2014 (UTC)
 * maybe. -- A Large Number Googologist -- 20:22, October 19, 2014 (UTC)


 * Can you elaborate? Where do you disagree with the proposal? it's vel time 20:23, October 19, 2014 (UTC)

Okay so, i think we're enacting this? nobody seems to be objecting it's vel time 22:32, October 21, 2014 (UTC)


 * I guess so. 3 days is enough for everybody willing to contribute their vote to do so. LittlePeng9 (talk) 11:51, October 22, 2014 (UTC)

I think we can play games like "My number is bigger!", but we should specify its "language" in order to formally prevent "too big steps". For example, the game "Who can make the best TM (returning more ones) with the restriction of 64 states" isn't that meaningless. Ikosarakt1 (talk ^ contribs) 14:07, October 22, 2014 (UTC)
 * The problem is not so much that the competitions are badly run, but that they're obnoxious and spammy and they clutter up the wiki's activity. it's vel time 14:28, October 22, 2014 (UTC)


 * Your idea is covered by what is said on the beginning - such TM competition would require a lot of thought, just like Bakeoff appetizer which Deedlit made. LittlePeng9 (talk) 15:36, October 22, 2014 (UTC)
 * "returning more ones" That means returning a larger repunit number in decimal. 80.98.179.160 13:25, December 28, 2017 (UTC)

Alright, this proposal seems to have passed. From now on, any new blogs, and comments contributing to existing blogs of this kind, will be deleted. No existing blogs or comments will be deleted for archival purposes. it's vel time 18:53, October 22, 2014 (UTC)


 * Will "my function" page be deleted, too? --Nayuta Ito (talk) 22:54, December 20, 2014 (UTC)
 * No, i've only closed comments -- Notorious V.L.E. 06:27, December 21, 2014 (UTC)

I see that he made a blog game again. \:O AarexWikia04 - 11:41, September 17, 2016 (UTC)

I think that the rule should be written somewhere easily accessible from the top page. In this wiki, Googology_Wiki:Policy is linked as "Rules", so that is where this rule should be written. Do not expect people to read all the forum posts from the past. 🐟 Fish fish fish ... 🐠 12:44, September 17, 2016 (UTC)

F1rst C0mm3nt!!1
okay so here's how i propose dealing with "first comment" comments:


 * it is okay to do this to your own blog posts
 * it is not okay to do this to other people's blog posts unsolicited
 * admins can take action accordingly but the OP (original poster) has final say as to what comments should be deleted or not

thoughts? it's vel time 08:03, October 2, 2014 (UTC)
 * I think rude or offensive comments should be deleted without asking permission. Ikosarakt1 (talk ^ contribs) 08:19, October 2, 2014 (UTC)
 * yeah, dunno about profanity though. it's vel time 13:39, October 2, 2014 (UTC)

bump. anyone disagree with this? it's vel time 07:24, October 3, 2014 (UTC)

ok, so is this a forum about first comment comments? because i do them a lot -- A Large Number Googologist -- 20:25, October 19, 2014 (UTC)

by the way vel, your link is, what the hell -- A Large Number Googologist -- 20:31, October 19, 2014 (UTC)


 * it's sweet bro and hella jeff Cookiefonster (talk) 20:35, October 19, 2014 (UTC)
 * but it's still weird -- A Large Number Googologist -- 20:41, October 19, 2014 (UTC)

Too many small numbers
Recently, an IP-addressed user (namely, 84.61.something) created many pages for "small" numbers between 100 and 1023 without special names (with the titles being the numbers themselves). Some of them have no source links.

The numbers can be found in Category:Binomial coefficients, Category:Calendar-related numbers, Category:Domino-related numbers, Category:Lottery-related numbers, Category:Numbers in engineering, Category:Numbers in group theory, Category:Numbers in metrology, Category:Numbers in politics, Category:Numbers in pop culture, Category:Numbers in religion, Category:Numbers in science, Category:Numbers in sports, Category:Roman numeral-related numbers and their subcategories (but not all numbers in those categories are that kind of numbers). I think every category above can be replaced by one article page. {hyp/^,cos} (talk) 00:14, January 3, 2018 (UTC)


 * I agree. Edwin Shade (talk) 00:38, January 3, 2018 (UTC)

And some of the articles, such as this one, are "good" enough to keep. {hyp/^,cos} (talk) 02:12, January 3, 2018 (UTC)

Too many small categories
There are too many small categories - say, ones which contain 3 or fewer entries, or even just 1 (this "3" is just a random choice of a "too small" threshold) - such as Category:Cryptography-related numbers, Category:Song-Related Numbers, Category:Chess, Category:Powers of 10 with exponent bigger than 1,000,000, Category:Powers of 23, 122, Category:Numbers with radical 33, 74, 82, 174, 238, 266, 462, 798, 1919190, 4072530, Category:10^12, Category:PI Numbers, Category:Bigger than ℵ o, etc. (not a full list). Among them, the first three still seem OK; the powers and radicals go too far for categorization; and the final two are nonsense, I think. {hyp/^,cos} (talk) 15:57, December 28, 2017 (UTC)

I think most of them should be removed. I don't think anyone wants to know about numbers that have the same radical. The last two are nonsense. I think there should be a rule saying something like: categories must have at least 5 articles. Rpakr (talk) 16:30, December 28, 2017 (UTC)

Pop the balloons ( googology challenge )
Capital letters are balloons, except for P, which is a pin, and S, which is the score; it begins at 0.

At each step, gently push the balloons to the left, so that whichever balloon is in contact with the pin "pops", or disappears. What happens next depends on the color of the balloon:

Red balloons R = Increase S by 1

Blue balloons B = Spawns S Red Balloons where they pop

Green balloons G = Spawns S Blue Balloons where they pop

Yellow balloons Y = Spawns S Green Balloons where they pop

Note: Spawning balloons will push existing balloons to the right, if there are any, to make enough room for themselves.
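The rules above can be turned into a short simulator (a Python sketch of my own; it is only feasible for tiny waves, since a single Y balloon already pushes the score beyond anything a computer can finish):

```python
def pop_balloons(wave):
    """Simulate the balloon game. `wave` is a string like "PRRRG":
    the first character is the pin P, the rest are balloons."""
    balloons = list(wave[1:])
    s = 0  # the score S begins at 0
    while balloons:
        b = balloons.pop(0)                  # the balloon touching the pin pops
        if b == 'R':
            s += 1                           # red: increase S by 1
        elif b == 'B':
            balloons = ['R'] * s + balloons  # blue: spawns S red balloons
        elif b == 'G':
            balloons = ['B'] * s + balloons  # green: spawns S blue balloons
        elif b == 'Y':
            balloons = ['G'] * s + balloons  # yellow: spawns S green balloons
    return s

print(pop_balloons("PRRRG"))  # 24, matching the example run below
```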

Example run:

S=0

PRRRG

S=1

PRRG

S=2

PRG

S=3

PG

S=3

PBBB

S=3

PRRRBB

S=4

PRRBB

S=5

PRBB

S=6

PBB

S=6

PRRRRRRB

... (6 steps later)

S=12

PB

S=12

PRRRRRRRRRRRR

... (12 steps later)

S=24

P

The challenge is this: how much score would you gain from clearing the following wave of balloons?

PRBGY

If you give up scroll down for the answer

Filler text for scrolling


It is, in the fast-growing hierarchy:

$$f_3(8)$$

Or, bounded using tetration:

$$^{10}2 < S < {}^{11}2$$


 * Cool idea. Just a minor correction: f3(8) is between 2↑↑10 and 2↑↑11:


 * 2↑↑10 ~ E4.3#7


 * f3(8) ~ E619#7


 * 2↑↑11 ~ E19728#7 PsiCubed2 (talk)
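The 619 here can be sanity-checked numerically: in the FGH, \(f_2(n)=n\cdot 2^n\), and \(f_3(8)=f_2^8(8)\). A rough sketch of my own (not a rigorous bound):

```python
import math

def f2(n):
    return n * 2 ** n   # f_2(n) = n * 2^n in the fast-growing hierarchy

a = f2(8)               # f_2(8) = 2048
# f_2^2(8) = 2048 * 2^2048 is far too big to print, but its log10 is easy:
log10_b = math.log10(a) + a * math.log10(2)
print(round(log10_b, 1))  # about 619.8: the "619" on top of E619#7
```

Each of the remaining six \(f_2\) applications adds roughly one more exponential level, giving a tower of height 7 topped by 619, i.e. E619#7.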

... corrected! Thanks, Psi. Chronolegends (talk) 16:23, February 15, 2017 (UTC)

How was it originally? Boboris02 (talk) 18:12, February 25, 2017 (UTC)

@boboris: I mistakenly bounded it at 2↑↑9 < f3(8) < 2↑↑10. Chronolegends (talk) 18:35, February 25, 2017 (UTC)


 * I will give the explanation of why that wave has the score \(f_3(8)\).
 * First, there's a pin in the first position.
 * Each step, you have to pop the balloon after the pin.
 * If there's a red balloon after the pin, it increases the score by 1. \(\alpha \rightarrow f_0(\alpha)\)
 * If there's a blue balloon after the pin, it doubles the score. \(\alpha \rightarrow f_1(\alpha)\)
 * If there's a green balloon after the pin, it iterates the doubling, as many times as the current score. \(\alpha \rightarrow f_2(\alpha)\)
 * If there's a yellow balloon after the pin, it iterates the previous operation, again as many times as the current score. \(\alpha \rightarrow f_3(\alpha)\)
 * We start at 0.
 * First we pop the red balloon, setting the score to 1.
 * Then we pop the blue balloon, setting the score to 2 (by doubling 1).
 * Then we pop the green balloon, setting the score to 8 (by doubling 2 twice).
 * Then we pop the yellow balloon, setting the score to \(f_3(8)\). Googleaarex (talk) 18:50, February 25, 2017 (UTC)

Pop the balloons: Harder Edition
This is like Pop the Balloons, but harder. We start the wave (including the characters below) with the score \(S\) = 0:
 * P: The Pin. Each step, you have to pop the balloon after the pin.
 * R (Red balloon): Increase the score by 1.
 * G (Green balloon): Copy the string to the right of the balloon (excluding any green/blue/yellow balloons), pasted \(S\) times, into \(\alpha\). Then spawn \(S\) red balloons and insert \(\alpha\) after the newly spawned balloons.
 * B (Blue balloon): Copy the string to the right of the balloon (excluding any blue/yellow balloons), pasted \(S\) times, into \(\alpha\). Then spawn \(S\) green balloons and insert \(\alpha\) after the newly spawned balloons.
 * Y (Yellow balloon): Copy the string to the right of the balloon (excluding any yellow balloons), pasted \(S\) times, into \(\alpha\). Then spawn \(S\) blue balloons and insert \(\alpha\) after the newly spawned balloons.

Can you work out the score for this wave? PRGBY Googleaarex (talk) 18:58, February 25, 2017 (UTC)


 * What do you mean by "and spawn the \(\alpha\) next"? Deedlit11 (talk) 19:02, February 25, 2017 (UTC)
 * To paste \(\alpha\) on the right of these same color of balloons. Googleaarex (talk) 19:07, February 25, 2017 (UTC)
 * The score is still f3(8). Nishada 01:16, October 9, 2017 (UTC)
 * We can solve it by copy-pasting the "increase S by 1" part to every G, B, Y.

List of Googologisms
Recently, I have seen the List of googologisms/Class 2 list getting longer and longer. So I want to ask you: are you going to add ALL THE NUMBERS whose pages exist in this wiki to the corresponding page according to their size? --Nayuta Ito (talk) 10:02, September 18, 2017 (UTC)

I did it. We can browse all numbers ordered alphabetically. Then I think we need another thing to browse all numbers ordered by size. {hyp/^,cos} (talk) 10:46, September 18, 2017 (UTC)

About the ExE number stubs
So, recently, we have been making pages for Saibian's regiments. At the same time, another user (you know who you are) has been editing all the pages to fix the category system.

I, personally, don't see the point in doing this. The pages will be gone soon anyway, so why bother correcting the categories?

So, I'd like to ask everyone for their views on this. Which option do you think we should choose for the current ExE number pages, which are effectively stubs?

What should happen to the ExE stubs?
 * Keep the stubs as redirects, and change the categories to the new system.
 * Keep the stubs as redirects, but remove all the categories.
 * Delete all the stubs and current redirects.

Username5243 (talk) 16:07, September 10, 2017 (UTC)

The logic goes in the following way: -- By "another user" {hyp/^,cos} (talk) 17:04, September 10, 2017 (UTC)
 * 1) People discussed new size classes of numbers.
 * 2) People discussed article stubs.
 * 3) One solution of 2. is replacing article stubs with regiment pages. Article stubs themselves would be "clean", but there may be different ways of "clean".
 * 4) Some people started to make regiment pages.
 * 5) One solution of 1. is moving all numbers in old classes out. This is because
 * 6) All number pages should show in new classes in future. (Not the key reason)
 * 7) All old classes should be fully cleaned (i.e. no numbers remain in the category) in future. A category doesn't show when someone types a part of its name, iff the category is empty.
 * 8) Someone started to move numbers into new classes.
 * 9) People find it annoying when many edits happen frequently.
 * 10) Someone started to redirect article stubs (with old classes) to regiment pages, and change the classification into the new one. (Here the new classification project and the regiment project combined)
 * 11) Some people think that the article stubs should be deleted in future.
 * 12) Different ways of "clean" in 3. lead to different views about Hyp cos' edits.
 * 13) In case 8., those edits on stubs are necessary to achieve 5., because redirects also remain in categories of old classes.
 * 14) In case 9., those edits on stubs are unnecessary to achieve 5., and should not be done because of 7.
 * 15) All numbers by Sbiis Saibian were moved into new classes.
 * 16) A poll started.

I propose, as a solution, deleting individual articles for googologisms and having tables in pages for classes/regiments instead. Chronolegends (talk) 06:14, September 12, 2017 (UTC)

Double Approximations
Suggestion: In pages of numbers, use double approximations (i.e. with a lower bound and an upper bound) instead of single approximations (i.e. with either a lower bound or an upper bound, but not both).

For instance, gugold is approximated as \(100\uparrow^{100}101\) in up-arrow notation, which is a lower bound. But it also has an upper bound, \(101\uparrow^{100}101\). And the double approximation of gugold is \(100\uparrow^{100}101<\text{gugold}<101\uparrow^{100}101\).

Currently we use single approximations, in which we need to choose one of the two ends (lower or upper bound). However, it's not easy to determine the closer end of the two, because they're "googological". The "middle" of 3 and 7625597484987 can be 3812798742495 (on a linear scale), 4782969 (on a logarithmic scale) or 27 (on an anti-tetrational scale). Then what's the "middle" of \(100\uparrow^{100}101\) and \(101\uparrow^{100}101\)?
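The three "middles" of 3 and 7625597484987 are easy to verify (a quick sketch; note that \(3=3\uparrow\uparrow 1\) and \(7625597484987=3^{27}=3\uparrow\uparrow 3\)):

```python
import math

a, b = 3, 7625597484987            # 3^1 and 3^27

linear_mid = (a + b) // 2          # arithmetic mean
log_mid = round(math.sqrt(a * b))  # geometric mean: 3^((1+27)/2) = 3^14
tetra_mid = 3 ** 3                 # halfway up the tower: 3^^2 = 27

print(linear_mid, log_mid, tetra_mid)  # 3812798742495 4782969 27
```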

And using double approximations is an option to avoid this kind of problem. {hyp/^,cos} (talk) 09:36, June 18, 2017 (UTC)

Decimal Diagonalization
Ok first we start with

$$X^0=1$$

Then

$$X^1=10$$

$$X^2=100$$

And so on; generally, X to the n means "a 1 in the (n+1)th place", with zeroes filling the rest of the places

You can make more complex numbers, like X^2+X^2, which is 200

To save space you can add a new notation like

a(X^b) = X^b + X^b + ... + X^b, with a copies of X^b

so for example 3(X^3) = X^3+X^3+X^3

or even chains with different powers; for example, 3(X^3)+X^2 is 3100
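Since X is simply 10, the examples so far can be checked directly with Python's arbitrary-precision integers (a quick sketch):

```python
X = 10  # X^n is a 1 in the (n+1)th place, i.e. 10^n

assert X ** 0 == 1
assert X ** 2 + X ** 2 == 200              # X^2 + X^2
assert 3 * X ** 3 == X**3 + X**3 + X**3    # 3(X^3) is shorthand for the sum
assert 3 * X ** 3 + X ** 2 == 3100         # 3(X^3) + X^2

print(len(str(X ** 100)))  # 101: a googol is a 1 followed by 100 zeros
```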

Now you might say that, even though X^100 = googol, the notation isn't too strong. But check this out:

$$X^{X^{100}}$$

That was a googolplex. That's right, the exponents can also be expressed in terms of X's.

Is there a limit to the notation?

$$X^{X^{...}}$$

Who knows? Maybe I'll add to it later; feel free to suggest and comment. Chronolegends (talk) 17:35, May 7, 2017 (UTC)

Googology video
Watch this 20-minute video: What is the Largest Number?

There's nothing new there for any of us, but note in particular that he couldn't find a good layman's explanation of TREE(3). Maybe that's something we should shoot for. FB100Z • talk • contribs 10:19, March 25, 2014 (UTC)
 * The TREE(n) function (and hence TREE(3)) isn't easy to understand. Even now I operate with these trees quite awkwardly. Ikosarakt1 (talk ^ contribs) 12:35, March 25, 2014 (UTC)
 * For a layman to get a full understanding of the TREE sequence, it seems that we need to explain ordinals, wellorderings and order types, proof theory, tree data structures, Kruskal's tree theorem, and probably a lot more. Each of those alone is worth at least a 5-minute video! FB100Z • talk • contribs 16:38, March 25, 2014 (UTC)
 * The problem is that people look at the Wikipedia page for Graham's Number and say "What is this gobbledygook? Can't you explain it so a layman can understand?" If we have trouble explaining even Knuth Up-arrow Notation to the average person, how can we expect to explain the fast-growing hierarchy beyond the Small Veblen Ordinal?
 * I think I could give some sort of illustration of why the TREE function grows so big, but I would need to be able to draw trees easily, and I don't know how to do that. (I'm not well-versed with computers.) Deedlit11 (talk) 05:24, March 27, 2014 (UTC)
 * Try writing it using brackets, and have someone else do the illustration. FB100Z &bull; talk &bull; contribs 05:26, March 27, 2014 (UTC)

Weird, but meh. He didn't go deeper to stuff like the Busy Beaver function and Rayo's number (if that is useful). King2218 (talk) 11:01, March 25, 2014 (UTC)
 * That stuff (particularly Rayo's function) is really complicated. I spent about 2-3 months learning how Turing machines work from different sources, and I still have no idea how to work with Rayo(n). Ikosarakt1 (talk ^ contribs) 12:42, March 25, 2014 (UTC)
 * It's kind of hard to cover all that in a 20-minute overview of googology. Although I am disappointed that he didn't mention BB(n). FB100Z &bull; talk &bull; contribs 16:38, March 25, 2014 (UTC)
 * This was actually the video that got me into Googology. Boboris02 (talk) 18:16, February 25, 2017 (UTC)

Shrinking hierarchy
A hierarchy that shrinks instead of growing!

$$f_{-1}(n)=n/2$$

$$f_{(\lambda+2)(-1)}(n)=f^n_{(\lambda+1)(-1)}(n)$$

Comparisons:

$$f_{-1}(n) = n/2 $$

$$f_{-1}(f_{-1}(n)) = n/4 $$

$$f_{-2}(n) = n/2^n $$

$$f_{-2}(f_{-2}(n)) = n/2^{n}*2^{n} = n/4^{n}  $$

$$f_{-3}(n) = {n/(2^n)^{n}} = n/2^{n^2} $$

$$f_{-3}(f_{-3}(n)) = n/2^{n^2*2} $$

$$f_{-4}(n) = n/2^{n^3} $$

$$f^k_{(\lambda+1)(-1)}(n) = n/2^{(n^\lambda)*k} $$

It only works up to $$\lambda < \omega$$, since there's no such thing as a negative limit ordinal... or is there?

Chronolegends (talk) 09:22, February 11, 2017 (UTC)
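A naive sketch (not from the original post) of the hierarchy for finite negative indices, writing \(f_{-k}\) as `f(k, n)`. As the replies point out, this only makes sense while the iteration count stays a nonnegative integer:

```python
def f(k, n):
    """Naive shrinking hierarchy: f(1, n) = n/2, and
    f(k+1, n) = the n-fold iterate of f(k, .) applied to n.
    Assumes the iteration count n is a nonnegative integer."""
    if k == 1:
        return n / 2
    x = n
    for _ in range(n):  # breaks if n is no longer an integer
        x = f(k - 1, x)
    return x

assert f(1, 8) == 4.0       # n/2
assert f(2, 4) == 0.25      # n / 2^n = 4/16
```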


 * There is no such thing as a negative ordinal at all. But what stops one from defining \(f_{-\omega}(n)=f_{-n}(n)\)? We can write \(f_{-\alpha}\) for any ordinal \(\alpha\) (as long as we have fundamental sequences). We don't need to define what \(-\alpha\) means, we just need to define what the expression \(f_{-\alpha}\) means. LittlePeng9 (talk) 09:38, February 11, 2017 (UTC)


 * Other than that, your functions are not well-defined, since we quickly reach noninteger inputs - we should have \(f_{-3}(3)=f_{-2}(f_{-2}(f_{-2}(3)))=f_{-2}(f_{-2}(3/8))\), but what is \(f_{-2}(3/8)\)? We can't iterate a function a fractional number of times. LittlePeng9 (talk) 09:41, February 11, 2017 (UTC)


 * I personally think this is a cool idea, but I see no use in it, googological or not. If you want to solve the problems with it, I have a few suggestions.
 * 1. You could write \(f_{(\lambda +1)(-1)}(n) = f^{k}_{(\lambda)(-1)}(n)\) if \(n\) is a fraction and \(k\) is in \(n = {a\over k}\) such that \(f_{(\lambda)(-1)}(a) = n\). Boboris02 (talk) 11:46, February 11, 2017 (UTC)

Splitting up some categories
Here's a more permanent place to discuss splitting the number size-related categories. The categories I propose to divide are copied from this blog post for convenience (with the numbers updated accordingly):


 * Category:Up-arrow notation level (1,384 pages as of this writing)
 * First divide the pentation-level and hexation-level numbers into their own categories. We can divide the rest if that's not enough.
 * Category:Tetrational array notation level (2,220 pages as of this writing)
 * Currently smaller than the above category, but it will grow as articles for E^ numbers are created. A similar approach might be needed: divide the superdimensional array notation level numbers (\(\omega^{\omega^\omega}\) ~ \(\omega^{\omega^{\omega^\omega}}\)) into their own category, and divide the rest if that's not enough.
 * Category:Higher array notation level (840 pages as of this writing)
 * Now this is a tough one. The well-definedness of BEAF above tetrational arrays has always been controversial. Therefore, I suggest dividing it into ordinal-based categories. Here's how I think this category should be divided:
 * Category:Epsilon level (\(\varepsilon_0\) ~ \(\zeta_0\))
 * Category:Binary-phi level (\(\zeta_0\) ~ \(\Gamma_0\))
 * Category:Finite-length phi level (\(\Gamma_0\) ~ SVO)
 * Category:Higher computable numbers (SVO and above) (There haven't been many googolisms beyond SVO so far)
 * In fact, let's merge Category:Legiattic Array Notation Level and Category:Beyond Legiattic Array Notation Level into Category:Higher computable numbers, because the well-definedness of legiattic arrays and beyond is even more controversial.

-- ☁ I want more clouds! ⛅ 16:13, September 17, 2016 (UTC)
 * Yes, we need more categories. AarexWikia04 - 16:46, September 17, 2016 (UTC)

Revisit
Now that the pages for some of the smaller xE^ numbers have been created (over a thousand of them), I think it's time to revisit this topic. I think Higher array notation level should be split into more categories somehow, but I couldn't think of the names. Any suggestions? -- ☁ I want more clouds! ⛅ 15:24, January 8, 2017 (UTC)

MathJax on Mobile devices
It seems that recently more and more people are browsing the web from mobile devices, and in that case MathJax is not available. Is there a way to make it available on mobile devices?

For checking how it appears from mobile devices, just add ?useskin=wikiamobile to the URL. For example: http://googology.wikia.com/wiki/Arrow_notation?useskin=wikiamobile

If we just select VIEW FULL SITE at the bottom, everything is OK. But many people who first come to this wiki just don't realize that, so at the least we have to tell them. On the Japanese googology wiki, I added a description like this to the top page, because that is all I can do for now.


 * Mobile version of this site does not support math equations. To view equations correctly, please select VIEW FULL SITE at the bottom of the page.

It would be better if we could show this message automatically on every page of the mobile site, but I don't know if that is possible.

The best way is of course to make mathjax available on the mobile page. However, http://googology.wikia.com/wiki/MediaWiki:Common.js is not loaded on mobile mode, and http://googology.wikia.com/wiki/MediaWiki:Wikiamobile-menu cannot be edited even by admins, so there seems to be no way to make MathJax available on mobile mode.

Any ideas? 🐟 Fish fish fish ... 🐠 19:02, January 3, 2017 (UTC)

Curated / improved beaf
I was wondering if anyone has created a curated or improved version of BEAF. By improved, I mean cleared up the well-defined parts and patched the missing parts that cause the notation to be ill-defined.

I know BAN was meant to build "upon" BEAF and could replace it as a well-defined array notation, but I was wondering if anyone has done work on "solidifying" pure BEAF.

I remember stumbling on one such website in the past, but my memory has failed me in the attempt to recall the link or the author's name.

Help would be greatly appreciated. Thanks in advance!


 * This should be of your interest. LittlePeng9 (talk) 21:14, December 8, 2016 (UTC)

Finally: a formal definition for higher BEAF!
Sorry about the irrelevant clickbait title. This is about what we're going to do with ill-defined notations and numbers on the wiki. There are two extreme points of view: 0) allow anything and everything on the wiki provided that it is properly sourced, or 1) forbid all numbers and notations that have not been formally verified as well-defined.

As I see it, this wiki is at about a 0.3. I would like us to get to 0.7. Specifically, I propose the following:


 * If a notation is ill-defined and has no redeeming features, such as Aarex hydra, it should be removed from the wiki.
 * If a notation is ill-defined but has some significance otherwise, such as BEAF beyond its tetrational part, it should remain on the wiki but:
 * It should not appear in List of googologisms or List of googological functions.
 * The article should state as clearly as possible that it is ill-defined and should not include approximations that give it the veneer of formality.

-- vel! 18:42, September 8, 2015 (UTC)

How do you determine that something is "completely ill-defined" or just "ill-defined"? Ikosarakt1 (talk ^ contribs) 03:31, September 10, 2015 (UTC)
 * The "completely" was unnecessary. I removed it. -- vel! 06:03, September 10, 2015 (UTC)

I have some positive responses and no negative responses, so I'm just going to go ahead with this. -- vel! 09:17, September 13, 2015 (UTC)

What about blog posts, like some of Aarex's notations? -- From the googol and beyond -- 23:20, September 22, 2015 (UTC)

The fonts make me uncomfortable
Some days ago, the font in the pages was as big as the font in the edit mode. Now the font in the pages gets bigger while the font in the edit mode gets smaller. That's the problem. What do you think about it? &#123;hyp/^,cos&#125; (talk) 08:21, June 22, 2015 (UTC)


 * I have not noticed any changes in the font size. Are you sure it's not an issue on your side? LittlePeng9 (talk) 16:39, June 22, 2015 (UTC)
 * idk what you're talking about either. are you on a mobile device? Cookiefonster (talk) 18:45, June 22, 2015 (UTC)
 * So, on your devices is the font in the pages as big as in the edit mode? &#123;hyp/^,cos&#125; (talk) 02:49, June 23, 2015 (UTC)


 * Yes, I remember that the font size has been changed. But now I got used to it. -- ☁ I want more clouds! ⛅ 03:28, June 23, 2015 (UTC)

About the Korean Googology Wiki
As you may know, the Korean Googology Wiki was created with the wrong language setting, causing the templates, project pages, and system messages to be in English. I first thought of telling Antares (the wiki's founder) to contact Wikia Staff to change the URL and the language. However, although that might solve the system message problem, the templates and project pages would still be left untranslated.

So, here's my plan: I'm going to create a new Korean Googology Wiki (구골로지 위키? 구골주의 위키? 큰 수 위키? 큰 수 연구 위키?) at the usual url "ko.googology.wikia.com", with the language actually set to Korean (ko) so we could have pre-translated templates and project pages. Then we can move the pages and their history from the existing wiki to the new wiki by using the export and import feature.

So, what do you think about it? -- ☁ I want more clouds! ⛅ 15:14, March 12, 2015 (UTC)
 * Sounds like a good idea. -- vel! 18:34, March 12, 2015 (UTC)

OK (I like 큰 수들 위키) Did you already make it? May I make it? \(\ Antares.H \) 06:53, March 13, 2015 (UTC)


 * I'll make it when I get home. -- ☁ I want more clouds! ⛅ 08:41, March 13, 2015 (UTC)


 * I've done it. Export/import is going to happen soon. -- ☁ I want more clouds! ⛅ 13:39, March 13, 2015 (UTC)


 * ...and I've imported a page. (Actually, it appears that there is not much to import.) -- ☁ I want more clouds! ⛅ 15:07, March 13, 2015 (UTC)

Totally Not Misguidedly Placed Greeting
Sometimes, I question why I spend so much time here

I'm not sure of the average age of users here, but I'm willing to bet it's higher than 13. I'm relatively new here, but I've spent a lot of time here and at places like Saibian's and Cookie Fonster's sites learning everything I can. I've hit a point where I'm pretty sure I wouldn't be considered an idiot here. I just wanted to introduce myself, as the probably-not-but-possibly youngest quasiregular here.

TL;DR, I'm a 13-year-old who's nerdy enough to love this place.

''Side note: I can't Wikia yet. I literally made this account JUST for this.'' All of the time ever! (talk) 13:59, February 3, 2015 (UTC)


 * Welcome to the wiki! For your information, the average age of users here is well above 13, but there are still some users younger than you (I won't point them out, because they might not want me to). Enjoy your stay! If you have any specific question, let me know on my or someone else's talk page. LittlePeng9 (talk) 20:45, February 2, 2015 (UTC)
 * hello, welcome to the wiki! also, please sign your comments with four tildes (~ ~ ~ ~ without spaces). Cookiefonster (talk) 22:50, February 2, 2015 (UTC)

Featured blog
Many other wikis have a "featured article" process whereby high-quality articles are selected and voted to be featured on the main page. How about making a similar process for quality blog posts? it's vel time 09:25, October 24, 2014 (UTC)


 * Agreed, I like the idea. How about one every week?
 * I do too. -- A Large Number Googologist -- 12:35, October 24, 2014 (UTC)


 * That might be fun! LittlePeng9 (talk) 12:37, October 24, 2014 (UTC)
 * Yeah. -- A Large Number Googologist -- 13:04, October 24, 2014 (UTC)
 * Good idea. Wythagoras (talk) 17:59, October 24, 2014 (UTC)
 * @CF I think it'd be better to have featured blogs added to a pool, and have a random one displayed on the main page. (This is the approach taken by RationalWiki.) That way we don't have to worry about sticking to a regular schedule.
 * Anyways, let's start nominating some blog posts now, yeah? Say whether you agree or disagree in the appropriate section. it's vel time 19:09, October 24, 2014 (UTC)
 * How about including userpages in the selection? Wythagoras (talk) 18:47, October 26, 2014 (UTC)
 * Sure thing. it's vel time 02:36, October 27, 2014 (UTC)

How many will we need? I've seen a few others, though they are not as good as the other four. Wythagoras (talk) 19:10, October 26, 2014 (UTC)

Addition is commutative by LittlePeng9
Nominated by Vel!. Although not directly googological, this is impressive work, and it's also pretty amusing that it takes a proof this long to establish such a simple fact. Discuss below whether you agree or disagree. it's vel time 19:09, October 24, 2014 (UTC)


 * Agreed! Wythagoras (talk) 18:47, October 26, 2014 (UTC)

Googology101, Part I by Sbiis Saibian
Nominated by Vel!. Well-written and informative, although we may want to hold off featuring this until Sbiis feels it's ready. it's vel time 19:09, October 24, 2014 (UTC)


 * I agree. This is excellent for newcomers and is something everyone should see, including old wiki users. LittlePeng9 (talk) 11:59, October 26, 2014 (UTC)
 * Agreed! Wythagoras (talk) 18:47, October 26, 2014 (UTC)
 * definitely this one Cookiefonster (talk) 02:33, October 27, 2014 (UTC)

Random Turing machines by LittlePeng9
Nominated by Wythagoras. Personally I think this is more impressive than the commutative addition proof. Wythagoras (talk) 18:47, October 26, 2014 (UTC)
 * Not so sure about this one, mainly because the content doesn't appear that interesting to a casual viewer, whereas his commutativity proof provides a powerful visual whose purpose is easy to understand.
 * Maybe making graphs of the TM transition rules would be more attention-grabbing — an image that makes readers think, "Whoa, that's a cool graphic. What's the math behind it?" it's vel time 02:44, October 27, 2014 (UTC)

Ordinal notations by Deedlit11
Nominated by Wythagoras. This series of six blog posts about ordinal notations is well-written and informative, and gives a good definition of ordinal notations. Wythagoras (talk) 19:00, October 26, 2014 (UTC)

Proving the bound for S(7) by Cloudy176
Nominated by Wythagoras. This is a very nice proof, and a good example of a small but powerful TM. Wythagoras (talk) 19:13, October 26, 2014 (UTC)

Introduction pages
I think there should be more introduction pages. There are only three so far. For newcomer-friendliness, we need introduction pages that are easier to grasp.

I'm a newcomer; I joined at the end of September. Up-arrow notation is fine, and linear array notations are, too. But some pages, like the one on ordinal collapsing functions or fundamental sequences (is there even a page for that?), don't give enough examples.

My understanding of fundamental sequences and ordinal collapsing functions beyond the BH ordinal is still vague; please help me out.
 * a lot of people on this wiki are badly abusing FS's and OCF's, so the confusion is understandable... it's vel time 19:41, October 5, 2014 (UTC)

Revive the number navigator
The number navigators were all deleted some time ago, because we use templates instead. However, the templates only cover some kinds of googologisms, not all of them. For example, how can I jump from TREE(3) to Destrubixul? Currently, the only way is to go back to the full list (or better, its sublists), find TREE(3), and then go to the number immediately after it - the destrubixul. That's troublesome.

My idea is, "embedding" the full list into number pages by using number navigators, so that we can walk over all googologisms (not only some kinds of them) from page to page. hyp$hyp?cos&#38;cos (talk) 02:04, July 25, 2014 (UTC)
 * The number navigator was eliminated not just because nav templates can replace it. See the first two bullet points in the proposal to delete it. you're.so.pretty! 05:17, July 25, 2014 (UTC)
 * Hyp cos, you can jump to an arbitrary number from any page using just googology.wikia.com/wiki/ in the URL. Ikosarakt1 (talk ^ contribs) 05:58, July 25, 2014 (UTC)

I want them back, as I liked them. But the community didn't want them as there were some problems. So, here's a radical solution:

On the number navigator template, we can create a subpage that lists the numbers that are linked to each other. Then, using Lua magic, make it that adding the numnav on an article automatically gives the prev/next links using the name of the number.

We could go further, and make separate lists for separate navigations, such as the numbers made by the same author or the same notation, as well as non-numbers like functions and prefixes.

For example, the subpage could contain:

... ... ...
 * Default list
 * Bird's number
 * TREE(3)
 * Destrubixul
 * By Lawrence Hollom
 * Bigreat Trigrand Destruxul
 * Destrubixul
 * Kilodestrubixul

Then adding the navigator to the TREE(3) page (like  or  ) will automatically give the links to Bird's number and destrubixul. Also, adding the code  on the Destrubixul article will give the links to Bigreat Trigrand Destruxul and Kilodestrubixul.
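As a sketch of the lookup such a module would perform (written here in Python for illustration rather than the Lua the proposal calls for; the list names and entries are taken from the example subpage above):

```python
# Hypothetical navigator data: ordered lists keyed by list name,
# mirroring the example subpage in the proposal above.
NAV_LISTS = {
    "Default list": ["Bird's number", "TREE(3)", "Destrubixul"],
    "By Lawrence Hollom": ["Bigreat Trigrand Destruxul", "Destrubixul",
                           "Kilodestrubixul"],
}

def neighbours(list_name, page):
    """Return the (prev, next) entries around `page` in the named list."""
    entries = NAV_LISTS[list_name]
    i = entries.index(page)
    prev = entries[i - 1] if i > 0 else None
    nxt = entries[i + 1] if i + 1 < len(entries) else None
    return prev, nxt

assert neighbours("Default list", "TREE(3)") == ("Bird's number", "Destrubixul")
```

With this shape, updating the single data subpage updates every navigator at once, which is the point of the proposal.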

With this proposal, we can update all number navigators with a single edit, and we can provide general navigation by size order, which was hard to do before (in most cases, there is no link on number pages to the corresponding List of googologisms subpage), and there might be numbers that LoG doesn't include yet.

In this way, this proposal fixes the first and third issue in the proposal to delete the navigator. However, the second issue, when we are not sure about how to order the numbers, appears to be the hardest to fix. My suggestion is to simply exclude them from the main size-order list, and just place them on a separate list as mentioned above.

So, what are your thoughts? -- ☁ I want more clouds! ⛅ 13:16, July 25, 2014 (UTC)
 * It sounds good. We can order numbers by the subpages of the full list, and make an "automatic" template that searches the full list and adds the neighbors of the page calling it. But number navigators of the same kind can be replaced by nav templates.

Everyone, it's not only us, who are so familiar with the googologisms on this wiki; there are also people who are green but interested in googology. This could be a big convenience for them. hyp$hyp?cos&#38;cos (talk) 14:41, July 25, 2014 (UTC)

I support Cloudy's idea, if we know it is possible to implement. you're.so.pretty! 16:10, July 25, 2014 (UTC)

Xkcd forums
Our current policy forbids forums. I'm open to upholding this for the xkcd "My Number is Bigger!" thread, although I'm kind of neutral on the subject for now. What do you think? you're.so.pretty! 06:12, July 25, 2014 (UTC)
 * Alternative idea: how about one big article summarizing the thread's contents? you're.so.pretty! 06:15, July 25, 2014 (UTC)


 * that actually sounds interesting. Deedlit11 (talk) 07:04, July 25, 2014 (UTC)
 * I think there are some entries that are worth their own articles. Wythagoras (talk) 07:06, July 25, 2014 (UTC)

ExE articles: a radical solution
We are missing a lot of articles on Sbiis' latest googologisms. I'm here to propose a quick solution to the void:


 * We create a template that generalizes all the pages that we have on these numbers. This template would provide slots for the name of the number, its value in ExE, its etymology, a citation, and space for any additional comments.
 * With a lot of grunt work, we assemble a computer-readable table of all the names of the numbers and their values.
 * A bot generates all the articles from the table, using the template.
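A rough sketch of what step 3 could look like. Everything here is hypothetical for illustration: the template name "ExE number", the row format, and the sample row are assumptions, not the actual bot code.

```python
# Hypothetical table rows: (name, value in ExE, etymology, citation).
ROWS = [
    ("grangol", "E100#100", "grand + googol", "Saibian, One to Infinity"),
]

def article_wikitext(name, value, etymology, citation):
    """Build article wikitext that delegates all layout to a shared
    template, so every generated page can be changed with one template edit.
    The template name "ExE number" is an assumption for illustration."""
    lines = [
        "{{ExE number",
        "|name=" + name,
        "|value=" + value,
        "|etymology=" + etymology,
        "|source=" + citation,
        "}}",
    ]
    return "\n".join(lines)

for row in ROWS:
    text = article_wikitext(*row)
    assert text.startswith("{{ExE number")
```

The bot would then push each generated page through the MediaWiki API; since articles contain only a template call, a later change to the template restyles all of them at once.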

The template has the advantage that we can modify all the numbers' pages at once. In addition, it would be nice to have this table anyway, just because Sbiis' work is a "canonical" googology text and this is sort of a way of preserving the names in a raw format.

While I'm at it, I would like to question the value of the "approximations in other notations" tables. Do we need them on EVERY article? Why not just on the milestone numbers, like pentacthulhum?

you're.so.pretty! 18:59, May 8, 2014 (UTC)
 * Probably we should ignore most of these names except the most famous, like "grangol" or "grand godgahlahgong". Words like "pentacthulhutetripso-gralgathor" are hard to pronounce, memorize, and write down. They were shown only to illustrate the connection between naming roots and Hyper-E expressions. Ikosarakt1 (talk ^ contribs) 11:57, May 9, 2014 (UTC)
 * It's sort of been our tradition here to make a new article for every googologism. It...uh... artificially inflates our article count. you're.so.pretty! 14:51, May 9, 2014 (UTC)
 * Yeah. They be like "OMG THIS WIKI HAS MOAR ARTICLES THAN THAT MATH WIKI" and then they find out that almost half of the articles are about Saibianisms :P King2218 (talk) 15:17, May 9, 2014 (UTC)
 * There is actually some logic to it, aside from making it seem like we have more content than we do. We're trying to document every single Saibianism ever coined, so we might as well make separate articles for each one. It's the same amount of content as a redirect to a milestone page :P you're.so.pretty! 16:31, May 9, 2014 (UTC)
 * Impossible, there are too many of them (\(>10^{1000000}\) below tethrathoth). Wythagoras (talk) 19:45, May 9, 2014 (UTC)

Digitization project started at Project:ExE numbers. you're.so.pretty! 16:49, May 9, 2014 (UTC)
 * Well, the current content of this page says nothing about etymology and comparisons with other notations. I promise to make a separate page for each Saibianism. Ikosarakt1 (talk ^ contribs) 17:13, May 9, 2014 (UTC)
 * I've started adding etymologies to the page, and as above, I'm proposing removing the "Approximations" section for all numbers except milestones.
 * Please don't waste your time on making these articles while we're still deciding on whether to carry this through. you're.so.pretty! 19:12, May 9, 2014 (UTC)
 * Why remove? We can keep them where they are and just not add them to new pages. Also, I believe it's not necessary to create all of the numbers on Saibian's pages. About 600 to 1000 numbers (eliminating the most trivial) would be enough, in my opinion. Wythagoras (talk) 19:43, May 9, 2014 (UTC)
 * Okay, well. Fire away, I guess. We'll decide later whether we'll automate article creation. you're.so.pretty! 20:55, May 9, 2014 (UTC)

BOX M~
I only found out about this a few minutes ago.

http://www.scribd.com/doc/77714896/The-largest-number-ever-il-numero-dei-record

The author claims to have found "an upper bound on the useful natural numbers concerning every possible 'human problem'. It implies a pluridimensional approach and includes the definition of Graham's number plus Conway's chained arrow notation [sic]", based on his edit to the Wikipedia page on large numbers.

Sadly, the paper is in Italian, a language I don't speak. FB100Z &bull; talk &bull; contribs 23:40, February 2, 2013 (UTC)
 * I found it months ago. I haven't understood it all yet, but this number looks like an Italian salad. &mdash; I want more clouds! 13:16, February 11, 2013 (UTC)
 * Definitely a salad, but it's sort of amusing how the author tries to create all this hype. FB100Z &bull; talk &bull; contribs 01:07, February 14, 2013 (UTC)

This guy founded Italy's most exclusive high IQ society and wrote a book about hyperoperators, and all he can come up with is a salad? Sheesh. Random dudes from Texas have done better.

Anyways, the article's up at BOX_M~, and I'd much appreciate it if you guys would check my work. FB100Z &bull; talk &bull; contribs 02:53, February 14, 2013 (UTC)


 * I know this is from the Triassic Era, but yeah, this is pretty damn pathetic. Salad numbers are a disgrace to googology. WikiRigbyDude (talk) 23:20, April 23, 2014 (UTC)

Let's invade Cantor's Attic!
But, you know, the good kind of invade. Cantor's Attic is a ghost town. It hasn't had any edits for quite a while, and it's not complete &mdash; many articles are stubs or completely empty. I'm proposing that we all sign up for the wiki to continue where the past editors left off. FB100Z &bull; talk &bull; contribs 23:22, December 29, 2013 (UTC)

Bobbyyoo
Hi. I've seen Bobby Yoo's large numbers site before, and I've also been seeing it used as a source on this site. It seems that nearly all of the numbers on that page are copied from other sources (I'm seeing a lot of Bowers' googologisms there). Since we can cite the original sources directly, that's not much of a problem.

However, some of the googologisms on the site are copied from sources that I would consider unreliable. The one that disturbed me in particular is meegol. Its only source is from Yoo's site, which itself cites a source: a defunct Wiktionary page. (By the way, what does the definition even mean?) FB100Z &bull; talk &bull; contribs 05:03, August 26, 2012 (UTC)

Okay, I think I found the missing piece of the puzzle. This is a long story:

I was lurking around the forbidden realms of Googology Wiki (aka the deletion archive), and I found a page named "List of large numbers". I looked at the first revision, by User:SpaceGuy (the others are him/her blanking the page), and found a very long list of large numbers. Looking at the page, it was obvious that it had been copied from Wiktionary.

The deleted page included internal links on (nearly) every number on the page, including meegol. At the top of the page, it said "... but please do not add articles for them. That being the case most links to these words should remain red." so the links are supposed to be redlinks. This explains why a Wiktionary page existed for meegol: somebody followed that internal link and created the page, and Yoo used that page as a source.

Anyway, I went to Wiktionary and searched for the term "Names of large numbers", and found this page, which discussed the deletion of the corresponding page. The talk page was moved after the deletion of its corresponding page; the corresponding page was called "Wiktionary:List of protologisms/large numbers". Looking at the deletion log, I found that someone had restored the page (albeit it was deleted again after 36 minutes) with the following log:


 * 14:11, 21 December 2009 Conrad.Irwin (Talk | contribs) restored page Wiktionary:List of protologisms/large numbers (110 revisions restored: WT:LOP is supposed to be tosh :p)

Note that the deleted revision I mentioned above was created at 14:28, December 21, 2009! So it's very likely that SpaceGuy is Conrad.Irwin, a Wiktionary administrator, and the deleted revision on this wiki is the latest revision of that page before it was deleted from Wiktionary.

Back to the list: I think that list is where we got most of the made-up numbers on this wiki (except for those defined using Hyper-E notation :P ). Since the original page on Wiktionary was deleted, it is unknown who submitted the entry "meegol" and what he/she intended it to mean (for now, anyway).

So that's the long story about the Wiktionary's unfortunate list of large numbers. :'( -- ☁ I want more clouds! ⛅ 14:01, December 3, 2013 (UTC)

Kill the number navigator
I propose to destroy the Number Navigator:


 * It highlights the disadvantages of a ; every time a new number is added, two more pages have to be updated.
 * It imposes a strict ordering of googologisms whose sizes we aren't sure about yet. Ex. if Wythagoras decides to publish his dollar function off-site and we enter everything back over here, we have to go over all these comparisons against his numbers and BEAF. Every time we correct ourselves, we have to go through the whole process of relinking the Navigator.
 * The Number Navigator's purpose is to show orderings of numbers and create a "guided tour" throughout googologisms. But navigational boxes already do that, and we have List of googologisms when all else fails.

In general, it's ugly, unnecessary, and adds way more work than we need. Any thoughts? FB100Z &bull; talk &bull; contribs 23:14, September 28, 2013 (UTC)
 * Yes, you're probably right. But then we have to edit thousands of pages where the Number Navigator already appears. Ikosarakt1 (talk ^ contribs) 08:07, September 29, 2013 (UTC)


 * I think the second bullet is the best reason to delete the navigator. Can't we make a bot that would do that? LittlePeng9 (talk) 09:50, September 29, 2013 (UTC)
 * Yes, I'll have GoogolBot do this if enough people agree. FB100Z &bull; talk &bull; contribs 19:51, September 29, 2013 (UTC)
 * About publishing Dollar Function off-site: No, I'm not going to do that, at least not in the near future. So that shouldn't be the reason why to remove the number navigator. Wythagoras (talk) 18:40, September 29, 2013 (UTC)
 * It was just an example. FB100Z &bull; talk &bull; contribs 19:51, September 29, 2013 (UTC)
 * Oh, when you remove all the number navigators, please don't delete the template itself, as it has been used on many pages (1,596 as of 07:37, September 29, 2013 UTC) and should be left as a historical reference. Add a note that it is no longer being used. (Edit: Also, unprotect it.) -- ☁ I want more clouds! ⛅ 00:37, September 30, 2013 (UTC)

By the way, List of googologisms needs to be updated. I'll do it when I have free time. -- ☁ I want more clouds! ⛅ 00:32, September 30, 2013 (UTC)

MathJax + GoogolBot
Heya folks. I implemented MathJax onto the wiki. Below is an example of MathJax and the built-in LaTeX implementation:

MathJax: \(f(x) = \sqrt{x + 1}\)

LaTeX: $$f(x) = \sqrt{x + 1}$$

Edit the source to see how they're different.

You can see a complete example at the Factorial article.


 * Pros
 * MathJax takes fewer characters to open and close formulas.
 * MathJax is easy to customize by editing the site JS. We can change fonts and font sizes as we wish &mdash; we can even change the formula delimiters to  or something.
 * No code conversion is needed &mdash; MathJax uses LaTeX, so we don't need to rewrite any formulas.
 * MathJax coexists happily with the built-in math. Features don't break.


 * Cons
 * MathJax breaks when JS is turned off.
 * MathJax doesn't work in Preview mode. I think it's possible to fix this, but I haven't tried.
 * Some MathJax formulas can trigger the MediaWiki parser, and need to be wrapped in.
 * I'm seeing a bug in Chrome: For some weird reason, MathJax occasionally produces buggy output when moving. It gets corrected by refreshing. (I'm not sure how to fix this ATM; I might contact Wikia Central or something.)

An important step in implementing this is changing the  delimiters into. To do this, I will need to use a bot. I already have a bot account, GoogolBot, but to get the bot flag on it, I need to get it approved by all of you guys. Then I'll ask Wikia.

FB100Z &bull; talk &bull; contribs 20:13, September 13, 2012 (UTC)
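The delimiter conversion such a bot would perform can be sketched roughly as follows. This is a hypothetical illustration, not GoogolBot's actual code, and it only covers the simple `$$...$$` to `\( ... \)` rewrite:

```python
import re

def convert(wikitext):
    """Rewrite $$...$$ math spans into \( ... \) MathJax delimiters.
    Non-greedy match so adjacent formulas are not merged; DOTALL so
    multi-line formulas are handled."""
    return re.sub(r"\$\$(.+?)\$\$", r"\\(\1\\)", wikitext, flags=re.S)

assert convert("$$f(x) = \\sqrt{x + 1}$$") == "\\(f(x) = \\sqrt{x + 1}\\)"
```

A real bot run would additionally need to skip `<nowiki>` and source blocks, which this sketch ignores.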


 * It's quite cool. It requires less data to be downloaded. However, for some reason it doesn't work on mobile Wikia. Also, you can use the  tag when previewing, and change the delimiters to MathJax ones after that. --Cloudy176 (talk) 05:53, November 10, 2012 (UTC)
 * If we load both, we still have to download the  tags... :/ FB100Z &bull; talk &bull; contribs 19:18, November 10, 2012 (UTC)
 * It would be nice if the bot also added categories to articles and did other routine edits; too much time is spent on this. Ikosarakt1 (talk) 10:16, November 10, 2012 (UTC)

Are we over-using MathJax? Although it looks better, it has been used on simple formulas such as E100#googol, which looks like it doesn't need MathJax at all. Plus, I use the mobile version of this wiki frequently, and it doesn't work on there, rendering articles hard to read. (Maybe there's a way to fix this?) — I want more clouds! 13:22, February 11, 2013 (UTC)

In that case, we can use MathJax only on complicated expressions such as \(\sqrt{(2^{\text{googol}-10})}^{f_2}; f_2=10\). Ikosarakt1 (talk) 20:23, February 11, 2013 (UTC)
 * Yeah, although let's be consistent within articles :) Also, don't go changing all the pages, it's not a big deal.
 * The mobile issue is...not really something I can fix. Mobile Wikia doesn't load MediaWiki:Common.js, which is the only way I can get MathJax onto the page. FB100Z • talk • contribs 05:12, February 12, 2013 (UTC)

If people cannot view it properly, zoom. $Jiawhein$\(a\)\(l\)\(t\) 08:43, March 26, 2013 (UTC)
 * Or right-click on the formula. FB100Z • talk • contribs 02:49, April 20, 2013 (UTC)

Zooming hotkey: \(\boxed{ctrl} + \boxed{+}\). $Jiawhein$\(a\)\(l\)\(t\) 00:57, April 26, 2013 (UTC)

All codes: http://www.onemathematicalcat.org/MathJaxDocumentation/TeXSyntax.htm $Jiawhein$\(a\)\(l\)\(t\) 00:17, May 20, 2013 (UTC)

There is a problem: MathJax loads slowly, and when it is used heavily, loading gets long and inconvenient. Ikosarakt1 (talk ^ contribs) 10:30, June 30, 2013 (UTC)

Quotes
We need more quotes. Any suggestions? FB100Z • talk • contribs 22:43, April 30, 2013 (UTC)

How about the Strong laws of small numbers? The first of them somewhat indirectly shows that large numbers are sometimes necessary. LittlePeng9 (talk) 08:16, May 1, 2013 (UTC)

French/German system
This is an English wiki. Since terms like milliard are no longer standard English terms — they only have parallels in French and German — should we delete/redirect them? FB100Z • talk • contribs 05:27, November 12, 2012 (UTC)

Googol
The googol is equal to 10^100.

Aarex 21:45, June 27, 2012 (UTC)
 * Fashinatin', I tell ya'! FB100Z • talk • contribs 01:02, August 29, 2012 (UTC)

Sigma project
We should create a central hub page (in the project namespace probably) for the ongoing project of evaluating Sigma(5). (As a sidenote, I think we should devise a more concise TM notation that doesn't take up, like, 20 lines.) Where is the most up-to-date content on this subject? it's vel time 02:48, September 28, 2014 (UTC)


 * I think it'd be a good idea to make sections here about proofs. There is no up-to-date content; all proofs are invalid. Wythagoras (talk) 08:27, September 28, 2014 (UTC)

HNR #3
Simulated up to 81.8 billion steps. Let me know today if you need to know anything about the simulation. Also, there's no reason why the machine's name is 5-state BB (4098); I don't know how it got there. Wythagoras (talk) 10:44, September 28, 2014 (UTC)

Confession (and proofs for 14 HNRs)
Around November of last year, I picked Sigma(5) as the subject of an R&E project for my school. I proved 14 of the 42 HNRs to loop using Heiner Marxen's AWK script, and this year I used the result for my graduation paper. The professor suggested posting the results here, which I hesitated to do until now.

The proven HNRs are #02, #05, #06, #08, #11, #14, #18, #20, #21, #22, #25, #27, #30 and #38. Download the results here.

Let me know if you need to know more about this. (I might release it as a blog post.) -- ☁ I want more clouds! ⛅ 11:17, September 28, 2014 (UTC)
 * Wow, great job! Wythagoras (talk) 11:26, September 28, 2014 (UTC)


 * This is really awesome! Finally a result about which we can be sure. Just curious, how much time did the evaluation take for all these machines? LittlePeng9 (talk) 11:28, September 28, 2014 (UTC)
 * Daniel Briggs also proved some machines non-halting or gave clues on how to prove some. Machines #02-#13, #15, #17, #18, #21, #23, #25-36, #38 and #39 are proven or have some clues given. Also, #01 was almost completely proven (see below). Daniel Briggs and univerz proved some machines non-halting. So we are left with #16, #19, #24, #37 and #40-42 (43?). Under ten machines are holdouts now. Tetramur (talk) 09:56, May 5, 2019 (UTC)

Observation for HNR#1
I've tried to simulate it for different inputs - it exhibits quite weird behavior for a while, then it either halts or falls into a simple loop with states 4-1-3-4-2-3-4-2-3-0-4, quickly appending ones to the right. The latter happens if we have ..._111 on the tape, where the head is placed at the last 1 and the state is 3 or 4.

Also, from my experiments, it halts only when the head is placed to the left of all sequences of 1's.

Ikosarakt1 (talk ^ contribs) 04:35, February 2, 2015 (UTC)


 * This machine also halts if run on input __1 with initial state 0. Note that at one point the machine is in state 0 between some 1's, so the observation from your experiments seems to be incorrect. LittlePeng9 (talk) 11:27, February 2, 2015 (UTC)

Questions in googology
Hello, I am a mathematics student at the University of California – Los Angeles. I have always had a soft spot in my heart for googology, and so I have begun to write a rigorous introduction to its main concepts. My hope is to come up with something that will both motivate its study to the mathematically trained – who generally tend to regard it with disdain – and also suggest new directions of research for those it catches on with.

But I need questions. This request is twofold: I need questions I can answer with proofs in order to demonstrate the existence of interesting googology-related mathematics, and I need to provide questions for further research. So I am asking all of you: what googology-related questions would you like answered? They can be philosophical or rigorous questions, as long as they would be interesting to a mathematically trained individual. Also, I know it can be hard to classify a question as "interesting" or not, so err on the side of giving me your question if you are unsure. Finally, don't hesitate to give me multiple questions or ask more about my background/this project!

Examples of questions:
 * Do busy beavers have strongly connected graphs?
 * Can we come up with a "reasonable" logic stronger than set theory that still allows us to formalize the idea of a number and extract large numbers out of it?
 * Is there a fast algorithm which, given two numbers in Conway Chained Arrow Notation, returns which one is larger (or if they're equal). See for example https://mathoverflow.net/questions/119453/polynomial-time-algorithm-to-compare-numbers-in-conway-chained-arrow-notation.
 * How quickly does the representation of an ordinal increase if we take an element from its fundamental sequence? For example, upon taking the 3rd element of its fundamental sequence the ordinal \(\omega^3\) becomes \(\omega^2\times 3\), and when we do this again we get \(\omega^2\times 2+\omega\times 3\). If you try this with random ordinals and take the kth element of the fundamental sequence, it seems they never increase in size by more than a factor of k. Can we prove this?
 * For certain choices of "reasonable" ordinal representation systems (say, Cantor normal form, which works up to ε_0), does the busy beaver function "effectively" beat the fast growing hierarchy for all ordinals in this representation system? That is, is the lowest number N for which BB(n)>f_α(n) for all n>N at most f_α(C) for some reasonably small value of C? The motivation for this is that if not, then of course BB(n) still dominates f_α(n), but it's not practical for naming larger numbers than f_α(n) since the size of numbers we must plug in to get a bigger output from BB(n) is too large for us to express.
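The behaviour described in the fourth question can be played with for ordinals below \(\omega^\omega\). A minimal sketch, assuming an ordinal is coded as the non-increasing list of its Cantor-normal-form exponents (a hypothetical encoding chosen only for illustration):

```python
def fs(cnf, k):
    """k-th element of the fundamental sequence of a limit ordinal below
    omega^omega, coded as a non-increasing list of CNF exponents:
    omega^3 -> [3], omega^2 * 3 -> [2, 2, 2]."""
    assert cnf and cnf[-1] > 0, "ordinal must be a limit"
    # replace the last term omega^e by k copies of omega^(e - 1)
    return cnf[:-1] + [cnf[-1] - 1] * k

step1 = fs([3], 3)      # omega^3 [3] = omega^2 * 3 -> [2, 2, 2]
step2 = fs(step1, 3)    # (omega^2 * 3)[3] = omega^2 * 2 + omega * 3
```

In this coding a representation of length n grows to length n - 1 + k in one step, consistent with the factor-of-k observation.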

Exfret (talk) 11:44, April 1, 2019 (UTC)


 * Personally, the most interesting question for me among your questions is the second one, i.e. a reasonable logic. Choosing a reasonable (mainly recursive and \(\Sigma_1\)-sound) formal theory containing arithmetic is deeply related to both computable and uncomputable large numbers.
 * By the way, there are several googological open problems which are very difficult even for mathematicians to solve such as the termination of specific versions of BMS, the totality of the function of Laver's table, how to construct OCFs with large cardinals beyond those by Rathjen, and so on. If you get interested in them, then it is good to try them.
 * p-adic 11:59, April 1, 2019 (UTC)

The prime blocks of arrays above dimensional
Let's take the superdimensional array of BEAF (the same as Bowers' dulatri), but with all 3s converted to 1s, except the first two and the last.

What's the prime block of it? (The hypercube of six dimensions or what?)

How can I start to compute it? —Preceding unsigned comment added by Tetramur (talk • contribs) 11:06, February 23, 2019 (UTC)

Fast-growing function came up in a game
Fooling around with Magic: The Gathering cards, I accidentally came across something of googological interest.

I will try and explain things in terms for people who don't know the game well, but for those interested, the relevant cards are Doubling Season, Clone Legion, and Opalescence.

We start with 4 cards out, they all have this effect: "When you make a copy of one of your cards, double the number of copies made."

This effect stacks geometrically, so having 4 such cards gives us a multiplier of 2^4 = 16

We also have an ability, "Make a copy of every card in play."

Using this ability once, we produce 2^4 copies each for our 4 cards, for a total of 64, adding to our existing 4 for 68 cards in play.

Now we have 68 cards, bringing our multiplier to 2^68 = 295,147,905,179,352,825,856

The second time we use the ability, we make 2^68 copies each of our 68 cards, bringing us to a total 20,070,057,552,195,992,158,276 cards.

With this many cards, our multiplier becomes 2^20,070,057,552,195,992,158,276, which is a number of about 6*10^21 digits.

The third time we use the ability, we produce approximately 10^(10^21.7) more cards. You see where this is going!

It is possible within the game rules to use this ability an arbitrary number of times, so we can produce some numbers that are infeasible to compute.

I guess our function is...

f(0) = 4

f(n) = f(n-1) + f(n-1) * 2^f(n-1)

In the grand scheme of googology this is not an astoundingly powerful function, but it was interesting to find it in an unexpected place. Ndril (talk) 14:23, February 23, 2019 (UTC)
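The recurrence is easy to check with arbitrary-precision integers; a minimal sketch (f(3) and beyond are infeasible, as noted):

```python
def f(n):
    """Cards in play after n uses of the copy ability:
    f(0) = 4, f(n) = f(n-1) + f(n-1) * 2**f(n-1)."""
    total = 4
    for _ in range(n):
        total += total * 2 ** total  # each card gains 2^total copies
    return total

# f(1) = 4 + 4 * 2**4 = 68
# f(2) = 68 + 68 * 2**68 = 20070057552195992158276
```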
 * This blog might interest you: User blog:Deedlit11/Googology in Magic: the Gathering -- ☁ I want more clouds! ⛅ 17:01, February 23, 2019 (UTC)

Centimillillion
10^3,303 and 10^300,003 are both called the same thing, Centimillillion. I believe this to be a major oversight, and have come up with a suggestion.

The most common large number system is the -illion system, which uses various prefixes from around the world to denote how many zeros after the first three there are (for the short scale, anyway; the long scale counts groups of six zeros). For any number k^n, where k is 1,000 and n is the exponent, the exponent is usually written with the ones place first, then the tens, then the hundreds, and so on. For example, k^4,322 (10^12,966) is Un-viginti-trecenti-quadrimillillion. I don't have a problem with this, but I have recently discovered the fact that k^1,101, named Centi-millillion, and k^100,001, named Centimillillion, have the same name. Thus, I propose that k^100,001 be renamed Lakillion/Lakhillion, using the Indian word lakh, which means 100,000. I also propose that this, and the Myril- prefix, be attached to all the metric and greek prefixes above (k^10,000,001 is Myrimicrillion, k^100,000,000,000,001 is Lakipicillion, k^(10*k^20+1) is Myrilicosillion, etc.) —Preceding unsigned comment added by Noobly Walker (talk • contribs) 02:50, October 31, 2018 (UTC)
 * If you want to do this then feel free. Actually, it's always been odd to me too how the prefixes for the illions aren't written in the order of the exponent they are denoting.


 * You could redo the entire illions system preserving order and see how it turns out, it could be nice for you!


 * Another solution would be to use a hyphen as you did to distinguish between identical names with the larger number getting a hyphen; and the hyphen can be pronounced as a click with the tongue. Complexity of a Zyzzyva (talk) 23:06, November 11, 2018 (UTC)

What is 1 Millillion divided by 100?
Hello guys, I want to know: what is 1 Millillion divided by 100? —Preceding unsigned comment added by The Evolution Map. (talk • contribs) 21:12, October 12, 2018 (UTC)

According to https://googology.wikia.com/wiki/Millillion, 1 Millillion = 10^3003 so the answer would be 10^3001 ? Or is this some in-joke I'm not following? Reddal (talk) 14:55, October 16, 2018 (UTC)

Can you help me answer my 12 year old daughter's googology question?
Hi,

Recently my 12 year old daughter was asking me about big numbers. I've always had a fascination myself - but am very much a layman by the standards of people on this site.

She knew about googolplex and wanted to know if there were bigger named numbers. I told her Graham's number was much much bigger and tried to explain the construction. She got a bit lost with the multiple up-arrow notation, but I wanted to encourage her interest, so I set her a challenge - if she could describe a way to construct a number bigger than Graham's number, I would buy her a new phone! This seemed a safe bet for me as I don't think it's an easy task from where she is starting - but it got her interest. Below is what she came up with (she did it with pictures and hand waving - not this notation - but I've tried to formalise it for her):

Her first idea was power towers, i.e. her first number was a power tower of googolplexes, a googolplex high:


 * A = googolplex ↑↑ googolplex

I explained this didn't even begin to get to the first step of Graham's number. She then tried:


 * B = A ↑↑ A

Again I explained that wasn't big enough. Then she came up with a way of iterating the process:


 * f(0) = B


 * f(n) = f(n-1) ↑↑ f(n-1)

and her third number was:


 * C = f(B)

I said it still wasn't enough (though she was getting further than I'd imagined, and I realised I was going to have trouble justifying that!). Finally she iterated again:


 * D = f(f(f(f(... f(C))...) with C nested f's.
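For anyone wanting to experiment, the up-arrow operation underlying these definitions can be run for tiny arguments only (A, B, C and D themselves are far beyond computation); a sketch of the recursion:

```python
def arrow(a, n, b):
    """Knuth's a (n arrows) b, for small positive inputs only."""
    if n == 1:
        return a ** b
    result = a                        # a up^n 1 = a
    for _ in range(b - 1):            # a up^n b = a up^(n-1) (a up^n (b-1))
        result = arrow(a, n - 1, result)
    return result

# arrow(3, 2, 3) = 3^(3^3) = 7625597484987
```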

I told her I still didn't think it was big enough - but I wasn't so confident. I said it was probably bigger than G1 (3 ↑↑↑↑ 3) but still nowhere near G64?

Can anyone help me understand how far she got? I need to be able to justify to her that I don't owe her a new phone!

thanks - reddal


 * Her cool number is bounded by \(G_2\) in the following way:

\begin{eqnarray*} (n+2) \uparrow^2 (n+2) & = & (n+2) \uparrow^3 2 < 2 \uparrow^3 (n+2) \\ A & = & 10^{10^{100}} < (2^4)^{(2^4)^{(2^4)^2}} = 2^{2^{2^{10}+2}} < 2^{2^{2^{2^{2^2}}}} = 2 \uparrow^2 6 \\ & < & 6 \uparrow^2 6 < 2 \uparrow^3 6 \\ B & = & A \uparrow^2 A < 2 \uparrow^3 (2 \uparrow^3 6) = 2 \uparrow^3 7 \\ f(0) & = & B < 2 \uparrow^3 7 \\ f(1) & = & f(0) \uparrow^2 f(0) < 2 \uparrow^3 (2 \uparrow^3 7) = 2 \uparrow^3 8 \\ \vdots & \vdots & \vdots \\ f(n) & < & 2 \uparrow^3 (n+7) \\ C & = & f(B) < 2 \uparrow^3 ((2 \uparrow^3 7)+7) < 7 \uparrow^3 (7 \uparrow^3 7) = 7 \uparrow^4 3 \\ f(C) & < & 2 \uparrow^3 ((7 \uparrow^4 3) + 7) < 7 \uparrow^3 (7 \uparrow^3 (7 \uparrow^4 3)) = 7 \uparrow^4 5 \\ \vdots & \vdots & \vdots \\ f^n(C) & < & 7 \uparrow^4 (n+3) \\ D & = & f^C(C) < 7 \uparrow^4 ((7 \uparrow^4 3)+3) < 7 \uparrow^4 (7 \uparrow^4 (7 \uparrow^4 7)) = 7 \uparrow^5 4 \\ & < & (3 \uparrow 3) \uparrow^5 (3 \uparrow 3) \ll 3 \uparrow^{G_1} 3 = G_2 \end{eqnarray*}
 * p-adic 15:33, October 5, 2018 (UTC)

Fantastic! Thanks so much.

- reddal

Prime Factorization Challlenge
118.216.61.125 23:15, June 14, 2018 (UTC) Hi. I am a person who is interested in large numbers. I have watched a few videos on big numbers and I have found that prime numbers are used in today's encryption (RSA specifically; typically 1024-bit and 2048-bit keys are used, and 2048 is now considered secure) because they are fairly easy to generate and easy to multiply, but their products become exponentially harder to factor on classical computers because the number of possibilities increases exponentially. The world factoring record for RSA numbers is 768 bits, which means that 768-bit RSA cannot be trusted to keep data safe anymore. This factorization was done on multiple computers in 2009. Although the official RSA prime factorization challenge was withdrawn, I am starting up a new factorization challenge starting with 288 bit numbers because of this withdrawal. Here is a 288 bit number:

71895062549397919715474927550730899351286349067725793336509356451831786196043

118.216.61.125 23:17, June 14, 2018 (UTC) I accidentally misspelled the word challenge. Sorry.
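For a sense of scale: small semiprimes fall instantly to textbook methods such as Pollard's rho, but the cost of all known classical algorithms grows superpolynomially in the bit length (record factorizations like RSA-768 used the far more sophisticated general number field sieve). A toy sketch, illustrative only:

```python
import math
import random

def pollard_rho(n):
    """Return a nontrivial factor of a composite n (Pollard's rho)."""
    if n % 2 == 0:
        return 2
    while True:
        x = random.randrange(2, n)
        y = x
        c = random.randrange(1, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n
            y = (y * y + c) % n          # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:                       # d == n means a bad cycle; retry
            return d

# e.g. pollard_rho(10403) returns 101 or 103, since 10403 = 101 * 103
```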

Which is bigger?
Hi all. Never posted before but great website and I'm regularly staggered by the intelligence of the major contributors.

Anyway, I struggle comprehending notation after Bachmann-Howard and am interested in knowing the size comparisons between:

1. SCG(13)
2. The ordinal where the SGH catches up to the FGH.
3. The Takeuti-Feferman-Buchholz ordinal?

Thanks! —Preceding unsigned comment added by Si187 (talk • contribs) 16:53, June 8, 2018 (UTC)

The ordinal where the SGH catches up to the FGH is \(\psi(\Omega_\omega)\) and the TFB is \(\psi(\varepsilon_{\Omega_\omega+1})\), so TFB is bigger. SCG(13) is a number so it's even smaller than \(\omega\), making 1<2<3. If you mean the ordinal when you approximate SCG(13), it is actually unknown, but it is proven to be bigger than or equal to \(\psi(\Omega_\omega)\) and smaller than or equal to TFB. Rpakr (talk) 19:51, June 8, 2018 (UTC)

That's so cool, thanks.

So does that mean the SCG function grows as fast as or faster than the function where the SGH keeps pace with the FGH (I've heard it called the Tau function)?

That seems absolutely incredible if so. Thought SCG and TREE were at least in the same ballpark in googological terms but sounds like they aren't even on the same planet. —Preceding unsigned comment added by Si187 (talk • contribs) 20:58, June 8, 2018 (UTC)

Googology with Real Numbers
So I've been thinking about this for a while, and I can't come up with any kind of formula or anything to describe the hyperoperators. Why would I want a formula to describe the hyperoperators? For stuff like {a,b,1.5} and {a,b,-5} which I feel are ignored. I'm just looking for some help for this stuff.

Ubersketch (talk) 13:29, June 5, 2018 (UTC)

how to math


 * Extending hyperoperators in a natural way is a famously hard, and still not conclusively solved, problem. Already extending tetration to real heights ({a,b,2} for noninteger b) is difficult.
 * I'm not quite sure what you mean with a "formula" - you won't be able to express tetration, let alone higher hyperoperators, using the common functions defined for all reals, as tetration simply grows too fast - faster than anything made out of functions like exponentials. LittlePeng9 (talk) 14:11, June 5, 2018 (UTC)
 * You can define hyperoperators for real \(b\geq 0\) like this:

\(\{a,b,c\} = a^b\) for \(0 < b < 1\)
\(\{a,b,c\} = \{a,\{a,b-1,c\},c-1\}\) otherwise
Ikosarakt1 (talk ^ contribs) 18:58, June 5, 2018 (UTC)
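Under these rules, plus the usual BEAF base case \(\{a,b,1\}=a^b\) (assumed here, not stated above), the extension can be sketched as:

```python
def hyper(a, b, c):
    """Proposed extension of {a, b, c} to real b >= 0, assuming the
    BEAF base case {a, b, 1} = a^b."""
    if c == 1 or b <= 1:     # {a, b, c} = a^b for 0 <= b <= 1
        return a ** b
    return hyper(a, hyper(a, b - 1, c), c - 1)

# agrees with integer tetration: hyper(2, 3, 2) = 2^^3 = 16
# and interpolates: hyper(2, 2.5, 2) = 2**(2**(2**0.5)) ~ 6.34
```

Note that this piecewise extension is continuous in b but generally not smooth at the integers, which is one reason the extension problem is considered unsolved.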
 * Every function on (the direct product of copies of) the set of natural numbers is easily continuously extended to (the direct product of copies of) the set of non-negative real numbers by the piecewise linear extension. What do you mean by "extending hyperoperators"? Are you assuming additional non-trivial conditions? Or are you considering an analogue of the holomorphic (not only real continuous, but also complex analytic) extension of the tetration given as the unique Kneser's solution? p-adic 22:57, June 5, 2018 (UTC)

Question about Rayo's number
I have some questions about Rayo's number and FOST.
 * 1) How do we define numbers in FOST? Do we use 0={} and a+1=a∪{a}?
 * 2) How do we know what number a FOST expression defines?
 * 3) What happens if you define function f(x) in FOST such that f(x) is the smallest natural number greater than all natural numbers that can be uniquely identified by a FOST expression of at most x symbols (which is the same as the definition of Rayo's function)? Does that make Rayo's function above a certain value and Rayo's Number ill-defined because of Berry's paradox? Rpakr (talk) 20:54, April 5, 2018 (UTC)


 * 1. Yes.
 * 2. Of course, in general, there will be no way to determine that - Rayo's function is uncomputable. But in specific cases, we can work out by hand which numbers satisfy the formula. For example, if the formula is (properly formalized version of) "the set has exactly one element and this element has no elements", then we know it defines 1. For a less trivial example, we can write formulas which include definitions of addition or multiplication, or some more complicated functions, and then we will know that the formula defines a value of this function.
 * 3. It's impossible to define this function in FOST, so there is no contradiction. This is the same kind of question as: what happens if we program a Turing machine which computes the Busy Beaver numbers? The thing is that it's impossible to program such a TM. Similarly we can't define your function in FOST. LittlePeng9 (talk) 21:13, April 5, 2018 (UTC)
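The coding from question 1 is easy to make concrete; a small sketch using frozensets:

```python
def vn(n):
    """Von Neumann numeral for n: 0 = {}, a+1 = a U {a}."""
    s = frozenset()
    for _ in range(n):
        s = s | {s}     # successor: adjoin the set itself
    return s

# vn(3) = {0, 1, 2}: it has 3 elements, and vn(2) is one of them
```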


 * 3. But why isn't it possible to define such a function in FOST? "Because it leads to contradiction" should not be the reason, because that assumes FOST and Rayo are well-defined. That does not eliminate the possibility of FOST being ill-defined because of Berry's paradox.
 * I think you can encode a FOST symbol sequence to a natural number and define g(A), where A is a FOST expression and g(A) is the smallest number that satisfies A, by making a set that contains all natural numbers and checking one by one whether n satisfies A or not. Rpakr (talk) 22:58, April 5, 2018 (UTC)


 * You cannot define this g(A), because it is not possible to define "n satisfies A", where A is any FOST statement, in FOST. This follows from Tarski's undefinability theorem, which says that any sufficiently strong theory cannot define truth in that theory.
 * Of course, in any discussion of whether a function is definable in a given theory, we must assume the well-definedness of that theory. But, you have not given a good reason why we should doubt the well-definedness of FOST, because you have not given a good reason why your g(A) should be definable. Deedlit11 (talk) 00:26, April 6, 2018 (UTC)


 * Can't you define g(A) by:
 * 1. N is the set of natural numbers
 * 2. Make the FOST statement into reverse Polish notation and encode it to a natural number
 * 3. Define f(n,A_2,A_3,A_4,...A_k), where k is the number of variables in the expression, as: let 1 be natural number n and m be set A_m. If the expression is true return 1, if not return 0 (To do that you go to the statement, decode it to the RPN version, if you find variable 1 put n, if variable p (p>1) put A_p, if you find AB∈ put true if A∈B else put false, if you find AB= put true if A=B else put false, etc. and reduce it to a "true" or a "false")
 * 4. Define g(A) as: if there exists set n, A_2, A_3, A_4,...A_k that satisfies (n∈N, f(n,A_2,A_3,A_4,...A_k) is true) then return true, else return false Rpakr (talk) 00:59, April 6, 2018 (UTC)


 * You have defined g(A), but using English. You cannot define g(A) using the language of FOST, because there is no expression for "A is true".  Which is as it should be, because truth is not something we natively associate with languages and theories; only when we attach a model to a theory do we talk about truth, within the model.  For Rayo's function, the model we are talking about is V, the cumulative hierarchy. Deedlit11 (talk) 01:17, April 6, 2018 (UTC)


 * What do you mean by "there is no expression for "A is true""? What do you mean by "attach a model to a theory"? Rpakr (talk) 01:23, April 6, 2018 (UTC)


 * I'm not sure how to say it better - there is no symbol in FOST that denotes truth. Let's say we define a theory FOSTT, which is FOST, but we have a predicate T(X), such that T(X) is true if and only if the FOSTT statement A that X encodes is true.  Then, we can make the standard diagonal argument to make an expression that basically says "This expression is false", and we have a contradiction.  So FOSTT is not well-defined.
 * But, FOST does not have such a truth predicate, so we cannot make such an argument, nor can we make your argument.
 * I was being casual with "attach a model to a theory"; I just meant a model that satisfies the theory. Normally we talk about truth in a model, not a theory.  Of course, we have been talking about truth in a theory here, so we are assuming there is an intended model that the theory was meant to satisfy.  I'm just saying this is not standard practice. Deedlit11 (talk) 01:44, April 6, 2018 (UTC)


 * "we can make the standard diagonal argument to make an expression that basically says "This expression is false"" I don't know what is "standard diagonal argument". Rpakr (talk) 11:27, April 6, 2018 (UTC)


 * The truth predicates, Rayo's function (your f) and Rayo's number can all be formally defined in second-order set theory (indeed, Rayo himself did that originally). And in second-order set theory (more specifically, in a theory like Morse-Kelley) we can prove that this function is well-defined, and that it cannot be defined in FOST.
 * As for the 'there is no expression for "A is true"', this is the content of Tarski's undefinability theorem, which states that in any formal language which can talk about natural numbers, there is no predicate T(n) such that, for every formula x with its Godel number g(x), x is equivalent to T(g(x)). LittlePeng9 (talk) 07:57, April 6, 2018 (UTC)


 * So do you mean that it is impossible to make a FOST statement that defines function h(A) where h(A) is equal to true is A is true and h(A) is equal to false when A is false for any FOST statement A? Rpakr (talk) 11:27, April 6, 2018 (UTC)


 * Basically, yes. To be specific, we need to consider h(#A), where #A is the Godel number of statement A. LittlePeng9 (talk) 12:58, April 6, 2018 (UTC)

Question about Inaccessibles
I was wondering, is there another way to write \(\psi(\epsilon_{I+1})\) with the psi function? Does this equal \(\psi_{\Omega_{I+1}}(0)\) ? I know that \(\psi(\epsilon_{\Omega_x+1})\) = \(\psi(\psi_{x}(0))\) for all finite x, and I was wondering if this is true for all x, even when x is inaccessible.

My Googology Website
This is my googology website that I made using WordPress, and it includes numbers that I have recently coined. https://plantstarslargenumbers.wordpress.com/ Sure, I may only have a couple hundred googolisms coined as of now, but I hope to add much more in the future. —Preceding unsigned comment added by PlantStarAlpineer0 (talk • contribs) 04:22, February 22, 2018 (UTC)

Measure the strength of a theory
I've read about different ways to measure the computable strength of a theory (number theory or set theory), e.g. the three listed below. Then, what are the relationships among PTO(T), the growth rate of BBT, and \(\kappa_T\)? {hyp/^,cos} (talk) 08:34, February 14, 2018 (UTC)
 * 1) The proof-theoretic ordinal of theory T. That's an ordinal.
 * 2) Provable "busy beavers". For positive integer n, take every proof in T with fewer than n symbols which shows that a 2-color Turing machine with fewer than n states halts. Then BBT(n) is the maximum number of 1's that can be written by that set of Turing machines. Use the function BBT to indicate the strength of T.
 * 3) Growth rate limit. Let \(\kappa_T\) be the least ordinal \(\alpha\) such that \(f_\alpha\) eventually dominates all functions provably total in T. That's an ordinal.


 * If an ordinal \(\alpha\) is smaller than PTO(T), then there is a recursive order of order type \(\alpha\) which T proves is well-founded. Using this order, we can define fundamental sequences for all ordinals up to and including \(\alpha\) in such a way that T proves \(f_\alpha\) total, and it follows that \(f_\alpha(n)<BB_T(2n)\) for large enough \(n\). This is, I think, all we can say without formal definitions of growth rate and without rambling about FSes. LittlePeng9 (talk) 11:03, February 14, 2018 (UTC)


 * Thank God we aren't rambling about fundamental sequences! Boboris02 (talk) 12:05, February 14, 2018 (UTC)

I think those three have some differences. For example, primitive recursive arithmetic has PTO \(\omega^\omega\), so it can prove all ordinals \(\alpha<\omega^\omega\) well-founded. However, it can't prove \(f_\alpha\) (\(\alpha<\omega^\omega\)) total. It can only prove \(f_\alpha\) (\(\alpha<\omega\)) total. In other words, it can only prove \(H_\alpha\) (\(\alpha<\omega^\omega\)) total. Here my question is: why is \(\kappa_{\text{PRA}}\neq\text{PTO(PRA)}\)? And why is it the Hardy hierarchy, instead of the fast-growing hierarchy, whose "induction level" is compatible with the PTO? {hyp/^,cos} (talk) 12:30, February 14, 2018 (UTC)

Can anyone help me about constructible hierarchy?
The constructible hierarchy is defined as follows; it can also be defined using Gödel operations (the second set of clauses below). But I'm not clear about how it works. Here are some questions I have: {hyp/^,cos} (talk) 07:13, February 12, 2018 (UTC)
 * \(L_0=\{\}\)
 * \(L_{\alpha+1}=\{\{x\in L_\alpha|(L_\alpha,\in)\models\Phi(x,y_1,\cdots,y_n)\}|\Phi\text{ is first-order formula and }y_1,\cdots,y_n\in L_\alpha\}\)
 * For limit \(\alpha\), \(L_\alpha=\bigcup_{\beta<\alpha}L_\beta\)
 * \(L_0=\{\}\)
 * \(C_0(A)=A\)
 * \(C_{n+1}(A)=\{\{X,Y\}|X,Y\in C_n(A)\}\\ \cup\{X\times Y|X,Y\in C_n(A)\}\\ \cup\{\{(x,y)|x\in X\wedge y\in Y\wedge x\in y\}|X,Y\in C_n(A)\}\\ \cup\{X-Y|X,Y\in C_n(A)\}\\ \cup\{X\cap Y|X,Y\in C_n(A)\}\\ \cup\{\cup X|X\in C_n(A)\}\\ \cup\{\{x|\exists y(x,y)\in X\}|X\in C_n(A)\}\\ \cup\{\{(x,y)|(y,x)\in X\}|X\in C_n(A)\}\\ \cup\{\{((x,y),z)|((x,z),y)\in X\}|X\in C_n(A)\}\\ \cup\{\{((x,y),z)|((y,z),x)\in X\}|X\in C_n(A)\}\)
 * \(L_{\alpha+1}=P(L_\alpha)\cap\bigcup_{n<\omega}C_n(L_\alpha\cup\{L_\alpha\})\)
 * For limit \(\alpha\), \(L_\alpha=\bigcup_{\beta<\alpha}L_\beta\)
 * 1) There is a theorem saying "\(L_\omega=V_\omega\)". Then, how to determine \(L_{\omega+1}\)?
 * 2) What kind of things belong to \(V_{\omega+1}\) but don't belong to \(L_{\omega+1}\)? Can we take an example of them?
 * 3) What does \(L_{\omega+1}\) look like?
 * \(|V_{\omega+1}|=\beth_1\), so \(V_{\omega+1}\) is the least uncountable term in the \(V_\alpha\) hierarchy. For comparison, what's the least ordinal \(\alpha\) such that \(|L_\alpha|>\aleph_0\)?
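For the finite levels the two hierarchies coincide (\(L_n = V_n\) for \(n \leq \omega\)), and they can be enumerated directly; a sketch:

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as a frozenset of frozensets."""
    items = list(s)
    return frozenset(frozenset(c)
                     for r in range(len(items) + 1)
                     for c in combinations(items, r))

def V(n):
    """V_0 = {}, V_{n+1} = P(V_n); equals L_n for finite n."""
    s = frozenset()
    for _ in range(n):
        s = powerset(s)
    return s

# |V_n| for n = 0..4: 0, 1, 2, 4, 16
```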


 * Important here is just the idea: \(L_{\alpha+1}\) is the set of subsets of \(L_\alpha\) definable in it with parameters. Just that lets us answer the last question. Indeed, we can do something stronger: we have \(|L_\alpha|=|\alpha|\) for \(\alpha\geq\omega\). The only tricky part is showing \(|L_{\alpha+1}|=|L_\alpha|\). Why is that? Well, how many subsets of \(L_\alpha\) can be defined with parameters? There are countably many formulas, and there are \(|L_\alpha|+|L_\alpha|^2+\dots=|L_\alpha|\) choices of finitely many parameters (the latter is elementary cardinal arithmetic), so there are at most \(\aleph_0\cdot|L_\alpha|=|L_\alpha|\) elements of \(|L_{\alpha+1}|\).
 * The other questions are a bit trickier, since it's not clear what a satisfactory answer would be. It may be helpful to note that every element of \(L_\omega\) is definable without parameters, so we don't need to add parameters into the definitions. There is a definable bijection between \(\omega\) and \(L_\omega\), so we may as well ask about subsets of \(\omega\) definable in it - those are called arithmetical subsets. So the answer to 1 and 3 would be that \(L_{\omega+1}\) is, essentially, the set of all "arithmetical sets". Hence for 2 it's enough to take some non-arithmetical subset of \(\omega\). An example is the set of Godel numbers of true sentences in the language of arithmetic. It is clearly contained in \(V_{\omega+1}\), but not in \(L_{\omega+1}\) (but it is in \(L_{\omega+2}\)). LittlePeng9 (talk) 10:58, February 12, 2018 (UTC)
 * Arithmetical sets... To determine whether a set belongs to \(L_{\omega+1}\) is already a quite "hard" problem. {hyp/^,cos} (talk) 08:15, February 14, 2018 (UTC)
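For reference, the cardinality count from the first reply can be condensed into a single line (for \(\alpha\geq\omega\)):

```latex
% Definable-with-parameters subsets of L_alpha: countably many formulas
% times |L_alpha| choices of finite parameter tuples.
|L_{\alpha+1}| \;\le\; \aleph_0 \cdot \sum_{n<\omega} |L_\alpha|^n
              \;=\; \aleph_0 \cdot |L_\alpha| \;=\; |L_\alpha|.
```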

How does the Xi collapsing function work?
Disclaimer: This post is not about Adam Goucher's Xi function!

I have seen the Xi function rarely around the wiki as a sort of ordinal collapsing function that uses K (the first weakly compact cardinal) as a diagonalizer. How exactly does this function work? And how does it relate to the \(\psi\) and \(\chi\) functions?

One more sidenote: I don't know how to put actual Greek letters into a post like this (instead of English words like "psi"); can someone please fill me in? —Preceding unsigned comment added by 2001:56A:71A8:5F00:A0CF:7B1F:691C:3770 (talk • contribs) 22:03, February 11, 2018 (UTC)

Did you read this blog post? I don't understand the Xi function, but maybe you can understand it. Also, to put Greek letters you write \ ( \ alpha \ ) but without the spaces. Rpakr (talk) 23:15, February 11, 2018 (UTC)

Is the Xi function ITTM-computable?
it's vel time 06:56, October 4, 2014 (UTC)

Algorithm for bounded ranks
I believe I have figured out an algorithm which does the following: suppose we are given an SKIO tree T and an ordinal \(\alpha\) coded in some standard way. Then, if T has rank at most \(\alpha\), we will reduce T and check whether it halts. If, however, it turns out that T has higher rank (or doesn't have one at all), then we will return an error. The algorithm is as follows:


 * If the combinator is S, K or I, then reduce in the obvious way
 * If the combinator is O and \(\alpha=0\), return an error
 * If the combinator is O and \(\alpha>0\), then:
 * "Guess" a value \(\beta<\alpha\)
 * Run the whole algorithm for the ordinal \(\beta\) and the tree T' which is x in the oracle query
 * If the subroutine returned "yes", reduce according to y in the oracle query
 * If the subroutine returned "no", reduce according to z in the oracle query
 * If the subroutine returned an error and not all guesses were used, repeat the above subroutine with a different value of \(\beta\)
 * If the subroutine returned an error and we ran out of guesses, return an error
 * Repeat from the first step
 * If T has reduced to I, return "yes"
 * If T reduced to something other than I, or infinitely many reductions were made, return "no"

First I'll clarify what "guessing" is - the coding of an ordinal induces a mapping \(\omega\to\alpha\), and we actually try out ordinals \(\beta\) one by one with the help of this mapping.
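The transfinite "guess a smaller rank" bookkeeping can't literally run on ordinary hardware, but the notion of rank it recurses on is the usual one for well-founded trees. Here's a minimal finite sketch; representing trees as nested tuples is a convention chosen here just for illustration:

```python
def rank(tree):
    """Rank of a finite well-founded tree given as nested tuples.

    A leaf is the empty tuple, and the rank of a tree is the least
    ordinal greater than the ranks of all its children -- for finite
    trees, simply max(child ranks) + 1.
    """
    if not tree:
        return 0
    return 1 + max(rank(child) for child in tree)
```

For instance, `rank(((), ((),)))` is 2: the two children have ranks 0 and 1.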

Now we have to check that this algorithm is sound and always terminates. We do this by transfinite recursion on \(\alpha\). If \(\alpha=0\), then if the rank of the tree is \(>0\), we must have made an oracle query. But the second step tells us that it will then return an error. If no queries were made, then we simply simulate the S, K, I combinators to see if it reduces to I.

If \(\alpha>0\), then, if an oracle query happens, we will have to make at most \(\omega\) guesses, and for each of them the subroutine terminates (by the inductive assumption), and we have at most \(\omega\) steps of the whole reduction to make. An ITTM is capable of this. Now, if the tree has rank at most \(\alpha\), then each of its queried trees will have rank less than \(\alpha\), so some guess \(\beta\) will "get it right" and return a sound result (ind. assumption). If however the tree has rank \(>\alpha\), one of its queries will have rank \(\geq\alpha\), so no guess will be able to "get it right". Thus we will constantly get errors, and when we run out of guesses, we will return an error. So the algorithm is sound.

Now suppose we have an ordinal \(\alpha\) which is an upper bound on the possible ranks of SKIO trees. Then putting any well-founded tree into the algorithm along with \(\alpha\) will give us the result. If the tree is ill-founded, we will still get an error. So, given this ordinal \(\alpha\), we can even solve the problem of well-foundedness. Thus we can compute the Xi function by sieving out ill-founded and non-terminating trees.

Thanks to this algorithm, the only thing we now have to figure out is the size of this ordinal \(\alpha\). If it's writable, we are home. LittlePeng9 (talk) 19:40, October 4, 2014 (UTC)

Trees with ranks up to CK ordinal
Deedlit has asked me for this, so I'll show here that \(\omega_1^\text{CK}\) is a lower bound on the supremum of possible ranks. My argument shows the following: for every Turing machine coding a well-order of order type \(\alpha\), we can find an SKIO tree of rank \(\alpha\). The induction base is at 0, and this case is quite trivial: we can just map a machine coding 0 to any SKI tree. Suppose we are given a Turing machine \(T\) which codes a well-order \(\prec\) on \(\Bbb N\) of order type \(\alpha\). Now we let \(T_n\) be a similar machine, except that it "skips" over the numbers which are \(\prec\)-greater than \(n\). We can make it so that it actually codes a well-order. Now here is an important thing: \(T_n\) can be computed from the description of \(T\) and \(n\). I'm not going to go into the details of such a computation, but it's relatively easy to see. Another point: if in the ordering of \(T\) the number \(n\) represents the ordinal \(\alpha_n\), then the machine \(T_n\) codes the ordinal \(\alpha_n\). This is again quite easy to see. So, as we go through all \(n\), the machines \(T_n\) will code all possible ordinals \(<\alpha\). Now we are getting to trees - we set up a tree, based on the description of \(T\), which first computes the description of the machine \(T_0\), constructs the corresponding tree of rank \(\alpha_0\) (by transfinite induction), and then applies the oracle combinator to this tree. Then we do the same thing for \(T_1\) and \(\alpha_1\), \(T_2\) and \(\alpha_2\), and so on. Note that the oracle combinator is applied only to trees previously built by transfinite induction, so by a parallel argument we can see that everything else can be done with the SKI combinators alone. So the rank of the resulting tree is the least ordinal greater than all the \(\alpha_n\), which is exactly \(\alpha\), so \(\alpha\) is the rank of our tree.

Because Turing machines can code orderings of any order type \(<\omega_1^\text{CK}\), we get that there are trees of every recursive rank. LittlePeng9 (talk) 20:31, October 8, 2014 (UTC)
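The \(T\mapsto T_n\) step really is ordinary computation. As a sketch, suppose the ordering is handed to us as a Python predicate `prec(a, b)` deciding \(a\prec b\), rather than as a Turing machine description; both the name `restrict` and that representation are choices made here purely for illustration:

```python
def restrict(prec, n):
    """Given prec(a, b) deciding a < b in an ordering of the naturals,
    return that ordering restricted to the strict predecessors of n,
    i.e. the analogue of T_n, which "skips" every number not below n.

    If n represents the ordinal alpha_n in the original well-order,
    the restricted ordering has order type exactly alpha_n.
    """
    def prec_n(a, b):
        return prec(a, b) and prec(a, n) and prec(b, n)
    return prec_n
```

For example, `restrict(lambda a, b: a < b, 5)` keeps exactly {0, ..., 4} with the usual order, an ordering of type 5.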


 * Thanks, Wojowu. Forgive my SKIO ignorance, but how do you create a finite tree that applies oracle combinators to infinitely many trees? Deedlit11 (talk) 21:54, October 8, 2014 (UTC)


 * To be honest, I have no idea. This is one of these things where you just have to believe that, since the construction I gave doesn't require anything stronger than TM, we can also make an SKI tree which will do the thing for us. This is why I remarked that \(T_n\) is computable by \(T\) and \(n\). LittlePeng9 (talk) 05:01, October 9, 2014 (UTC)

Upper bound of complexity of SKIO
I was able to show that SKIO calculus is \(\Delta^1_2\), from which it follows that it is computable by Koepke's ordinal Turing machines (OTMs). To be precise, the problem "given an SKIO tree T, does it reduce to some irreducible tree?" can be equivalently stated as "there exists an ordinal \(\alpha\) such that the algorithm from the beginning, run on input T and \(\alpha\), halts with the tree in irreducible form". Because the algorithm is executable on an ITTM, the problem of asking whether it returns a given answer is a \(\Delta_2^1\) problem, by a result of Hamkins & Lewis. So the question of whether there exists an ordinal such that this holds is \(\Sigma_2^1\). Alternatively, we can say that the tree becomes irreducible if "for every ordinal \(\alpha\), the algorithm, run on input \(\alpha\), T, either returns that the tree didn't reduce, or returns an error". This in turn is a \(\Pi_2^1\) statement. So the question of T reducing to some irreducible form is in \(\Sigma_2^1\cap\Pi_2^1=\Delta_2^1\). Thus, by a result of (I believe) Koepke, which states that the OTM-decidable problems are exactly the \(\Delta_2^1\) problems, we get that OTMs can determine whether an SKIO tree has a normal form or not. As a corollary, the Xi function is OTM-computable. LittlePeng9 (talk) 05:37, October 11, 2014 (UTC)


 * Okay, a simpler proof: we can modify the algorithm from the beginning so that we won't be working with an ordinal coded as a real, but rather with a single mark far out on the ordinal tape. So, for example, if we are given a tree T and the ordinal \(\omega^2\), we just have the tree T written as usual and a single mark on square number \(\omega^2\). When we are running a subroutine, we have to copy the ordinal, but this isn't that hard - we can from the beginning code the tree on every second cell of the tape, and then use the empty squares to know how big the ordinal is and to copy it. The problem of trying out all smaller ranks also simplifies - we can just try them out transfinitely, one by one. And now, we can just try out all possible initial ordinals, looking for the rank of the tree T. This only works for trees which are well-founded, because we won't find an answer when the tree doesn't have a rank. Thankfully, the first proof seems to still be flawless. LittlePeng9 (talk) 05:57, October 11, 2014 (UTC)

Lower bound of complexity of SKIOO2
Okay, this time it's not about SKIO, but about SKIOO2 - it's not hyperarithmetical. Recall that the problem of deciding whether a Turing machine codes a well-order is a \(\Pi_1^1\)-complete problem (it follows that it's not \(\Sigma_1^1\) and thus not \(\Delta_1^1\), which is the class of hyperarithmetical things). For a given TM, we will build an SKIO tree T which is well-founded iff the TM indeed codes a well-order. T will work in "turns" - on odd-numbered turns, it will check whether the TM codes a total order - this is a simple syntactic property to check; if it happens to fail, we will force our tree into an ill-founded one - if P is the paradoxical combinator, we will just ask the oracle about it. Now, on even-numbered turns, we will try to build an infinite descending sequence. We will keep checking pairs \((a_1,a_2)\) to see whether \(a_1\succ a_2\) in the ordering induced by the machine. If we find such a pair, then we create a similar subtree (described in the next sentence), apply the oracle combinator to it, and then continue looking for such pairs. The new tree will check for pairs \((a_2,a_3)\) with \(a_2\succ a_3\), and if it finds one, it will repeat the construction and application. Now I claim that the resulting SKIO tree is well-founded (has a rank) iff the ordering which the TM gives is a well-order.

Suppose that the ordering isn't a well-order. Then we can actually find an infinite descending sequence \(a_1\succ a_2\succ...\), so we can find an infinite series of nested oracle queries. If the tree had a rank, then every queried tree would have a rank too, and they would have smaller ranks. Indeed, they would form a decreasing sequence of ordinals, but such a sequence cannot be infinite. So the tree is ill-founded.

Now suppose that the tree is ill-founded. Thus it doesn't have a rank. If all of its queried trees had ranks, then it would itself have a rank, which is impossible. So, for some pair \(a_1\succ a_2\) it makes a query which is ill-founded. So the queried tree doesn't have a rank either, so for some pair \(a_2\succ a_3\) it would make an ill-founded query, and so on. This forms an infinite descending sequence \(a_1\succ a_2\succ a_3\succ...\), thus the ordering induced by the machine isn't a well-order.

So, because by the definition of the \(\Omega_2\) combinator it can resolve the question of the well-foundedness of a tree, SKIOO2 is not hyperarithmetical, and we can deduce that it can solve all problems in \(\Sigma_1^1\cup\Pi_1^1\). LittlePeng9 (talk) 11:26, October 11, 2014 (UTC)

Here is an attempt to construct an SKIOO2 tree of rank \(\omega_1^\text{CK}\) and no lower. I'll remark that, when calculating a rank, I don't take into consideration queries made by the \(\Omega_2\) combinator, because otherwise it would give us no power: queries to ill-founded trees would themselves be ill-founded.

I'm not entirely sure if this is right, but here we go: first, note that my construction in the second section can be executed by a TM. So we can set up a single tree which, for every TM, first checks whether the order it codes is a well-order, using the construction above, and if it is, uses the construction from the second section to make a tree whose rank is the ordinal which that machine codes, and then makes an \(\Omega\) query for that tree. This way, the resulting tree will have rank at least as high as any ordinal which a TM can code, that is - \(\omega_1^\text{CK}\). LittlePeng9 (talk) 19:32, October 12, 2014 (UTC)


 * I'm not sure, but I think that the fact that we can effectively compute a well-order of order type CK, as well as solve any \(\Pi_1^1\) problem, is enough to prove that we can have trees of ranks up to \(\omega_2^\text{CK}\). LittlePeng9 (talk) 19:34, October 12, 2014 (UTC)


 * Perhaps the best way to show that is to use Sacks' result that the supremum of ordinals codeable by an oracle Turing machine is always an admissible ordinal (Did I state that right?). So with an oracle for \(\omega_\alpha^\text{CK}\), we can code all ordinals up to \(\omega_{\alpha+1}^\text{CK}\). However, I'm not sure how to convert from "we can code ordinal \(\alpha\)" to "there exists a tree of rank \(\alpha\)".


 * Assuming your arguments are correct, it seems then that with an \(\Omega_3\) combinator we can make a tree of rank \(\omega_2^\text{CK}\), by a similar argument. Then if we can argue that the limit of any oracle combinator system must be an admissible ordinal, then we know that SKIOO2O3 must be able to construct all trees of rank up to \(\omega_3^\text{CK}\). Similarly, we would have that with oracle combinator \(\Omega_\alpha\), we can construct trees with all ranks up to \(\omega_\alpha^\text{CK}\). Deedlit11 (talk) 20:15, October 12, 2014 (UTC)


 * I believe this is indeed correct. Below I explain this in a bit more detail. LittlePeng9 (talk) 20:33, October 12, 2014 (UTC)

Okay, this is a bit of a workaround, but I'm not a specialist in \(\alpha\)-recursion theory, and I don't know what's obvious and what isn't there :P

Suppose that we have an oracle Turing machine with access to an oracle A saying which SKIO trees are well-founded and which are not. As we have seen in the section above, this is enough to decide whether a TM codes a well-ordering. With a similar trick as there, we can make a machine which codes a well-ordering of order type CK. Let us list all the TMs which code a well-order. Then we make the following ordering: \((a,b)\prec(c,d)\) if \(a<c\), or \(a=c\) and \(b\prec d\) in the ordering induced by the \(a\)th machine on the list. This is the union of all possible recursive orderings, so it has the desired order type.
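The pair ordering in the last paragraph is easy to write down concretely. A sketch, assuming each listed machine's order is given as a Python predicate (the representation and the name `union_order` are mine, for illustration only):

```python
def union_order(orders):
    """orders[i](b, d) decides b < d in the i-th listed well-order.

    Returns the ordering on pairs: (a, b) < (c, d) iff a < c, or
    a == c and b < d in the a-th order -- the listed orders laid end
    to end, so the order type is at least that of each listed order.
    """
    def prec(p, q):
        (a, b), (c, d) = p, q
        return a < c or (a == c and orders[a](b, d))
    return prec
```

With two copies of the usual order on \(\Bbb N\), for instance, this yields an ordering of type \(\omega+\omega\).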

Now, by a theorem of Sacks, the supremum of the A-recursive ordinals is admissible, and, because it's greater than \(\omega_1^\text{CK}\), it must be at least \(\omega_2^\text{CK}\).

Because SKIOO2 is at least as strong as the oracle A, we can actually compute that oracle, and, by a construction similar to the one in the second section, I believe we can build trees of any rank below \(\omega_2^\text{CK}\). LittlePeng9 (talk) 20:33, October 12, 2014 (UTC)

Problem solved
A couple of days ago I managed to answer the question in the positive - Goucher's xi function is ITTM-computable. This follows from the algorithm for bounded ranks as soon as we prove that the ranks are bounded by \(\omega_1^\mathrm{CK}\). This bound follows from the following three facts. Fix some well-founded SKIO combinator.


 * Using a straightforward way of encoding SKIO reductions which includes reductions of all the oracle-called combinators, let \(C\) be the encoding of reduction of the combinator. Then \(\{C\}\) is \(\Sigma^1_1\) (indeed, it's something like \(\Pi^0_2\)). This is because the encoding is characterized by the fact that all the steps are valid, which most importantly includes checking that, upon an oracle call, the halting judgement is valid, and this can be encoded in an arithmetical formula.
 * Gandy basis theorem (theorem 2.9 in [www.math.cornell.edu/~shore/papers/ps/Sigma1116.ps this paper]) tells us that any nonempty \(\Sigma^1_1\) set contains an element \(A\) such that every ordinal computable from \(A\) is computable. Applying this to \(\{C\}\), every ordinal computable from \(C\) is computable.
 * The rank of the combinator is upper-bounded by an ordinal computable from \(C\). This follows almost immediately from the fact that the Kleene-Brouwer ordering of a tree is computable from the tree and, when the tree is well-founded, is a well-order of length at least the rank of the tree. Applying this to the tree built from oracle calls, which is well-founded since we assume the combinator is, we get the bound.

Clearly, the above imply that the rank of a well-founded SKIO tree is a computable ordinal.

By the way, I have also found that the method of the last section above can be generalized to compute iterated hyperjumps, so the upper bound for the ranks of SKIOO2 combinators seems to be a decently large countable ordinal - my guess is the first recursively inaccessible ordinal. From there, my wild guess is that for SKI augmented by n+1 oracle combinators the ordinal is going to be the first recursively n-inaccessible ordinal. As of now I am unable to prove any of that though.

There appears to be quite some literature about computability from functionals which remind me of the oracles in how they can, in a way, apply the functionals to themselves, and the methods used there might be applicable here, but I don't have enough time to work on that. Maybe one day someone else will pick up on that.

As a teaser, the results from there suggest that if we were to introduce transfinite oracles, and then an oracle whose order would depend on a rank of a combinator it has as an input, then the ordinals computable would go up to the least \(\alpha\) which is recursively \(\alpha\)-inaccessible (i.e. recursively hyperinaccessible).

Also, if all those ordinals are even close to being correct, then all of the functions are ITTM-computable, so that's a nice result. LittlePeng9 (talk) 16:23, February 2, 2018 (UTC)

Laver tables
The latest post from our favorite blog alerted me to an interesting function based on Laver tables. I'll repeat the definition here:


 * For nonnegative integers \(n\), define \(L_n(a, 1) = a + 1 \pmod{2^n}\) and \(L_n(a, L_n(b, c)) = L_n(L_n(a, b), L_n(a, c))\). \(p(n)\) is the period of the function \(k \mapsto L_n(1, k)\).

The first few values of \(p(n)\) are 1, 1, 2, 4, 4, 8, 8, 8, 8, 16, 16, 16, 16, ... Since we're googologists, we'll invert this for a fast-growing function: let \(p^{-1}\) enumerate the values of \(n\) where \(p(n) > p(n - 1)\), starting with \(p^{-1}(0) = 0\).
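The definition above pins \(L_n\) down completely: since \(q = L_n(q-1, 1)\), self-distributivity gives \(L_n(p, q) = L_n(L_n(p, q-1), L_n(p, 1))\), and the table can be filled in from the bottom row up (every entry in row \(p < 2^n\) is greater than \(p\), a standard fact about Laver tables). A sketch of \(p(n)\) along these lines; the function name is mine:

```python
def laver_period(n):
    """Period of the first row k -> L_n(1, k) of the 2^n-element Laver table."""
    N = 1 << n
    T = [[0] * (N + 1) for _ in range(N + 1)]  # T[p][q] = L_n(p, q), 1-indexed
    for p in range(1, N + 1):
        T[p][1] = p % N + 1  # L_n(p, 1) = p + 1 mod 2^n, representatives 1..2^n
    for p in range(N, 0, -1):
        for q in range(2, N + 1):
            # q = L_n(q-1, 1), so L_n(p, q) = L_n(L_n(p, q-1), L_n(p, 1));
            # each lookup hits an already-filled row > p, or the first column
            T[p][q] = T[T[p][q - 1]][T[p][1]]
    row1 = [T[1][q] for q in range(1, N + 1)]
    d = 1  # the period is always a power of two dividing 2^n
    while any(row1[k] != row1[k % d] for k in range(N)):
        d *= 2
    return d
```

Running this for n = 0, ..., 5 reproduces the listed values 1, 1, 2, 4, 4, 8.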

Apparently \(p\) is divergent (and therefore \(p^{-1}\) is total) iff there exists a rank-into-rank cardinal, a class of large cardinals of truly gigantic proportions. Any thoughts? FB100Z • talk • contribs 07:38, December 19, 2013 (UTC)

The "iff" part you wrote isn't quite correct; we don't know whether divergence of p implies the \(I_n\) axioms. My platonistic point of view suggests to me that this can't be eventually constant. My inner googologist goes the same way, hoping this gives us a recursive function unprovable to be total in ZFC+I0. LittlePeng9 (talk) 17:14, December 19, 2013 (UTC)
 * I spent like two minutes contemplating whether to use "if," "only if," or "iff."
 * I agree that \(p\) should be divergent, and that \(p^{-1}\) is total and unprovable in ZFC+I0. I suspect that \(p^{-1}\) is incredibly powerful — far greater than TREE, SCG, Buchholz, or Loader.
 * Looking ahead, the next step would be a function based on Reinhardt cardinals. FB100Z • talk • contribs 22:29, December 19, 2013 (UTC)
 * Sadly, no. Reinhardt cardinals are proved to be incompatible with ZFC. 80.98.179.160 14:29, January 25, 2018 (UTC)
 * But not with ZF. LittlePeng9 (talk) 14:38, January 25, 2018 (UTC)

Ultimate Ordinal Notation
Let's start with the basics.

The 1st set contains a number of rules up to \(\epsilon_0\).

The 2nd set contains a number of new rules up to \(\Gamma_0\).

The 3rd set contains a number of new rules up to SVO.

The 4th set contains a number of new rules up to LVO.

And so on.

First I need a function: O

At 1st set:

1. O(A,0) = \(\omega^A\)

2. O(-1,A) = A

3. O(A,B) = \(\omega^A+B\)

And the 2nd set:

Let @ be anything.

4. O(@ \(\Omega_{A+1}(B+1)\),C)[D+1] = O(@ \(\Omega_{A+1}B+\Omega_A\cdot\)O(@ \(\Omega_{A+1}(B+1)\),C)[D],C)

5. O(@ \(\Omega_{A+1}(B+1)\),C)[1] = O(@ \(\Omega_{A+1}B\),C)

Can anyone make a 3rd set of rules up to SVO? AarexTiao 18:14, September 8, 2013 (UTC)


 * To be honest, I see nothing special or new in this notation. FB100Z • talk • contribs 19:40, September 12, 2013 (UTC)
 * You silly, Aarex gave it a new name! LittlePeng9 (talk) 20:43, September 12, 2013 (UTC)
 * Granted, about 30% of googology is all about having fun with names. FB100Z • talk • contribs 07:19, September 14, 2013 (UTC)
 * How did you calculate that it's exactly 30%? Ikosarakt1 (talk ^ contribs) 07:46, September 14, 2013 (UTC)


 * One third is inventing notation, one third is finding value, one third is giving it name. These can actually be a bit lower, because of ordinal notations, unnamed numbers etc. LittlePeng9 (talk) 10:00, September 14, 2013 (UTC)
 * Once we have invented a notation, the number has already got a name. Even, say, 218959382 has a name, as I have already named it in the language of decimal digits. We can pronounce it as "two one eight nine five nine three eight two", and as we can write infinitely many numbers using decimal notation, we can name infinitely many numbers. Even a number which seems to be unnamed, \(f_{67}(43)\), will get a special name (although it will be so long that it wouldn't fit in the Observable Universe)! Ikosarakt1 (talk ^ contribs) 11:23, September 14, 2013 (UTC)
 * Ikosarakt, I hope you know my post was saying that jokingly :) LittlePeng9 (talk) 11:56, September 14, 2013 (UTC)
 * I know, but sometimes I understand things more seriously than they are. Ikosarakt1 (talk ^ contribs) 13:09, September 14, 2013 (UTC)
 * So what is UON? AarexTiao 23:34, December 10, 2013 (UTC)
 * Nothing new. FB100Z • talk • contribs 01:06, December 11, 2013 (UTC)
 * Why? AarexTiao 00:14, December 20, 2013 (UTC)

Testing for Edwin Shade
Seems to work. —Succeeding unsigned comment added by Deedlit11 (talk • contribs) 21:49, September 9, 2017 (UTC)

Wait, I forgot. Did I post this or did someone else? Edwin Shade (talk) 01:48, October 6, 2017 (UTC)

Reddit for Googology
Has anyone ever thought about creating a subreddit for Googolology?

Fargoniac (talk) 22:26, March 22, 2015 (UTC)

it exists. also it's googology not googolology. Cookiefonster (talk) 22:56, March 22, 2015 (UTC)

"Really Big Numbers" book
Today I read a review of the book "Really Big Numbers" by Richard Evan Schwartz. I'm not sure how large the numbers the author explains in it get, but from the review I can guess he goes at least as far as Steinhaus-Moser notation. I don't think he goes to the level of uncomputable functions. Anyone in here interested in getting the book and sharing their thoughts? LittlePeng9 (talk) 06:07, June 1, 2014 (UTC)
 * Checking out Amazon's "Look Inside" feature, the illustrations look very crude, flat, and unattractive. I really wouldn't want to dish out US$25 for this. you're.so.pretty! 20:02, June 1, 2014 (UTC)
 * Well, it is a kid's book anyway, but we haven't yet seen the big numbers. If they're truly large, I'd buy it (though really, I wouldn't; $25 is like P1100 or so). I'd say "One to Infinity" is at least worth buying. (If it does have a price) King2218 (talk) 23:41, June 1, 2014 (UTC)



you're.so.pretty! 22:51, June 2, 2014 (UTC)

... ummmm King2218 (talk) 15:32, June 6, 2014 (UTC)

My number
Biggest number?

I was wondering how my number stacked to other numbers so here it is:

((10^10^10^10^10^10^10^10^10^10^10)->->->->->->->->->->(10^10^10^10^10^10^10^10^10^10^10))!!!!!!!!!!=x

((x^x^x^x^x^x^x^x^x^x^x)->->->->->->->->->->(x^x^x^x^x^x^x^x^x^x^x))!!!!!!!!!!=y

Repeat process, after z wrap to a.

Once back to x, it's now x+.

After x+, every time you get to a new letter, add 1 power (to both letters), 1 chain arrow, and one factorial.

Repeat this process an x++++++++++ amount of times. —Preceding unsigned comment added by 74.129.98.214 (talk • contribs)


 * Yes, yes, your number is the biggest. You won! Our community is surprised by your talent!--Konkhra (talk) 21:11, August 22, 2014 (UTC)
 * Salad, AND ill-defined. Who here can show him? WikiRigbyDude (talk) 21:14, August 22, 2014 (UTC)

But this number bigger!

x+++...+++ where there is x+++...+++ +'s, where there is x+++...+++ +'s, where there is x+++...+++ +'s, where there is x+++...+++ +'s, where there is x+++...+++ +'s, where there is x+++...+++ +'s, where there is x+++...+++ +'s, where there is x+++...+++ +'s, where there is x+++...+++ +'s, where there is x +'s.

Did I win? AarexTiaokhiao 21:53, August 22, 2014 (UTC)


 * No, Aarex. After the process is repeated x++++++++++ times, the number of +'s after x becomes so huge that iterating the number +'s won't even compare. WikiRigbyDude (talk) 22:21, August 22, 2014 (UTC)

yo
Hey, folks! A reminder to please be respectful and polite to newcomers.

Anyways, here's my analysis. My first question is, what does the "->" symbol mean? I understand that it's a chained arrow, but chained arrows always look like a->b->c->d and not a->->->->d. It seems you are using a variant of chained arrows that you've written yourself. That's fine, but a very important part of googology is to clarify all notations used.

I don't understand the bit about "every time you get a new letter add 1 power to both letters..." Can you specify this explicitly and formally?

Finally, what process are we repeating x++++++++++ times?

you're.so.pretty! 23:24, August 22, 2014 (UTC)
 * The a->->...(x times)...->b is a->->...(x-1 times)...->a...(b times)...a->->...(x-1 times)->a. And "add 1 power" means add a 10 to the power tower. And the process to get x+, x++, x+++, x++++,... is repeated x++++++++++ times. 80.98.179.160 18:14, December 30, 2017 (UTC)

Practical BEAF
Is BEAF calculable beyond tetration arrays? For example, how should we calculate \(X \uparrow\uparrow\uparrow X\) with base three? We can point out base, prime, etc., but are there any writing systems for the next step? --Nayuta Ito (talk) 09:38, October 25, 2015 (UTC)


 * no, BEAF is ill-defined beyond tetrational arrays
 * PS: why "practical" BEAF? Fluoroantimonic Acid (talk) 09:59, October 25, 2015 (UTC)
 * No, it isn't possible. BEAF is ill-defined beyond tetrational arrays. Many googologists have tried to define BEAF beyond tetrational arrays, but these attempts are just unofficial, as the googologists say. In the future, I might make a ruleset for BEAF using the rulesets of the notations and extensions that make up BEAF, so it's more official-like, but I don't know when I will make those. Also, Jonathan Bowers is currently attempting to define BEAF; maybe in the future we will see BEAF well-defined beyond tetrational arrays. -- From the googol and beyond -- 15:17, October 25, 2015 (UTC)
 * OK, I understand. PS: Because the notation of BEAF becomes a practical problem when you calculate with it.--Nayuta Ito (talk) 00:21, November 3, 2015 (UTC)
 * Sorry, it is well-defined due to ω↑↑↑ω=2φ0, ω{n}ω=(n-1)φ0 using the φ as a "Veblen operator" and n<ω. But plugging in n=ω, we get ωφ0, and plugging in n=ω+1, we get Γ0 (1φ0φ0). 80.98.179.160 14:01, December 27, 2017 (UTC)

Xi(8) and higher
On the wiki page for the Xi function, it only shows the small values 1-7. I have found a stronger lower bound for \(\Xi(8)\): \(\Xi(8) \geq 500\). I want you to solve (or try to solve) \(\Xi(8)\) and higher values of \(\Xi(n)\) (if you want to) and put them here. (The Fibonacci sequence bound is 96+) 107.77.161.5 15:51, October 12, 2017 (UTC)

WaxPlanck (talk) 13:40, October 14, 2017 (UTC) I have been trying to compute \(\Xi(8)\) and I got a much bigger lower bound than the previous user: \(\Xi(8) > 2 \uparrow\uparrow 7\). (I'll increase it over time.)

Could you show us the combinator that reaches that length? Tomtom2357 (talk) 09:07, October 23, 2017 (UTC)

WaxPlanck (talk) 22:28, December 3, 2017 (UTC) I retracted the claim a while back because the combinator hit infinite length. (It was SII(SSI(SO)).) But, I found \(\Xi(8)\) > \(87\) using the combinator SISSSSSO.

Last digits of tree(3)
TREE(3) is an enormous number, proven to be even larger than Graham's Number. While there is no way to calculate the last decimal digits, it is possible to calculate the last binary digits. This can be done by recording the total number of tree "branches" on every odd number in binary digits. (We start with 0, not 1) —Preceding unsigned comment added by 107.77.161.10 (talk • contribs) 10:59, September 17, 2017 (UTC)


 * The problem with trying to calculate TREE(3) is that the maximal sequence for TREE(3) is not given by any specific procedure; rather, it is the longest sequence that satisfies certain requirements, so it is hard to imagine how you could find it other than by searching over all possible such sequences.
 * If you indeed have a way to calculate the last binary digits, could you please explain further? You haven't given nearly enough for us to understand what you are talking about.
 * I will say that I would be willing to bet that TREE(3) is odd, and the same goes for any TREE(n). This will be true if the maximum sequence is a sequence of nonpaths followed by a sequence of paths, or if it is a sequence of trees of height greater than 1 followed by a sequence of trees of height 1. (Both ending on the root tree, of course) The problem is, I don't know how one would prove that we can't have a path or a tree of height 1 come up earlier in the sequence. (It may be possible.) Deedlit11 (talk) 23:54, September 20, 2017 (UTC)


 * Here is how to do it:


 * Let n be the "branch height", or how far down the branch goes. Also, let d be the number of digits, or the number of nonnegative odd numbers at or below n. Starting with n = 1 and any odd value n, we begin this process:


 * 1. We use the binary number 11 and keep the last digit (1) because d = 1.
 * 2. Next, we go down to n = 3. Here, we have the number of dots at 1011. Now, we keep 11 and those are now that last binary digits.
 * 3. To further prove my point, let's change n to 5. Now, we have the number 11111.
 * 4. Continue the pattern if you want.


 * That's how we calculate the last binary digits of TREE(3) —Preceding unsigned comment added by 222.97.145.127 (talk • contribs) 00:03, October 12, 2017 (UTC)


 * The branch height of what? We don't know what the specific trees are in the longest possible sequence. Also, you say "let d be the number of digits, or the number of nonnegative odd numbers at or below n". Which one is it? Those are two different things. Then you say "Starting with n = 1 and any odd value n" - is n equal to 1 or any odd value? Please be more clear. Deedlit11 (talk) 00:25, October 12, 2017 (UTC)


 * Here is a link for a picture to explain it: http://108.50.218.20/tree3digits.jpg —Preceding unsigned comment added by 107.77.161.12 (talk • contribs) 19:25, October 12, 2017 (UTC)


 * User Deedlit11 just said that he wants to know what the branch height is, and what I mean by "let d be the number of digits, or the number of nonnegative odd numbers at or below n". As for the question about "Starting with n = 1 and any odd value n": n is equal to any odd value, including 1. I will reiterate what I have said before in a more clear and thought-out way (I'm re-explaining it almost from scratch):


 * 1. Use the argument as the variable d and accept any d > 0.
 * 2. Assign the variable n for n branches or sticks down as (2d + 1). (the branches equal the height of the tree.)
 * 3. Go down the tree to a height of n (the tree we are using is n high).
 * 4. Count the number of points or connectors (the dots).
 * 5. Convert the number of points/dots to binary.
 * 6. Only use the last d digits.
 * 7. Those are your last binary digits of TREE(3).
 * —Preceding unsigned comment added by 222.97.145.127 (talk • contribs) 22:08, October 13, 2017 (UTC)


 * WaxPlanck (talk) 13:44, October 28, 2017 (UTC) The other user didn't understand how to do this, but here is how the right way works:


 * 1. Start with 3 dots. (we'll call them "seeds".)
 * 2. Make the dots red, green, and blue. (This gives you the digit of 1.)
 * 3. Make the red dot have nothing on it. (it dies.)
 * 4. Draw a line from the green dot down to the green dot. (It dies after that.)
 * 5. Now, draw a red and green dot down from the blue dot. (this doesn't count as a provable digit, all of the branches in this "forest" after this will.)
 * 6. Draw 3 lines down from the red with red, green, and blue and 4 lines down from the green with red, green, blue, and red. (This gives you the digits 11.)
 * 7. Draw more and more down from each branch, each branch being 1 more tree than the last.
 * 8. Keep going, repeating step 7.
 * NOTE: If your tree has the same start and end colors, this means that the tree DIES.
 * (By using this method, I managed to compute the last 3 binary digits: 011.)


 * How do you know that this method gives you the digits of TREE(3)? LittlePeng9 (talk) 14:07, October 28, 2017 (UTC)


 * WaxPlanck (talk) 15:51, October 28, 2017 (UTC) Because those digits repeat later.

Did Taranovsky actually reach ZFC?
Remember this ordinal notation (by Dmytro Taranovsky) you tried hard to understand?

A few days ago, Taranovsky updated that page, writing that he found that his notation may go beyond second-order arithmetic, and even ZFC. He found that the ordinal system in "Degrees of Reflection" is not identical to the one in the n=2 system in "Ordinal Notation System for Second Order Arithmetic", and it appears that the latter is much stronger.

Using the C function from the "Ordinal Notation System for Second Order Arithmetic" section, he said that if \(d=C(\Omega_2,C(\Omega_2 2))\), then \(C(\Omega_2+d,0)\) is the least recursively inaccessible ordinal, \(C(\Omega_2+d^2,0)\) is the least recursively Mahlo ordinal, and so on, in the same way as the notation in "Degrees of Reflection".

\(C(\Omega_2 2,0)\) would be beyond all that, and using a certain working system(?), \(C(\Omega_2^2,0)\) corresponds to the first uncountable ordinal, and this is followed by ordinals with even larger cardinalities. And this way, this ordinal system may extend beyond ZFC, exceeding the strength of Loader's function and maybe even Friedman's finite promise games.

So what do you think about it? Do you think Taranovsky actually reached ZFC with his notation? -- ☁ I want more clouds! ⛅ 02:27, January 15, 2014 (UTC)

Oh, I don't understand how that notation works. Is it defined by recursion? Or does it mean some ordinals that "fit certain properties" (or by combinatorics)? hyp$hyp?cos&cos (talk) 03:34, January 15, 2014 (UTC)

\(C(\Omega_2,C(\Omega_2 2))\) might and will be replaced by \(C(\Omega_2+C(\Omega_2,C(\Omega_2,C(\Omega_2 2))),0)\), shouldn't it?--KurohaKafka (talk) 10:28, October 26, 2017 (UTC)

Triangular Number Notation
Introduction to Repeated Triangular Number Notation

xTy = triangular number of x repeated y times

xTT1 = xTx

xTT2 = xTxTx

xTTx = x then Tx x times from right to left

xTTT1 = xTTx

xTTT2 = xTTxTTx

x{y}z = x then T y times then z x times from right to left

Example: 3{3}3 = 3TTT3TTT3TTT3, 3{2}3 = 3TT3TT3TT3, 3{3}2 = 3TTT3TTT3, 2{3}3 = 2TTT2TTT2TTT2

—Preceding unsigned comment added by 45.26.79.90 (talk • contribs) 17:59, July 6, 2017 (UTC)
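The base operation above can be sketched in a few lines of Python. This is only one reading of the notation (it assumes "xTy" means the triangular-number map x(x+1)/2 applied to x, y times; the function names T and rep_T are mine, not the original poster's):

```python
def T(x):
    """Triangular number of x: 1 + 2 + ... + x."""
    return x * (x + 1) // 2

def rep_T(x, y):
    """xTy under the reading above: apply the triangular-number map to x, y times."""
    for _ in range(y):
        x = T(x)
    return x
```

Under this reading, 3T2 = T(T(3)) = T(6) = 21. The higher rules (TT, TTT, {y}) then bootstrap off rep_T by the right-to-left expansions given above.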

Additions to the "Xappol series"
Xappolgong- {10, 10000 (2) 2}

Xappolbong- {10, 10000000 (2) 2}

Xappoltrong- {10, 10000000000 (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xappolquadrong- {10, 10000000000000 (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Etc…

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xappolplexigong- {10, xappolgong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xappolplexibong- {10, xappolbong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xappolplexitrong- {10, xappoltrong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xappolplexiquadrong- {10, xappolquadrong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Etc…

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xippol- {xappol, xappol (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xippolgong- {xappol, xappolgong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xippolbong- {xappol, xappolbong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xippoltrong- {xappol, xappoltrong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xippolquadrong- {xappol, xappolquadrong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Etc…

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xippolplex- {xappol, xippol (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xippolplexigong- {xappol, xippolgong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xippolplexibong- {xappol, xippolbong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xippolplexitrong- {xappol, xippoltrong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xippolplexiquadrong- {xappol, xippolquadrong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Etc…

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xoppol- {xippol, xippol (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xoppolgong- {xippol, xippolgong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xoppolbong- {xippol, xippolbong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xoppoltrong- {xippol, xippoltrong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xoppolquadrong- {xippol, xippolquadrong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Etc…

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xoppolplex- {xippol, xoppol (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xoppolplexigong- {xippol, xoppolgong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xoppolplexibong- {xippol, xoppolbong (2) 2}

<p dir="ltr" style="line-height:1.38;margin-top:0pt;margin-bottom:0pt;"><span style="font-size:12pt;font-family:'TimesNewRoman';color:#000000;font-weight:400;font-variant:normal;text-decoration:none;vertical-align:baseline;white-space:pre-wrap;">Xoppolplexitrong- {xippol, xoppoltrong (2) 2}

<span style="font-size:12pt;font-family:"TimesNewRoman";color:rgb(0,0,0);font-weight:400;white-space:pre-wrap;">Xoppolplexiquadrong- {xippol, xoppoltrong (2) 2}

<span style="font-size:12pt;font-family:"TimesNewRoman";color:rgb(0,0,0);font-weight:400;white-space:pre-wrap;">Fezzie2 (talk) 01:28, February 18, 2017 (UTC)

Why don't we have a page about Taranovsky's notation
I was just wondering, because I wanted to know more about it, but searching it up on Google won't give you much information outside of pages on this wiki, which have more to do with how far it reaches rather than how it works. Thanks. Boboris02 (talk) 16:57, January 27, 2017 (UTC)


 * We have a single page about all ordinal notations; no single notation has its own article afaik. If we decide it is worth making one for Taranovsky's notation, then I'd think we would have to make such article for other notations as well. LittlePeng9 (talk) 19:11, January 27, 2017 (UTC)


 * I'm personally all for it, but I'm too lazy to do it myself at the moment. :P Deedlit11 (talk) 07:02, February 3, 2017 (UTC)

Idea for a function
So I was watching this video: https://www.youtube.com/watch?v=wymmCdLdPvM

And I came up with this idea. Please keep in mind that I have no real higher math education so I might not be phrasing this right, but this is my best attempt.

For all X which meet the following conditions:


 * There exist integers a, b, and c such that a^3 + b^3 + c^3 = X


 * a, b, and c can include repeats of the same integer (a can equal b can equal c)


 * At least one of a, b, and c is > X

Define FX(n) as the nth smallest positive value of a^3, b^3, or c^3 (whichever is larger) such that a^3 + b^3 + c^3 = X

So for example, using some of the numbers they addressed in the video:

F29(1) = 27

F30(1) = 2,220,422,932^3

So we have a family of functions that grow at quite different rates.
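The X = 29 example above can be checked with a bounded brute-force search. This is only a sketch (the name cube_reps and the bound are mine): it ignores the post's extra conditions and only verifies the arithmetic, and brute force is hopeless for cases like X = 30, whose smallest known representation has ten-digit terms.

```python
def cube_reps(X, bound):
    """All (a, b, c) with a <= b <= c, each in [-bound, bound],
    and a^3 + b^3 + c^3 == X.  Pure brute force, tiny bounds only."""
    sols = []
    for a in range(-bound, bound + 1):
        for b in range(a, bound + 1):
            for c in range(b, bound + 1):
                if a**3 + b**3 + c**3 == X:
                    sols.append((a, b, c))
    return sols

# Smallest max-cube over the representations found -- a crude
# stand-in for the F29(1) = 27 value quoted above:
best = min(max(t**3 for t in s) for s in cube_reps(29, 5))
```

Within this bound, 29 has the representations (1, 1, 3) and (-3, -2, 4); the smaller of their largest cubes is 27, matching F29(1) above.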

Now define Y as the value of X for which FX(n) grows fastest, where Y <= Z and Z >= 1

So now we have the function FY(Z), which gives an X. For example, FY(100) would give you the integer X <= 100 for which FX(n) grows fastest.

Now define F(Z, n) = FX(n) where X = FY(Z)

Does this make sense?

No one?

How fast does this function grow?
Okay, define F(x) as the largest finite number computed by any enumerating non-halting Turing machine of x or fewer states.

How would a NON-halting machine generate a LARGEST + FINITE number?Chronolegends (talk) 03:50, July 6, 2016 (UTC)

The largest number it generates before it starts repeating in a predictable pattern


 * For this function to be well-defined you need to specify what you mean exactly by "repeating in a predictable pattern". Good luck finding a definition of that which makes all non-halting TMs repeat in a predictable pattern. LittlePeng9 (talk) 07:35, July 6, 2016 (UTC)
 * You could ask about how long it takes for configurations of a TM given blank input to become periodic (where a configuration is defined as head position, state, and tape). However, there are many non-halting TMs that do not end up periodic. -- vel! 04:41, July 7, 2016 (UTC)


 * Are you sure? Because according to Wikipedia:


 * https://en.wikipedia.org/wiki/Halting_problem


 * <p style="margin-top:0.5em;line-height:22.4px;color:rgb(37,37,37);font-family:sans-serif;font-size:14px;font-weight:normal;">A machine with finite memory has a finite number of states, and thus any deterministic program on it must eventually either halt or repeat a previous state:


 * ...any finite-state machine, if left completely to itself, will fall eventually into a perfectly periodic repetitive pattern. The duration of this repeating pattern cannot exceed the number of internal states of the machine... (italics in original, Minsky 1967, p. 24)


 * But Turing machines are not finite-state, since they have an infinite tape. Note that this is very important for the Busy Beaver function to be fast-growing, since a machine with at most n states will either halt within n steps or fall into an infinite repeating pattern, so the Busy Beaver number could not be all that big. Deedlit11 (talk) 06:16, July 7, 2016 (UTC)
 * Be careful to distinguish states from configurations. Turing machines have finitely many states but infinitely many possible configurations. It is especially confusing that some authors call configurations "states" and states "control states." -- vel! 15:11, July 7, 2016 (UTC)
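The state/configuration distinction can be made concrete with a toy simulator. This is only a sketch (the transition-table format and the bouncer machine are my own), and the check detects only machines that revisit an exact configuration, which many non-halting TMs, such as one that writes 1s and moves right forever, never do:

```python
def run_until_repeat(delta, max_steps=10_000):
    """Simulate a TM on a blank tape.  Stop when a full configuration
    (state, head position, tape contents) repeats, or when no
    transition exists (halt).  delta maps (state, symbol) to
    (write, move, next_state)."""
    tape, state, pos = {}, 'A', 0
    seen = {}
    for step in range(max_steps):
        config = (state, pos, frozenset(tape.items()))
        if config in seen:
            return ('periodic', seen[config], step)  # first seen, repeat step
        seen[config] = step
        key = (state, tape.get(pos, 0))
        if key not in delta:
            return ('halt', step, step)
        write, move, state = delta[key]
        tape[pos] = write
        pos += move
    return ('unknown', None, max_steps)

# A 2-state machine that bounces between two cells forever: it has
# finitely many states but revisits a full configuration at step 4.
bouncer = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, +1, 'B'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, -1, 'A'),
}
```

run_until_repeat(bouncer) reports a periodic configuration (first seen at step 2, repeated at step 4), whereas a machine that keeps extending the tape would exhaust max_steps and return 'unknown'.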


 * Well I actually got this idea from someone else, and he claimed that F(10,000,000,000) would be way greater than Rayo's Number
 * That person is dead wrong. -- vel! 21:10, July 7, 2016 (UTC)
 * Hmmm.... F(x) is equal to BB(x). GoogleAarex2001 21:33, July 7, 2016 (UTC)


 * No it isn't. BB(x) is the champion of *HALTING* TMs of x states; his F(x) is akin to saying "what is the largest finite number a machine that counts up could produce before entering some repeating pattern", i.e. it doesn't exist. Chronolegends (talk) 22:55, July 7, 2016 (UTC)


 * So there's no way to define this function?


 * In order to make this function well-defined, you have to define precisely what you mean by the largest number computed by a nonhalting machine. So far you haven't provided a satisfying definition, but it doesn't mean such a definition isn't possible (even though Chronolegends seems to claim one can't define it). Admittedly there is little hope you can define it so that it works for all nonhalting TMs, but feel free to try. LittlePeng9 (talk) 07:28, July 9, 2016 (UTC)

Scott Aaronson gave us a shout-out!


So we might need \(\Sigma(7918)\) to surpass all functions provably recursive in ZFC. Maybe \(\Sigma(1000)\) isn't large enough to surpass USGDCS2(k), finite promise games, or even Loader's function. {hyp/^,cos} (talk) 07:40, May 15, 2016 (UTC)

You remember that \(\Sigma\) is a function to the natural numbers, not to a set of functions, right? I'm having a bit of trouble parsing your comment. ~εmli 17:36, May 15, 2016 (UTC)


 * I'm pretty sure hyp cos refers to this vague idea of a number surpassing a function if this number is larger than the output of the function on a "reasonable" input. LittlePeng9 (talk) 18:09, May 15, 2016 (UTC)
 * We might need \(\Sigma(7918)\) to get a Transcendental integer. Maybe \(\Sigma(1000)\) isn't large enough to surpass things like USGDCS2(G), FPCI(G) (where G means Graham's number), or even Loader's number. {hyp/^,cos} (talk) 00:26, May 16, 2016 (UTC)


 * I don't think so. 7918 states is likely way too many, given that it was created using a higher-order language and then compiled into a Turing machine. I imagine that, if it were programmed natively as a TM, it would require maybe a couple hundred states. But I think the actual minimum is probably much lower. Scott Aaronson was asked what he thought the minimum n such that BB(n) is independent of ZFC was, and he guessed 15. Deedlit11 (talk) 16:12, May 16, 2016 (UTC)


 * In fact, Stefan O'Rear has reduced the 7918 states down to 1919. It still uses a compiler, so I'm sure programming natively will still reduce that by a lot. For reference, the 4800+ states used for the Goldbach conjecture using a compiler has been reduced to 27! Deedlit11 (talk) 16:29, May 16, 2016 (UTC)


 * By the way Hyp Cos, do you intend to continue extending your array notation, or make further comparisons with other notations? (sorry for going off-topic) Deedlit11 (talk) 17:48, May 16, 2016 (UTC)

For Peng (see comment #228): I originally posted my results for the HNR's here. Since the file linked there doesn't include the simulations for HNRs whose results were inconclusive, I've re-uploaded the simulation files on here, including the inconclusive HNRs. -- ☁ I want more clouds! ⛅ 03:00, May 16, 2016 (UTC)

Is this page fine to make?
Guys? Is this page okay to make? Because I don't want to make it if it's against the rules. Besides, if Prefix 10^3000000 (the millionth prefix) is the last prefix to get its own page, why not put the prefixes beyond that in one large page? If it's not a good idea, then I will not do it.

http://googology.wikia.com/wiki/Prefixes_beyond_10%5E3000000

Jamiem2001 (talk) 02:11, February 14, 2016 (UTC)

Weary wombat function
According to http://goanna.cs.rmit.edu.au/~jah/busybeaver/, the weary wombat is defined as follows:


 * [...] we denote this function as the weary wombat, or ww(n), which is the minimum number of state transitions made by a terminating machine which prints n 1's.

Isn't ww(n) equal to n as it is currently defined? If so, should we consider ww_m(n) to be the minimum number of state transitions made by an m-state terminating machine which prints n 1's, and then define ww(n) = ww_{pp(n)}(n)? Wythagoras (talk) 08:09, November 1, 2015 (UTC)


 * In this paper, ww(n) is defined as "the minimum number of state transitions for an n-state machine necessary to print pp(n) 1’s.". I suspect Harland meant this with pp(n) and n swapped, in which case the definition is as you state it at the end. LittlePeng9 (talk) 08:20, November 1, 2015 (UTC)

Holy crap!
It seems that the fastest growing computable functions that we can define are by explicit diagonalization out of a strong theory like ZFC+I0, e.g. take f(n) to be the largest number of steps it takes a Turing machine to halt, among machines that can be proven to halt in ZFC+I0 with a proof of at most n steps. It seems like we can get arbitrarily strong this way by choosing stronger and stronger theories. Still, there is some interest in finding "natural" fast growing computable functions, where by "natural" I mean not obviously diagonalizing out of a particular theory or calculus.

As of now, I believe the fastest growing "natural" computable functions, at least in terms of lower bounds, come from Friedman's "Finite Promise Games", which Friedman states outgrow all provably recursive functions in SMAH (ZFC + "there exists a strongly k-Mahlo cardinal" for all k). Other contenders include witness functions from Friedman's "Finite Trees..." and "Finite Functions..." papers, where the functions derive from statements that are not provable in ZFC + "there exists a k-subtle cardinal" for all k; it seems reasonable that the witness functions will then dominate all provably recursive functions in ZFC + "there exists a k-subtle cardinal", but nowhere does Friedman state this is the case in his papers. Another contender is the obvious fast-growing function derived from Laver tables, which was proven to exist using rank-into-rank cardinals, but here we are on even shakier ground since it could well be that it could be proven in some weak theory.

Well, I just ran across a post by Friedman where he defines three functions, two of which dominate all functions provably recursive in SRP (ZFC + Stationary Ramsey Property), and one of which dominates all functions provably recursive in HUGE (ZFC + "there exists an n-huge cardinal" for all n). Huge cardinals are really huge! Strongly Mahlo cardinals are near the bottom of the large cardinal hierarchy; subtle cardinals are higher up but still relatively low. SRP is also not that high, higher than subtle I think but below measurable, which is considered the midpoint of the hierarchy, sort of. Huge is near the top. So this is a pretty big leap. Also, Friedman talks about a version for I3/I2, which is at the top, just below I0/I1. (Unfortunately he doesn't describe this version.)

The post is here:

I had a little trouble deciphering it; I'm not sure what he means by concatenation, and some of the statements start off with "if" but have no "then". :( Hopefully we can figure this out. Deedlit11 (talk) 03:18, September 13, 2015 (UTC)
 * The concatenation operation seems pretty clear. I have explained it more clearly in greedy clique sequence. The "if" statements worry me. It looks like the missing second halves of those sentences are what distinguish the first and second definitions. -- vel! 06:31, September 13, 2015 (UTC)
 * Aha. Correction here: http://www.cs.nyu.edu/pipermail/fom/2010-January/014285.html -- vel! 06:49, September 13, 2015 (UTC)


 * Friedman mentions a W(k) function in his original uncorrected post. Anyone know how it compares to the three aforementioned functions? --Ixfd64 (talk) 18:25, October 14, 2015 (UTC)


 * If I understand correctly, the W(k) you mention isn't any single function - Friedman wants to define a witness function for each theorem in general. This is a single placeholder for the three functions mentioned in theorem 8. LittlePeng9 (talk) 18:31, October 14, 2015 (UTC)

Buchholz's FS system
For a while I thought that the Wainer hierarchy (\(\mu = \varepsilon_0 + 1\)) was the largest fundamental sequence system known to be increasing in FGH -- that is, for all \(\alpha < \beta < \mu\), \(f_\alpha\) is eventually dominated by \(f_\beta\). Wojowu pointed out to me that, in the paper introducing his hydra, Buchholz created a fundamental sequence system with \(\mu = \text{TFB}\) and proved that it is increasing (lemma 3.6d).

A few questions remain before this hierarchy is useful to us. For example, is it an extension of the Wainer hierarchy? -- vel! 18:35, August 17, 2015 (UTC)


 * I didn't get into details of all the definitions, but Buchholz's FGH is indexed not with ordinals themselves, but rather terms, which are just notations for ordinals. One thing which I'm not sure about is whether one ordinal can be represented with multiple different terms, and if the answer is yes, whether choosing different term for the same ordinal might lead to a different function. LittlePeng9 (talk) 19:17, August 17, 2015 (UTC)
 * Right you are. At this point that we're all lazily waiting for one of us to work through the paper. -- vel! 04:33, August 19, 2015 (UTC)


 * We can convert from terms to ordinals by restricting terms to "normal forms". For that we need to do two things: we require that any term of the form \((\alpha_1, \alpha_2, \ldots, \alpha_n)\) satisfies \(\alpha_1 \ge \alpha_2 \ge \ldots \ge \alpha_n\). This also means we need to change the notion of addition from concatenation to the normal definition of addition for ordinals. Secondly, we need to restrict which terms of the form \(D_u v\) are allowed, so that no ordinal occurs in proper form more than once. I believe this is reasonably doable, but I would need to think about exactly how to do it.


 * Fortunately, there is no problem for ordinals up to \(\varepsilon_0\), since for that we can use just \(D_0\), and then we just need the first rule above to get normal forms. So we do get fundamental sequences; they aren't the same as the FSes in what we call the Wainer hierarchy, because Buchholz uses the rule \(D_u(b+1)[n] = D_u(b) (n+1)\), whereas we have defined the Wainer hierarchy using the rule \(\omega^{b+1}[n] = \omega^b n\). But if we replace the "n+1" in the former rule with "n", then we do indeed get the same FSes as the Wainer hierarchy.


 * For higher ordinals, there is an interesting quirk: if we have \(D_v (b)\) where \(dom(b) = T_u\) and \(v \le u < \omega\), then \(dom(D_v(b)) = \mathbb{N}\) and \(D_v(b)[n] = D_v (b[D_u(b[1])])\). But \( D_v (b[D_u(b[1])])\) is a constant ordinal, so that isn't a fundamental sequence at all! There are two things we could do to deal with this: one is to treat \( D_v (b[D_u(b[1])])\) as an intermediate calculation, and to consider the actual fundamental sequence of \(D_v(b)\) to be the fundamental sequence of \( D_v (b[D_u(b[1])])\). This will get us proper increasing sequences, but the limits of these increasing sequences are less than what we would expect the starting ordinal to be. For example, \(D_0(D_1(0))\) would naturally be the smallest ordinal not describable by \(D_0\) alone, which would be \(\varepsilon_0\), but using the above rule we get \(D_0(D_1(0))[n] = D_0(D_1(0)[D_0(D_1(0)[1])]) = D_0(D_0(1)) = \omega^\omega\). So we have to set certain terms equal to smaller ordinals than we would expect; however, since Buchholz proves that the Hardy hierarchy for his system goes beyond the provably recursive functions of \(\Pi^1_1-CA+BI\), we would expect that the ordinals still go up to \(\psi_0(\varepsilon_{\Omega_\omega+1})\).


 * Alternatively, we could redefine the fundamental sequence for the case above to by the following rule: set \(a_0 = b[0]\) (or b[1] or anything reasonable) and \(a_{n+1} = b[D_u(a_n)]\). Then set \(D_v(b)[n] = D_v(a_n)\). This will give us a proper fundamental sequence, and this time the limit is what we expect; for example, we get \(D_0(D_1(0))[n] = D_0(D_0(\ldots(D_0(0))\ldots)) = \omega \uparrow \uparrow n\). Deedlit11 (talk) 06:44, August 19, 2015 (UTC)


 * By the way, we can insure that FGH is always increasing for any system of FSes by making a small change to the definition: instead of \(f_\alpha(n) = f_{\alpha[n]}(n)\) for limit ordinals \(\alpha\), we define \(f_\alpha(n) = \sup_{m \le n} (f_{\alpha[m]}(n))\). One can prove that if we use this rule, then if \(\alpha < \beta\) then \(f_\alpha\) is eventually dominated by \(f_\beta\). Deedlit11 (talk) 22:55, August 24, 2015 (UTC)

Is my notation as powerful as array notation?
Hey. Newb here. I wanted to see how far I could go by developing my own notation from scratch. Not to set any records, just for fun. I kept adding more and more levels of recursion, then realized it could all be described by a simple set of rules. The rules actually seem very similar to array notation... I think it's about as powerful, but I'm not totally sure. Anyone want to weigh in? I call it "bar notation", because it makes heavy use of vertical bars (|||). I've also defined a more powerful extended notation, but I wanted to see how this notation stacks up to the others first. The rules are like this:

A chain is a list of 1 or more nonnegative integers (terms) separated by bars, with a leading bar, ex: |2|0|71|9,  |4,   |3|68, etc.

Any leading zeros in a chain with length > 1 are immediately discarded (unless the zero is the chain's only term), ex: |0|0|5|0|1 = |5|0|1

Chains can be assigned to variables- a chain variable is prefixed with a tilde (~), ex:  ~x,   ~a,   ~thisIsAChain,   etc

Chains can be decremented with the -- operator. Decrementing a chain will subtract one from the farthest right nonzero element of the chain.

ex: |2|0|0-- = |1|0|0,  |3|4-- = |3|3,   if we let   ~x = |2|2   then   ~x-- = |2|1,   etc.

Every chain can also be used as a function, ex. |3|6|3(9,1),  ~n(2,6)

You could say chains ARE functions. When used as a function, a chain takes exactly 2 arguments.

The result of the function is specified by these rules:

|0(x,y) = x+1 (if a chain only contains a zero, then it returns its first argument + 1)

|a|0|0|0...{e "|0"s}...|0|0 (x,y) = |a-1|x|x|x...{e "|x"s}...|x|x(x,x) (If a chain is made of a leading nonzero term followed by all zeros, the 0's are replaced by x and the leading term is decremented, then this new chain is applied as a function to the arguments (x,x) )

if neither of the above cases hold,

~n(x,1) = ~n--(x,x)

~n(x,y) = ~n(~n--(x,x), y-1)

where ~n is any arbitrary chain.

That's enough to fully describe my notation. But I didn't develop it by just sitting down and writing out the rules. I started with |0(x,y), then defined |1, |2, and others recursively in terms of it. Only after I made it all the way to |d|c|b(x,y) did I realize that it could all be described by a general rule.
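The ruleset above transcribes almost directly into Python. This is a sketch (the names ev, dec, and strip_zeros are mine, with chains as tuples); one ambiguity I had to resolve: I apply the all-zeros rule only to chains with more than one term, so that a single-term chain |b follows the decrement rules, matching the per-level |b definitions later in the post:

```python
def strip_zeros(chain):
    """Discard leading zeros, unless the zero is the chain's only term."""
    while len(chain) > 1 and chain[0] == 0:
        chain = chain[1:]
    return chain

def dec(chain):
    """The -- operator: subtract 1 from the rightmost nonzero term."""
    chain = list(chain)
    for i in range(len(chain) - 1, -1, -1):
        if chain[i] != 0:
            chain[i] -= 1
            break
    return tuple(chain)

def ev(chain, x, y):
    """Evaluate a chain, written as a tuple of terms, at arguments (x, y)."""
    chain = strip_zeros(tuple(chain))
    if chain == (0,):                            # |0(x,y) = x+1
        return x + 1
    if len(chain) > 1 and all(t == 0 for t in chain[1:]):
        # |a|0|...|0(x,y) = |a-1|x|...|x(x,x)
        return ev((chain[0] - 1,) + (x,) * (len(chain) - 1), x, x)
    if y == 1:                                   # ~n(x,1) = ~n--(x,x)
        return ev(dec(chain), x, x)
    return ev(chain, ev(dec(chain), x, x), y - 1)  # ~n(x,y) = ~n(~n--(x,x), y-1)
```

For example, ev((1,), 3, 2) gives 5 (x+a) and ev((2,), 2, 2) gives 8 (x*2^a), matching the comments below; anything much beyond |3 blows past Python's recursion limit almost immediately, as you'd expect from an Ackermann-class function.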

To better understand how the notation works, it helps to forget about the general ruleset, and start with |0 and define all others in terms of it, as I did originally (if you already understand it, you don't have to read anything after this).

 |0(x,y): x+1

|1 can be defined recursively in terms of |0  (the // at the end of a line indicates a comment describing the function in terms of standard arithmetic operators, and has nothing to do with the function definition)

|1(x,1): |0(x,x) //x+1

|1(x,a): |1(|0(x,x), a-1) //x+a

|2 can be defined in terms of |1

|2(x,1): |1(x,x) //x*2

|2(x,a): |2(|1(x,x), a-1) //x*2^a

...

and so on for any |b. The general rule is:

|0(x,y): x+1

|b(x,1): |b-1(x,x) 

|b(x,a): |b(|b-1(x,x), a-1)

Now we'll define |1|0

|1|0(x,y): |x(x,x) //I believe this is roughly equivalent to the Ackermann function

Similar to |b, we can define |1|b for any b, using the general rule:

|1|b(x,1): |1|b-1(x,x) 

|1|b(x,a): |1|b(|1|b-1(x,x), a-1) 

Then we could go on to define |2|b from |1|b, in the same way |1|b was defined from |b. Then |3|b, and so on. But instead of defining each one individually, we'll make a general rule for any |c|b:

|c|0(x,y): |c-1|x(x,x)

|c|b(x,1): |c|b-1(x,x)

|c|b(x,a): |c|b(|c|b-1(x,x), a-1)

Then we can define |d|c|b. If we write out a set of rules for every case, it would look like this:

|d|0|0(x,y): |d-1|x|x(x,x)

|d|0|b(x,1): |d|0|b-1(x,x)

|d|0|b(x,a): |d|0|b(|d|0|b-1(x,x), a-1)

|d|c|0(x,1): |d|c-1|0(x,x)

|d|c|0(x,a): |d|c|0(|d|c-1|0(x,x), a-1)

|d|c|b(x,1): |d|c|b-1(x,x)

|d|c|b(x,a): |d|c|b(|d|c|b-1(x,x), a-1)

But instead of writing all those rules out, it's easier to note that whenever c or b are nonzero, we just carry over d unchanged, and the rules otherwise work the same as before. The only time d changes is when c and b are both zero. We could keep going for |e|d|c|b, |f|e|d|c|b, and so on, but at this point I realized there was a general rule for all chains, which is the ruleset I described towards the top of my post.

KevinRSP (talk) 19:57, August 22, 2015 (UTC)


 * Yes, your bar notation does grow at the same rate as linear array notation. In fact, your notation is quite similar to the fast-growing hierarchy; we have \(|a(x,x) = f_a(x)\) and \(|b|a(x,x) = f_{\omega b + a}(x)\). After that, the two notations diverge slightly since you have \(|c|0|0(x,x) = |c-1|x|x(x,x)\) rather than \(|c|0|0(x,x) = |c-1|x|0(x,x)\), but they still stay quite close to each other. Deedlit11 (talk) 04:55, August 23, 2015 (UTC)
 * You know, you could have written this in a blog post. -- From the googol and beyond -- 18:43, August 24, 2015 (UTC)

Friedman's EKT
I know Friedman has derived an extended Kruskal theorem that is closely related to the Robertson-Seymour theorem, so I'm pretty sure the growth rate of EKT(k) would be on the order of \(\psi_0(\Omega_{\omega})\). Anyone know how it would compare to SCG(k)? --Ixfd64 (talk) 21:04, August 16, 2015 (UTC)


 * I believe EKT(k) likely grows at the rate of approximately \(f_{\psi_0(\Omega_{\omega})}\), as there is a natural correspondence of labelled trees with ordinals up to \(\psi_0(\Omega_{\omega})\). However, Friedman has implied that Planar SCG(k) grows faster than any provably recursive function of \(\Pi^1_1 - CA_0\), which means that PSCG(k) is at least on the order of \(f_{\psi_0(\Omega_{\omega})}\). SCG(k) will then grow faster than PSCG(k), and therefore EKT(k). Deedlit11 (talk) 23:02, August 16, 2015 (UTC)

How is the omega fixed point equal to psi I(0)?
This is a claim that i've seen in various places on the wiki but I have no idea where it comes from, or which psi function is being used. Can someone supply any sort of context or background on this equality? -- vel! 20:13, March 8, 2015 (UTC)
 * aha it is rathjen's psi and it comes from here http://googology.wikia.com/wiki/User_blog:Deedlit11/Ordinal_Notations_IV:_Up_to_a_weakly_inaccessible_cardinal -- vel! 20:15, March 8, 2015 (UTC)

Word of God about pentational arrays and beyond
Bowers' arrays are one of the most well-known advanced notations in googology. However, the ambiguity in the notation, particularly beyond tetrational arrays, led many of the contributors of this wiki to discuss how to solve these arrays, and how far they reach. There were many theories about these, but none were confirmed by Bowers himself, so how the arrays are intended to be solved remained unknown.

Eventually the contributors decided to stop dealing with these, and the higher comparisons in the Fast-growing hierarchy article were removed; in Vel!'s words, "it is now well-known that BEAF is nonfunctional above e0".

But yesterday, I found a link on Cookiefonster's site to a forum thread where Bowers himself (under the screen name Polyhedron Dude) gives clues on how to solve these arrays. In his posts in the thread, Bowers says that pentational and higher arrays follow the Saibian-style ordinal theory (where \(\omega \uparrow\uparrow (\omega+1) = \varepsilon_\omega\) and \(\omega \uparrow\uparrow\uparrow \omega = \Gamma_0\)). Based on this theory, kungulus, a high-level pentational array, reaches \(\Gamma_0\) level.

With this discovery, we know what Bowers has in mind. Any thoughts? Discuss here! -- ☁ I want more clouds! ⛅ 13:48, January 3, 2015 (UTC)


 * Great discovery! Furthermore: Bowers says here that legion arrays should fall around the LVO, making Hyp cos' analysis/variant not the way Bowers intended it. Wythagoras (talk) 14:02, January 3, 2015 (UTC)
 * Actually, Sbiis sent me a link to that forum page a month or so ago. Still neat though. Cookiefonster (talk) 15:27, January 3, 2015 (UTC)

I only learned yesterday how the climbing method works for ordinals, so I don't have much intuition built up about it yet, but one thing I don't really understand is: why do we expect the same method to work for Bowers' X structures? I don't see how the line of thought leading to a result like \(\omega\uparrow\uparrow(\omega+1)=\varepsilon_\omega\) would lead us to results about the strength of the corresponding BEAF structure. Sbiis once said on the chat that, to some extent, there is no difference between using ordinals and X structures because both of these are well-ordered. I could agree if this were true, but Sbiis never gave an argument for why X structures are well-ordered, nor even how we are ordering them (this came up when discussing the well-definedness of BEAF, but the argument could be applied here too). LittlePeng9 (talk) 14:40, January 3, 2015 (UTC)

I think we should ask Bowers how to resolve simpler things like 2^X*(X+1) & 3 or even 1+X & 3. Ikosarakt1 (talk ^ contribs) 17:25, January 3, 2015 (UTC)


 * Hm, that's interesting as well, but not quite as "of interest" as things like hints on pentational arrays. Cookiefonster (talk) 21:55, January 3, 2015 (UTC)


 * also, vel said on the irc:
 * i smacked my head when Triacontaract [that's Ikosarakt1] suggested contacting bowers
 * why didnt we do that all along
 * thats what we do for every other notation that people post to the wiki
 * Cookiefonster (talk) 21:58, January 3, 2015 (UTC)

In addition to what CF quoted, as nitpicky as this sounds, the fact that these clarifications are not on bowers website makes me much less inclined to take them seriously -- vel! 19:19, January 5, 2015 (UTC)

Interesting find. When people started analyzing Bowers' notation, there was no evidence that he was aware of the ordinal-based hierarchies. --Ixfd64 (talk) 18:19, January 7, 2015 (UTC)

Wiki ocf standard
as i discussed with sbiis on irc, instead of a million theta/psi functions, the wiki should adopt some kind of ocf standard to use, one that matches as best as it can with whatever notations are used on the wiki pages. same for fundamental sequences. any thoughts on that? Cookiefonster (talk) 02:35, December 9, 2014 (UTC)

if you're interested in some specifics, an idea is to use a specific set of fundamental sequences for veblen's multi-argument phi function, then a binary theta function where theta(a,0) = theta(a), and then a one-argument psi function. Cookiefonster (talk) 02:39, December 9, 2014 (UTC)


 * if this has to do with statements of "comparable with" and approximations thingies then why bother. burn 'em all down. it's vel time 07:07, December 9, 2014 (UTC)
 * what?! the fgh IS the benchmark for googological functions, so why forget about it? it's important for that kind of approximation Cookiefonster (talk) 13:10, December 9, 2014 (UTC)
 * But they are entirely informal, and there is no good quantitative way of measuring how well one value approximates another. I guess that, according to most of the usual measures, even guessing 2 would be a better approximation in half of the cases than the approximations given in articles. LittlePeng9 (talk) 16:19, December 9, 2014 (UTC)
 * i get what you're all saying about "fgh comparability is an informal concept", but that's both besides-the-point and an annoying argument, showing more concern with technicalities than ... the point. it's important to have some kind of measuring stick for googological functions. Cookiefonster (talk) 13:40, December 10, 2014 (UTC)
 * ...so the fact that claims littered throughout the wiki are fundamentally broken is just a technicality. -- Notorious V.L.E. 02:00, December 11, 2014 (UTC)

Intro to ocf's?
in my opinion the wiki really should have one, because only about three people can understand the definitions given on the article on ocf's and stuff. post your thoughts and ideas below. Cookiefonster (talk) 19:44, November 24, 2014 (UTC)
 * is there anything unclear about a specific definition, or is everyone just too lazy to work through the formal logic? im worried that if we create such an intro, people will just read that and work off intuition instead of making an actual effort to comprehend the beasts. it's vel time 19:57, November 24, 2014 (UTC)

Proving termination of Array Notation
I have no doubts whatsoever that array notation is well-defined, but I'm curious about how we can go about proving that the rules cause termination for every valid array. it's vel time 16:46, November 19, 2014 (UTC)

Proof for linear arrays is as follows: to array \(\{a_1,a_2,...,a_{n-1},a_n\}\) we associate ordinal \(\alpha=\omega^na_n+...+\omega^2a_2+\omega a_1\). By transfinite recursion we prove that this array will terminate. For \(\alpha<\omega^3\) we have a 1- or 2-entry array which instantly resolves into exponentiation. Assume that this holds for all ordinals \(<\alpha\). If \(a_n=1\) we are in one step going from \(\omega^na_n+...+\omega^2a_2+\omega a_1\) to a smaller ordinal \(\omega^{n-1}a_{n-1}+...+\omega^2a_2+\omega a_1\). If \(a_k=...=a_{l-1}=1, a_l>1\) we go from ordinal \(\omega^na_n+...+\omega^la_l+\omega^{l-1}+...+\omega^k+...+\omega^2a_2+\omega a_1\) to an ordinal \(\omega^na_n+...+\omega^l(a_l-1)+\omega^{l-1}A+\omega^{l-2}a_1+...+\omega^ka_1+...+\omega^2a_2+\omega a_1\), where \(A=\{a_1,a_2-1,...,a_{n-1},a_n\}\) is an array with ordinal \(<\alpha\), thus terminating by inductive assumption. So ordinal \(\omega^na_n+...+\omega^l(a_l-1)+\omega^{l-1}A+\omega^{l-2}a_1+...+\omega^ka_1+...+\omega^2a_2+\omega a_1\) is smaller than \(\alpha\) too, and terminates. If all entries of an array are \(>1\) then proof goes similarly. LittlePeng9 (talk) 16:58, November 19, 2014 (UTC)
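Since every ordinal in this proof is a polynomial in \(\omega\) with natural-number coefficients, the comparison of the assigned ordinals amounts to lexicographic comparison of coefficient tuples. A small Python sketch of just that bookkeeping (not of array evaluation itself; the sample arrays are only illustrative):

```python
def ordinal_coeffs(arr, width):
    # {a_1,...,a_n} -> coefficients of w^n*a_n + ... + w^2*a_2 + w*a_1,
    # highest power first, left-padded with zeros to a common width
    coeffs = tuple(reversed(arr))
    return (0,) * (width - len(coeffs)) + coeffs

def ord_less(a, b):
    # compare the ordinals assigned to two arrays lexicographically
    w = max(len(a), len(b))
    return ordinal_coeffs(a, w) < ordinal_coeffs(b, w)

# the a_n = 1 case of the proof: dropping a trailing 1 strictly lowers the ordinal
assert ord_less([3, 4], [3, 4, 1])
# decrementing one entry, everything else fixed, also lowers it
assert ord_less([3, 3, 2], [3, 4, 2])
```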


 * One can also go by simultaneous induction for a fixed number \(k\) of elements in the array, and then proceed by induction on \(k\). This is virtually the same proof, but the ordinal version is easier to present, and it's easier to extend to multidimensional arrays. LittlePeng9 (talk) 17:02, November 19, 2014 (UTC)

FGH and PTO's
For ANY valid fundamental sequence system, does \(f_{\varepsilon_0}\) outgrow all functions provably recursive in Peano arithmetic? it's vel time 17:31, November 14, 2014 (UTC)


 * Vel conjectures this to be the case, I conjecture this to be false, yet neither of us can give a solid reasoning one way or another. LittlePeng9 (talk) 17:33, November 14, 2014 (UTC)
 * How about \(\alpha[n] = 0\) for all \(\alpha\) and all n? Wythagoras (talk) 17:38, November 14, 2014 (UTC)
 * This violates definition of fundamental sequence. LittlePeng9 (talk) 17:52, November 14, 2014 (UTC)
 * Oh, yes, of course. Wythagoras (talk) 19:26, November 14, 2014 (UTC)

I wonder if community will turn out to be helpful on this one... LittlePeng9 (talk) 19:47, November 14, 2014 (UTC)
 * thank you based hamkins it's vel time 22:00, November 15, 2014 (UTC)

Now I'm very curious about whether there is a simple way to restrict FS's so that the answer to the question becomes "yes." it's vel time 03:24, November 16, 2014 (UTC)
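For concreteness, the standard assignment of fundamental sequences to ordinals below \(\varepsilon_0\) can be written down directly. A Python sketch, representing \(\omega^{e_1}+\dots+\omega^{e_k}\) (exponents non-increasing) as the list of its exponents, each represented the same way, so 0 is [], 1 is [[]] and \(\omega\) is [[[]]]; this shows one particular valid system, not an answer to the question above:

```python
def fs(alpha, n):
    # fundamental sequence alpha[n] for a limit ordinal alpha < epsilon_0,
    # alpha = w^{e_1} + ... + w^{e_k} stored as its exponent list [e_1, ..., e_k]
    e = alpha[-1]                      # exponent of the last (smallest) term
    if not e:
        raise ValueError("successor ordinals have no fundamental sequence")
    if not e[-1]:                      # e = b + 1: (w^{b+1})[n] = w^b * n
        return alpha[:-1] + [e[:-1]] * n
    return alpha[:-1] + [fs(e, n)]     # e a limit: (w^e)[n] = w^{e[n]}

ZERO, ONE = [], [[]]
OMEGA = [ONE]                          # w = w^1
assert fs(OMEGA, 3) == [ZERO, ZERO, ZERO]   # w[3] = 3
assert fs([OMEGA], 2) == [[ZERO, ZERO]]     # (w^w)[2] = w^2
```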

ITTM cryptography
Call a function \(F: \mathbb{R}^i \mapsto \mathbb{R}^j\) ITTM-computable iff there is an ITTM \(T\) such that, for every \(i\)-tuple of reals \(X\), \(T\) halts with an encoding of \(F(X)\) on the tape when an encoding of \(X\) is given on the initial tape.

A public key encryption scheme is defined as a pair of ITTM-computable functions \(F,G: \mathbb{R} \times \mathbb{R} \mapsto \mathbb{R}\) such that there exist real numbers \(p,q\) with \(F(p,G(q,m)) = m\) for all \(m \in \mathbb{R}\). We say that such a \((p,q)\) is a key pair for the scheme \((F,G)\). (For the sake of discussion, \(G\) is an encrypting function, \(F\) is a decrypting function. \(p\) is a private key, and \(q\) is a public key. The second argument of \(G\) and the output of \(F\) are called the plaintext, and the second argument of \(F\) and the output of \(G\) are called the ciphertext.)

Given a public key encryption scheme \((F,G)\), a key pair \((p,q)\) is considered secure iff there exists no ITTM-computable function \(H: \mathbb{R} \times \mathbb{R} \mapsto \mathbb{R}\) such that for all \(m\), \(H(q,G(q,m)) = m\). (Candidate values of \(H\) are called adversaries.) A public key encryption scheme is secure iff it has a secure key pair.

Does there exist a secure public key encryption scheme?

Originally proposed by LittlePeng9 on ##googology. If the answer is "yes," then in an ITTM world, people would be able to enjoy complete privacy and construct digital signatures impervious to forgery :) it's vel time 09:23, October 26, 2014 (UTC)

In simpler terms, can we construct some encoding system, which consists of two functions, G (encoding function) and F (decoding function), and two keys, p (private decryption key) and q (public encryption key), such that we cannot decode an encoded message given access to the public key and encryption method, but it is possible using the private key? LittlePeng9 (talk) 09:27, October 26, 2014 (UTC)

Well, the answer is "yes", but it's a bit more trivial than I expected:

As shown by Hamkins, there exists \(c\in\Bbb R\) which can be recognized by some ITTM \(T\) (meaning that \(T\) returns "yes" only for input \(c\), and returns "no" otherwise), but cannot be written by any ITTM. Now we choose \(p=c\) to be our private key. A public key isn't even required; we can take it to be the empty string. Now we define the function \(G\) as follows: for input \(x\), it first checks if \(x=c\). If that's the case, it returns empty output. Otherwise, it returns \(1x\) (the concatenation of a single 1 and the input).

Now, if an adversary existed, it would have to return \(c\) given empty input, which is impossible. But we can still decode messages with the private key: if we receive the empty string, we just write down \(c\), to which we have access. Otherwise, we just remove the first 1 and we get the original message. So this coding is safe, at least by our definition. LittlePeng9 (talk) 11:53, October 26, 2014 (UTC)


 * To make it more reasonable, we may want every adversary to be correct at most countably many times. LittlePeng9 (talk) 13:18, October 26, 2014 (UTC)
 * I'm worried that we're just playing Whack-a-Mole here. The absence of probability and computational efficiency seems to be causing such unreasonable situations. it's vel time 19:38, October 26, 2014 (UTC)
 * There is a reasonable meaning of "probability" for ITTMs, namely the coin-flipping measure. A few reasonable notions of "almost all" for sets of infinite binary strings are, for example: having full measure, containing all but countably many elements, and being co-meager. LittlePeng9 (talk) 19:45, October 26, 2014 (UTC)

So the revised question is now: Does there exist a secure public key encryption scheme such that any plausible adversary is right at most countably many times? LittlePeng9 (talk) 19:48, October 26, 2014 (UTC)

Save the oodles?
The idea of oodles is that every collection of oodles is also an oodle, e.g. the oodle of all oodles, which in this case contains itself. Unfortunately, there is nothing stopping us from taking exactly the oodles which do not contain themselves and forming an oodle out of them. The question of whether this oodle contains itself leads to Russell's paradox.

The theory of oodles is equivalent to naive set theory, which is a well-known example of an inconsistent theory. The usual way of saving that theory is to restrict the allowed sets. However, this is a syntactic restriction which can be considered "unnatural".

Do any of you have ideas on how to modify oodles so that they become consistent? LittlePeng9 (talk) 04:58, October 20, 2014 (UTC)
 * axiom of foundation? it's vel time 09:24, October 20, 2014 (UTC)
 * It won't help. Let me explain the issue in detail: let CO denote the "collection of oodles" axiom/property: any collection of oodles is an oodle. As I explained, CO is inconsistent all by itself, because it implies that Russell's paradoxical oodle is a valid structure. Adding AF can only possibly make it worse - we can still derive a contradiction from CO. It's a well-known fact that extending an inconsistent theory can't magically make it contradiction-free. What we need is a weakening of CO or some other way around the problem. LittlePeng9 (talk) 10:00, October 20, 2014 (UTC)
 * so let's restructure oodles so they can never contain themselves it's vel time 15:36, October 20, 2014 (UTC)
 * Although not so easily, we can derive a Russell-like paradox from this too: let S be the oodle of all oodles A which do not contain an oodle B which contains A. Now let S' be the set of all singletons of elements of S. Question: does S' contain {S'}? If it does, then it doesn't, but if it doesn't, then it does. We would have to refine this idea a lot. LittlePeng9 (talk) 15:40, October 20, 2014 (UTC)
 * oops, i said that wrong. *let's restructure oodles so they respect AF. it's vel time 15:43, October 20, 2014 (UTC)

(removing indentation) So, as I said on the chat, if we had oodles satisfy the (analogue of) axiom of foundation, we would have naturally induced notion of rank of the oodle:


 * The empty set is its own rank.
 * If some element of the oodle has a rank \(\alpha\) which either is the rank of, or contains the ranks of, all other elements of the oodle, then the oodle has rank \(\alpha\cup\{\alpha\}\).
 * If no such \(\alpha\) exists, then the rank of the oodle is the union of the ranks of its elements.

Note that the definition avoids the word "ordinal". Now it's simple to show that every oodle has a rank - otherwise, let \(A_1\) be an oodle without a rank. If all its elements had ranks, it would have a rank, so there must be an oodle \(A_2\in A_1\) without a rank. Continuing, we get an infinite sequence \(A_1\ni A_2\ni...\), the existence of which contradicts AF.

It's also quite simple to see that the oodles whose rank is \(\alpha\) or an element of it are exactly the collections of oodles whose ranks are elements of \(\alpha\). This way we get a natural continuation of von Neumann's hierarchy, with \(V_\alpha\) consisting of the oodles whose rank is \(\alpha\) or an element of it. LittlePeng9 (talk) 16:54, October 20, 2014 (UTC)
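For hereditarily finite oodles respecting AF, the three rank clauses above collapse to the usual von Neumann rank, which is easy to model in Python with nested frozensets (a toy finite sketch; the "union of ranks" limit clause never fires in the finite case):

```python
def rank(oodle):
    # von Neumann rank for hereditarily finite oodles:
    # the empty oodle has rank 0; otherwise 1 + the largest rank of an element
    return max((rank(e) + 1 for e in oodle), default=0)

empty = frozenset()
assert rank(empty) == 0
assert rank(frozenset({empty})) == 1
assert rank(frozenset({empty, frozenset({empty})})) == 2
```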
 * hell yes! it's vel time 18:55, October 20, 2014 (UTC)

Gödel's first incompleteness theorem and hypercomputation
Recall that Gödel's first incompleteness theorem states that any theory capable of expressing elementary arithmetic, and having rules expressible by a Turing machine, cannot be consistent and complete. What happens if we replace "Turing machine" with "oracle Turing machine," or even ITTMs? it's vel time 03:16, October 5, 2014 (UTC)

All arithmetic truths are decidable by an ITTM in time \(\omega^2\), so the theory \(\text{Th}(\Bbb N)\) of all arithmetic truths, which is consistent and complete, is expressible by an ITTM. LittlePeng9 (talk) 06:32, October 5, 2014 (UTC)

Even better, for any consistent theory T in a countable language whose axioms are expressible by an ITTM, we can find an extension T* of T which is complete and expressible by an ITTM. This is done by induction - let's enumerate all sentences. There are countably many of them, so it's not hard. Then for every sentence \(\varphi_i\) we try to derive it from T. If we find a proof of \(\varphi_i\) in T, we add \(\varphi_i\) to T. Otherwise, we add \(\neg\varphi_i\). In the first case, we easily see that we keep the theory consistent. The second case is not so trivial. But if \(\varphi_i\) were true in all models, then by the completeness theorem it would be provable. So \(\varphi_i\) is false in some model, meaning \(\neg\varphi_i\) holds there, and thus the extended theory is consistent. After we do this for every \(i\), we are left with a theory T* which is consistent (e.g. thanks to the compactness theorem) and complete (because for every \(i\) we added \(\varphi_i\) or \(\neg\varphi_i\)). This can be executed on an ITTM. LittlePeng9 (talk) 06:57, October 5, 2014 (UTC)
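The construction here is the classical Lindenbaum completion, and its finite shadow can be run directly. A toy propositional sketch in Python, where a "theory" is a list of predicates on truth assignments and "provable" means "true in every model of the current theory" (which is what the completeness theorem licenses; it assumes the starting theory is consistent, as in the post):

```python
from itertools import product

def models(theory, atoms):
    # every truth assignment satisfying all formulas in `theory`
    for vals in product([False, True], repeat=len(atoms)):
        m = dict(zip(atoms, vals))
        if all(f(m) for f in theory):
            yield m

def complete(theory, atoms):
    # decide each atom a in turn: add a if it is provable, otherwise add (not a)
    th = list(theory)
    for a in atoms:
        if all(m[a] for m in models(th, atoms)):
            th.append(lambda m, a=a: m[a])
        else:
            th.append(lambda m, a=a: not m[a])
    return th

# start from "p or q"; the completion settles every atom
th = complete([lambda m: m["p"] or m["q"]], ["p", "q"])
assert list(models(th, ["p", "q"])) == [{"p": False, "q": True}]
```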

Are there ordinal hierarchies without fundamental sequences?
Define an ordinal hierarchy as \(F: \omega_1 \mapsto (\mathbb{N} \mapsto \mathbb{N})\) where for all \(\alpha > \beta\), \(F(\alpha)\) eventually outgrows \(F(\beta)\). Define a fundamental sequence system as \(S: \omega_1 \mapsto (\omega_1)^\omega\) (where \((\omega_1)^\omega\) is the set of all increasing \(\omega\)-sequences in \(\omega_1\)) such that for all limit ordinals \(\alpha\) we have \(\sup\{\beta: \beta \in S(\alpha)\} = \alpha\). If we take the axiom of choice, ordinal hierarchies and fundamental sequence systems must exist, so let's scale it back to ZF. Is there a model of ZF where ordinal hierarchies exist but fundamental sequence systems don't? it's vel time 07:21, October 4, 2014 (UTC)

I've shown that it's impossible. I'll give a proof when I'm back home. LittlePeng9 (talk) 07:27, October 4, 2014 (UTC) I take that claim back for now. LittlePeng9 (talk) 07:36, October 4, 2014 (UTC)

Just a remark: we actually need some amount of the axiom of choice for either of these to exist. This is because we cannot prove in ZF alone that there is a subset of \(\Bbb R\) of cardinality \(\omega_1\) (the same applies to \(\Bbb N^\Bbb N\)). An ordinal hierarchy necessarily gives us an injection from \(\omega_1\) to \(\Bbb N^\Bbb N\). For the existence of fundamental sequence systems this is less straightforward, but for every \(\alpha<\omega_1\) we can construct a real number \(r_\alpha\) which codes \(\alpha\), by using the \(S\) function. LittlePeng9 (talk) 12:22, October 4, 2014 (UTC)

Here's a partial answer. The SGH, HH, and FGH appear to have the following property: for every limit ordinal \(\alpha\), there does not exist a function f such that \(f(n) > F_{\alpha[m]}(n)\) for all \(m,n \in \Bbb N\) (here F is the ordinal hierarchy in question). We can turn this into a version without fundamental sequences: for every limit ordinal \(\alpha\) and every function f on \(\Bbb N\), there exists a \(\beta < \alpha\) such that, if \(f(n) > F_{\gamma} (n)\) for all \(n \in \Bbb N\), then \(\gamma < \beta\). We will also assume that each \(F_\alpha\) is weakly increasing. If an ordinal hierarchy F on \(\omega_1\) has these properties, then we can define fundamental sequences for all countable ordinals by letting \(\alpha[n]\) be the supremum of all ordinals \(\beta\) such that \(f(m) > F_{\beta}(m)\) for all \(m \ge n\). By the previous property, \(\alpha[n]\) will be less than \(\alpha\) (if \(\alpha[n] = \alpha\) then by setting \(f(m) = f(n)\) for \(m < n\) we can ensure that f dominates \(F_\alpha\) for a cofinal subset of \(\alpha\)), and the fact that \(F_\alpha\) eventually dominates \(F_\beta\) for all \(\beta < \alpha\) ensures that \(\sup \alpha[n] = \alpha\). So we get fundamental sequences if we have an F with these properties. Deedlit11 (talk) 14:16, October 4, 2014 (UTC)


 * Your first claim seems to be correct for standard hierarchies, because almost always we have that \(F_{\alpha[n]}(2)\), as a function of \(n\), is unbounded. These functions however require the fundamental sequences to be specified, so it doesn't always have to be the case. There are some trivial examples for which it doesn't hold, however: define \(F_i(n)\) to be \(0\) if \(n<i\), and set it as \(i\) otherwise. Then \(f(n)=n+1\) does upper bound all the functions. LittlePeng9 (talk) 14:41, October 4, 2014 (UTC)

Feeding FGH into FGH
I'm a newbie in googology, and I have some questions about FGH and ordinals. FGH looks like this: \(f_\alpha(n)\), where \(\alpha\) can be an integer or an ordinal. What I want to do is to feed it into itself: \(f_{f_\alpha(\omega)}(n)\), but there are some problems with this:

1. How do we define \(f_\alpha(\omega)\) when \(\alpha\) is a limit ordinal? Fundamental sequences are totally useless here.

2. What's the limit of this? Is it the fixed point of \(f_\alpha(\omega)=\alpha\)? If so, the ordinal \(\alpha\) is probably \(\Gamma_0\) I guess, which is not even as powerful as the original FGH!!!

3. \(\Gamma_0\) can be expressed as \(\{\omega,\omega,1,2\}\) in BEAF. It seems like it is possible to reach \(\Gamma_0\) by using \(f_\alpha(\omega)\). What's the conflict between 2 and 3?

—Preceding unsigned comment added by D57799 (talk • contribs)


 * Hey, welcome to the wiki. Feeding FGH back into itself has actually been considered before, though it is a good idea anyway.


 * Defining this is indeed quite difficult, but you can see my definition here. It turns out that, under this definition, it actually reaches \(\vartheta(\Omega_\omega)\). Defining fixed points of this using \(\Omega\), just as happens in \(\vartheta\) or \(\psi\), is actually very, very powerful.


 * The problem for defining ordinals in BEAF is that you need a definition, otherwise things will go wrong. Wythagoras (talk) 06:51, September 28, 2014 (UTC)

Thanks. It's really helpful. I think that probably exceeds any ordinal collapsing function in use by now. D57799 (talk) 07:23, September 28, 2014 (UTC)


 * Wythagoras, I'm afraid your analysis is incorrect.
 * As you say, \(f_\omega(\omega 2) = \sup(f_0 (\omega 2), f_1(\omega 2),f_2(\omega 2) \ldots)\). Since \(f_n(\omega 2) < \varphi(\omega,0)\) for all finite \(n\), it is impossible for \(f_\omega(\omega 2)\) to be greater than \(\varphi(\omega, 0)\), and in fact it is equal to \(\varphi(\omega, 0)\). The same argument holds for all \(f_\omega(\alpha)\) with \(\alpha < \varphi(\omega, 0)\), so we have that for all \(\omega 2 \le \alpha < \varphi(\omega,0)\), \(f_\omega(\alpha) = \varphi(\omega,0)\). More generally, for all infinite \(\alpha\), \(f_\omega(\alpha)\) will be the smallest ordinal of the form \(\varphi(\omega,\beta)\) that is larger than \(\alpha\).
 * For \(f_{\omega+1}(\alpha) = f_{\omega}^{\alpha}(\alpha)\), we start at \(\alpha\) and apply \(f_\omega \alpha\) times, and each time it goes up to the next ordinal of the form \(\varphi(\omega, \beta)\). So we will usually get \(f_{\omega+1}(\alpha) = \varphi(\omega, \alpha)\). (There is a slight difference near epsilon numbers.)
 * If you continue this analysis, you should get that \(f_{\alpha+1}(\beta)\) is about \(\varphi(\alpha, \beta)\), and so for \(\alpha, \beta < \Gamma_0, f_\alpha(\beta) < \Gamma_0\).


 * My version of the hierarchy is almost the same, but the definition is somewhat simpler:


 * \(f_0(\beta) = \beta + 1\)


 * \(f_{\alpha+1}(\beta) = f_\alpha^{\beta}(\beta)\)


 * for limit \(\alpha\), \(f_\alpha(\beta) = \sup_{\gamma < \alpha} \{f_\gamma(\beta)\}\)


 * \(f^{\lambda+1}(\beta) = f(f^{\lambda}(\beta))\)
 * for limit \(\lambda\), \(f^\lambda(\beta) = \sup_{\gamma < \lambda} \{f^\gamma(\beta)\}\)


 * No need to invoke fundamental sequences. Deedlit11 (talk) 07:35, September 28, 2014 (UTC)
 * I shall try to redefine it. Wythagoras (talk) 08:22, September 28, 2014 (UTC)


 * D57799: Actually, ordinal collapsing functions go very far beyond \(\vartheta(\Omega_\omega)\). You can see for example my blog posts "Ordinal Notations I-VI". In fact, OCF's go much further than what I describe, but I don't understand the larger ones very well. Deedlit11 (talk) 17:13, September 28, 2014 (UTC)

I got it wrong about the fixed point of \(f_\alpha(\omega)=\alpha\). I think I should be more familiar with how ordinals work.--D57799 (talk) 10:54, September 29, 2014 (UTC)

What is the exact point where SGH(n) equals FGH(n)?

It seems that \(g_\alpha(\omega)=\alpha\) when \(\alpha\) is an ordinal.

So the fixed point \(\alpha=f_\alpha(\omega)=f_{g_\alpha(\omega)}(\omega)\)

is actually where \(f_\alpha(\omega)=g_\alpha(\omega)\), i.e. the point where SGH(n) equals FGH(n).

The SGH page shows that this will be \(\psi_0(\Omega_\omega)\).

Is that correct? D57799 (talk) 13:07, September 29, 2014 (UTC)

The problem with Deedlit's definition of FGH is that it's designed to work with ordinals - as you can check, for example, using Deedlit's definition \(f_\omega(2)=\omega\), which would never happen with the standard FGH, where the result is always finite. As you point out, in Deedlit's version the meeting point comes very soon, but in the standard finite version it appears much later, at \(\psi(\Omega_\omega)\). I also believe that it isn't a real equality; it just means that the growth rates are comparable. LittlePeng9 (talk) 13:21, September 29, 2014 (UTC)
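To see the finiteness LittlePeng9 mentions, here is a Python sketch of the standard finite-index FGH, using the usual rules \(f_0(n)=n+1\) and \(f_{a+1}(n)=f_a^n(n)\) (finite indices only, so no fundamental sequences are needed, and every output is a natural number):

```python
def f(a, n):
    # fast-growing hierarchy at finite levels:
    # f_0(n) = n + 1,  f_{a+1}(n) = f_a applied n times to n
    if a == 0:
        return n + 1
    result = n
    for _ in range(n):
        result = f(a - 1, result)
    return result
```

The first levels match the usual closed forms: f(1, 3) = 6 (since f_1(n) = 2n) and f(2, 3) = 24 (since f_2(n) = n·2^n); already f(3, n) is tower-like, e.g. f(3, 2) = 2048.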

What is the formal way to compute \(f_3(\omega)\)?

Obviously it's \(f_2^\omega(\omega)\).

But I always get \(\omega^\omega\) instead of \(\varepsilon_0\); I only get \(\varepsilon_0\) at \(f_4(\omega)\).

Besides, \(f_2(\omega)=2^\omega\omega=\omega^2\), right?


 * \(f_0(\alpha) = \alpha+1\)
 * \(f_1(\alpha) = \alpha*2\)
 * \(f_2(\alpha) = \alpha * 2^\alpha\)


 * \(f_2(\omega) = \omega * 2^\omega = \omega^2\)
 * \(f_2(\omega^2) = \omega^2 * 2^{\omega^2} = \omega^2 * (2^\omega)^\omega = \omega^2 * \omega^\omega = \omega^\omega\)
 * \(f_2(\omega^\omega) = \omega^\omega * 2^{\omega^\omega} = \omega^\omega * 2^{\omega * \omega^\omega} = \omega^\omega * (2^\omega)^{\omega^\omega} = \omega^{\omega^\omega}\)
 * \(f_2(\omega^{\omega^\omega}) = \omega^{\omega^\omega} * 2^{\omega^{\omega^\omega}} = \omega^{\omega^\omega}*(2^\omega)^{\omega^{\omega^\omega}} = \omega^{\omega^{\omega^\omega}}\)


 * and so on. You can see that the limit is \(\varepsilon_0\).


 * More generally, \(f_2^\omega(\alpha) = \lim \{\alpha^{\alpha^{\alpha^\cdots}}\}\), which is the smallest epsilon number larger than \(\alpha\).
 * So, \(f_3(\omega \alpha)\) is the \(\alpha\)th epsilon number larger than \(\omega \alpha\), which will usually be \(\varepsilon_\alpha\) except near fixed points of the \(\varepsilon\) function. Deedlit11 (talk) 15:04, September 30, 2014 (UTC)


 * Oh, I see. \(f_2(\omega^2)=(\omega^2)2^{(\omega^2)}\), not just \((\omega^2)^2\). D57799 (talk) 11:00, October 1, 2014 (UTC)

I'm trying to fit that into FGH. I found out that in the all-integer case, it is amazingly weak:

\(f_{f_{f_{...}(n)}(n)}(n) > 2\uparrow^{2\uparrow^{2\uparrow^{...}n}n}n\), which is merely between \(f_{\omega+1}(n)\) and \(f_\omega(n)\), and it is much smaller than \(f_{\Gamma_0}(n)\), which is actually around \(f_{f_{\omega+1}(\omega)}(n)\). It is surely smaller than chained arrows' \(f_{\omega^2}(n)\) and linear BEAF's \(f_{\omega^\omega}(n)\), but I'm quite certain that \(f_{f_{f_{...}(\omega)}(\omega)}(n)\) will grow much faster than \(f_{\omega^\omega}(n)\).

Besides, why was a great part of the comparison between FGH and BEAF deleted? It said that BEAF is "nonfunctional" after \(\varepsilon_0\). Does that mean it's not possible to compare, or something else?

D57799 (talk) 05:23, October 4, 2014 (UTC)


 * For quite a long time it was known that what Bowers has left us from the definition of BEAF is not enough to make sense out of his notation beyond e0. There have been ideas on how to extend this, I believe we had a solid idea up to at least \(\varphi(\omega,0)\), but since there turned out to be multiple ways to even go there, we decided to mark it as "nonfunctional" beyond that point. We might put it back there if we find a single way to define BEAF up to some reasonable point. LittlePeng9 (talk) 05:51, October 4, 2014 (UTC)

Iteror.org?
Has anyone visited the site iteror.org? For example, at http://iteror.org/big/Work/royal/planB.html they claim to devise a notation to beat Bird's H(n). (Not sure if they are talking about his old version or his new version.) But I can't make heads or tails of their notation. What do you guys think? Deedlit11 (talk) 22:03, October 30, 2013 (UTC)
 * The use of generalized regular expressions seems interesting. I don't quite understand everything yet, but I'll be looking over it for a while. FB100Z • talk • contribs 22:35, October 30, 2013 (UTC)
 * Okay, so skimming through the article gives me the impression that Lelieveld has built a system about as powerful as BEAF up to legion arrays. Instead of representing things as arrays, all the manipulations occur syntactically, so you'd write 111,,11,1111 instead of {3,0,2,4}. That explains why the notation is so alien. FB100Z • talk • contribs 23:07, October 30, 2013 (UTC)
 * I am necroing this thread, but I want to know exactly how powerful beatrix is 65.26.80.144 22:30, March 28, 2014 (UTC)
 * I've started a section below so we can collaboratively decode the page. FB100Z • talk • contribs 20:49, March 29, 2014 (UTC)
 * Thanks for this. Unfortunately, I'm not sure how much I can help since the page still confuses the heck out of me. How far do you think the notation goes again? Although I can't grok the notation, it looks like the standard procedure up to \(\varepsilon_0\); however, there is the part about "dual deeps" and "serial deeps". I'm not sure how much that extends the notation, if at all.
 * Also, have you taken a look at the rest of the site? Deedlit11 (talk) 11:40, March 30, 2014 (UTC)
 * The rest of the site also looks interesting; they say they have early evidence for tetration. Also, I don't think BTRIX is really what we're expecting. It seems it reaches something around the LVO and then implements Bird's arrays in a different system. The page also says that they can always beat other notations, with their RepExp notation and such things. Wythagoras (talk) 18:53, April 11, 2014 (UTC)
 * RepExp is, of course, only a small fraction of the page. BTRIX analysis is coming soon. As for the strength of BTRIX, it really just looks like a slightly modified clone of BEAF; Lelieveld takes a lot of time to boast about how his* notation outruns the latter, but I'm skeptical about how substantial his advantage is.
 * Dunno about the rest of the site, but if it's all as cryptic as this page, I'm not sure if I want to work through it!
 * Gerard is a male name, right? FB100Z • talk • contribs 19:13, March 30, 2014 (UTC)
 * Yes, it is. Wythagoras (talk) 19:18, March 30, 2014 (UTC)

Huh, why do the subscripts keep automatically breaking with every edit? FB100Z &bull; talk &bull; contribs 19:22, March 30, 2014 (UTC)

Aw, what happened to this? King2218 (talk) 15:05, September 21, 2014 (UTC)

RepExp
Lelieveld has created a system called "RepExp" (short for "repeated expressions"), which he calls a meta-notation for describing recursive notations. RepExp differs from existing notations in that it operates on syntactic expressions rather than argument spaces.

No formal definition is given for RepExp, but the following examples are offered:

 * <tt>x.. :3 = x.. x:3 = x{3} = xxx</tt>
 * <tt>F(..m..) :3: = F(F(F(m)))</tt>
 * <tt>A.n1,..Z :2 = A(n1,){2}Z = An1,n1,Z</tt>
 * <tt>aaaa..b..b.ci.. aa:0 2: :4 = aabbbc0c1c2c3</tt>
 * <tt>A.Li..,{5}..Rj.Z :3: = AL0L1L2,,,,,R2R1R0Z</tt>

From the first and last examples, we see that <tt>a{n}</tt> represents <tt>a</tt> repeated n times: <tt>x{5} = xxxxx</tt>. The third example hints that this curly-brace notation by default repeats only one character, and to repeat multiple characters we use parentheses <tt>(ab){n}</tt>: <tt>(abc){3} = abcabcabc</tt>. This is no surprise to people familiar with regular expressions, which Lelieveld cites as inspiration.
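Under that reading, the brace shorthand can be mechanized in a few lines of Python. This is my own sketch of the guessed semantics (the function name and regex are mine, not Lelieveld's), handling only non-nested groups:

```python
import re

def expand_braces(s):
    # expand x{n} (single character) and (abc){n} (parenthesized group)
    # into literal repetitions; nested groups are not supported
    pattern = re.compile(r'\((?P<grp>[^()]*)\)\{(?P<n>\d+)\}'
                         r'|(?P<ch>[^(){}])\{(?P<m>\d+)\}')
    def repl(m):
        if m.group('grp') is not None:
            return m.group('grp') * int(m.group('n'))
        return m.group('ch') * int(m.group('m'))
    return pattern.sub(repl, s)

print(expand_braces('x{5}'))      # xxxxx
print(expand_braces('(abc){3}'))  # abcabcabc
```

This reproduces both worked identities from the paragraph above.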

But this notation is not as important as the dot notation. Dot notation strings appear to have two parts: a main body and a sort of appendix that indicates the number of repetitions. The main body contains single dots and double dots that, in some cryptic way, delimit the expression or expressions to be repeated.

Here is a pseudocode formalization of what I've figured out:


 * The main body consists of a string with single and double dots interspersed.
 * The appendix consists of one or more words of the form ":n", "n:", "A:n", "n:A", or ":n:", where A is a non-empty string and n is a nonnegative integer.
 * A word of the form ":n:" (tandem repetition) is shorthand for ":n n:".
 * After having expanded all the tandem repetition words, we match up each appendix word to a successive double-dot. For a valid expression, the number of words and dot-pairs must match (again, with each tandem repetition counting as two words).
 * For each appendix word:
 * If it is ":n", start at the corresponding double-dot and walk backwards in the string until you reach a single dot or the beginning of the string. Take the string that you've walked over, trim off the delimiting double-dot, and repeat that string n times.
 * If it is "n:", start at the corresponding double-dot and walk forwards in the string until you reach a single dot or the end of the string. Take the string that you've walked over, trim off the delimiting double-dot, and repeat that string n times.
 * If it is "A:n", there must be an occurrence of string A immediately before the corresponding double-dot. (If not, the expression is invalid.) After removing the double-dots, repeat that occurrence of A n times.
 * If it is "n:A", there must be an occurrence of string A immediately after the corresponding double-dot. (If not, the expression is invalid.) After removing the double-dots, repeat that occurrence of A n times.
 * Remove any excess single dots.

For example, let's try <tt>aaaa..b..b.c.. aa:0 2: :4</tt>, which is almost the same as the fourth example. First we pair each appendix word with each pair of double-dots:

aaaa..b..b.c..
    ^^ ^^   ^^
 aa:0  2:   :4

First we look at <tt>aa:0</tt>. The string "aa" does appear right before the double-dot, so we take that and repeat it zero times. In other words, we delete it:

aab..b.c..
   ^^   ^^
   2:   :4

Note that we deleted only the second "aa", and the first remains untouched. Next is <tt>2:</tt>, which selects up to the next single-dot, selecting the second "b". Repeating that twice:

aabbb.c..
       ^^
       :4

<tt>:4</tt> walks backwards to the preceding single-dot, which selects the <tt>c</tt> and repeats it four times:

aabbb.cccc

Finally, we remove the extraneous single-dot to get:

aabbbcccc
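The walkthrough above can be run mechanically. This is my own reconstruction of the pseudocode, not anything from iteror.org: it handles only the ":n", "n:" and "A:n" appendix forms, without subscripts, and assumes appendix words pair with double-dots left to right.

```python
def expand(body, words):
    """Apply RepExp appendix words to a dot-notation body (sketch)."""
    for w in words:
        i = body.find('..')                    # next unconsumed double-dot
        left, right = w.split(':', 1)
        if left.isdigit() and right == '':     # "n:": walk forwards
            n, j = int(left), i + 2
            while j < len(body) and body[j] != '.':
                j += 1
            s = body[i + 2:j]
            body = body[:i] + s * n + body[j:]
        elif left == '' and right.isdigit():   # ":n": walk backwards
            n, j = int(right), i
            while j > 0 and body[j - 1] != '.':
                j -= 1
            s = body[j:i]
            body = body[:j] + s * n + body[i + 2:]
        else:                                  # "A:n": repeat A before the dots
            a, n = left, int(right)
            assert body[i - len(a):i] == a, 'invalid expression'
            body = body[:i - len(a)] + a * n + body[i + 2:]
    return body.replace('.', '')               # drop leftover single dots

print(expand('aaaa..b..b.c..', ['aa:0', '2:', ':4']))  # aabbbcccc
```

The worked example and the first RepExp identity both check out under this reading.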

RepExp + subscripts
The last two examples contain a subscript notation not covered by the above definition. So let's fix that:


 * The main body consists of a string with single and double dots, and zero or more subscripts which may be letters or numbers.
 * The appendix consists of one or more words of the form ":n", "n:", "A:n", "n:A", or ":n:", where A is a non-empty string and n is a nonnegative integer.
 * A word of the form ":n:" (tandem repetition) is shorthand for ":n n:".
 * After having expanded all the tandem repetition words, we match up each appendix word to a successive double-dot. For a valid expression, the number of words and dot-pairs must match (again, with each tandem repetition counting as two words).
 * For a string S, a nonnegative integer n, and a direction d, define repeat(S, n, d) as follows:
 * If n = 0, return an empty string.
 * In S, replace any letter subscript with a subscript n - 1 to form a string T.
 * If d = left, return repeat(S, n - 1, d) + T.
 * If d = right, return T + repeat(S, n - 1, d).
 * For each appendix word:
 * If it is ":n", start at the corresponding double-dot and walk backwards in the string until you reach a single dot or the beginning of the string. Take the string that you've walked over and call it S. Trim off the delimiting double-dot, and replace S with repeat(S, n, left).
 * If it is "n:", start at the corresponding double-dot and walk forwards in the string until you reach a single dot or the end of the string. Take the string that you've walked over and call it S. Trim off the delimiting double-dot, and replace S with repeat(S, n, right).
 * If it is "A:n", there must be an occurrence of string A immediately before the corresponding double-dot. (If not, the expression is invalid.) Call that occurrence of A S. Trim off the delimiting double-dot, and replace S with repeat(S, n, left).
 * If it is "n:A", there must be an occurrence of string A immediately after the corresponding double-dot. (If not, the expression is invalid.) Call that occurrence of A S. Trim off the delimiting double-dot, and replace S with repeat(S, n, right).
 * Remove any excess single dots.
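The repeat(S, n, d) helper above can be sketched directly. Letter subscripts are encoded here as <tt>_x</tt>, a pure ASCII convention of mine (the original uses real subscripts); each copy receives the numeric subscript n - 1 as the recursion counts down:

```python
import re

def repeat(s, n, d):
    """repeat(S, n, d) from the formalization: n copies of s, with any
    letter subscript ('_x') replaced by the copy's numeric subscript."""
    if n == 0:
        return ''
    t = re.sub(r'_[a-z]', '_' + str(n - 1), s)  # letter subscript -> n-1
    if d == 'left':
        return repeat(s, n - 1, d) + t           # ascending subscripts
    return t + repeat(s, n - 1, d)               # descending subscripts

print(repeat('c_i', 4, 'left'))   # c_0c_1c_2c_3
print(repeat('R_j', 3, 'right'))  # R_2R_1R_0
```

The two calls match the subscript patterns c0c1c2c3 and R2R1R0 seen in the fourth and fifth examples.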

This notation is quite complicated for something explained in two short paragraphs and five examples. However, Lelieveld's examples are comprehensive enough to understand almost all of his uses of the notation throughout the rest of the page. You actually don't need to work through this definition to get it, but I've written it up because formalize formalize formalize.

Although I was skeptical at first, I find RepExp actually quite a nice abbreviation after having worked it out. <tt>f(2, f(2, f(2, f(2, 3))))</tt> can be nicely put as <tt>f(2,..3..) :4:</tt>, for example, and I can't actually think of a more succinct way of writing it!
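Reading <tt>:k:</tt> as k-fold tandem repetition, the abbreviation unfolds mechanically. A toy illustration (my own helper, not RepExp proper):

```python
def tandem(prefix, core, suffix, k):
    # "prefix..core..suffix :k:" -- k copies of prefix on the left and
    # k copies of suffix on the right, wrapped around the core
    return prefix * k + core + suffix * k

print(tandem('f(2,', '3', ')', 4))  # f(2,f(2,f(2,f(2,3))))
```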

BTRIX: overview
It is important to note that BTRIX is a syntactically based system. The BTRIX incarnation of a <tt>{3, 4, 5, 6}</tt> linear array is


 * <tt>111,1111,11111,111111</tt>

I don't know how much of an advantage this gives, although I can see how it can create very clear rules when we get to complex nested dimensional things.
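The translation from a Bowers-style linear array to its BTRIX string is just unary encoding. A one-liner (<tt>to_btrix</tt> is my name for it):

```python
def to_btrix(entries):
    # each entry n becomes a run of n 1's; entries are joined by commas
    return ','.join('1' * n for n in entries)

print(to_btrix([3, 4, 5, 6]))  # 111,1111,11111,111111
```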

Lelieveld uses A = B to mean that string A reduces to string B in the BTRIX system, and A ≡ B to mean that string A and string B are literally the same. I have mimicked this fairly sensible notation here as well.

BTRIX: Linear arrays
In BTRIX, we have actual rules to deal with, thankfully. Here's linear array notation:


 * <tt>0.0: BTRIX(X) ≡ X</tt>
 * <tt>0.1: BTRIX(a) ≡ a = a</tt>
 * <tt>1.0: a, = [empty string]</tt>
 * <tt>1.1: a,b = b</tt>
 * <tt>2.1: a,Y, = a,Y</tt>
 * <tt>3.0: a,b,1c = a,ab,c</tt>
 * <tt>3.1: a,b,1c,d = a,ab,c,d</tt>
 * <tt>4.0: a,,,1d = a,a,,1d</tt>
 * <tt>5.0: a,b1,,1d = a,,b1,1d</tt>
 * <tt>3.2: a,b,1Z = a,ab,Z</tt>
 * <tt>4.1: a,,{k1}1Z = a,a,{k1}Z</tt>
 * <tt>5.1: a,b1,{k1},1Z = a,,{k1}1b,Z</tt>

where lowercase letters represent a string of 1's, and uppercase letters represent strings of 1s and commas.

It appears that the rules are structured so that, say, 3.2 is a generalization of 3.1, which is a generalization of 3.0. Thus we only need the following:


 * <tt>0.1: BTRIX(a) ≡ a = a</tt>
 * <tt>1.1: a,b = b</tt>
 * <tt>2.1: a,Y, = a,Y</tt>
 * <tt>3.2: a,b,1Z = a,ab,Z</tt> ("motor")
 * <tt>4.1: a,,{k1}1Z = a,a,{k1}Z</tt> ("put")
 * <tt>5.1: a,b1,{k1},1Z = a,,{k1}1b,Z</tt> ("upload")

From this, we can gather that BTRIX (so far) is a linear array notation with the following rules, expressed in Bowers-like form:


 * Zeros are default.
 * Let b (base) be the value of the first entry, and p (prime) be the value of the second entry. Define the pilot as the first nonzero entry after the prime, and the copilot as the entry before that.
 * If there is no pilot, set the base to p and the pilot to 0. Return.
 * If the pilot is the third entry, decrement it and add b to the prime. Return.
 * If the prime is 0, decrement the pilot and add b to the prime. Return.
 * Otherwise, decrement the pilot, set the copilot to p, and set the prime to 0. Return.

How powerful is this? Linear BTRIX emulates the hyperoperators exactly, as noted by the author. Up to here the power is \(f_\omega\).
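The Bowers-like rules above can be executed directly. This is my own reading of them as an evaluator over integer entries (rule labels in the comments refer to the reduced ruleset); it is a sketch I have only checked on small cases, not a verified BTRIX implementation:

```python
def btrix_linear(entries):
    """Evaluate a linear BTRIX-style array by the plain-English rules."""
    e = list(entries)
    if len(e) < 2:
        return e[0] if e else 0          # 0.1: BTRIX(a) = a
    while True:
        e += [0, 0]                      # zeros are default
        b, p = e[0], e[1]
        pilot = next((i for i in range(2, len(e)) if e[i] != 0), None)
        if pilot is None:
            return p                     # 1.1: a,b = b
        if pilot == 2:                   # 3.2 "motor": a,b,1Z = a,ab,Z
            e[2] -= 1
            e[1] += b
        elif p == 0:                     # 4.1 "put": a,,{k1}1Z = a,a,{k1}Z
            e[pilot] -= 1
            e[1] = b
        else:                            # 5.1 "upload": copilot gets the prime
            e[pilot] -= 1
            e[pilot - 1] = p
            e[1] = 0
        while e and e[-1] == 0:          # 2.1: trailing empty entries vanish
            e.pop()

print(btrix_linear([2, 3, 1]))  # 5 (a,b,1 reduces to a+b)
```

For instance, the third entry repeatedly pours b into the prime, so <tt>a,b,c</tt> reduces to b + a·c, and deeper entries iterate this in Ackermann fashion.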

BTRIX: Dimensional arrays
The BTRIX 1-dimensional separator is denoted as <tt>,[1]</tt>, the 2-dimensional separator as <tt>,[11]</tt>, etc. It is understood that <tt>,</tt> is a shorthand for <tt>,[]</tt>.


 * <tt>6.0: a,b,[1]1Z = a,a,{b}1,[1]Z</tt>
 * <tt>6.1: a,b1.,[1]..1Z :k1 = a,a.,[1]..,{b1}1,[1]Z :k</tt>
 * <tt>2.2: a,Y,[s] = a,Y</tt>
 * <tt>3.2: a,b,1Z = a,ab,Z</tt>
 * <tt>4.2: a,.,[si]..1Z :k1 = a,a.,[si]..Z :k1</tt>
 * <tt>5.2: a,b1.,[si]..,1Z :k1 = a,.,[si]..b1,Z :k1</tt>
 * <tt>6.2: a,b1$,[t1]1Z = a,a$.,[t]..1,[t1]Z :b1 where $ ≡ ,[si].. :k by 8. {sk≥t1}</tt>
 * <tt>7.1: ,[] ≡ ,</tt>
 * <tt>8.0: ,[s<t],[t] ≡ ,[t]</tt>

So the current ruleset is:


 * <tt>0.1: BTRIX(a) ≡ a = a</tt>
 * <tt>1.1: a,b = b</tt>
 * <tt>2.2: a,Y,[s] = a,Y</tt>
 * <tt>3.2: a,b,1Z = a,ab,Z</tt>
 * <tt>4.2: a,.,[si]..1Z :k1 = a,a.,[si]..Z :k1</tt>
 * <tt>5.2: a,b1.,[si]..,1Z :k1 = a,.,[si]..b1,Z :k1</tt>
 * <tt>6.2: a,b1$,[t1]1Z = a,a$.,[t]..1,[t1]Z :b1 where $ ≡ ,[si].. :k by 8. {sk≥t1}</tt>
 * <tt>7.1: ,[] ≡ ,</tt>
 * <tt>8.0: ,[s<t],[t] ≡ ,[t]</tt>


 * That's awful. Does it reach the level of $$\psi(\psi_I(0))$$? Ikosarakt1 (talk ^ contribs) 14:28, March 31, 2014 (UTC)
 * It's not as bad as it looks. No idea about the strength of the entire function, and I've only covered dimensional arrays. you're.so.pretty! 06:22, April 11, 2014 (UTC)

In plain English:


 * Zeros are default.
 * Let b (base) be the value of the first entry, and p (prime) be the value of the second entry. Define the pilot as the first nonzero entry after the prime, and the copilot as the entry before that.
 * If there is no pilot, set the base to p and the pilot to 0. Return.
 * If the pilot is the third entry, decrement it and add b to the prime. Return.
 * If the prime is 0, decrement the pilot and add b to the prime. Return.
 * If there is a copilot (that is, the pilot does not sit at the beginning of a row), decrement the pilot, set the copilot to p, and set the prime to 0. Return.
 * Otherwise...
 * Decrement the pilot.
 * Let d denote the dimension of the smallest previous structure to the pilot. (d = 1 if the pilot sits on a row other than the first row of a plane, d = 2 if the pilot sits on a plane other than the first plane in a realm, etc.)
 * uh, I don't know how to do this.

Chihiro numbers
My article on them was deleted due to a sourcing problem. I admit there may not have been enough sources; I was pretty busy that evening and didn't have time to add everything I wanted (including some sources). But there's a bigger problem here.

I'll rehash the history of these numbers: first, some guy, apparently suspected to be from Minnesota, created a Wikipedia article about them, attributing their invention to some apparently made-up Japanese guy. The article got nominated for DYK, and made it all the way to a main page entry. Then, a few days later, someone finally noticed that many of the references in the article seemed to be fake, and it got deleted as a blatant hoax (without even a deletion discussion). This all happened in late April this year, by the way.

Both over the few days the article existed, and apparently later, it had been disseminated over various places, both as a hoax example and as fact; I personally first encountered the name on TV Tropes (where it was a supposedly factual entry). This Reddit discussion seems to treat the name as fact, and it was started in September.

The question: what is a good primary source for such a thing? You seem to blanket-ban Wikipedia as a primary source - which leaves hardly any for this which was first and foremost a Wikipedia hoax; perhaps the best option I could find was the very Reddit discussion I linked above, and I don't really see what makes Wikipedia any worse than that, honestly. (Also, the original article proceeded with a very interesting section on "Chihiro factorials", which had not been copied anywhere I could find, so there won't be any sources for that part anyway.)

If your concern was permanency... um, I did not even link to the Wikipedia article itself, but to an Internet Archive copy of it (besides, the actual article was long since deleted, naturally). If that's not permanent enough, I don't know what is, really.

Any further ideas? Ideally, I'd prefer the article restored, but if you can explain what sources I should've used (other than the Reddit discussion), maybe it could be recreated as something more compliant with the rules :-)

--Январь Первомайский (talk) 10:35, September 19, 2014 (UTC)


 * I've restored the page, nice article. Wythagoras (talk) 12:02, September 19, 2014 (UTC)

On oracles and admissibles
For countless months (but countable, there's about 15 of them) we all have been considering higher-order busy beaver functions \(\Sigma_k(n)\) to be on par with the FGH extended, in some uniform way no one ever cared about, to the admissible ordinal \(\omega^\text{CK}_k\). I believe I was actually the first of us who started stating that. The argument I gave was that with a halting oracle we can compute the ordinal \(\omega^\text{CK}_1\), by trying out all machines, checking whether they code ordinals, and then adding up all the ordinals; then we can sort of recurse through this ordinal and reach anywhere below \(\omega^\text{CK}_2\) (for higher orders it's just induction).

However, some time ago, somewhere on this wiki, Deedlit told me that checking whether a machine computes an ordinal is actually a hard problem. A really hard problem. To be exact, it's a \(\Pi^1_1\)-complete problem, so it's at least as hard as any other \(\Pi^1_1\) problem, and thus far more complicated than any hyperarithmetical or arithmetical problem, including the halting problem for many oracles. This means that it cannot be solved on a halting-oracle TM.

After a lot of time to rethink this, I came to the conclusion that it's actually impossible to compute the CK ordinal using any reasonable oracle machine, by the following argument: we all know Kleene's O. Kleene's O can be interpreted as a tree of height \(\omega^\text{CK}_1\), so given some way of working with \(\omega^\text{CK}_1\), we could probably decide whether a number is part of Kleene's O, which is again a \(\Pi^1_1\)-complete problem, so that's impossible.

I'm still not sure about this, but if my rethought intuition is correct, not even \(\Sigma_k(n)\) reaches \(f_{\omega^\text{CK}_1}(n)\). I wanted to ask you guys for your thoughts after reading this wall of text. LittlePeng9 (talk) 20:58, July 6, 2014 (UTC)


 * Oh, and one more thing - hyperarithmetical sets are equivalent to oracle hierarchy iterated transfinitely \(\omega^\text{CK}_1\) times, so this might be an argument against strength of \(\Sigma_k(n)\) too. LittlePeng9 (talk) 21:00, July 6, 2014 (UTC)
 * If you claim that \(\Sigma_k(n)\) doesn't reach \(f_{\omega^\text{CK}_1}(n)\), you claim that \(\Sigma(n)\) doesn't reach \(f_{\omega^\text{CK}_1}(n)\). The problem is that you now claim that \(\Sigma(n)\) is upper bounded by some recursive ordinal. Wythagoras (talk) 06:04, July 7, 2014 (UTC)
 * No, I now claim that \(\Sigma(n)\) is strictly above all \(f_\alpha(n)\) for \(\alpha\) recursive, but still below \(f_{\omega^\text{CK}_1}(n)\). Neither the FGH nor even the HH or SGH exhausts all possible growth rates. For example, where in the SGH would you put \(n\log n\)? It's significantly slower than \(g_{\omega^2}(n)=n^2\) but still outgrows every linear function \(kn=g_{\omega k}(n)\). So there are functions which fit in between SGH levels. So let me repeat: I claim something similar happens for \(\Sigma(n)\) and \(\omega_1^\text{CK}\). LittlePeng9 (talk) 06:16, July 7, 2014 (UTC)
 * A similar thing happens for the Goodstein function? Wythagoras (talk) 06:47, July 7, 2014 (UTC)
 * As far as I know, Goodstein's function is unarguably at level of \(\varepsilon_0\). LittlePeng9 (talk) 07:06, July 7, 2014 (UTC)
 * Just realized this depends on definition of rate of growth, but my point still holds. LittlePeng9 (talk) 07:52, July 7, 2014 (UTC)
 * Peng, $$H_{\omega^2}(n) = f_2(n) = 2^n*n$$. Hardy hierarchy has the relationship with FGH as $$H_{\omega^\alpha}(n) = f_\alpha(n)$$, not turning $$\omega$$s to n's freely. Ikosarakt1 (talk ^ contribs) 19:47, July 7, 2014 (UTC)
 * I meant SGH there. My bad! LittlePeng9 (talk) 19:59, July 7, 2014 (UTC)


 * I believe \(\Sigma(n)\) grows at least as fast as \(f_{\omega^\text{CK}_1}(n)\). I'm assuming that the fundamental sequence of \(\omega^\text{CK}_1\) is the largest ordinal representable by a Turing machine using \(n\) states or less, or something similar to that. In that case, we can define the FGH up to \(\omega^\text{CK}_1 [n]\) using something comparable to \(n\) states. (certainly less than \(n^2\), I would think) So we will have \(\Sigma(n^2) > f_{\omega^\text{CK}_1}(n)\). Deedlit11 (talk) 18:38, July 7, 2014 (UTC)
 * I'd rather assume fundamental sequences from Kleene's O: \(\alpha[n]=\sup\{\mathcal{O}(m):m\leq n\}\). I don't really know why, but I prefer that definition. LittlePeng9 (talk) 18:50, July 7, 2014 (UTC)
 * Okay, that's fine. That will make \(\omega^\text{CK}_1 [n]\) even slower growing, as you will need the notations to go to at least \(3 \cdot 5^{(4n+4)^{2n}}\) before we reach Turing machines with more than \(n\) states. Deedlit11 (talk) 19:12, July 7, 2014 (UTC)
 * But for Kleene's O it might happen that a machine with index, say, 100, will compute some specific set of number indices which further represent in O ordinals whose upper bound is some enormous ordinal \(\alpha\), so \(\mathcal{O}(100)=\alpha\), while TMs with up to \(3\cdot 5^{100}\) states will not be able to define a well-ordering with that order type (this is what I assume your definition used). Hence it isn't that obvious to me that your definition of FS is any stronger than mine. (My use of the number 100 is a huge underestimate for usual TM orderings, but the point might hold if we replace it with, say, \(10^{100}\) anyway. It's unlikely, but not impossible.) LittlePeng9 (talk) 19:22, July 7, 2014 (UTC)
 * Hmm, I see what you are saying. So we don't have a proof that \(\Sigma(n)\) grows as fast as \(f_{\omega^\text{CK}_1}(n)\). Your conjecture clashes with my intuition, though - \(f_{\omega^\text{CK}_1}(n)\) resolves immediately to recursive functions, so for it to vastly outgrow \(\Sigma(n)\) would require a ridiculously fast growing fundamental sequence. Your fundamental sequence doesn't seem all that fast growing to me. Deedlit11 (talk) 19:40, July 7, 2014 (UTC)
 * Why can't we come up with a metric for comparing \(f_{\omega_1^\text{CK}}(n)\) and \(\Sigma(n)\) so that the growth rate of the fundamental sequence doesn't matter, as long as it has the right supremum? you're.so.pretty! 22:50, July 7, 2014 (UTC)


 * Well, the FGH certainly depends on fundamental sequences or norms or some sort of choice that we have to keep making as we get to higher and higher ordinals, I don't see a way around that. It would be great if we could define a canonical choice of fundamental sequences, or at least some notion of "natural" fundamental sequences - this has been an open problem for proof theory for quite some time, and no one has come up with a good solution, and I think the consensus is that there isn't any. Now for \(f_{\omega^\text{CK}_1}(n)\) we would like some notion of "first" nonrecursive function, but that doesn't seem to have a canonical definition either. Deedlit11 (talk) 00:55, July 8, 2014 (UTC)

Can anyone help me about Taranovsky's notation?
See here for the notation. I feel puzzled by the definition of "n-built from below from b".

a is n+1-built from below from b iff the standard representation of a does not use ordinals above a except in the scope of an ordinal n-built from below from b.

So I may ask some questions; those questions block me from understanding the notation. Can anyone help me? hyp$hyp?cos&cos (talk) 02:03, July 22, 2014 (UTC)
 * 1) The standard representation of a doesn't use something: what does "use" mean here?
 * In my opinion, if a=C(c,d) then a uses c and d, but what if c=C(e,f) (i.e. a=C(C(e,f),d)), does a use e and f?
 * 2) What are "ordinals in the scope of an ordinal n-built from below from b"?
 * Are "a is an ordinal in the scope of an ordinal n-built from below from b" and "a is an ordinal n-built from below from b" the same thing? Or does "a is an ordinal in the scope of an ordinal n-built from below from b" mean that there's an ordinal c that's n-built from below from b, with c using a? Or something else?


 * Yes, Taranovsky is definitely a little vague here.
 * I believe "use" means it appears somewhere in the standard representation, so in a = C(C(e,f),d), a uses e and f.
 * I believe he means the same thing for "in the scope of". So if a = C(C(e,f),d), then d, e, and f are "in the scope of" a.
 * So yes, I believe "a is an ordinal in the scope of an ordinal n-built from below from b" means there's an ordinal c using a that is n-built from below from b. Deedlit11 (talk) 04:20, July 22, 2014 (UTC)
 * Wait, if I follow that, it'll go wrong. Think about the simplest n=1 system as follows:
 * \(\Omega_1\) is 1-built from below from 0, so \(C(\Omega_1,0)\) is standard form.
 * Then \(C(\Omega_1,C(\Omega_1,0))\) is standard form. It uses \(C(\Omega_1,0)\), 0, and \(\Omega_1\). The former two are smaller than it, and \(\Omega_1\) is "in the scope of" \(C(\Omega_1,0)\), which is 0-built from below from \(C(\Omega_1,0)\) itself. So \(C(\Omega_1,C(\Omega_1,0))\) is 1-built from below from \(C(\Omega_1,0)\).
 * So \(C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0))\) is standard form.
 * Then \(C(C(\Omega_1,C(\Omega_1,0)),C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0)))\) is standard form. It uses 0, \(C(\Omega_1,0)\), \(C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0))\), \(\Omega_1\) and \(C(\Omega_1,C(\Omega_1,0))\). The former three are smaller than it, and the latter two are "in the scope of" \(C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0))\), which is 0-built from below from \(C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0))\) itself. So \(C(C(\Omega_1,C(\Omega_1,0)),C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0)))\) is 1-built from below from \(C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0))\).
 * So \(C(C(C(\Omega_1,C(\Omega_1,0)),C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0))),C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0)))\) is standard form.
 * etc.
 * Then these things are standard form: \(C(C(0,\Omega_1),0)\), \(C(C(0,C(\Omega_1,C(\Omega_1,0))),C(\Omega_1,0))\), \(C(C(0,C(C(\Omega_1,C(\Omega_1,0)),C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0)))),C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0)))\), ... if they're written in postfix form, we get
 * 0Ω10CC > 0Ω1C0Ω1CΩ1C0CC > 0Ω1C0Ω1CΩ1CC0Ω1C0Ω1CΩ1CC0Ω1CΩ1CC0CC > ... That's an infinite decreasing sequence, and there is no infinite decreasing sequence of ordinals! hyp$hyp?cos&cos (talk) 13:09, July 22, 2014 (UTC)


 * Generally speaking, \(\alpha_1=\Omega_1\), \(\beta_1=0\), \(\alpha_{k+1}=C(\alpha_k,C(\alpha_k,\beta_k))\) and \(\beta_{k+1}=C(\alpha_k,\beta_k)\). Then the decreasing sequence is \(C(C(0,\alpha_n),\beta_n)\). hyp$hyp?cos&cos (talk) 23:53, July 22, 2014 (UTC)


 * Hm, I don't see the pattern of your infinite decreasing sequence. Could you give more description?
 * The alternative is that "in the scope of" means "is one of the variables of". So if a = C(C(e,f),d), then d and C(e,f) are "in the scope of" a, but e and f are not. Does that fix the problem? Deedlit11 (talk) 21:37, July 22, 2014 (UTC)
 * That may still be wrong. If I follow what you say, \(\Omega_1\) is still in the scope of \(C(\Omega_1,0)\), then \(C(\Omega_1,C(\Omega_1,0))\) is 1-built from below from \(C(\Omega_1,0)\) too, and \(C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0))\) is standard form. It must be larger than \(C(\Omega_1,0)=\varepsilon_0\). However, if we run Taranovsky's Python program here, \(C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0))=C(\Omega_1,0)=\varepsilon_0\). hyp$hyp?cos&cos (talk) 23:53, July 22, 2014 (UTC)


 * We can use the Python program to tell us exactly what he means. The relevant functions seem to be the following:

def IsBuiltFromBelow(a, b, n, d=0):
    """returns True iff a is n-built from below from b with counterexamples
    required to be >=d instead of just a"""
    if CompareStd(a, b) <= 0:
        return True
    if isinstance(a, str):
        return n >= 1 or CompareStd(a, d) == -1
    if CompareStd(a, d) == -1:
        return IsBuiltFromBelow(a[1], b, n, d) and IsBuiltFromBelow(a[2], b, n, d)
    elif n == 0:
        return False
    else:
        return IsBuiltFromBelow(a[1], b, n-1, a) and IsBuiltFromBelow(a[2], b, n-1, a)
 * 1) for the function to do what we want, a and b should be standard for the n-th ordinal notation system

def IsStandard(a, n=0):
    """return True if a is in the standard form; if n > 0 checks for
    standard form in the nth notation system"""
    if a == 0:
        return True  # note: perhaps we should disallow floating point 0.0
    if isinstance(a, str):
        return len(a) > 0 and (n == 0 or len(a) == n) and a == 'W'*len(a)
    if not isinstance(a, tuple) or len(a) != 3:
        return False
    if not isinstance(a[0], int) and not isinstance(a[0], type_long) or a[0] < 1:
        return False
    if not IsStandard(a[1], n) or not IsStandard(a[2], n):
        return False
    if isinstance(a[1], str) and a[1] != 'W' and a[2] == 0:
        return False
    if isinstance(a[2], tuple) and CompareStd(a[1], a[2][1]) >= 0:
        return False
    if n == 0:
        if isinstance(a[2], str) and CompareStd(a[1], 'W'*(len(a[2])+1)) > 0:
            return False
        m = max(MaxOmegaUsed(a), 1)
        return IsBuiltFromBelow(ChangeOmegaN(a[1], m), ChangeOmegaN(a[2], m), m)
    else:
        return IsBuiltFromBelow(a[1], a[2], n)
 * 1) Return False if a has invalid syntax


 * (How do I put code in a window again?)
 * This code should tell us what is going on. BTW how can I run Python? (I'm using Linux, but I can boot Windows if necessary.) Deedlit11 (talk) 00:59, July 23, 2014 (UTC)
 * To run Python, first install Python from here, and save the text as a .py file; then you can run it. hyp$hyp?cos&cos (talk) 01:22, July 23, 2014 (UTC)

But what about \(\Omega_n\)? In the page,
 * For b < \(\Omega_n\), clearly \(\Omega_n\) is 1-built from below from b but not 0-built from below from b.

Is \(\Omega_n\) 2-built from below from b? Is \(\Omega_n\) n-built from below from b? And what about "m-built from below from b" with m>n, or m<n?

We say that \(\Omega_n\) is a "constant" and it's already in standard form. Does it use anything? If not, then \(\Omega_n\) is m-built from below from b for any b and any m>0. hyp$hyp?cos&cos (talk) 10:25, July 22, 2014 (UTC)


 * Yes, \(\Omega_n\) is m-built from below from b for any b and any m > 0. Deedlit11 (talk) 20:56, July 22, 2014 (UTC)

TL;DR, but I'd recommend contacting Taranovsky if you run into ambiguity or confusion. I'm sure he'd be fine with feedback and questions. you're.so.pretty! 00:42, July 23, 2014 (UTC)


 * Okay, I've found a discrepancy. According to Taranovsky's rules for C(a,b) being in standard form, if b is of the form C(c,d) then we must have a <= c. But in the program, it seems to be required that a < c. That accounts for why \(C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0))\) is not in standard form. Deedlit11 (talk) 03:42, July 23, 2014 (UTC)
 * If C(a,C(a,...C(a,b)...)) with k a's is standard form in normal definition (not in py program), it'll be converted to C(k,a,b) in py program. In py program, a=C(a[0],a[1],a[2]) means a=C(a[1],C(a[1],...C(a[1],a[2])...)) with a[0] a[1]'s in normal definition, I guess.
 * And the problem is still in the definition of "n-built from below from b": if we follow the definition in the py program, \(C(\Omega_1,C(\Omega_1,0))\) (its standard form in the py program is C(2,W,0)) is not 1-built from below from \(C(\Omega_1,0)\). hyp$hyp?cos&cos (talk) 04:26, July 23, 2014 (UTC)


 * Yes, the definitions are clearly different. Can you glean the definition in the Python program from the code? Deedlit11 (talk) 04:59, July 23, 2014 (UTC)

In the py program, he avoids the "use" and "in the scope of". Instead, he uses a 4-argument function rather than a 3-argument one. We say that a is n-built from below from (b,d) iff at least one of the conditions below holds. Here l(a) and r(a) are defined as follows: if a is written as C(e,C(e,...C(e,f)...)), then l(a) = e and r(a) = f. And "a is n-built from below from b" in the normal definition is equivalent to "a is n-built from below from (b,0)" here. hyp$hyp?cos&cos (talk) 09:33, July 23, 2014 (UTC)
 * 1) \(a\leq b\)
 * 2) \(a=\Omega_m\) and \(n\geq 1\)
 * 3) \(a=\Omega_m<d\)
 * 4) Both l(a) and r(a) are n-built from below from (b,d), and a < d
 * 5) Both l(a) and r(a) are (n-1)-built from below from (b,a)


 * Yes, I figured out that much (although I believe #5 requires that a >= d and n > 0 as well). I was wondering if there was a more concise description of it, perhaps not using recursion or needing d. Are you agreeing now that the program matches the original description of m-built? Deedlit11 (talk) 06:40, July 24, 2014 (UTC)


 * I did write to Dmytro, and he pointed out that \(C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0))\) is not in standard form, since the first \(\Omega_1\) is larger than \(C(\Omega_1,C(\Omega_1,0))\) and \(C(\Omega_1,0)\). So we have to be careful. Deedlit11 (talk) 06:36, July 24, 2014 (UTC)

Okay, now I understand what "n-built from below from b" is. Using "use", it can be expressed as follows:
 * a=C(c,d) use b iff c=b or d=b or c use b or d use b.
 * The 0 and the \(\Omega_n\) don't use anything.
 * a<=b implies that a is n-built from below from b for all n>=0.
 * a is (n+1)-built from below from b iff the standard form of a doesn't use ordinals such that they're above a and not n-built from below from b.

Corollary: a is 0-built from below from b iff a<=b.

And the definition of "standard form" is still the same as on the page. The 4-argument IsBuiltFromBelow function in the py program is just an equivalent reformulation for computation. hyp$hyp?cos&cos (talk) 00:36, July 25, 2014 (UTC)
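For concreteness, the "use" relation described above can be sketched in a few lines of Python (the term encoding is my own illustration: `("0",)` for 0, `("W", m)` for \(\Omega_m\), `("C", c, d)` for C(c,d)):

```python
# Sketch of the "use" relation: a = C(c, d) uses b iff c = b, d = b,
# c uses b, or d uses b; 0 and Omega_n don't use anything.
# Terms: ("0",) is 0, ("W", m) is Omega_m, ("C", c, d) is C(c, d).

def uses(a, b):
    if a[0] != "C":          # 0 and Omega_n use nothing
        return False
    c, d = a[1], a[2]
    return c == b or d == b or uses(c, b) or uses(d, b)
```

For example, C(Ω_1, 0) uses both Ω_1 and 0, and anything containing it as a sub-term uses them too.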


 * Hmm, really? Taranovsky is quite clear that it should be


 * a is (n+1)-built from below from b iff the standard form of a doesn't use ordinals such that they're above a and not in the scope of an ordinal n-built from below from b.


 * where in the scope of is just the opposite of uses. Are you saying that the Python program does not check this? Deedlit11 (talk) 05:53, July 25, 2014 (UTC)

Yes. I think the phrase "in the scope of" does nothing, according to the Python program. And things in formal logic (shown below) may be easier to prove.

The Python program means these things, where Bu(a,b,n,d) means IsBuiltFromBelow(a,b,n,d) in the Python program. In a standard C expression, the only "atoms" are 0 and Ωm, so ¬∃l∃r(a=C(l,r)) is equivalent to a=Ωm∨a=0.
 * ∀a∀b∀d∀n(Bu(a,b,n,d)↔(a≤b∨(a=Ωm∧n≥1)∨a=Ωm<d∨(∃l∃r(a=C(l,r)∧Bu(l,b,n,d)∧Bu(r,b,n,d))∧a<d)∨(∃l∃r(a=C(l,r)∧Bu(l,b,n-1,a)∧Bu(r,b,n-1,a))∧n≥1∧a≥d)))

"Use" can be defined as follows, where U(a,b) means "a uses b". What I guess the result is, without the last argument of Bu(a,b,n,d): well, it's a little different from what I've said in English. I'm not sure whether I've written the formal logic correctly. hyp$hyp?cos&cos (talk) 08:32, July 25, 2014 (UTC)
 * ∀a∀b(U(a,b)↔∃l∃r(a=C(l,r)∧(l=b∨r=b∨U(l,b)∨U(r,b))))
 * ∀a∀b∀n(Bu(a,b,n,0)↔(a≤b∨(n≥1∧¬∃c(U(a,c)∧c≥a∧¬Bu(c,b,n-1,0)))))

Oh, no. Forget it. Suddenly I realize that those two statements are not equivalent.
 * a is (n+1)-built from below from b iff the standard form of a doesn't use ordinals such that they're above a and not n-built from below from b.

This is not fully correct. Actually, to check whether a is (n+1)-built from below from b, we needn't check all the ordinals that a uses. More accurately: if a uses c, and c is n-built from below from b, then we don't check the ordinals c uses.

So that's what "in the scope of" means. And the reason \(C(C(\Omega_1,C(\Omega_1,0)),C(\Omega_1,0))\) goes wrong is that I hadn't considered the context of "in the scope of".

Okay, now I understand what "n-built from below from b" is:
 * a uses b iff there exist c and d such that a=C(c,d) and (c=b, or d=b, or c uses b, or d uses b).
 * a is 0-built from below from b iff a<=b.
 * a is (n+1)-built from below from b iff the standard form of a only uses ordinals c fitting at least one of the following:
 * c < a
 * c is n-built from below from b
 * there exists an ordinal d such that (a uses d or a=d), d uses c, and d is n-built from below from b, in the context of a

That's it! hyp$hyp?cos&cos (talk) 11:04, July 25, 2014 (UTC)


 * What does in the context of a mean? The definition from Taranovsky's main page seems to match what you've written with "in the context of a" left out. Also, \(C(\Omega_1,C(\Omega_1,0))\) is not 1-built from below from \(C(\Omega_1,0)\) even with "in the context of a" left out. Deedlit11 (talk) 05:11, July 29, 2014 (UTC)

Picture this notation according to its standard form as a tree: use an ordered binary tree with every leaf labeled 0 or Ωn to represent the standard form. E.g. the tree form of \(C(C(C(\Omega_2,C(\Omega_2,0)),C(\Omega_2,0)),C(\Omega_2,C(\Omega_2,0)))\) is \((((\Omega_2(\Omega_20))(\Omega_20))(\Omega_2(\Omega_20)))\)

Then:
 * The subtree of a vertex x contains x and the subtrees of the children of x.
 * Tree a is a subtree of b iff b has a vertex of which a is the subtree.
 * a is (n+1)-built from below from b iff a doesn't have a subtree c such that
 * c > a
 * there isn't a d such that c is a subtree of d, d is a subtree of a, and d is n-built from below from b.

Is it clear now? hyp$hyp?cos&cos (talk) 08:09, July 29, 2014 (UTC)
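A rough Python sketch of the subtree relation on these trees (a hypothetical encoding of my own: a leaf is a string label, an internal node is a pair):

```python
# Sketch of the subtree relation for the tree form above.
# A leaf is a string label ("0", "W2", ...); an internal node is a pair.

def subtrees(t):
    """Yield the subtree rooted at every vertex of t."""
    yield t
    if isinstance(t, tuple):
        left, right = t
        yield from subtrees(left)
        yield from subtrees(right)

def is_subtree(a, b):
    """True iff a is the subtree of some vertex of b."""
    return any(s == a for s in subtrees(b))
```

Every tree is a subtree of itself (rooted at the root vertex), matching the first bullet above.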


 * This seems to be the same as Taranovsky's original definition. Was there something wrong with that? Deedlit11 (talk) 08:43, July 29, 2014 (UTC)
 * Nothing wrong. Now I get it. hyp$hyp?cos&cos (talk) 12:46, July 29, 2014 (UTC)


 * Good deal. Could you compare, say, Taranovsky's "Degrees of Reflection" notation with your own? Deedlit11 (talk) 11:16, July 30, 2014 (UTC)
 * What? "Degrees of Reflection"?! But what we've talked about is the "Ordinal Notation System for Second Order Arithmetic". I didn't get how "Degrees of Reflection" works (and there's no code to help), so I can't compare it. But I may compare the other.
 * But not so soon. Currently I'm making catching function comparisons. Maybe I'll start when I finish the catching function comparisons up to BEAF. hyp$hyp?cos&cos (talk) 11:42, July 30, 2014 (UTC)
 * Ah okay. I just thought of starting with the "Degrees of Reflection" notation because I figured it would be easier to analyze. But comparing with the stronger notation would be good too. Deedlit11 (talk) 13:00, July 30, 2014 (UTC)

Limit of Mahlos
Is the limit of $$M_n$$ equal to $$M_\omega$$? Or we should take it as something like $$\psi_{\chi_{M_\omega}(0,0)}(0)$$? Ikosarakt1 (talk ^ contribs) 11:19, July 19, 2014 (UTC)
 * That is true by definition of M subscripting - \(\alpha \mapsto M_\alpha\) is the enumerating function of the Mahlo cardinals. you're.so.pretty! 16:51, July 19, 2014 (UTC)
 * Actually, from the cardinal point of view, the limit of \(M_n,n<\omega\) is not Mahlo. LittlePeng9 (talk) 16:59, July 19, 2014 (UTC)
 * Yes, that's right. But if we use the \(M_\alpha\) notation not just for Mahlo cardinals but also for limits of Mahlo cardinals, \(M_\alpha\) need not be Mahlo. In my opinion, this method gives a more concise representation of ordinals/cardinals. -- ☁ I want more clouds! ⛅ 05:56, July 22, 2014 (UTC)


 * The limit of $$M_n$$ is not $$M_\omega$$, since as LittlePeng said the limit is not Mahlo. I believe $$\psi_{\chi(M_\omega,0)}(0)$$ would indeed get us the limit, assuming we extended the Mahlo notation up to $$M_\omega$$. Deedlit11 (talk) 04:44, July 22, 2014 (UTC)

Calculus of Inductive Constructions
So, this is the only book I've found on CiC:

Download (it's in .djvu; if you want .djvu, click this link, if not, click Peng's link below)

So, yeah, discussion, perhaps? (cool how every two consecutive words are separated by a comma) King2218 (talk) 16:53, July 18, 2014 (UTC)


 * PDF version, ready for downloading. It's also possible to view it online. LittlePeng9 (talk) 17:29, July 18, 2014 (UTC)
 * Nice. :) King2218 (talk) 18:00, July 18, 2014 (UTC)

Intro to BEAF review
I have been working hard on our Introduction to BEAF article, and it's coming along excellently. Any comments on the ordinal versions? I'm fairly convinced that legion arrays have BHO power, but I might be wrong. FB100Z • talk • contribs 23:41, December 6, 2013 (UTC)

BHO is only \(X\uparrow\uparrow X\text{&}X\text{&}n\). The limit is \(\psi(\Omega_\omega)\). Wythagoras (talk) 07:52, December 7, 2013 (UTC)
 * I believe you, but what's the proof of this? (Then again, I've only managed to prove \(\varepsilon_0 = X \uparrow\uparrow X\) myself.) FB100Z • talk • contribs 20:27, December 7, 2013 (UTC)
 * Anything? FB100Z • talk • contribs 22:03, December 9, 2013 (UTC)
 * Sorry for not responding earlier.
 * First, we shall prove \(X\uparrow\uparrow X2 = \varepsilon_1\).
 * We know that \(X\uparrow\uparrow X2 = lim(X\uparrow\uparrow X,(X\uparrow\uparrow X)^{X\uparrow\uparrow X},(X\uparrow\uparrow X)^{(X\uparrow\uparrow X)^{X\uparrow\uparrow X}}...)\) and \(\varepsilon_1 = lim(\varepsilon_0,\varepsilon_0^{\varepsilon_0},\varepsilon_0^{\varepsilon_0^{\varepsilon_0}}...)\).


 * Since \(\varepsilon_0 = X \uparrow\uparrow X\), \(\varepsilon_1 = X \uparrow\uparrow X2\). It is easy to extend this proof to \(\varepsilon_k = X \uparrow\uparrow X(k+1)\), and then proving that \(X\uparrow\uparrow\uparrow X = \zeta_0\) shouldn't be that hard too.


 * After that it gets somewhat complicated, but with a proof that \(\{X,X,n+1\}\text{&}n=\varphi(n,0)\) I can do it. Wythagoras (talk) 06:27, December 10, 2013 (UTC)
 * Hold up. In the article, I defined \(\omega \uparrow\uparrow (\omega + 1) = \varepsilon_1\). Can you give me a reason why \(\omega \uparrow\uparrow \omega 2\) is better? FB100Z • talk • contribs 06:31, December 10, 2013 (UTC)
 * \(\omega\uparrow\uparrow(\omega+1)=\omega^{\omega\uparrow\uparrow\omega+1}=\omega^{\varepsilon_0+1}\). \(\omega\uparrow\uparrow(\omega+n)\) are given by adding more \(\omega\)'s, and in the limit they give \(\varepsilon_1=\omega \uparrow\uparrow \omega 2\). LittlePeng9 (talk) 14:34, December 10, 2013 (UTC)

Sorry for being stubborn, but I'm still not convinced. It seems that there are multiple interpretations of what arrow notation and BEAF do for transfinites - one is \(\varepsilon_1 = \omega \uparrow\uparrow (\omega + 1)\), and the other is \(\varepsilon_1 = \omega \uparrow\uparrow \omega 2\). FB100Z • talk • contribs 23:00, December 10, 2013 (UTC)
 * And by Saibian's variant, $$\varepsilon_1 = E(\omega)2\#\omega$$. By his definition, $$\omega \uparrow\uparrow (\omega+n) = \varepsilon_{\omega \uparrow\uparrow n}$$ and unlike other notations, $$X \uparrow (X+1)$$ has exactly $$p \uparrow (p+1)$$ entries. Ikosarakt1 (talk ^ contribs) 20:34, July 14, 2014 (UTC)

SOA's ordinal
Is it correct that the fundamental sequence for SOA's ordinal is supposed to be $$\alpha[0] = 1$$ and $$C(\alpha[n+1],0) = C(\Omega^{\alpha[n]},0)$$ in Taranovsky's notation? If so, I guess that a natural continuation of $$\psi(\Omega), \psi(M), \psi(K), \cdots$$ might indeed reach SOA (an alternative guess is $$C(\Omega^{\Omega^\omega},0)$$).

Also, how does $$\alpha$$-order logic compare to $$\alpha$$-order arithmetic? Ikosarakt1 (talk ^ contribs) 11:56, June 16, 2014 (UTC)


 * First, I believe the correct FS would be \(\alpha[n]=C(\Omega_n,0)\). Second, I don't really think there is any "natural continuation" of \(\Omega,I,M,K\). One could then use indescribable cardinals, but there is a large hierarchy of these, so I don't know how one would go about connecting them into a sequence of length \(\omega\).
 * About the other question, I recently came to the conclusion that speaking about ordinals of n-th order logic doesn't really make sense, because logics don't prove anything - being n-th order is just a property of the system we are working with. So, for example, while SOA can prove well-foundedness of \(\omega\), asking if SOL can prove this is quite meaningless. Grasping all of this is still quite a challenge for me, though. LittlePeng9 (talk) 13:05, June 16, 2014 (UTC)


 * Wait, if this notation can be extended for $$\Omega_n$$, then why not extend it up to $$\Omega_\alpha$$ for arbitrary $$\alpha$$, I, M and K? By the way, there might be an ordinal such that $$C(\alpha,0) = \psi(\alpha)$$ (a catching ordinal) and we can define a catching function for this (though it probably won't work; FS's for all intermediate ordinals would have to be defined). Ikosarakt1 (talk ^ contribs) 14:25, June 16, 2014 (UTC)
 * This "catching function" sounds nice. Have y'all come up with any formal proofs demonstrating its strength? you're.so.pretty! 14:49, June 16, 2014 (UTC)
 * First, can we define a single, natural FS for an arbitrary recursive ordinal? If yes, then the catching function is at least total and would be defined for all $$\alpha$$. Look at the SGH-catching-FGH function. If we'd define FS's only for ordinals up to BHO, it would be entirely undefined, as there are no catches. Ikosarakt1 (talk ^ contribs) 15:16, June 16, 2014 (UTC)
 * Depends on what you mean by "natural"... LittlePeng9 (talk) 16:33, June 16, 2014 (UTC)
 * Due to the very nature of recursive ordinals, it doesn't seem that your "natural" condition can be satisfied. However, Kleene's O can give us FS's. Suppose \(\alpha = \mathcal{O}(3 \cdot 5^i)\) for minimal \(i\); then \(\alpha[k] = \mathcal{O}(f_i(k))\). you're.so.pretty! 05:56, June 17, 2014 (UTC)
 * What does this definition say for the FS of, say, $$\psi(\psi_{\Omega_{\omega+1}}(0))$$? Is it possible to determine it in any reasonable time? Ikosarakt1 (talk ^ contribs) 06:01, June 17, 2014 (UTC)
 * @Iko No, given the constraint that i is minimal, this is uncomputable. LittlePeng9 (talk) 06:11, June 17, 2014 (UTC)
 * Peng is right; although theoretically it may be possible to find a working value of i and to prove that it is optimal, it is very difficult in the same sense as trying to compute \(\Sigma(10)\) (say). At this level of mathematics, we cannot rely on explicit examples because these entities are by nature inaccessible to us. you're.so.pretty! 07:07, June 17, 2014 (UTC)
 * But it isn't good, because we won't know the power of the catching function. Can we define it so that it's easier for the human mind to determine (but still uncomputable)? Ikosarakt1 (talk ^ contribs) 07:45, June 17, 2014 (UTC)
 * Don't know the power of the catching function? Then find it. Figure it out. Using proofs! 'Cause that's what you do at this point.
 * Let me clarify further that you've reached a fundamental restriction on our physical universe by the Church-Turing thesis. We can't work by intuition and examples anymore. you're.so.pretty! 14:07, June 17, 2014 (UTC)
 * Oops. For some reason this sounded a lot meaner than I intended it to be. Sorry about that. you're.so.pretty! 15:32, June 17, 2014 (UTC)
 * It's the same as when we ask "Can we define an easier/better/whatever Busy Beaver function?" The answer is - we can, but there are limitations. For example, we can create a made-up function by setting \(F(n)=n+1\) for \(n<10^6\) and \(F(n)=BB(n)\) for \(n\geq 10^6\). Result? We get a function easy to determine for quite a few n's, but still catching up with the Sigma function. But it's weird to define such functions. "Natural" examples of uncomputable functions take unpredictable values quickly. LittlePeng9 (talk) 14:50, June 17, 2014 (UTC)
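The made-up function in the example above can be sketched directly. Since BB itself is uncomputable, `bb` below stands for a hypothetical Busy Beaver oracle passed in as a parameter:

```python
# Sketch of the made-up function F: trivial below a cutoff, Busy Beaver above.
# `bb` is a hypothetical oracle for the (uncomputable) Busy Beaver function.

CUTOFF = 10**6

def F(n, bb):
    if n < CUTOFF:
        return n + 1      # easy to determine for quite a few n's...
    return bb(n)          # ...but still catches up with the Sigma function
```

The point of the example survives the sketch: computability of an initial segment says nothing about the growth rate of the whole function.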

Extending the notation past \(\Omega_n\) to transfinite \(\Omega_\alpha\) would be the obvious extension to make. The notation for \(\Omega_n\) depends on the term "n-built from below". To me, the obvious definition of "c-built from below" for arbitrary ordinals c would be:

1. a is 0-built from below from b if a <= b.

2. a is c+1-built from below from b iff the standard representation of a does not use ordinals above a except in the scope of an ordinal c-built from below from b.

3. if c is a limit ordinal, a is c-built from below from b iff a is d-built from below from b for some d < c.

I don't know if this works or not... perhaps the notation is no longer well-founded once you go past \(\omega\). Deedlit11 (talk) 17:47, July 3, 2014 (UTC)

The largest computable number spec
Hi! I'm not a regular contributor to this Wiki and didn't think it appropriate to edit directly.

My contribution to XKCD's "My number is bigger" thread was acknowledged by some other posters as having probably won the thread; the thread rules specified that number specifications had to be computable (no Busy Beavers or beyond) and could not make use of a previously defined number. I also gave an argument for why my number was the largest kind of computable specification for a finite integer that could be constructed using current mathematical technology, at least until someone invents more powerful large cardinal axioms or a different approach.

http://forums.xkcd.com/viewtopic.php?p=3254229#p3254229

Throwing it here so somebody can put it on the Wiki if they consider that interesting. -- Eliezer Yudkowsky


 * Hey Eliezer, thanks for your contribution!
 * It's clear that one can vary the initial value of P, the number of repetitions, or the function used (you used 2^^P), but the idea stays the same.
 * The argument you give on why this number is probably the biggest computable one works, except for a minor/major difficulty - it requires the theory ZFC+I0 to be sound, so that if it proves that a machine halts, then that machine indeed halts.
 * As you probably know, soundness is quite a strong property, and thus we don't know if ZFC+I0 is sound (as a matter of fact, I don't think even ZF has been proven sound).
 * Also, it might be the case that the Morse-Kelley analogue of this will be larger, as MK can prove the consistency of ZFC along with a few of its extensions. LittlePeng9 (talk) 21:33, May 23, 2014 (UTC)
 * What about Reinhardt cardinals? I'm not sure about what inconsistency with ZFC implies, however. you're.so.pretty! 22:26, May 23, 2014 (UTC)


 * Let's assume a contradiction is provable in theory T with a reasonable proof length (say, 10^^^^^10; this is the case for ZFC + "a Reinhardt cardinal exists"). Let M be a trivial non-halting machine. Because a contradiction can be proven reasonably fast, so can any statement, including "M halts". But this means T is unsound, and proves a false statement.
 * Let machine P be as in the XKCD post above, except that it works with T. After working for a bit, P will find a proof that M halts and will try to find its halting time. But in reality M does not halt, so P will work forever, not computing any number. LittlePeng9 (talk) 04:32, May 24, 2014 (UTC)
 * However, if Reinhardt cardinal turns out to be consistent with ZF alone, we will get a system much stronger than Eliezer's, and number much larger than his. LittlePeng9 (talk) 04:35, May 24, 2014 (UTC)
 * So in other words, we can't reliably take advantage of Reinhardt cardinals until we've shown that ZF + "there exists a Reinhardt cardinal" is consistent. But then again, the same uncertainty applies to ZFC+I0, ZFC + "there is a Mahlo cardinal", ZFC + "there is an inaccessible cardinal," ZFC, ZF, and even Peano arithmetic if my sources are correct. Granted, it's more reasonable to doubt the consistency of ZFC plus some large cardinal axiom than ZF, but that's just an intuitive judgement. Objectively speaking, it makes just as much sense to doubt the consistency of ZF + "there exists a Reinhardt cardinal" as the totality of the Goodstein function. Right? you're.so.pretty! 16:12, May 24, 2014 (UTC)


 * Well, I disagree about the PA mention and the Goodstein function analogy. We can argue that PA is consistent by using well-foundedness of \(\varepsilon_0\), which is a quite reasonable statement. If you don't want to take it for granted, then you probably have something of a formalist in you. In that case, to prove that induction up to \(\varepsilon_0\) makes sense, we have to use some stronger theory. But we don't know if it's sound/consistent, and to prove that we need an even stronger system, and so on (giving us the so-called "consistency hierarchy"). But that goes for everything - if we want to believe \(n^2\geq n\), which is a nontrivial theorem of PA, we have to assume its consistency. And we need to start somewhere - we need a system for which we take for granted that it's consistent. I believe second order arithmetic is good for this purpose, as it's a simple system which seems really reasonable. On the other hand, to believe the consistency of, say, ZFC, we need a system a lot stronger than that, like Morse-Kelley set theory. But these systems are so vast and complex that one can really doubt their consistency.
 * To sum up, in my opinion, PA's consistency and the Goodstein sequence's totality follow from a very reasonable statement of arithmetic, but it's on a far lower level than ZFC and large cardinal axioms, for which there is no reasonable statement which proves their consistency. So the analogy you give is pretty much meaningless, although to some extent you are right. LittlePeng9 (talk) 16:50, May 24, 2014 (UTC)
 * I stand corrected. I was under the impression that the consistency of PA is a mere work in progress; I was unaware that there's a 'turtles-all-the-way-down' dilemma when it comes to axiomatic theories. you're.so.pretty! 20:28, May 24, 2014 (UTC)

Ripation
Ripation, anyone? King2218 (talk) 08:00, May 13, 2014 (UTC)
 * BOX_M~ you're.so.pretty! 15:05, May 13, 2014 (UTC)
 * It doesn't even provide any information on Ripation. King2218 (talk) 10:55, May 15, 2014 (UTC)
 * From the same paper. you're.so.pretty! 16:31, May 15, 2014 (UTC)
 * Okay fine. King2218 (talk) 17:12, May 15, 2014 (UTC)
 * You're free to create a separate article about it. you're.so.pretty! 19:05, May 15, 2014 (UTC)
 * Nevermind. The definition in the article is extremely messy and I don't think I can create an article containing the whole definition. King2218 (talk) 09:17, May 16, 2014 (UTC)

Googological tournament
Since we have a notable community, I thought about organizing a googological tournament. It could look as follows: we choose the best time for all competitors and give them 13 minutes to write their own definitions of their numbers. The notations must be defined from scratch, using only fundamental functions; definitions like A(n+1) = D(D(n)) (where D is Loader's function) aren't allowed. Ikosarakt1 (talk ^ contribs) 15:34, May 10, 2014 (UTC)
 * I've thought about doing this kind of competition before. The problem is that we all know too much about googology. I predict that we'll all leap to Rayo's function or some higher-order variant and start arguing philosophically about axiomatic set theories. you're.so.pretty! 02:26, May 11, 2014 (UTC)
 * How about another kind of competition? One member of the community poses some questions, while the others try to answer them as well as they can. After that, another member of the community poses some questions, etc. Wythagoras (talk) 05:46, May 11, 2014 (UTC)
 * How about a competition for the most clearly explained, formal proof? :D you're.so.pretty! 06:11, May 11, 2014 (UTC)
 * If you're asking for a formal proof, then it must be written in a formal language, not in English. Ikosarakt1 (talk ^ contribs) 06:17, May 11, 2014 (UTC)
 * I'm talkin' a mixture of mathematics and English, as someone would see in a professional-grade academic paper. For example, "Let ___ be a member of ___, and let ___ be ___ so that ___. Suppose that ___ does not hold. Then we have ___, and therefore ___, which is a contradiction, so ___..." where the blanks are mathematical expressions/formulas/equations. you're.so.pretty! 06:21, May 11, 2014 (UTC)
 * In that case, it looks like a mathematical tournament, not a purely googological one. As for your argument against a googological tournament: it is prohibited to refer to Rayo's function, logics and theories unless you define them with your own symbols. We have to define all notations from scratch during the competition. Ikosarakt1 (talk ^ contribs) 06:34, May 11, 2014 (UTC)
 * For me, personally, 15 minutes is easily enough to define FOST from scratch. And I love proofs! :) LittlePeng9 (talk) 06:39, May 11, 2014 (UTC)
 * I've posted some problems, so you can go solving one. Wythagoras (talk) 07:21, May 11, 2014 (UTC)

FOST
I found there are two major ordinals related to FOST:

1) The ordinal $$\alpha$$ such that $$f_\alpha(n)$$ has the same growth rate as Ra(n), using a natural way of defining fundamental sequences. A formal definition of "growth rate" can be found here. For naturalness of fundamental sequences we can use the following restriction: no finite-to-finite function is allowed, and $$\omega[n]$$ is set to n. So, for example, the definition of the sequence $$(\omega*2)[n] = \omega+n^2$$ is not natural because it contains the finite-to-finite function $$f(n) = n^2$$.

2) The smallest ordinal (not FGH up to this ordinal!) which cannot be defined in FOST. Since FOST operates with infinite sets, it can define not only $$f_\omega(n)$$, $$f_{\varepsilon_0}(n)$$ and $$f_{\Gamma_0}(n)$$, but also $$\omega$$, $$\varepsilon_0$$ and $$\Gamma_0$$ themselves, and there will also be ordinals which are undefinable in FOST.

Any suggestions about which ordinal is larger? Generally, when we deal with a notation which works with infinite sets, there will be two such ordinals. It would be interesting to find the smallest one for which they are equivalent. Ikosarakt1 (talk ^ contribs) 10:22, April 27, 2014 (UTC)
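As an illustration of "natural" fundamental sequences with ω[n] = n, here is a standard sketch for ordinals below \(\varepsilon_0\) in Cantor normal form. The encoding is my own and not part of the discussion: an ordinal is a non-increasing list of exponents (each itself an ordinal in the same encoding), with [] for 0, so that [e1,...,ek] means ω^e1 + ... + ω^ek.

```python
# Fundamental sequences below epsilon_0, with omega[n] = n.
# Encoding: [] is 0; [e1, ..., ek] is omega^e1 + ... + omega^ek,
# exponents non-increasing and themselves ordinals in the same encoding.

def fs(a, n):
    """n-th element of the fundamental sequence of the limit ordinal a."""
    *prefix, e = a
    assert e != [], "successor ordinals have no fundamental sequence"
    if e[-1] == []:                  # e = f + 1, so omega^e = omega^f * omega
        f = e[:-1]
        return prefix + [f] * n      # a[n] = ... + omega^f * n
    return prefix + [fs(e, n)]       # e is a limit: a[n] = ... + omega^(e[n])
```

For example, ω is `[[[]]]` (one term with exponent 1), and fs gives ω[n] = n as the restriction above demands.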


 * First of all, there is a universal problem of defining "natural" fundamental sequences. While it isn't a problem for ordinals describable by notations, the problem already starts at \(\omega^\text{CK}_1\). The rate of growth of this hierarchy might be, in the long run, fairly sensitive to the exact definition, and there is no way of finding the "most natural" definition, making 1) a bit meaningless.


 * 2) is perfectly well defined. It's easy to see that it's strictly greater than \(\omega^\text{CK}_1\), and I believe it must be recursively inaccessible. I conjecture that this ordinal is strictly greater than \(\Sigma\) (the ITTM ordinal) and many of its relatives which use oracles.


 * One can also define a third ordinal - the upper bound on all FOST-definable ordinals. This will be an uncountable cardinal (as \(\omega_1\) is fairly easily definable). LittlePeng9 (talk) 11:04, April 27, 2014 (UTC)


 * LittlePeng9 is right that there is no way we can define "natural fundamental sequences". We can define a specific set of fundamental sequences up to \(\omega^\text{CK}_1\) though, and quite a bit beyond. However, it doesn't seem likely that we can define fundamental sequences up to the limit of FOST, so 1) has a problem.
 * The third ordinal LittlePeng9 described is going to be greater than the smallest large cardinal for pretty much all large cardinals, I think, so it will be VERY large. Deedlit11 (talk) 13:18, May 8, 2014 (UTC)
 * What if we plug that ordinal into the $$\psi$$ function and create a recursive ordinal? Wouldn't it be larger than Loader's ordinal? Ikosarakt1 (talk ^ contribs) 16:14, May 8, 2014 (UTC)
 * Note that you can't plug all cardinals into the psi function. \(\psi(I)\) may be well-defined (I'm not sure), but \(\psi(M)\) is very likely not. Wythagoras (talk) 16:36, May 8, 2014 (UTC)
 * Deedlit: Are they larger than the Reinhardt cardinals, if they exist?
 * Also, will Rayo's function grow faster than the uncomputable collapsing of Reinhardt cardinals? Wythagoras (talk) 16:40, May 8, 2014 (UTC)
 * Wait, I meant the ordinal $$\alpha$$ plugged in as $$\psi(\psi_\alpha(\alpha))$$. By the way, $$\psi(I)$$ and $$\psi(M)$$ aren't technically expressions; they can only be shorthands for $$\psi(\psi_I(I))$$ and $$\psi(\psi_M(M))$$ respectively. It is like how we sometimes shorten $$\psi(\psi_{\Omega_2}(\Omega_2))$$ to $$\psi(\Omega_2)$$. Going further, such notation would be ambiguous: $$\psi(\Omega_2+\Omega)$$ could be either $$\psi(\psi_{\Omega_2}(\Omega_2+\Omega))$$ or $$\psi(\psi_{\Omega_2}(\Omega_2)+\Omega)$$. Ikosarakt1 (talk ^ contribs) 16:46, May 8, 2014 (UTC)
 * @Wyth: I really doubt it will go beyond Reinhardt. LittlePeng9 (talk) 17:19, May 8, 2014 (UTC)
 * LittlePeng9: I think that the second ordinal is smaller than the first, because if FOST can define the fast-growing hierarchy and an ordinal \(\alpha\), Rayo's function must grow faster than \(f_{\alpha}(n)\).
 * This doesn't give a contradiction, because \(\omega_1\) has no countable fundamental sequence (I found it in the article) and \(\omega_1\) is the smallest uncountable ordinal. Therefore, \(\omega_1\) has no fundamental sequence, and it can't be defined in the fast-growing hierarchy. Wythagoras (talk) 11:26, April 27, 2014 (UTC)


 * First, we don't know if the FGH is FOST-definable (I think it is, but without any certainty). Second, given a countable ordinal \(\alpha\), we'd have to learn to find its fundamental sequence, which I have no idea how one could do. Third, your argument only shows that the second ordinal isn't greater than the first (leaving the possibility of equality).
 * Also, now that I think of it, it might be the case that Rayo's function is above the FGH! My argument is the following: imagine we can properly define the FGH as a function \(F(\alpha,n)\) of an ordinal and a number. I think we can formalize in FOST the following set: it'd be a set of ordered pairs of natural numbers, such that each first number appears in exactly one pair. This is a formalized notion of a function; let's call it \(f\) here. Now we can say "for all \(\alpha\) we have an \(N\) such that, for \(n>N\), we have \(f(n)>F(\alpha,n)\)", which simply means that \(f(n)\) eventually outgrows every FGH function, which makes it impossible for it to be on par with any.
 * But for me, this is more of an argument for why the FGH is undefinable in FOST rather than that Rayo(n) is above all of the FGH. LittlePeng9 (talk) 11:44, April 27, 2014 (UTC)
 * Oh, I thought that the FGH was definable in FOST, because of what Ikosarakt said. Wythagoras (talk) 11:52, April 27, 2014 (UTC)
 * I think the FGH is definable in FOST, just not for all possible ordinals. I think we can define the FGH up to $$f_\omega(n)$$, $$f_{\epsilon_0}(n)$$, $$f_{\Gamma_0}(n)$$ and even $$f_{\omega_1^\text{CK}}(n)$$. But there will be a first $$\alpha$$ for which we can't define $$\alpha[n]$$. So there is a place for another major ordinal. Ikosarakt1 (talk ^ contribs) 13:43, April 27, 2014 (UTC)


 * We can define the FGH up to a very, very large ordinal, far beyond the CK ordinal. LittlePeng9 (talk) 17:19, May 8, 2014 (UTC)


 * Wait, actually, I found one subtle problem with the ordinals I defined, and with FOST in general - namely, what counts as a set? If we made the definition "a set which is empty and contains an empty set", then this would obviously fail to exist. But what if we defined "the smallest ordinal which has weakly inaccessible cardinality" - would it define a set? It depends on the existence of a weakly inaccessible cardinal, which can be taken as an axiom or not. LittlePeng9 (talk) 12:05, April 27, 2014 (UTC)
 * Is Rayo's strength dependent on the axioms we use? How dependent? you're.so.pretty! 16:57, April 27, 2014 (UTC)
 * I think it can be very dependent. Here is an example: imagine we work with the Kripke-Platek axioms. From them we can't prove well-foundedness of the BH ordinal, so if we define "a non-empty set consisting of all infinite decreasing sequences below the BH ordinal", we can't tell if it's a set or not, and if we define \(n\) to be the number of elements of that set, or 0 if it's infinite, we can't tell if this is a correct definition of a number.
 * But, if we now switch to the usual ZF axioms, we can prove that there are no descending chains in the BH ordinal, thereby proving that our set doesn't exist (note the word "non-empty"). Thus \(n\) has an invalid definition.
 * I know that doesn't answer your question, but I know this: if we take a set of axioms which is sufficiently weak, the resulting numbers will be significantly smaller too. LittlePeng9 (talk) 17:35, April 27, 2014 (UTC)
 * I think I have a better example, possibly answering your question: in the Kripke-Platek axioms you can't prove BH to be an ordinal, so if you were to formalize the FGH up to some ordinal, you'd need it to be below BH (because KP is consistent with "the BH ordinal isn't well-founded", and after adding this axiom the FGH above BH wouldn't make sense). But if we add to the axioms the existence of a recursively inaccessible, we can go far beyond BH, and formalize the FGH further, so it provably makes sense, and we can produce numbers larger than with the KP axioms alone. LittlePeng9 (talk) 17:42, April 27, 2014 (UTC)
 * I believe this implies that Rayo's function is ambiguously defined. Can we assume that he meant ZFC? you're.so.pretty! 18:01, April 27, 2014 (UTC)
 * I think so, but this still leaves some ambiguities related to things independent of ZFC (e.g. set which is non-empty iff continuum hypothesis fails). I think it's reasonable to assume all of these sets are then ill-defined. LittlePeng9 (talk) 18:05, April 27, 2014 (UTC)

Okay, so let's look at some of the things we can do with this information. An obvious step is to create a generalized variant of Rayo's function that lets us plug in not only a maximum length but also a set of axioms to deal with. Evaluation ignores any statements that are inconsistent or independent of the axioms. If we let n be the length and A be the axioms, then we can write \(\text{Rayo}_A(n)\). Presumably, Rayo's number is \(\text{Rayo}_{\text{ZFC}}(10^{100})\).

To make a bit of a leap here, is it possible to provide a set of axioms that work in a second-order set theory? e.g. by allowing variables to represent proper classes? you're.so.pretty! 18:38, April 27, 2014 (UTC)


 * Yes LittlePeng9 (talk) 18:45, April 27, 2014 (UTC)
 * Neat.
 * I'm worried about inconsistency. Does the principle of explosion force us to reject ALL statements in an inconsistent theory? you're.so.pretty! 18:46, April 27, 2014 (UTC)
 * If there is an inconsistency (in ZFC, say) then everything is provable, so there are no independent statements. The problem is, this just loses its sense anyway, because one can then prove that the empty set contains all natural numbers, and other oddities. So yes, we'd have to reject every statement, but for a different reason than I explained above. LittlePeng9 (talk) 18:55, April 27, 2014 (UTC)
 * Since we don't know whether ZFC is consistent (although we're fairly certain), this means that we don't have 100% confidence that Rayo's number exists. Eep.
 * Related: paraconsistent mathematics. you're.so.pretty! 18:58, April 27, 2014 (UTC)

Rayo's function does not depend on ZFC, or any particular set of axioms. By \(\text{Rayo}_{\text{ZFC}}\) I presume you mean that we only take those formal statements to be true that can be proven in ZFC. In such a case, we can construct an algorithm that lists all ZFC proofs (and therefore all true statements, according to our assumption). So \(\text{Rayo}_{\text{ZFC}}\) can grow no faster than a level-1 oracle Busy Beaver function. The same argument holds if we replace ZFC by any recursively enumerable formal theory.
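The "algorithm that lists all ZFC proofs" is just a dovetailing enumeration of all strings, filtered through a mechanical proof checker. A minimal Python sketch of that shape (the checker here is a stub, not a real ZFC verifier):

```python
from itertools import count, product

def is_valid_proof(candidate):
    """Stub standing in for a mechanical ZFC proof checker; checking a
    given finite string is computable, which is all the argument needs."""
    return False  # placeholder: accepts nothing

def enumerate_strings(alphabet="01"):
    """Yield every finite string over the alphabet, shortest first."""
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

def provable_statements(limit):
    """Scan the first `limit` candidate strings, keeping valid proofs.
    Because this loop is computable, the provable statements of any
    recursively axiomatized theory form a recursively enumerable set."""
    found = []
    for i, s in enumerate(enumerate_strings()):
        if i >= limit:
            break
        if is_valid_proof(s):
            found.append(s)
    return found
```

Swapping in a genuine proof checker changes nothing about the structure, which is why the bound by an oracle Busy Beaver function goes through for any recursively enumerable theory.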

To get the real strength of Rayo's function, we have to go by some Platonist assumptions: there exists a notion of a true statement in the language of set theory that cannot be captured by any recursively enumerable formal theory. This may be difficult to accept if, for example, you don't accept that there is necessarily a truth value for, say, the Continuum Hypothesis. But we can still salvage Rayo's function if we accept that some statements are unequivocally true, some statements are unequivocally false, and some may not belong to either category. So, for example, we cannot take as a large number "the number n such that 2^aleph_0 = aleph_n" if we do not accept that 2^aleph_0 corresponds to a particular aleph_n. Of course, if our collection of true statements does not extend beyond ZFC or some recursively enumerable formal system, then we haven't gotten anywhere. But it seems reasonable that there exists a collection of unequivocally true statements that is not recursively enumerable, which would allow Rayo's function to be very large.

As for the unprovability of the consistency of ZFC, I think that people make too much hay about it. How do we have confidence in statements that we can prove? Any proof has to be based on starting axioms, so you can only believe in the conclusion if you believe in the axioms used to prove it. If you believe in a statement because of a ZFC proof, you should believe in ZFC as well. It's true that most proofs don't use the full strength of ZFC, but to me, the ZFC axioms seem quite reasonable, and I see no reason not to accept them. Deedlit11 (talk) 13:53, May 8, 2014 (UTC)


 * I understand your point. However, by the arguments I stated above, wouldn't the size of Rayo's number depend on which set of statements we Platonistically assume to be true? As a bad example, if we took an ultrafinitistic set of statements in which the existence of any infinite set is negated, we would have very narrow possibilities for large numbers. LittlePeng9 (talk) 14:11, May 8, 2014 (UTC)

New idea for Church-Kleene ordinal's fundamental sequence
Here LittlePeng9 said that my definition of natural fundamental sequences isn't a problem for recursive notations. So we get that $$f_\alpha(n)$$ is defined for any recursive $$\alpha$$.

We then just define $$\omega_1^\text{CK}[n]$$ as the largest $$\alpha$$ for which $$f_\alpha(n)$$, or a function with the same growth rate, is definable by an n-state Turing machine. Are there any flaws? Ikosarakt1 (talk ^ contribs) 06:09, May 1, 2014 (UTC)

Well, there is still some ambiguity. We can define fundamental sequences for recursive ordinals, but not in a unique way. My idea was to take a Turing machine which defines a well-ordering of order type \(\alpha\) and extract a fundamental sequence from it. The problem is that there is a multitude of Turing machines which define a given recursive ordinal, so we'd have to choose one of them somehow. Other than that, I believe your idea for the fundamental sequence of \(\omega_1^\text{CK}\) is correct. LittlePeng9 (talk) 11:09, May 1, 2014 (UTC)

I think it'd be better to use SGH. you're.so.pretty! 13:39, May 1, 2014 (UTC)

Yes, a good idea. That way we have a strictly increasing sequence anyway: if we defined $$f_\alpha(n)$$ in SGH with m states, we know that we can naively extend it, defining $$f_{\alpha+1}(n) = f_\alpha(n)+1$$ with m+1 states. So $$\omega_1^\text{CK}[m+1] \geq \omega_1^\text{CK}[m]$$. We can get some results for the most trivial cases:

$$\omega_1^\text{CK}[0] = \omega$$

We have input n; a 0-state TM can do nothing with it, so the output is n. f(n) = n is $$g_\omega(n)$$ in SGH.

$$\omega_1^\text{CK}[1] = \omega+1$$

The best thing a 1-state TM can do with n is go to the end of it and append a 1. f(n) = n+1 is $$g_{\omega+1}(n)$$ in SGH.

$$\omega_1^\text{CK}[2] \geq \omega+2$$ (very likely equal)

The number of possible cases has increased, but it has been checked that a 2-state TM can't double the input. So the optimum with 2 states is to add two 1's; with some effort one can prove it.

$$\omega_1^\text{CK}[3] \geq \omega*2+1$$

A 3-state TM can double the number and add one.

We can bound larger terms using Deedlit's blog post:

$$\omega_1^\text{CK}[6] \geq \omega^\omega$$ ($$2^n$$ has the same growth rate as $$n^n = g_{\omega^\omega}(n)$$, right?)

$$\omega_1^\text{CK}[11] \geq \varepsilon_0$$

Ikosarakt1 (talk ^ contribs) 15:39, May 5, 2014 (UTC)
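The small cases above are easy to experiment with. Below is a toy simulator in the conventions used here (unary input, a halt state H not counted among the states); the machine table is my own illustrative encoding of the 1-state successor machine described above:

```python
def run_tm(table, tape, max_steps=10_000):
    """Run a TM given as {(state, symbol): (write, move, new_state)} from
    state "A"; "H" is the (uncounted) halt state. The tape is a dict from
    position to symbol, blanks are 0. Returns the number of 1's left on
    the tape, or None if the machine doesn't halt within max_steps."""
    state, pos, cells = "A", 0, dict(tape)
    for _ in range(max_steps):
        if state == "H":
            return sum(1 for s in cells.values() if s == 1)
        sym = cells.get(pos, 0)
        write, move, state = table[(state, sym)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return None

# The 1-state successor machine: walk right over the block of n ones,
# write a 1 on the first blank, and halt.
SUCC = {
    ("A", 1): (1, "R", "A"),
    ("A", 0): (1, "R", "H"),
}

def succ(n):
    """Unary n -> n+1 with one counted state, matching the omega+1 entry."""
    return run_tm(SUCC, {i: 1 for i in range(n)})
```

With this encoding, succ maps every n to n+1, illustrating why one state suffices for the \(\omega+1\) level; exhaustively checking all 2-state tables against a doubling machine could be done the same way.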

Are 0-state machines even a thing? I doubt it, as a machine has to have initial state. LittlePeng9 (talk) 16:09, May 5, 2014 (UTC)
 * Besides its n ordinary states, a TM has an additional halt state. With 0 states, the initial state is the halt state, so the machine does nothing and returns the input.

By the way, using my definition, one can show the beautiful fact that $$\omega_n^\text{CK}[0][0][0]\cdots [0][0][0] = \omega$$ with n [0]'s. Ikosarakt1 (talk ^ contribs) 18:01, May 5, 2014 (UTC)

Wait, what is \(\alpha[0][0]...[0]\)? Is it iterated fundamental sequence taking? From what I can see, we can have \(\omega^\text{CK}_n[0]=\omega\) (by the way, I recently started doubting n-th order TMs have strength of \(\omega^\text{CK}_n\)) LittlePeng9 (talk) 18:06, May 5, 2014 (UTC)

Joyce's More Generalized Exponential Notation
I don't understand the rules of this function. Can we clean them up? I know g(n,n+1,n,2) = mag(n) = gag^n(n) but I cant see that in the article. Wythagoras (talk) 07:40, August 1, 2013 (UTC)

Rules don't specify what to do if a=1. I believe we chop leading 1's, but rules don't say that. LittlePeng9 (talk) 08:16, August 1, 2013 (UTC)

It's also odd how this linear array notation can express trimentri as g(g(2, g(2, 3, 3), 3), 1, 1, 4, 3, 3), as stated in the source. Ikosarakt1 (talk ^ contribs) 08:24, August 1, 2013 (UTC)

It can't and it doesn't. Turns out that g(g(2,g(2,3,3),3),1,1,4,3,3) = g(3^3^3,1,1,4,3,3) ~ G(7,625,597,484,987). So not even CLOSE! Joyce does not understand how array notation works. He has severely underestimated it. It's about time Joyce got knocked off his high horse. He has nothing on Bowers. Sbiis Saibian (talk) 20:14, April 23, 2014 (UTC)
 * Bowers 1
 * Joyce 0 you're.so.pretty! 07:43, April 24, 2014 (UTC)

lel WikiRigbyDude (talk) 20:34, April 23, 2014 (UTC)

Joyce's day of "reckoning" (number pun intended) has finally arrived! I finally finished my analysis of Joyce's work and published a new article about it on my site (see update panel). The Final Assessment: Joyce doesn't even manage to reach a grand tridecal, {10,10,10,2}, even under generous assumptions, and the order type of the polyadic g-function cannot exceed w*2, and might be as small as w+2 (depending on how you interpret the continuation of the g-function to arbitrary arguments, as detailed in the article). I hope this vindicates Bowers' work in some small way.

P.S. I have dubbed Joyce's attempt to compute Bowers' array notation as "the most epicly EPIC fail in all of googology history" :) Sbiis Saibian (talk) 02:22, April 30, 2014 (UTC)

I got a good laugh out of Saibian's new article c: WikiRigbyDude (talk) 17:40, April 30, 2014 (UTC)

When I first started to learn about googology, I came across the article describing Joyce's rules, and even then, purely intuitively, they seemed disgusting to me because of the lack of definitions past 6 entries (why stop at 6?), the lack of terminating rules, and their general arbitrariness. I just skipped it and went to Bowers' notation after learning about up-arrows and the G function. Now I see clearly that I lost nothing. Ikosarakt1 (talk ^ contribs) 19:07, April 30, 2014 (UTC)

Update:

The article now includes a reference table towards the bottom that lists the most relevant Joycian googolisms in size order, along with Bowerisms for size reference. Turns out the smallest Bowerian googolism to exceed all of Joyce's numbers is a corporalplex. In fact, Joyce's googolisms can be bounded by {3,3,2,2}. Sbiis Saibian (talk) 21:43, April 30, 2014 (UTC)

SI prefixes
I see you had a little trouble figuring out Joyce's SI prefix extensions. I spent a lot of time analyzing the article while blissfully unaware of its flaws (hey, I was a beginner), so I think I know what Joyce/Halm meant.

In pataphysical tradition, Joyce notices this:


 * Take the Italian words for seven (sette) and eight (otto).
 * Prepend them with the letters Z and Y (zette, yotto). These are, of course, the last two letters of the alphabet in reverse order.
 * Discard everything except the first syllable, and append an A (zetta, yotta).
 * You now have the SI prefixes "zetta-" (10^21) and "yotta-" (10^24).
 * We can extend this by applying the same process with the word for nine, "nove". Prepend the third letter from the end of the alphabet to get xove -> xova-.
 * The same applies for ten, W + "dieci" = "wieca-". Joyce uses "weica-" instead; either a typo or pronunciation aid.
 * Similarly, V + undici = unda-.
 * U + dodici = uda-. Here Joyce replaces the entire first vowel with U.
 * T + tredici = teda-.
 * S + quattordici = satta-.
 * R + quindici = rinda-.
 * Q + sedici = qeda-.
 * P + diciasette = pica-.
 * O + diciotto = oca-.
 * N + diciannove = nica-.

etc.

There's more to it, but I'm in a hurry. Hope this helps. you're.so.pretty! 22:05, April 30, 2014 (UTC)

This was me when reading the article:

Safe to say that this g function thing only gets anywhere when the number of arguments is a multiple of 3. WikiRigbyDude (talk) 12:33, May 1, 2014 (UTC)

Googolplex written down
How many zeroes exactly does this number contain? Ikosarakt1 (talk ^ contribs) 07:23, April 19, 2014 (UTC)

I never had any contact with Perl programming, but by looking at the source code I'm almost sure this page would indeed contain a googol zeroes. LittlePeng9 (talk) 07:39, April 19, 2014 (UTC)
 * But we know that writing down a googolplex is impossible. How can this be true? Ikosarakt1 (talk ^ contribs) 09:00, April 19, 2014 (UTC)


 * IF this page was able to load fully, THEN it would contain full decimal expansion of googolplex. HOWEVER, while loading, this page would overload memory of any existent computer, which basically means that this page can't be loaded. LittlePeng9 (talk) 09:05, April 19, 2014 (UTC)
 * ^ this ^. if I write a program that keeps printing zeroes until 10^100 are hit, i might be able to claim that the program prints googolplex, but in reality nobody will ever be able to run that program completely. you're.so.pretty! 19:30, April 19, 2014 (UTC)
 * Speaking of which... LittlePeng9 (talk) 20:10, April 19, 2014 (UTC)
 * ha! you're.so.pretty! 20:54, April 19, 2014 (UTC)
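For the record, the program described above is a two-liner; writing it is trivial, finishing it is the impossible part:

```python
def googolplex_digits():
    """Lazily yield the decimal digits of a googolplex: a 1 followed by
    10^100 zeroes. Defining the generator is trivial; exhausting it is
    physically impossible, which is exactly the point made above."""
    yield "1"
    for _ in range(10 ** 100):  # range is lazy in Python 3, so this line is fine
        yield "0"

def prefix(k):
    """Inspect a finite prefix of the digit stream."""
    gen = googolplex_digits()
    return "".join(next(gen) for _ in range(k))
```

Any finite prefix is instantly available, just like the first chunk of the Perl page; it's only the full expansion that can never be materialized.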

Meetup
I haven't once met a large number fan in real life. It would be nice to have some kind of physical meeting for googologists, or Skype if that's too hard. Just fantasizing over here. FB100Z &bull; talk &bull; contribs 01:36, July 15, 2013 (UTC)
 * We're living in different countries, and the majority of our community is 14-16 years old, so your idea is problematic. Ikosarakt1 (talk ^ contribs) 08:00, July 15, 2013 (UTC)
 * Skype idea sounds better, but still there may be problem caused by time zone difference. We can also plan a meeting 10 years in future. LittlePeng9 (talk) 08:13, July 15, 2013 (UTC)
 * But by then there will be a new generation of googologists to deal with! FB100Z &bull; talk &bull; contribs 23:25, July 15, 2013 (UTC)
 * Actually, why not? (or we could use the chat) King2218 (talk) 15:30, March 25, 2014 (UTC)
 * I think the chat is better. Ikosarakt1 (talk ^ contribs) 16:52, March 25, 2014 (UTC)

Numbers from question 5 of IMO2010
See here first; this game can lead to large numbers.

If we have 6 boxes containing 1 coin each, we can make first 5 boxes empty and the last one contain 2*2^(2^^(2^^(2^14))) coins.

In more detail, we can make [a,0] into [0,2a], make [a,0,0] into [0,2^a,0], make [a,0,0,0] into [0,2^^a,0,0], and so on.
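The [a,0] to [0,2a] step is just the first move type applied a times. A sketch of the moves (the rules below are recalled from IMO 2010 Problem 5 itself, since the thread doesn't restate them; treat them as an assumption):

```python
def type1(boxes, j):
    """Move 1: remove a coin from box j, add two coins to box j+1."""
    assert boxes[j] > 0
    out = list(boxes)
    out[j] -= 1
    out[j + 1] += 2
    return out

def type2(boxes, j):
    """Move 2: remove a coin from box j, swap boxes j+1 and j+2."""
    assert boxes[j] > 0 and j + 2 < len(boxes)
    out = list(boxes)
    out[j] -= 1
    out[j + 1], out[j + 2] = out[j + 2], out[j + 1]
    return out

def double_into_next(boxes, j):
    """Apply Move 1 to box j until it is empty: [a, 0] becomes [0, 2a]."""
    while boxes[j] > 0:
        boxes = type1(boxes, j)
    return boxes
```

The exponential and tetrational steps then come from interleaving Move 2 swaps with this doubling loop, which is where the huge growth arises.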

If we have n boxes containing 1 coin each, and we do operations to make the first n-1 boxes empty, how many coins can we get in the last one? The maximum may have a growth rate comparable to \(f_{\omega}(n)\). hyp$hyp?cos&cos (talk) 02:36, February 12, 2014 (UTC)


 * The game is also introduced on this blog post by Adam Goucher, and discussed on this user blog post. This is all the relevant information I could find. -- ☁ I want more clouds! ⛅ 12:45, February 12, 2014 (UTC)

Sequences from OEIS?
The On-Line Encyclopedia of Integer Sequences has lots of integer functions, many of which are increasing. Should we include those that have a considerable growth rate? --Ixfd64 (talk) 19:38, February 5, 2014 (UTC)
 * The most notable functions from this source, like TREE and Goodstein's, already have articles. Ikosarakt1 (talk ^ contribs) 22:04, February 5, 2014 (UTC)
 * OEIS has lots of sequences that are arbitrary or not that interesting. It's not especially good as a source for content, but rather a supplement that we can direct readers to. FB100Z &bull; talk &bull; contribs 22:51, February 5, 2014 (UTC)
 * How about:
 * A101811
 * A000088
 * A000273
 * A094268
 * A023199?
 * They all provide a growth rate between \(e^x\) and \(e^{e^x}\).
 * Wythagoras (talk) 14:51, February 6, 2014 (UTC)


 * Some more: A217716, A217757, A216151. hyp$hyp?cos&cos (talk) 15:31, February 6, 2014 (UTC)
 * They are too hard to calculate, and even \(e^{e^x}\) is a very small function in known googology. Ikosarakt1 (talk ^ contribs) 15:33, February 6, 2014 (UTC)

Upper limit of Bowers' notation - has it been confirmed?
A few months ago, "Hyp cos" claimed the strength of Bowers' notation was an incredibly large ordinal - much larger than the TFB, for instance. There was no consensus on this, yet it was added to many of the articles on Bowers' numbers. But as the late Carl Sagan once said, "extraordinary claims require extraordinary evidence." Has anyone else been able to confirm or refute this claim? --Ixfd64 (talk) 21:23, January 5, 2014 (UTC)
 * Well, before we can claim strength we have to formalize the damn thing; that's an especially slippery problem, since Bowers himself never bothered. I have conjectured this definition as the One True Damn Thing&trade;. After that can be confirmed as a reasonable formalization, we can work formally to prove BEAF's upper limit without intuition or guessing (which have been the primary tools used so far for analysis). FB100Z &bull; talk &bull; contribs 21:37, January 5, 2014 (UTC)
 * Update: so far \(⟨\omega, \omega, 2⟩ = \varepsilon_0\) is proven. Wythagoras and Aarex have argued that \(⟨\omega, \omega, 1, 1, 2⟩ = \vartheta(\Omega^2)\), and although the proof is not complete, I don't doubt the validity of the solution. FB100Z &bull; talk &bull; contribs 21:04, January 11, 2014 (UTC)
 * I also proved \(⟨\omega, \omega, 3⟩ = \zeta_0\), and you confirmed that calculation. The proof for \(⟨\omega, \omega, 1+\beta⟩ = \theta(\beta)\) is indeed not entirely complete (but that is just a small gap) and \(⟨\omega, \omega, 1, 1, 2⟩ = \vartheta(\Omega^2)\) is just Aarex's weirdness. Edit: I fixed lemma 3 and lemma 4. Wythagoras (talk) 07:40, January 12, 2014 (UTC)

Theta
What does \(\vartheta\) mean in, say, \(\vartheta(\Omega^\omega)\)? It's obviously an ordinal collapsing function but I haven't been able to find its definition. FB100Z &bull; talk &bull; contribs 18:03, April 4, 2013 (UTC)

I learned the gist of this function from Chris Bird's comparisons between ordinals and his separators. He didn't define it formally and explicitly, but I understood how it works. Ikosarakt1 (talk ^ contribs) 18:19, April 4, 2013 (UTC)

Can Deedlit or someone shed some light on this issue? FB100Z &bull; talk &bull; contribs 04:25, April 6, 2013 (UTC)


 * Hrm, is \(\vartheta(\alpha)\) the least ordinal not constructible from 0, 1, \(\omega\), \(\Omega\) using addition, multiplication, exponentiation, the Veblen function, and previous values produced by \(\vartheta\)? This seems consistent with \(\vartheta(\Omega) = \Gamma_0\). FB100Z &bull; talk &bull; contribs 18:34, April 6, 2013 (UTC)
 * lol whoops, under this definition \(\vartheta(0) = \Gamma_0\) :P FB100Z &bull; talk &bull; contribs 18:40, April 6, 2013 (UTC)
 * Wrong. \(\vartheta(n)\) = \(\varphi(n,0)\) AarexTiao 00:14, December 20, 2013 (UTC)
 * FB100Z already said that the definition doesn't match what we already know. You don't need to say that again. -- ☁ I want more clouds! ⛅ 05:27, December 20, 2013 (UTC)
 * there's a reason i wrote "lol whoops" FB100Z &bull; talk &bull; contribs 08:06, December 20, 2013 (UTC)
 * Why you writed "lol whoops"? AarexTiao 22:09, December 21, 2013 (UTC)
 * Because he made a (fairly obvious) mistake. LittlePeng9 (talk) 22:13, December 21, 2013 (UTC)
 * @Aarex Sorry if this is a little harsh, but you seem to have an incomplete grasp of English. No worries, though, it's a tough language :| FB100Z &bull; talk &bull; contribs 23:05, December 21, 2013 (UTC)
 * Is :| emoij? This is Comment chain 7! AarexTiao 15:27, December 22, 2013 (UTC)
 * @Aarex It's an emoticon.
 * @FB100Z You never tried to learn Polish. Each adjective can have freaking 100 forms, many of them different. English is extremely simple compared to Polish. LittlePeng9 (talk) 21:13, December 22, 2013 (UTC)
 * 1. Learn Polish Now! 2. Correct. AarexTiao 21:51, December 22, 2013 (UTC)
 * @LittlePeng Holy crap. 0_o FB100Z &bull; talk &bull; contribs 22:54, December 22, 2013 (UTC)
 * Continue the comment chain! AarexTiao 22:56, December 22, 2013 (UTC)
 * aw yeah i found it FB100Z &bull; talk &bull; contribs 18:47, April 6, 2013 (UTC)
 * Actually, the definition in the article is for the \(\theta\) function (with two variables), not the \(\vartheta\) function (with one variable). I'll explain the differences between the various notations in a little while. Deedlit11 (talk) 03:08, April 8, 2013 (UTC)

Saibian's "Jacob's ladder" article
Does anyone have an idea how large the largest number Saibian defines in this article is? I guess it's something in the heptational range. Ikosarakt1 (talk ^ contribs) 15:09, July 23, 2013 (UTC)

(AA...AA) w/n a's = n

I found out that the largest 'A' structure fully written out contains about 167,000 A's.

A line contains 75 A's

Alpha = [A] ~ 167,000*75 = 12,525,000

Beta = [AA] ~ 12,525,000*75 = 939,375,000

[AA...AA] w/n a's ~ \(167000*75^n\)

A ~ \(167000*75^{12525000}\) ~ \(3.5*10^{23485147}\)

AA ~ \(167000*75^{939375000}\) ~ \(3.2*10^{1761385679}\)

[A] ~ \(75^{ A }\)

...[[A...]] w/n brackets ~ E(75)12525000#(n-1)

Sequence:

seq(1) = [A] ~ 12,525,000

seq(2) = ...[[A...]] w/[A] brackets ~ E(75)12525000#1#2

seq(3) = ...[[A...]] w/seq(2) brackets ~ E(75)12525000#1#3

seq(n) ~ E(75)12525000#1#n

seq([A]) ~ E(75)12525000#1#[A]

seq(seq([A])) ~ E(75)12525000#1#[A]#2

seq[A](n) ~ E(75)12525000#1#1#[A]

seqseq(n)(n) ~ E(75)12525000#1#1#1#2

seqseq ... seq(n)... (n) (n) w/a seqs ~ E(75)12525000#1#1#1#a Wythagoras (talk) 17:45, July 23, 2013 (UTC)

Correct, as E(a)1#1#1#1#b = a heptated to b. Ikosarakt1 (talk ^ contribs) 17:45, July 23, 2013 (UTC)

The first thing I thought of when I saw that page: http://uncyclopedia.wikia.com/wiki/AAAAAAAAA!‎ FB100Z &bull; talk &bull; contribs 03:08, July 24, 2013 (UTC)


 * There are exactly 167,332 A's on the largest 'A' structure. Here's a (slightly) more accurate version of the above values:


 * Alpha = [A] = 167,332*75 = 12,549,900
 * Beta = [AA] = 12,549,900*75 = 941,242,500
 * [AA...AA] w/n a's = \(167332*75^n\)
 * A = \(167332*75^{12549900}\) ~ \(3.741*10^{23531836}\)
 * AA = \(167332*75^{941242500}\) ~ \(2.701*10^{1764887356}\)


 * The rest of the conversion is pretty obvious. -- ☁ I want more clouds! ⛅ 04:03, July 24, 2013 (UTC)
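Those values can be double-checked with logarithms, since the numbers themselves are far too large to compute exactly:

```python
from math import floor, log10

def sci_form(power):
    """Mantissa and exponent of 167332 * 75^power, computed via log10
    (the number itself would have tens of millions of digits)."""
    e = log10(167332) + power * log10(75)
    exponent = floor(e)
    return 10 ** (e - exponent), exponent
```

Here sci_form(12549900) gives about (3.741, 23531836) and sci_form(941242500) gives about (2.701, 1764887356), agreeing with the corrected values for A and AA above.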


 * I found more!
 * AAA = \(167332*75^{167332*75^3}\)
 * [[[A] ] ] = \(167332*75^{167332*75^{12549900}}\)

It said, "Let's say we go beyond all of the steps of adding another set of block braces, brace by brace and instead jump ahead of this whole sequence by having ALPHA BRACKETS !"


 * [[[[[[[[[[ ... ... ... ... [[[[[[[[[[[[A]]]]]]]]]] ... ... ... ... ]]]]]]]]]]]] where there are [A] brackets ~ E(75)12549900#12549899
 * [[[[[[[[[[ ... ... ... ... [[[[[[[[[[[[A]]]]]]]]]] ... ... ... ... ]]]]]]]]]]]] with as many brackets as the number [[[[[[[[[[ ... ... ... ... [[[[[[[[[[[[A]]]]]]]]]] ... ... ... ... ]]]]]]]]]]]] with [A] brackets ~ E(75)12549900#12549900#3
 * [[[[[[[[[[ ... ... ... ... [[[[[[[[[[[[A]]]]]]]]]] ... ... ... ... ]]]]]]]]]]]] with as many brackets as the number [[[[[[[[[[ ... ... ... ... [[[[[[[[[[[[A]]]]]]]]]] ... ... ... ... ]]]]]]]]]]]] with as many brackets as the number [[[[[[[[[[ ... ... ... ... [[[[[[[[[[[[A]]]]]]]]]] ... ... ... ... ]]]]]]]]]]]] with [A] brackets ~ E(75)12549900#12549900#4

Let the 1st member be [A] = 12549900 and the (n+1)-th member be ...[[A...]] (n-th member nested) ~ E(75)12549900#12549900#n.


 * [A] member ~ E(75)12549900#12549900#12549900
 * [AA] member ~ E(75)12549900#12549900#941242500
 * [[A] ] member ~ E(75)12549900#12549900#\(167332*75^{12549900}\)

What is the number of A's in the (2nd member)-th member? AarexTiao 23:48, December 10, 2013 (UTC)

Growth rates and subjectivity
What does it mean when two functions have "comparable growth rates"? FB100Z &bull; talk &bull; contribs 21:25, October 4, 2013 (UTC)
 * If two functions f(x) and g(x) have comparable growth rates, then there exists some linearly growing function h(x) such that f(x) < g(h(x)). For example, f(n) = 2n and g(n) = 3n have comparable growth rates, since one will catch the other in linear time. Ikosarakt1 (talk ^ contribs) 23:05, October 4, 2013 (UTC)
 * Reviving an old topic &mdash; don't g(n) = TREE(n) and f(n) = 0 have comparable growth rates by this definition? FB100Z &bull; talk &bull; contribs 07:41, November 8, 2013 (UTC)
 * f(k) = 0 for any k, so f(h(n)) << g(n) for any function h(n). Wythagoras (talk) 15:48, November 8, 2013 (UTC)
 * But the point is, Ikosarakt's definition is only one-way: with f(n)=0 and g(n)>0 we get f(n)<g(h(n)). LittlePeng9 (talk) 16:29, November 8, 2013 (UTC)
 * Good point. I think giving f(x) < g(h(x)) and g(x) < f(h(x)) will do it. LittlePeng9 (talk) 14:45, November 8, 2013 (UTC)


 * Yes, but we should add that h(x) doesn't have to be a linearly growing function. h(x) should be a function such that h(x) < f(x) and h(x) < g(x) for any x. Wythagoras (talk) 15:48, November 8, 2013 (UTC)
 * Hmm, I don't think that'd do. For example, if f(n)=3^n and g(n)=2^2^n, then for h(n)=3^(n-1) we get g(n)=2^2^n<3^3^(n-1)=f(h(n)). But I agree a linear function is a bit too weak for our purposes; look below. LittlePeng9 (talk) 16:29, November 8, 2013 (UTC)
 * Already there is confusion, and this is the point that I'm trying to get at. We need to be careful throwing around the term "growth rate" until we can agree on a definition. FB100Z &bull; talk &bull; contribs 21:56, November 8, 2013 (UTC)
 * In my opinion, it is a bit more than that. For example, in googological sense, TREE(n^2) doesn't grow much more quickly than TREE(n), and relation is more than linear. LittlePeng9 (talk) 05:46, October 5, 2013 (UTC)
 * Yes, but in that case, the term "growth rate" really has subjective meaning. The way in which I defined it is just an attempt to formalize it. Ikosarakt1 (talk ^ contribs) 07:30, October 5, 2013 (UTC)
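The two-sided condition proposed in this thread can at least be checked empirically; a toy sketch (finite samples give evidence, not proof, and the function names here are my own):

```python
def comparable_on(f, g, h, ns):
    """Test the two-sided condition f(n) <= g(h(n)) and g(n) <= f(h(n))
    on a finite sample of inputs."""
    return all(f(n) <= g(h(n)) and g(n) <= f(h(n)) for n in ns)

# 2n and 3n: comparable via the linear h(n) = 2n.
linear_ok = comparable_on(lambda n: 2 * n, lambda n: 3 * n,
                          lambda n: 2 * n, range(1, 1000))

# n and 2^n: no linear h can work; h(n) = 1000n already fails in the sample.
linear_fails = comparable_on(lambda n: n, lambda n: 2 ** n,
                             lambda n: 1000 * n, range(1, 50))
```

This also makes the objection concrete: for f(n) = 2n and g(n) = 3n every linear h eventually works in both directions, while for an exponential gap no linear h can.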

We may have to deal with several possible definitions of growth rate, depending on the kind of scale we're going for. I have envisioned (but not implemented) a metric based on proof-theoretic ordinals; it would be very useful for comparing very strong computable functions, but it would be imprecise for smaller functions and it wouldn't work at all for uncomputable ones. FB100Z &bull; talk &bull; contribs 22:04, November 8, 2013 (UTC)

Hyper-Cantor normal form
How do we define Cantor normal form using the base \(\Omega\) (for the purposes of ordinal collapsing functions)? It seems that writing the ordinal \(\theta(\Omega*\omega^2)\) as \(\theta(\omega^{\Omega+2})\) makes it easier to fit the usual Cantor normal form, as we avoid multiple multiplications in a single non-additively-principal ordinal: \(\theta(\Omega*\omega^2)\) can be expanded to \(\theta(\Omega*\omega*\omega)\), and that probably escapes Cantor normal form. Ikosarakt1 (talk ^ contribs) 10:27, August 11, 2013 (UTC)

I believe Cantor normal form uses only addition and exponentiation, so your ordinal would be \(\theta(\omega^{\Omega+2})\). By the way, this ordinal is additively principal. LittlePeng9 (talk) 12:40, August 11, 2013 (UTC)

Cantor normal form can use multiplications, but only when one factor is finite. This makes for a more convenient (shortened) rule \(\omega^{\alpha+1} = \omega^\alpha*n\) instead of \(\omega^\alpha+\omega^\alpha+\cdots+\omega^\alpha+\omega^\alpha\) (with n \(\omega^\alpha\)'s). I wonder how we can create an equivalent system allowing transfinite multiplications. Ikosarakt1 (talk ^ contribs) 17:28, August 11, 2013 (UTC)


 * Cantor normal form to a base \(\alpha\) is \(\alpha^{\beta_1} \gamma_1 + \alpha^{\beta_2} \gamma_2 + \ldots + \alpha^{\beta_n} \gamma_n\), where \(\beta_1 > \beta_2 > \ldots > \beta_n\) and \(\gamma_i < \alpha\) for all \(i\). When \(\alpha = \omega\), you can expand things out so that the \(\gamma_i\)'s go away. Deedlit11 (talk) 22:00, October 30, 2013 (UTC)
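For finite values this base-\(\alpha\) form is just positional notation, which makes a runnable miniature possible (naturals only; genuine ordinal exponents would need real ordinal arithmetic):

```python
def cnf(n, b):
    """Write the natural number n in 'Cantor normal form to base b':
    a list of (exponent, coefficient) pairs with exponents strictly
    decreasing and every coefficient in 1..b-1."""
    assert n >= 0 and b >= 2
    terms, e = [], 0
    while n > 0:
        n, digit = divmod(n, b)
        if digit:
            terms.append((e, digit))
        e += 1
    return list(reversed(terms))

def from_cnf(terms, b):
    """Inverse of cnf: the sum of b**e * c over all terms."""
    return sum(b ** e * c for e, c in terms)
```

So cnf(2023, 10) yields [(3, 2), (1, 2), (0, 3)], i.e. \(10^3 \cdot 2 + 10^1 \cdot 2 + 10^0 \cdot 3\), which is exactly Deedlit's shape with \(\alpha = 10\); the transfinite case just allows the exponents to be ordinals themselves.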

Distributed project on BB(5)
Recently I explored whether the remaining undecided 5-state TMs halt, using a computer program. We can significantly accelerate that process if we do it together (like members of GIMPS in prime number searches). Anyone interested in improving the bound for \(\Sigma(5)\)? I mean just run one of the TMs on your computer, so you don't have to do anything constantly (well, it can slow your computer a bit if it's too weak).

The list of undecided TMs you can find here and here. Ikosarakt1 (talk ^ contribs) 16:57, September 7, 2013 (UTC)
 * This project would have basically zero practical application, so I wouldn't expect very many volunteers. I'd be happy to help, though. FB100Z &bull; talk &bull; contribs 17:21, September 7, 2013 (UTC)
 * Okay, but we, being fans of googology, are probably interested in improving bounds. In addition, we would make our GWiki pretty glorious if we discovered it (as this bound would be in Wikipedia). Ikosarakt1 (talk ^ contribs) 21:29, September 7, 2013 (UTC)
 * Using this simulator I found that TM#827123 doesn't halt after 238,418,940 steps and leaves more than 19,978 1's on the tape. Wythagoras (talk) 17:34, September 7, 2013 (UTC)
 * What's the most efficient algorithm you know for simulating a TM? FB100Z &bull; talk &bull; contribs 18:01, September 7, 2013 (UTC)
 * I don't see how we can simulate a TM faster than just step-by-step running, unless the function it computes is known exactly. For HNR-type TMs we must run the step-by-step algorithm in a computer program. Ikosarakt1 (talk ^ contribs) 21:28, September 7, 2013 (UTC)


 * The method used for finding bounds used macro machines, together with subroutines searching for repetitions. I believe 5-state HNRs have some type of regularity to them, so some machine-assisted checking would show this. LittlePeng9 (talk) 21:49, September 7, 2013 (UTC)
 * True, but creating such a program would be pretty hard work, I believe. Can you verify, for example, TM#827123 using this method? Ikosarakt1 (talk ^ contribs) 21:54, September 7, 2013 (UTC)
 * I found that TM#2523420 doesn't halt after 75 billion steps. Wythagoras (talk) 18:51, September 7, 2013 (UTC)
 * Thanks, but I don't recommend using this simulator if you want to run a TM for a really large number of steps, since it overflows after 2147483647 steps. A better option is to make your own program in an environment like Lazarus. I've done so, and I'm planning to run TM#827123 up to the 10 trillionth step (over 2 trillion steps have been performed already). Ikosarakt1 (talk ^ contribs) 19:27, September 7, 2013 (UTC)
 * I believe HNR#7 doesn't halt. It has a loop: State 5-3-1-3-1-3-4-1-2-5-3-1-3-1-3-4-1-2 Wythagoras (talk) 06:48, September 8, 2013 (UTC)
 * Is a "state loop" enough to prove a TM non-halting? There must also be a repeating condition for the tape. Ikosarakt1 (talk ^ contribs) 11:54, September 8, 2013 (UTC)
 * No, but there is also a repeating string on the tape, for more information you can look here.
 * Wythagoras (talk) 13:08, September 8, 2013 (UTC)
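One case of the "repeating state plus repeating tape" argument is fully rigorous: since a TM is deterministic, an exactly repeated configuration (state, head position, tape) proves it never halts. A sketch of that check (it only catches exact recurrences, not loops that drift along the tape like the one described above; the two example machines are my own, not HNR machines):

```python
def classify(table, max_steps=100_000):
    """Run a TM from a blank tape, watching for exact configuration repeats.
    table maps (state, symbol) -> (write, move, next_state); state "H" halts.
    Returns "halts", "loops" (a configuration recurred, so the deterministic
    machine runs forever), or "unknown"."""
    state, pos, tape = "A", 0, {}
    seen = set()
    for _ in range(max_steps):
        if state == "H":
            return "halts"
        config = (state, pos, frozenset(tape.items()))
        if config in seen:
            return "loops"
        seen.add(config)
        write, move, state = table[(state, tape.get(pos, 0))]
        if write:
            tape[pos] = write       # store only nonzero cells
        else:
            tape.pop(pos, None)
        pos += 1 if move == "R" else -1
    return "unknown"

# A shuttle that bounces between two blank cells forever...
SHUTTLE = {("A", 0): (0, "R", "B"), ("B", 0): (0, "L", "A")}
# ...and a machine that writes a single 1 and halts.
TRIVIAL = {("A", 0): (1, "R", "H")}
```

Machines whose loops translate along the tape (like binary-counter behavior) never repeat a configuration exactly, which is why they need the shifted-pattern argument discussed in the thread instead.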

TM#34605254 (HNR#35)
I ran TM#34605254 (HNR#35) and it doesn't halt after 2.75e12 steps. (rate: 5e9 steps/second) -- ☁ I want more clouds! ⛅ 08:59, September 9, 2013 (UTC)
 * Thanks! I've calculated that it was done in 9 minutes 10 seconds. Ikosarakt1 (talk ^ contribs) 10:14, September 9, 2013 (UTC)

HNR#35 exhibits 4-ary counting behavior, and currently (after around 10,000 steps) it is counting with 33 "digit-holders", each consisting of 2 digits between 11's. For example, at step 13419, the tape's content is:

0(68 0's)11001100110111001100110011001100(25 1100's)11111001100110011010111011

Where the head is on the leftmost 0 and in state 2 (starting from 0). You can see that the third digit from the left is 2, and all other digits are 0.

Based on this observation, I guess that if this TM ever halts, it will not happen in the next \(10^{40}\) steps. -- ☁ I want more clouds! ⛅ 14:11, September 9, 2013 (UTC)
 * By the way, this TM reminds me of Forever Endeavor... -- ☁ I want more clouds! ⛅ 14:16, September 9, 2013 (UTC)
 * I expect that exact value of S(5) will lie somewhere in the exponential range, so I doubt that it halts. We can't be sure, however. Ikosarakt1 (talk ^ contribs) 17:02, September 9, 2013 (UTC)
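One plausible reading of the \(10^{40}\) guess, which is my own reconstruction rather than anything stated above: a counter with 33 holders of 2 quaternary digits each has 66 digits, and counting through all its values takes at least \(4^{66}\) steps:

```python
holders = 33            # "digit-holders" observed on the tape
digits = 2 * holders    # two quaternary digits per holder (an assumption)
configs = 4 ** digits   # values a 66-digit base-4 counter passes through
# configs is about 5.4 * 10^39, i.e. the 10^40 ballpark
```

Since \(4^{66} = 2^{132} \approx 5.4 \times 10^{39}\), a full count through the observed digit positions alone already lands in the stated range.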

HNR#16 & HNR#19
These TMs actually have similar behavior. I noted that they halt when the head finds two 1's (or rows of 1's) separated by only one space while the TM is in state 3 or 4, with the head placed on the first 1. Can anyone prove that this situation won't ever happen? Ikosarakt1 (talk ^ contribs) 17:02, September 9, 2013 (UTC)

I think I can prove it for HNR#19, but I can't for HNR#16. The only way to enter state 3 is via state 2. There must be a 1 under the head, so the cell to the right must also be a 1. However, that's impossible, since you had to enter state 2 using state 1, which would have left a blank on the tape. Wythagoras (talk) 17:40, September 9, 2013 (UTC)

Noob question
What's the standard way of enumerating TMs? When we say TM#x, what's the ruleset given x? FB100Z • talk • contribs 18:38, September 10, 2013 (UTC)

There is no "standard" way. One such way is Wolfram's numbering, but it can be done in many different ways. LittlePeng9 (talk) 19:01, September 10, 2013 (UTC)
 * What's the rule for Wolfram's numbering? FB100Z • talk • contribs 19:58, September 10, 2013 (UTC)

HNR#42
How about this counter? To me, it seems that it is just a binary counter where "11111"'s work like "1"'s, each representing a single bit, while "11 11"'s and spaces work like digit separators. I don't expect that it halts, but a strict proof would be appreciated. Ikosarakt1 (talk ^ contribs) 17:59, September 12, 2013 (UTC)

I ran the machine for 100,000 steps and I saw a pattern: 111 11 111 11 111 11 111 11 ... Wythagoras (talk) 18:09, September 12, 2013 (UTC)

Rule in phi function
We know that in the phi function there exist ordinals of the form \(\phi(\#\,\alpha,0,\cdots,0,\beta+1)\) (where \(\alpha\) is a limit ordinal, while \(\beta\) need not be). There are two variants for handling this:


 * \(\phi(\#\,\alpha,0,\cdots,0,\beta+1) = \phi(\#\,\alpha[n],\phi(\#\,\alpha,0,\cdots,0,\beta)+1,\cdots,0,\beta)\)


 * \(\phi(\#\,\alpha,0,\cdots,0,\beta+1) = \phi(\#\,\alpha[n],0,\cdots,\phi(\#\,\alpha,0,\cdots,0,\beta)+1,0)\)

So, should we place the successor of the original expression with its last entry reduced by 1 at the start or at the end of the row of zeroes? Ikosarakt1 (talk ^ contribs) 10:45, July 12, 2013 (UTC)

Place the successor of the original expression with its last entry reduced by 1 at the start. AarexTiao 19:18, September 8, 2013 (UTC)
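For reference, in the degenerate two-argument case (no intermediate zeroes) the two variants coincide, and the rule reduces to the standard fundamental-sequence convention (sketched here as I understand it):

```latex
% Two-argument case, \alpha a limit ordinal:
\phi(\alpha,\beta+1) = \sup_{n<\omega} \phi\bigl(\alpha[n],\,\phi(\alpha,\beta)+1\bigr)
% Concretely, \phi(\omega,1) is the limit of
%   \phi(1,\phi(\omega,0)+1),\ \phi(2,\phi(\omega,0)+1),\ \phi(3,\phi(\omega,0)+1),\ \ldots
% where \phi(\omega,0) is the first common fixed point of all \phi(n,\cdot).
```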

First digits of Graham's number
The question about the first digits of Graham's number (and other numbers past double-exponential size) has interested me for a long time. No algorithm computing them in a reasonable time is known, but might there be some hyper-algorithm which sorts all normal algorithms by time complexity? If it were given, then we could just look at the fastest of them and immediately decide our ability to compute the first digits of Graham's number on our computers.

I believe that such a hyper-algorithm can be programmed on an OTM (oracle Turing machine). Ikosarakt1 (talk ^ contribs) 17:11, August 24, 2013 (UTC)

Consider the following Turing machine (a standard one, not an oracle machine): it first computes G ones, then translates them into decimal. Now the machine moves to the first digit of G. If it is, for example, 7, then the machine halts, and escapes to infinity otherwise. This machine can be constructed in 88 states and 11 symbols. Now, some OTM can check 9 such machines, one for each possible digit, and exactly one of them will halt, giving the answer to which digit is first. So an OTM can fairly easily check the digits of G. LittlePeng9 (talk) 17:31, August 24, 2013 (UTC)

I'm looking for something trickier: the OTM must not just compute the first digits of G, but rather has to find all algorithms for a usual TM which compute the first digits of G, pick the fastest one, and tell us how fast it is. That's why we need an OTM specifically, not just a TM.

This is pretty important because it may turn out that we can compute G(n) in exponential time or so.

Ikosarakt1 (talk ^ contribs) 19:31, August 24, 2013 (UTC)

There is an algorithm doing this in constant time. The only thing we need is to check that the input number isn't too large; if it is, we instantly halt with an error. Then we look up the nth digit in time upper-bounded by some constant. The best possible algorithm is a pretty trivial one, with x states, where x is the number of digits in G. LittlePeng9 (talk) 20:28, August 24, 2013 (UTC)

The problem is that we can't run a Turing machine with x states on our computers. Okay, you've pointed out to me that the number of states must not itself be unreasonably big. We can restrict it to, say, a myriad of states. Then we ask the OTM to run through all TMs with a myriad states or fewer, proving whether they compute the first digits of G within a bounded time complexity (we only need algorithms no slower than O(2^n)). Ikosarakt1 (talk ^ contribs) 10:25, August 25, 2013 (UTC)
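As a contrast showing why G is special: for numbers of merely exponential size, the leading digits fall out of the fractional part of the base-10 logarithm. A minimal Python sketch (my own illustration, not an algorithm for G, since log10(G) itself is far beyond representation):

```python
import math

def leading_digits(base, exponent, k):
    """First k digits of base**exponent.

    base**exponent = 10**(exponent * log10(base)), so the mantissa
    10**frac (with frac the fractional part of the logarithm)
    determines the leading digits.
    """
    log10_val = exponent * math.log10(base)
    frac = log10_val - math.floor(log10_val)
    return int(10 ** frac * 10 ** (k - 1))

# 3^27 = 7625597484987, so the first three digits are 762.
print(leading_digits(3, 27, 3))
```

Double precision suffices here only because the logarithm itself is small; for Graham's number even writing down the logarithm is hopeless, which is exactly the obstacle discussed above.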

Competition on primitive recursion
There are 2 categories in the unlimited googological competition: creating the fastest-growing computable and uncomputable functions. What about creating the fastest-growing primitive recursive function? Ikosarakt1 (talk ^ contribs) 17:51, August 23, 2013 (UTC)

Primitive recursion is already explored territory. For example, for every such function F there is a number a such that \(f_a\) eventually outgrows F, and there is even a number b such that \(f_b(n)>F(n)\) for all n>1. But it would be more interesting if we considered "natural" functions only. LittlePeng9 (talk) 18:18, August 23, 2013 (UTC)

What are these "natural" functions? Ikosarakt1 (talk ^ contribs) 18:28, August 23, 2013 (UTC)

There is no formal definition of this term, but intuitively it is a function which isn't "designed" to be fast-growing. For example, the TREE function has nothing in its definition that would suggest it grows any faster than a tetrational function, but it still reaches extremely large values. Array notations, on the other hand, aren't natural, because their rules are designed to give large values. LittlePeng9 (talk) 20:47, August 23, 2013 (UTC)

Primitive-recursive functions are limited by \(f_\omega\), and below \(\omega\) there aren't any "landmark" ordinals other than 0 and 1. Larger ordinals such as \(\) and \(\omega_1\) give us a little more elbow room. FB100Z • talk • contribs 21:48, August 23, 2013 (UTC)

LittlePeng, these "natural" functions, even though they don't seem fast-growing, diagonalize through certain sets of structures which are defined so that recursion is still allowed inside them.

FB100Z, I must note that all our notations, recursive and non-recursive, will be exhausted well below \(\omega_1\); in fact, they all have the first undefinable ordinal as their limit. Ikosarakt1 (talk ^ contribs) 11:13, August 24, 2013 (UTC)

In my opinion, "naturalness" isn't a property of the function itself, but rather a property of its definition. Consider the function 2^^n. An unnatural definition would be "an exponential stack of n 2's", while a natural definition would be "the number of elements in the set V_n". It's up to you to decide whether a given function/definition is natural or not. LittlePeng9 (talk) 13:00, August 24, 2013 (UTC)
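The \(f_a\) mentioned above are the fast-growing hierarchy. As an illustration of how quickly even its finite levels dominate, here is a toy Python sketch of the common convention \(f_0(n)=n+1\), \(f_{a+1}(n)=f_a^n(n)\) (my own implementation; already impractical to evaluate beyond tiny inputs):

```python
def f(a, n):
    """Fast-growing hierarchy restricted to finite indices a:
    f_0(n) = n + 1; f_{a+1}(n) = f_a iterated n times starting at n."""
    if a == 0:
        return n + 1
    result = n
    for _ in range(n):
        result = f(a - 1, result)
    return result

# f_1(n) = 2n, f_2(n) = n * 2**n, and f_3 already grows tetrationally:
# f(3, 2) = f_2(f_2(2)) = f_2(8) = 8 * 2**8 = 2048.
```

Every primitive recursive function is eventually dominated by some finite level of this hierarchy, which is the sense in which primitive recursion is "explored territory".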

Number of the functions
How big is the ordinal measuring the absolute limit of functions? I mean not just functions mapping natural numbers to natural numbers, or reals to reals, but functions allowed to handle arguments of any kind. Ikosarakt1 (talk ^ contribs) 11:01, August 14, 2013 (UTC)

It depends on how you measure the ordinals of functions. For every cardinal \(\kappa\) we can have a set of functions of that cardinality. Very likely we can also, for any non-trivial measure, choose a set in which every function has a different measure. This would imply that there is no absolute limit. LittlePeng9 (talk) 11:23, August 14, 2013 (UTC)

Second uncountable ordinal
It doesn't seem that the name "the second uncountable ordinal" is a good one for the ordinal we usually mean by it. In fact, is the ordinal \(\omega_1+1\) uncountable? If so, then there are exactly two uncountable ordinals no larger than \(\omega_1+1\), meaning that it is the second uncountable ordinal. I propose to call it the "ordinal with the second level of uncountability". Ikosarakt1 (talk ^ contribs) 16:21, August 9, 2013 (UTC)

Yes, the second uncountable ordinal is \(\omega_1+1\), as it encodes a set one larger than \(\omega_1\) and so is still uncountable. Proper terms for \(\omega_2\) are: the second uncountable regular ordinal, the third infinite regular ordinal, or the smallest ordinal \(\omega_2\) such that \(|\omega_2|>|\omega_1|\). LittlePeng9 (talk) 17:10, August 9, 2013 (UTC)

Some expression in Bird's array notation
How to resolve {3,3 [1 [1 [1 ¬ 1 ¬ 2] 2 [2 \3 2] 2] 2] 2} in BAN? Ikosarakt1 (talk ^ contribs) 15:21, July 30, 2013 (UTC)

In {3,3 [1 [1 [1 ¬ 1 ¬ 2] 2 [2 \3 2] 2] 2] 2}, [1 ¬ 1 ¬ 2] is the lowest-level expression that can be solved, so

{3,3 [1 [1 [1 ¬ 1 ¬ 2] 2 [2 \3 2] 2] 2] 2} =

{3,3 [1 [1 [1 ¬ [1 [1 [1 ¬ 1,2] 2 [2 \3 2] 2] 2]] 2 [2 \3 2] 2] 2] 2} Wythagoras (talk) 16:25, July 30, 2013 (UTC)

Large numbers from finite promise games?
This posting by Friedman looks pretty interesting. I wonder if we could use it to define some super-fast growing function. --Ixfd64 (talk) 00:30, July 11, 2013 (UTC)

From the posting, we have for example

PROPOSITION 1.2. Let T be in PL and n >= 1. If m,s are sufficiently large then Bob wins G(T,n,s) even if Bob accepts all factorials > m offered by Alice.

So we can set m = s, and define f(T, n) to be the smallest t such that Bob wins G(T, n, s) for all s >= t. According to Friedman, f dominates all functions provably recursive in the theory ZFC + {there is a strongly k-Mahlo cardinal}_k. This is a very strong theory, so it will be a very fast-growing function, faster than Loader's function I believe. Deedlit11 (talk) 13:16, July 11, 2013 (UTC)

Do you know what the subscript _k after the extension of ZFC means? Also, is there any published proof of these results? It seems that we have a winner in terms of the fastest recursive function. LittlePeng9 (talk) 15:24, July 11, 2013 (UTC)


 * I believe it means that for each natural number k you add the axiom "There is a strongly k-Mahlo cardinal". As far as a published proof goes, it seems like Harvey Friedman makes a lot of posts to the F.O.M. mailing list including results that never get published in papers - perhaps he has too many. So I don't know if this particular theorem has a published proof anywhere.
 * Concerning the fastest recursive function, Friedman has some other results that result in faster growing functions: See for instance the papers "Finite functions and the Necessary Use of Large Cardinals" and "Finite Trees and the Necessary Use of Large Cardinals", both of which describe functions that dominate the provably recursive functions of the theory ZFC + {there exists a k-subtle cardinal}_k.  Like this one, the theorems are fairly elementary, which is nice.
 * Actually, you can define a recursive function that dominates the provably recursive functions of any theory T (so you could take a really strong theory like ZFC + I0). Define f_T(n) to be the smallest m such that, for all Sigma^0_1 formulas of the form Ex A(x), if Ex A(x) is provable in T using fewer than n symbols, then A(x) is true for some x < m.  This function will have the properties stated above, so it will be an extremely fast-growing function. Deedlit11 (talk) 14:18, July 12, 2013 (UTC)


 * I've heard about the method in your last paragraph (Loader mentioned it in the explanation of his program), but I did not suspect it could be done in a system as strong as ZFC (to say nothing of its extensions). I really have to take a good look at the FOM mailing list when I have time! LittlePeng9 (talk) 19:39, July 12, 2013 (UTC)


 * I wouldn't be surprised if Friedman could beat any of us in a large number naming contest without breaking the Gentleman's Rule or using uncomputable functions. --Ixfd64 (talk) 21:41, July 12, 2013 (UTC)

Some ordinal
What is the ordinal \(\phi(\Gamma_0,1)\) equivalent to in the psi function? Ikosarakt1 (talk ^ contribs) 18:38, July 6, 2013 (UTC)

\(\phi(\Gamma_0,1)\) is the limit of \(\phi(\alpha,\Gamma_0+1) = \phi(\alpha,\phi(\Gamma_0,0)+1)\) as \(\alpha \rightarrow \Gamma_0\)


 * \(\psi(\Omega^\Omega) = \Gamma_0\)
 * \(\psi(\Omega^\Omega+1) = \varepsilon_{\Gamma_0+1}\)
 * \(\psi(\Omega^\Omega+\Omega) = \phi(2,\Gamma_0+1)\)
 * \(\psi(\Omega^\Omega+\Omega^2) = \phi(3,\Gamma_0+1)\)
 * \(\psi(\Omega^\Omega+\Omega^\omega) = \phi(\omega,\Gamma_0+1)\)
 * \(\psi(\Omega^\Omega+\Omega^{\varepsilon_0}) = \phi(\varepsilon_0,\Gamma_0+1)\)
 * etc.

So I guess that \(\phi(\Gamma_0,1)\) is equivalent to \(\psi(\Omega^\Omega+\Omega^{\psi(\Omega^\Omega)})\). -- ☁ I want more clouds! ⛅ 00:15, July 7, 2013 (UTC)


 * Correct! Deedlit11 (talk) 04:29, July 7, 2013 (UTC)
 * Okay, thanks. Ikosarakt1 (talk ^ contribs) 19:48, July 7, 2013 (UTC)

Breaking free from the integers
Hello, all! I'm writing to discuss a problem that's been on my mind for a long time: how can a googologist break free from the iron grip of the integers?

Daniel Geisler has done some cool things with continuous function iteration for things like tetration. I am interested in the general problem of extending googology past counting numbers. What is \(3 \uparrow\uparrow\uparrow 4.5\)? Or \(5 \uparrow^{1/2} 4\)? Or \(\{10, 100 (\pi) 2\}\)?

My math idol wrote a book discussing surreal numbers, an extension of the ordinals into things like \(\omega - 1\) and \(\varepsilon_\pi\). It might be interesting to figure out what, say, \(f_{1/2}(n)\) or \(f_{\sqrt{\Gamma_0}}(n)\) means.

We may not be able to convert all our favorite notations into continuous forms, but there's a whole world to explore here.

FB100Z • talk • contribs 20:22, March 19, 2013 (UTC)

Well, we can at least extend tetration: define \(a \uparrow\uparrow (1/b)\) as \(\sqrt[a \uparrow\uparrow (b-1)]{a}\), and pentation: \(a \uparrow\uparrow\uparrow (1/b) = \sqrt[a \uparrow\uparrow\uparrow (b-1)]{a}_s\) (a super-root). I believe that definition is reasonable, since as b tends to infinity the value gets closer to 1, and for b=1 it gives a. Ikosarakt1 (talk ^ contribs) 20:42, March 19, 2013 (UTC)

Extending tetration and the Ackermann function has been discussed to death at http://math.eretrandre.org/tetrationforum/index.php so we can probably get some useful ideas from there.

Is \(\varepsilon_\pi\) really defined? I'd like to see the definition.

Ikosarakt, what about \(a \uparrow\uparrow (b/c)\) for b > 1? Deedlit11 (talk) 21:13, March 19, 2013 (UTC)

In the matter of \(a \uparrow\uparrow (b/c)\), it would be defined as \(\sqrt[a \uparrow\uparrow (c-1)]{a \uparrow\uparrow b}\). It also has a good connection with integer results: for example, to calculate \(3 \uparrow\uparrow (3/3)\) we need to extract the \(3 \uparrow\uparrow 2 = 27\)-th root of 7625597484987. The result is 3, so even under my definition \(3 \uparrow\uparrow (3/3) = 3 \uparrow\uparrow 1\). Ikosarakt1 (talk ^ contribs) 21:24, March 19, 2013 (UTC)

Wait, that doesn't work. Under that definition, \(2 \uparrow\uparrow (1/2) = 2 \uparrow\uparrow (2/3) = 2 \uparrow\uparrow (3/4)\), etc., so the problem remains unresolved. Ikosarakt1 (talk ^ contribs) 21:45, March 19, 2013 (UTC)
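As a numerical check of the collision just noted, here is the proposed rule \(a \uparrow\uparrow (b/c) = \sqrt[a \uparrow\uparrow (c-1)]{a \uparrow\uparrow b}\) as a quick Python sketch (the rule itself is the proposal under discussion, not an accepted definition):

```python
from fractions import Fraction

def tetration(a, n):
    """Integer tetration: a^^n as a height-n power tower, with a^^0 = 1."""
    result = 1
    for _ in range(n):
        result = a ** result
    return result

def frac_tetration(a, q):
    """Proposed extension a^^(b/c) = (a^^b) ** (1 / a^^(c-1)).
    This implements the rule under discussion; it is not standard."""
    q = Fraction(q)
    b, c = q.numerator, q.denominator
    return tetration(a, b) ** (1.0 / tetration(a, c - 1))

# Both 2^^(1/2) and 2^^(2/3) come out as sqrt(2), showing the collision.
print(frac_tetration(2, Fraction(1, 2)), frac_tetration(2, Fraction(2, 3)))
```

Distinct fractional heights mapping to the same value is exactly why the definition can't serve as a monotone interpolation of tetration.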


 * @Deedlit This is sort of off-topic, but if I have it correctly

\[\varepsilon_{a} = \{\varepsilon_{a^L} + 1, \omega^{\varepsilon_{a^L} + 1}, \omega^{\omega^{\varepsilon_{a^L} + 1}}, \ldots|\varepsilon_{a^R} - 1, \omega^{\varepsilon_{a^R} - 1}, \omega^{\omega^{\varepsilon_{a^R} - 1}}, \ldots\}\]
 * e.g.

\[\varepsilon_{1/2} = \{\varepsilon_0 + 1, \omega^{\varepsilon_0 + 1}, \omega^{\omega^{\varepsilon_0 + 1}}, \ldots|\varepsilon_1 - 1, \omega^{\varepsilon_1 - 1}, \omega^{\omega^{\varepsilon_1 - 1}}, \ldots\}\]
 * and this can be used repeatedly ad infinitum to obtain \(\varepsilon_a\) where a is not a dyadic rational. FB100Z • talk • contribs 02:22, March 20, 2013 (UTC)
 * Cool! Deedlit11 (talk) 02:25, March 20, 2013 (UTC)