User blog:Hyp cos/Idea for legion

This blog post is a tip for new googologists who want to create a fast-growing function by recursion. It leads you to \(\psi(\Omega_\omega)\), the limit of \(\Pi_1^1\text{-CA}_0\).

Basic idea of "legion"
In BEAF, the symbol "X" can take on an arithmetic operation such as exponentiation. Exponentiation is not very fast-growing in the googological sense. However, when the symbol "X" takes on exponentiation, "X^X&" becomes a very strong operator and yields very fast-growing functions, much faster-growing than exponentiation. Likewise, in ExE, when the symbol "#" takes on exponentiation, "#^#" becomes a very strong separator, and the resulting functions are much faster-growing than exponentiation. And in FGH, when the symbol "ω" takes on exponentiation, "ω^ω" also yields a much faster-growing function than exponentiation...

Naturally, such ideas come up - why not use BEAF itself on X's, ExE itself on #'s, and FGH itself on ω's? And that's the basic idea of "legion".

Using BEAF on X's
Jonathan Bowers worked on that idea, but unfortunately, as we now know, he failed: it turns out that BEAF is ill-defined beyond tetrational arrays.

Let's go one step further to see why it fails. Have a look at an X^^(X+1) array (i.e. an {X,X+1,2} array). By the definition of up-arrow notation or linear BEAF, we have X^^(X+1) = X^(X^^X) (or {X,X+1,2} = {X,{X,X,2}}). The next time we reduce it, it becomes X^(X^^n) - note that the "X^^X" reduces immediately after X^^(X+1) does. Not only "X" has this sort of problem: # and ω suffer from it too, if they take on tetration.
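For concrete numbers, the identity behind this reduction can be checked directly. Here is a minimal Python sketch of tetration (the function name tetrate is mine, purely for illustration):

```python
def tetrate(a, b):
    """a^^b: a right-associated power tower of b copies of a."""
    result = 1
    for _ in range(b):
        result = a ** result
    return result

# The up-arrow identity used above, with concrete numbers in place of X:
# a^^(b+1) = a^(a^^b)
assert tetrate(2, 4) == 2 ** tetrate(2, 3)   # 65536 == 2^16
assert tetrate(3, 3) == 3 ** tetrate(3, 2)   # 3^27
```

The same unwinding happens symbolically: one reduction of X^^(X+1) already exposes X^^X at the top of the tower, where it gets consumed next.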

As a result, X^^(X+1) is even worse than X^^X+1. The latter is something equivalent to {a,b(X^^X)a+1} = {X^^X&a(X^^X)a} - here the outside separator (X^^X) will reduce a-1 more times, and each time the prime entry "b" becomes much larger than the time before. The same holds beyond BEAF: #^^#*# and ω^^ω+1 are stronger than #^^(#+1) and ω^^(ω+1) respectively.

Using ExE on #'s
Let's try another thing - using ExE itself on #'s. f(n) = En#n has a tetrational growth rate, so let's start with that. (To avoid confusion, I write #2 here for the # mark of E# notation, and # for the #'s we operate on.) E##2# reduces to E##2n, then to EE##2(n-1), then EEE##2(n-2), etc. We finally get 10^10^...10^10^#, and then the # reduces immediately, making it even worse than "##".
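To see why En#n is only tetrational, here is a minimal numeric sketch of the Hyper-E reduction Ea#1 = 10^a, Ea#b = E(Ea#(b-1)) (hyper_e is an illustrative name of mine, and only tiny inputs are feasible since the values are power towers of tens):

```python
def hyper_e(a, b):
    """Hyper-E: Ea#1 = 10^a, Ea#b = E(Ea#(b-1)),
    i.e. a tower of b tens with a on top."""
    if b == 1:
        return 10 ** a
    return 10 ** hyper_e(a, b - 1)

assert hyper_e(2, 1) == 100         # E2 = 10^2
assert hyper_e(1, 2) == 10 ** 10    # E1#2 = 10^(10^1)
```

Each extra # entry just adds one more 10 to the bottom of the tower, which is exactly the weakness described above.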

But the next step is more fatal. What about E##2(#+1)? It is EE##2#, which just adds a 10 to the bottom of the power tower - it still pales in comparison with "##". As a result, we can't get much further this way either. Similarly, EX#X and Eω#ω are weaker than X^2 and ω^2 respectively.

Using FGH on ω's
This time it works. \(f_3(\omega)\) reaches ω^^ω, and \(f_0(f_3(\omega))\), \(f_3^2(\omega)\) and \(f_4(\omega)\) are stronger, comparable to \(\varepsilon_0+1\), \(\varepsilon_{\varepsilon_0}\) and \(\zeta_0\) respectively.
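The recursion scheme of FGH at finite indices and at ω can be sketched in a few lines of Python (this toy fgh is my own sketch: it only takes natural-number arguments and the index 'w' for ω, so it cannot evaluate symbolic expressions like \(f_3(\omega)\), but it shows where the strength comes from):

```python
def fgh(alpha, n):
    """Fast-growing hierarchy below omega*2 (minimal sketch).
    alpha is an int k (meaning f_k) or 'w' (meaning f_omega)."""
    if alpha == 'w':
        return fgh(n, n)              # f_omega(n) = f_n(n): diagonalize
    if alpha == 0:
        return n + 1                  # f_0(n) = n+1
    result = n
    for _ in range(n):                # f_{k+1}(n) = f_k iterated n times on n
        result = fgh(alpha - 1, result)
    return result

assert fgh(1, 3) == 6       # f_1(n) = 2n
assert fgh(2, 3) == 24      # f_2(n) = n * 2^n
assert fgh('w', 2) == 8     # f_omega(2) = f_2(2)
```

The crucial point is the iteration line: the inner \(f_k\) is applied to a value that has already grown, rather than being consumed before it can contribute.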

Conclusion
What decides the success of this sort of notation is whether the strongest part of the operator survives long enough after one reduction. A^^(A+1) and EA#A fail (where A is X, #, ω, or a similar symbol in your own notation) because their strongest parts - the A's in EA#A, and the A^^A arising from A^^(A+1) - reduce immediately after EA#A and A^^(A+1) do. In contrast, \(f_3^2(A)\) succeeds because its strongest part, \(f_3(A)\), doesn't reduce until the outside \(f_3\) in \(f_3(f_3(A))\) does.

One can fix the problem in a very simple way - even simply adding 1 can make a big difference. For example, if the rules are modified to {a,b+1,c+1 #} = {a,{a,b,c+1 #}+1,c #} and E@m#(n+1) = E@((E@m#n)+1), then they work. Sbiis uses a more complicated way: he defines a ">" operator and then hyper-operations on #'s. That also works, but it is more complicated.
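As a toy numeric illustration of why the "+1" matters, here are the standard and the modified linear rules side by side (std and fixed are hypothetical names of mine; both use the linear BEAF base cases {a,b,1} = a^b and {a,1,c} = a, and the separator tail # is dropped since these are plain linear arrays):

```python
def std(a, b, c):
    """Standard linear rule: {a,b,c} = {a, {a,b-1,c}, c-1}."""
    if c == 1:
        return a ** b
    if b == 1:
        return a
    return std(a, std(a, b - 1, c), c - 1)

def fixed(a, b, c):
    """Modified rule from the post: {a,b+1,c+1} = {a, {a,b,c+1}+1, c}."""
    if c == 1:
        return a ** b
    if b == 1:
        return a
    return fixed(a, fixed(a, b - 1, c) + 1, c - 1)

assert std(2, 3, 2) == 16       # 2^^3
assert fixed(2, 3, 2) == 512    # the "+1" feeds a larger prime back in
```

On plain numbers the "+1" only nudges the result, but at the level of X-, #-, and ω-arrays it is exactly what keeps the strongest part alive for one more round of reductions.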