User blog comment:Simplicityaboveall/Extremely Large Numbers 2/@comment-5529393-20160801203256/@comment-5529393-20160803224305

Simplicity,

The strongest ordinal notations may be the ones devised by proof theorists in published mathematical papers. However, the strongest ones are very hard to understand, so I won't bother linking to them. Instead, here are some very strong "amateur" efforts:

Taranovsky's main notation [] is a very strong notation, and actually the definition is not all that complicated. But, the way it is defined (via a comparison relation) makes it somewhat hard to analyze and see how it grows.

Hyp Cos's Strong Array Notation [] is also very good; the definition is pretty complicated, but on the plus side it gives fundamental sequences, which makes it easier to see how it grows (though it is still hard; I'm still analyzing it).

Our Japanese Googology colleagues have created the Bashicu Matrix System [], a nice, fairly easy-to-define system that nevertheless grows quite fast. People analyzing it have claimed it grows faster than Taranovsky's notation, but I am skeptical of their claims. Note that, under the current rule system, some expressions keep reducing forever without terminating, which is a problem. KurohaKafka has created a new rule system to fix the problem, but I haven't been able to understand his explanation. Still, even if some expressions fail to terminate, the main ones may terminate, and so we may still have a strong notation system.

Lawrence Hollom's Extended Factorial Array Notation [] is a nice little notation that doesn't seem to go quite as far as some of these others; but it is quite short, and seems ripe for extension.

Wythagoras's Dollar function [] was a competitor with Hyp Cos's R function; Wythagoras has abandoned it, but it might be interesting to check out.

Also, if you are interested in strong notations I would suggest reading my introductions to ordinal notations on my blog page; look for "Ordinal Notations I" through "Ordinal Notations VI".

As for PsiCubed's statement, I agree that heshbar-q(Rayo(10^100)) < Rayo(10^100 + 1) is likely to be true. Note that heshbar-q(n) will have a growth rate comparable to f_a(n) for some recursive ordinal a. Then f_{a+1}(10^100 + 1) = f_a(f_{a+1}(10^100)) ~ heshbar-q(f_{a+1}(10^100)). So we get a similar statement with just f_{a+1}(n), which will be much slower growing than the Rayo function. That doesn't prove PsiCubed's statement, but intuitively one can see why it is likely to be true.
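The recursion f_{a+1}(n) being defined by iterating f_a, which the argument above relies on, can be sketched for the simplest case: indices that are plain non-negative integers rather than the recursive ordinal a actually needed. This is just an illustrative toy, not an implementation of anything at the strength being discussed; the function name f is my own choice.

```python
# Fast-growing hierarchy at finite indices only (a toy sketch):
#   f_0(n) = n + 1
#   f_{a+1}(n) = f_a iterated n times, starting at n
def f(a, n):
    if a == 0:
        return n + 1
    result = n
    for _ in range(n):  # iterate f_{a-1} a total of n times
        result = f(a - 1, result)
    return result

# Sanity checks: f_1(n) = 2n, f_2(n) = n * 2^n
assert f(1, 5) == 10
assert f(2, 3) == 24
```

Even at index 3 the values explode (f(3, 3) already towers past 10^100 of iterations conceptually), which is the intuition behind comparing heshbar-q to some f_a: once a function sits at level a, one more level of iteration overtakes any fixed number of its applications.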

PsiCubed's claim that "This is true for any computable function" is not true, since for example f(n) = Rayo(10^100+2) is a computable function. I'm sure PsiCubed is aware of this; I imagine he meant to say that any computable function that one can reasonably define that doesn't invoke the Rayo function or some similarly strong concept will satisfy the inequality, because of the argument described above. This seems like a reasonable statement.