User blog comment:P進大好きbot/New Googological Ruler/@comment-31580368-20190629142620/@comment-35470197-20190724025044

> How come f_wck1(n) is fundamentally stronger than BB(n)? Sounds quite counter-intuitive to me.

To begin with, the system of fundamental sequences associated with Kleene's O heavily depends on the choice of the enumeration of computable functions. If one intentionally employs a "weak" enumeration, then the resulting FGH can be very weak. (A Japanese googologist actually constructed such an enumeration, with a proof: the resulting system is bounded by ω+3 in the Wainer hierarchy.)
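How strongly the fundamental sequences drive the growth rate is already visible at small ordinals. As a minimal sketch of my own (not part of the original discussion), here is the FGH below ω^ω with the standard fundamental sequences; replacing the limit rule with a weaker assignment would yield a much slower hierarchy.

```python
# Fast-growing hierarchy below w^w with the standard fundamental
# sequences, to show how the sequences determine the growth rate.
# An ordinal is a tuple (c0, c1, ..., ck): c0 + c1*w + ... + ck*w^k.

def f(a, n):
    """Compute f_a(n) in the fast-growing hierarchy."""
    if not any(a):                     # a = 0: f_0(n) = n + 1
        return n + 1
    i = min(j for j, c in enumerate(a) if c)   # lowest nonzero power
    b = list(a)
    if i == 0:                         # successor: f_{a+1}(n) = f_a^n(n)
        b[0] -= 1
        for _ in range(n):
            n = f(tuple(b), n)
        return n
    # limit: diagonalise along (b + w^i)[n] = b + w^(i-1) * n
    b[i] -= 1
    b[i - 1] = n
    return f(tuple(b), n)
```

For instance `f((1,), n)` gives 2n and `f((0, 1), n)` is the diagonal `f_n(n)`, the first place where the choice of fundamental sequences matters.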

Therefore it is harmless to assume that the enumeration we employ is actually strong when we talk about the level of f_{ω_1^{CK}}. Then how can we compare uncomputable large functions? There is no effective theory applicable to many cases, and hence all we can do is guess from several factors related to size, e.g. the computability level (the computation model required to compute the function), the definability (the axioms required to ensure definability), and the well-definedness (the axioms required to ensure unique existence). (I know that these factors do not characterise the size.)

Fortunately, both BB and the FGH are definable and well-defined under ZFC, so we compare the computability level. BB is computable by an oracle Turing machine. The second-order BB, i.e. Σ_2, is computable by a second-order oracle Turing machine. Similarly, for a recursive ordinal α, Σ_α is computable by an α-th order oracle Turing machine. On the other hand, I heard it is known that for any recursive ordinal α, the set of countable ordinals computable by an α-th order Turing machine coincides with the set of ordinals below ω_1^{CK}. In particular, Kleene's O itself is uncomputable in such a hyperarithmetic computation model. (Unlike the hyperarithmetic recursion of the higher-oracle BB's, Kleene's O itself is defined by set-theoretic recursion.) That is why I guess that f_{ω_1^{CK}} is not so easy to go beyond.
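The computability levels above can be summarised in the standard Turing-jump formulation, which I believe matches the oracle-machine phrasing here (a rough summary, not a precise statement of the original argument):

```latex
% BB = Sigma_1 is computable from the halting oracle 0',
% Sigma_2 from 0'', and Sigma_alpha from the alpha-th iterated jump:
\[
  \Sigma_1 \leq_{\mathrm{T}} \emptyset', \qquad
  \Sigma_2 \leq_{\mathrm{T}} \emptyset'', \qquad
  \Sigma_\alpha \leq_{\mathrm{T}} \emptyset^{(\alpha)}
  \quad (\alpha < \omega_1^{\mathrm{CK}}),
\]
% while the hyperarithmetic sets are exactly those computable from some
% iterated jump below omega_1^CK, and Kleene's O lies strictly above:
\[
  \mathrm{HYP} \;=\; \Delta^1_1 \;=\;
  \bigcup_{\alpha < \omega_1^{\mathrm{CK}}}
    \{\, X : X \leq_{\mathrm{T}} \emptyset^{(\alpha)} \,\},
  \qquad
  \mathcal{O} \in \Pi^1_1 \setminus \mathrm{HYP}.
\]
```

So every Σ_α sits inside the hyperarithmetic hierarchy, while Kleene's O, and hence a strong f_{ω_1^{CK}}, does not.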

As I clarified in the "Regulations" section, the next level is not necessarily proved to be significantly large. Even if you can verify that a large number goes beyond oracle Turing machines, it is difficult to show that it goes beyond f_{ω_1^{CK}}. Conversely, if you can prove that it goes beyond f_{ω_1^{CK}}, then it is usually also provable that it goes beyond oracle Turing machines, as long as it is not a naive extension of f_{ω_1^{CK}}.

> If our quest is to create the largest possible computable number with an explicit set of computable rules, then TI's and Friedman games are perfectly valid. But in this case, it would make no sense to delve into insanely complicated ordinal notations which take years to master.

One solution is to set a regulation requiring an ordinal notation, i.e. a notation with a recursive well-ordering. On the other hand, many googologists confound ordinal notations with systems of fundamental sequences, e.g. UNOCF and BMS, and hence I did not set such a regulation. If the notion of an ordinal notation becomes more widely, i.e. correctly, understood, then the regulation will work well.
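To illustrate the distinction, here is a minimal sketch of my own (not any existing googological notation) of what an ordinal notation is: terms below ε_0 in Cantor normal form, with a computable comparison and a computable validity check, but with no fundamental sequences at all.

```python
# A tiny ordinal notation: terms below epsilon_0 in Cantor normal form.
# A term is a list of exponents (each itself a term), nonincreasing:
# 0 = [], 1 = [[]], w = [[[]]], w + 1 = [[[]], []], w^w = [[[[]]]].

def cmp(a, b):
    """Recursive comparison of CNF terms: returns -1, 0, or 1."""
    for x, y in zip(a, b):
        c = cmp(x, y)
        if c:
            return c
    return (len(a) > len(b)) - (len(a) < len(b))

def valid(a):
    """Check the term is in Cantor normal form (exponents nonincreasing)."""
    return all(valid(x) for x in a) and \
        all(cmp(a[i], a[i + 1]) >= 0 for i in range(len(a) - 1))
```

The point is that `cmp` is a computable linear order on valid terms whose order type is an ordinal; a system of fundamental sequences is an extra structure on top of this, and a system such as BMS provides the latter without being an ordinal notation in this sense.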

> OTOH if our quest is to BUILD the strongest googological NOTATION from the grounds up (like Bird's notation or BMS or SAN) then TI's and Friedman games are completely irrelevant.

But if it is not required to be an ordinal notation, then one can create a notation which goes beyond TI.

> Much of the confusion here stems from the unstated assumption that "larger numbers" automatically means "more advanced". This may have been correct in the early days of googology, but it is no longer true.

This is a new point of view for me. Interesting. (Actually, I do not know the early days of googology, because I only started googology about two years ago.)

> As another example, I think the Pair Sequence System is one of the coolest googological inventions ever made. It may not be the most powerful notation ever devised, but it is so simple that a child could understand how it works (though he won't understand why it works). I find PSS to be a far more impressive innovation than cryptic-but-powerful array notations like SAN.

In my opinion, the Primitive Sequence System is also one of the coolest googological inventions, although the Pair Sequence System is much more beloved in Japan.

> In short: Even in googology, size is not everything. This is why my own "Psi Levels" scale does not even try to order things in terms of "difficulty". It just attempts to give a vivid map of the large numbers landscape.

I agree with the result. My attempt is just to make a new ruler, which shows others how to go beyond the current level. Therefore it is directed completely differently from other rulers such as your scaling.

> Not really. Scorcher007 made claims that I would not have made, and went into details that I personally find irrelevant to the discussion.

As I commented to Scorcher007, the definition itself does not have problems. However, I think that we still have the same issue: there is no effective way to determine the level in that scaling. If we have a notation with a system of fundamental sequences, then we can traditionally understand the approximation through a comparison map which approximately preserves the system of fundamental sequences. That is why I think that my ruler should mainly consist of FGHs. It is fine to just provide a scaling based on significant goals, but I would like to use it together with proofs. (I am not good at traditional table-based analysis, and hence I usually create comparison maps such that the compatibility of the fundamental sequences is provable by finitely many inductions.)