User blog comment:Edwin Shade/How Do I Evaluate BEAF Arrays In Two Dimensions ?/@comment-30754445-20170827183955/@comment-32876686-20170830155201

Hm... I believe I understand it then. So pentational arrays are not well-defined because of certain notational difficulties.

There is now one matter I have been wondering about. Beyond the ordinal omega there are larger ordinals, eventually reaching epsilon-nought, and so on. Although Cantor proved there are infinitely many infinite ordinals, omega times one and omega times two are fundamentally the same kind of thing, whereas omega and epsilon-nought differ in a fundamental way (hopefully you understand what I'm trying to say here). So although the number of infinite ordinals is infinite, might the number of fundamentally unique ones be finite?
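To put the distinction I mean in symbols (a rough sketch, using standard ordinal notation): omega times two is reached from omega by a single addition, but epsilon-nought is the first fixed point of the map from alpha to omega^alpha, so no finite tower of exponentials reaches it:

```latex
\omega \cdot 2 = \omega + \omega,
\qquad
\varepsilon_0 = \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\},
\qquad
\omega^{\varepsilon_0} = \varepsilon_0.
```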

It is with this thought that I say this: perhaps functions are the same way, and after a while there can be no fundamentally different type of function. In the short time I've been on this site, it seems all the computable functions rely on recursion and diagonalization, and while you can make the recursion faster or the diagonalization more powerful, there is no fundamental difference in the general process of building these functions. Stronger functions can be made if they are uncomputable, but of all the uncomputable functions I have seen, they too rely on only a few elements: either taking the maximum output over all halting programs of a given size, where halting itself is undecidable [the busy beaver function], or taking the largest number definable in a given framework of mathematics with a given number of symbols [Rayo's function].
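To illustrate what I mean by recursion and diagonalization, here is a minimal sketch (the names f and f_omega are just for illustration) of the finite levels of a fast-growing hierarchy, followed by the diagonal step that jumps past every fixed level:

```python
def f(k, n):
    """Finite levels of a fast-growing hierarchy.

    f(0, n) = n + 1, and f(k+1, n) applies f(k, .) to n a total of
    n times. Each level grows far faster than the one below it;
    this repeated self-application is the "recursion" device.
    """
    if k == 0:
        return n + 1
    result = n
    for _ in range(n):       # iterate the previous level n times
        result = f(k - 1, result)
    return result

def f_omega(n):
    """Diagonalization: feed the argument in as the level too.

    For every fixed k, f_omega eventually outgrows f(k, .), so the
    diagonal function sits above the whole finite hierarchy.
    """
    return f(n, n)
```

Concretely, f(1, n) works out to 2n and f(2, n) to n times 2^n, while f_omega(n) = f(n, n) climbs through the levels as n grows; that jump past every fixed level is the diagonalization.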

So at the end of it all, it seems that although one can continue to use diagonalization, recursion, Turing machines, and other devices, wouldn't there be a limit at which no fundamentally unique devices can be used in a function? For instance, is there even anything beyond uncomputable functions, and if so, could someone give me an example of such a thing?