User talk:80.98.179.160

Creation of unsourced articles
Hello, it appears that you have created articles for numbers without sources. Articles about numbers must be sourced using reference tags. The articles you have created have been nominated for deletion for this reason. Next time, create articles with sources. Alternatively, you can create an account and add your numbers to a blog post. Thank you. -- ☁ I want more clouds! ⛅ 11:58, October 6, 2017 (UTC)

Caution
If you continue to create articles without sources, you may be blocked from editing. -- ☁ I want more clouds! ⛅ 14:06, October 16, 2017 (UTC)
 * Sorry, there aren't \(\aleph_0\) clouds, so we can't fulfill your request for more clouds. := 80.98.179.160 15:58, December 25, 2017 (UTC)

LOL Username5243 (talk) 16:25, December 25, 2017 (UTC)
 * Why the LOL? :% 80.98.179.160 13:02, December 28, 2017 (UTC)

Testing
In the future, please use the Sandbox for editing tests. Thanks. --Ixfd64 (talk) 18:22, October 27, 2017 (UTC)

Your Profile Page
I suppose you could construct a system of mathematics in which 2ω>ω, but in the form of ordinal arithmetic used the most, \(\omega2\ne2\omega=\omega\). This is built into the central axioms which govern ordinal arithmetic. Believing the contrary is logically wrong, but it is an understandable objection if you are new to ordinals. If you have any questions I'll answer them to the best of my ability. Edwin Shade (talk) 23:59, November 22, 2017 (UTC)
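The distinction Edwin describes can be checked directly from the definitions: multiplication on the left of ω is absorbed, while multiplication on the right is not. A worked computation, using nothing beyond the standard definitions of ordinal arithmetic:

```latex
2\cdot\omega \;=\; \sup_{n<\omega}\,(2\cdot n) \;=\; \omega,
\qquad
\omega\cdot 2 \;=\; \omega+\omega \;>\; \omega,
\qquad\text{hence}\quad
2\cdot\omega \;\ne\; \omega\cdot 2 .
```

So ordinal multiplication is not commutative, which is the whole point of the disagreement below.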
 * I saw ω2 as 2ω (here). And a video showed 3ω>ω (there)! 80.98.179.160 09:30, November 23, 2017 (UTC)
 * People may write 2ω so that what they say can be more easily understood, even by those who know ordinal arithmetic and do understand it correctly themselves. In both the case of Saibian and that of the PBS Infinite Series video, I feel they wrote 2ω just for clarity, but it is not correct. The screenshot of the comment section shows a person addressing this error, another person's response, and then the response of the channel that uploaded the video saying they are indeed right.
 * Edwin Shade (talk) 23:07, November 26, 2017 (UTC)
 * And he wrote 2ω after ω; \(\varepsilon_0\) as \(\varphi(0,1)\), which is ω; \(\zeta_0\) as \(\varphi(0,2)\), which is \(\omega^2\); and \(\varepsilon_{\varepsilon_0}\) as \(\varphi(\varphi(0,1),1)\) in Reverse Veblen Notation (RVN), which is \(\varphi(\omega,1)\), but \(\varphi(\omega,1)>\varepsilon_{\varepsilon_0}\). The Feferman-Schütte ordinal was written as \(\varphi(0,0,1)\), again ω, but it is \(\varphi(1,0,0)\); and \(\omega^\omega\) as \(\varphi(\omega,0)\). The most natural use is nonzero arguments before zeroes, or extended Veblen functions. But there are some correct values (\(\varphi(\alpha,\beta,\cdots,\beta,\alpha)\)). 80.98.179.160 08:38, November 27, 2017 (UTC)
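For comparison, the standard (non-reversed) Veblen values being contrasted in this thread are:

```latex
\varphi(0,\beta)=\omega^{\beta}, \qquad
\varphi(1,0)=\varepsilon_0, \qquad
\varphi(2,0)=\zeta_0, \qquad
\varphi(1,0,0)=\Gamma_0\ \text{(the Feferman–Schütte ordinal)} .
```

Leading zero arguments are dropped in the extended Veblen function, so \(\varphi(0,0,1)=\varphi(0,1)=\omega\); that is why reading an RVN expression under the standard convention collapses it to a much smaller ordinal.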
 * Sbiis Saibian is wrong in his representation of ordinals. In fact, I even sent him an email about this.
 * [Screenshot of the sent email: SentemailtoSbiisSaibian.png]
 * Edwin Shade (talk) 01:39, November 28, 2017 (UTC)
 * You only talked about 2ω, but not RVN. Is RVN supposed to be correct? 80.98.179.160 14:23, November 29, 2017 (UTC)
 * I'm not entirely sure if it's correct to use Reverse Veblen Notation, but if you make clear how it's supposed to be different from normal Veblen notation, then I don't see a problem using it. Edwin Shade (talk) 15:21, November 29, 2017 (UTC)
 * Reverse Veblen Notation is listing Veblen function's arguments in reverse order. That's the difference. 80.98.179.160 19:36, November 29, 2017 (UTC)
 * Okay, then it's correct. Edwin Shade (talk) 21:34, November 29, 2017 (UTC)
 * Actually, Saibian never made the claim that 2ω = ω+ω. He specifically says on his site that he is aware of this being false (as far as ordinal arithmetic goes) and that he uses things like "2ω" as his preferred notation for the ordinal ω2. He is simply using ordinary polynomials to notate all ordinals under \(\omega^\omega\) because it looks nicer. This may not be the standard notation, and it may lead to erroneous conclusions by his readers, but it is not mathematically "wrong" in any way. It's just notation.
 * Notice that later, when he goes up to ε0 and beyond, he changes to the correct order. Whenever he's actually doing a calculation, he always writes ω+ω as ω2. He even writes explicitly: "In Cantor's ordinal arithmetic order matters however, and we find that 1+ω=ω and 2ω=ω". So yes, he is aware of this. PsiCubed2 (talk) 01:48, November 30, 2017 (UTC)
 * But why does he use ordinary polynomials for ordinals? Another error: he lists ω+1 as {ω,1,2,...}, when correctly it is {0,1,...,ω}. And he listed ωε0, which could be meant as ε0. And he doesn't change RVN. 80.98.179.160 08:08, November 30, 2017 (UTC)
 * ω+1 is equal to {ω,0,1,2,...} as a set. You seem to confuse the set ω+1 with the ordering defined by that ordinal. The set ω+1 is simply the collection of all ordinals less than ω+1, and these can be written in any order. The whole point of that paragraph is to show that the sets ω and ω+1, while being different ordinals, actually have the exact same number of elements (aleph null).
 * The sole instance of ωε0 is indeed an error, though.
 * As for why he uses polynomials: \(\omega^3+7\omega^2+5\omega+2\) looks cleaner and prettier than \(\omega^3+\omega^2\cdot7+\omega\cdot5+2\). PsiCubed2 (talk) 19:30, November 30, 2017 (UTC)
 * I meant the ordering. True, but they assume ordinal arithmetic's commutativity. And he wrote that \(\varepsilon_0^{\varepsilon_0}=\omega^{\omega^{2\varepsilon_0}}\); again you may think that must be ε0. I've already deleted the part of my user page saying \(2\omega\ne\omega\). 80.98.179.160 20:39, December 2, 2017 (UTC)
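The cardinality point from the ω+1 discussion above can be made concrete. Below is a minimal Python sketch (representing ω by the string "ω", purely a hypothetical encoding for illustration) of a bijection between ω+1 and ω, showing the two sets have the same number of elements even though they are different ordinals:

```python
# Bijection between omega+1 = {0, 1, 2, ..., omega} and omega = {0, 1, 2, ...}:
# send omega to 0, and shift every finite ordinal n up to n+1
# (a "Hilbert's hotel" shift).
OMEGA = "ω"  # hypothetical marker for the top element of omega+1

def to_omega(x):
    """Map an element of omega+1 to a unique natural number."""
    return 0 if x == OMEGA else x + 1

def from_omega(k):
    """Inverse map: recover the element of omega+1 from its image."""
    return OMEGA if k == 0 else k - 1

# Check on a finite sample: the map is injective and inverts cleanly.
sample = [OMEGA] + list(range(10))
images = [to_omega(x) for x in sample]
assert len(set(images)) == len(images)                    # no collisions
assert all(from_omega(to_omega(x)) == x for x in sample)  # inverse works
assert sorted(images) == list(range(11))                  # fills 0..10, no gaps
```

Note that the ordering is not preserved: ω comes after every natural number inside ω+1, but its image 0 comes first. That is exactly why ω and ω+1 are different ordinals of the same cardinality, \(\aleph_0\).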

New numbers
Kaliumillion is \(10^{6\cdot10^{570}}\) in long scale and \(10^{3\cdot10^{570}+3}\) in short. How do you like my numbers, on a scale of 0 to 10? 80.98.179.160 14:12, December 6, 2017 (UTC)

Who gave a 3? I presume it's someone who doesn't like that his number isn't the largest -illion now defined. Or that the table is too large. 80.98.179.160 17:02, December 19, 2017 (UTC)

I gave the 3. It's because I did not like using 421290 without any reason (I was not able to figure out why 421290 was used), and it's hard to extend the system. Rpakr (talk) 19:14, December 27, 2017 (UTC)


 * 421290 is used due to:
 * There are 118 -illions with 1 element name but 0 other prefixes,
 * 13924 with 2 element names but 0 other prefixes,
 * and 1 added because the next name is different,
 * which sums to 14043, which I multiplied by 30 to get 421290.
 * 80.98.179.160 15:53, December 29, 2017 (UTC)
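The count above is plain arithmetic and can be checked in a few lines of Python (the 118 and 13924 counts themselves are the poster's figures, taken as given):

```python
# Poster's figures: 118 one-element-name -illions, 13924 two-element-name
# -illions, plus 1 for the point where the next name differs.
total_names = 118 + 13924 + 1
assert total_names == 14043          # the stated sum checks out
assert total_names * 30 == 421290    # the constant used in the definition
```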

FGH versus EFGH
Assuming h appears n times on the right-hand side of both your successor and limit rules, your successor rule is weaker than two applications of the normal successor rule, and your limit rule is weaker than applying the limit rule and then the successor rule.

Meaning, if we set \(f_\alpha(n) = h_\alpha(n)\), and define \(f_{\alpha+1}(n)\) and \(f_{\alpha+2}(n)\) using the successor rule from the normal fast-growing hierarchy, then \(f_{\alpha+2}(n) > h_{\alpha+1}(n)\) for \(n > 1\). Also, if we set \(f_\beta(n) = h_\beta(n)\) for all \(\beta < \alpha\) given some limit ordinal \(\alpha\), and define \(f_\alpha(n)\) and \(f_{\alpha+1}(n)\) using the normal limit and successor rules, then \(f_{\alpha+1}(n) > h_{\alpha}(n)\) for \(n > 1\). Thus, in general we will have

\(f_{\alpha + 2m + 1}(n) > h_{\alpha + m}(n)\) for \(n \ge 2, m \ge 0\), and \(\alpha\) a limit ordinal, where f is the FGH and h is the EFGH.

Thus to answer the questions on your profile page, \(h_\omega\) is about \(f_{\omega+1}\), \(h_{\omega_1^\text{CK}}\) is about \(f_{\omega_1^\text{CK}+1}\), and the FGH catches up to the EFGH at every limit ordinal. (I am using the following definition of "catching up": f catches up to h at \(\alpha\) if for every \(\beta < \alpha\) there exists a \(\gamma < \alpha\) such that \(f_\gamma\) eventually dominates \(h_\beta\). This notion assumes that \(f_\delta\) is generally less than \(h_\delta\).) Deedlit11 (talk) 16:15, December 19, 2017 (UTC)
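For readers following along, the normal fast-growing hierarchy that Deedlit's f refers to can be sketched for finite indices. This is a minimal sketch: the base case \(f_0(n)=n+1\) is the usual convention, and limit ordinals are omitted since they require fundamental sequences:

```python
def f(alpha, n):
    """Fast-growing hierarchy, restricted to finite indices alpha.

    f_0(n) = n + 1 (base rule);
    f_{alpha+1}(n) = f_alpha iterated n times on n (successor rule).
    Limit ordinals are omitted in this finite sketch.
    """
    if alpha == 0:
        return n + 1
    result = n
    for _ in range(n):              # successor rule: n-fold iteration
        result = f(alpha - 1, result)
    return result

assert f(1, 3) == 6    # f_1(n) = 2n
assert f(2, 3) == 24   # f_2(n) = n * 2^n
```

Already f(3, n) is roughly iterated exponentiation, and each successor step composes the previous level n times, which is what makes "two applications of the normal successor rule" such a strong bound in the comparison above.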
 * In the successor rule, h appears \(h_\alpha(n)\) times; in the limit rule, \(h_{\alpha[n]}(n)\) times. And you answered the question about the HH positively because you answered the one about the FGH positively. 80.98.179.160 16:56, December 19, 2017 (UTC)
 * In that case, in the previous paragraph \(h_{\alpha+1}\) is a little bigger than \(f_{\alpha+2}\), but not that much, and it is much smaller than \(f_{\alpha+3}\). Similarly, for limit ordinals \(\alpha\) we have that \(h_\alpha\) is a little bigger than \(f_{\alpha+1}\), but much smaller than \(f_{\alpha+2}\). So we will have
 * \(f_{\alpha+3m+2}(n)>h_{\alpha+m}(n)\) for \(n\ge2, m\ge0\), and \(\alpha\) a limit ordinal, where f is the FGH and h is the EFGH.
 * and in fact I believe we will have \(f_{\alpha + 2m+2}(n)>h_{\alpha+m}(n)\) as well.
 * I don't know how you want to define "catching up" - you should note that saying "under the most natural definition" is not a helpful way to explain your definition to other people, since everyone has their own notion of what is "natural". But to me the definition I gave above is the "most natural", and it is one for which the SGH and HH catch up to the FGH, unlike others. Under this definition, we again have the FGH catching up to the EFGH at every limit ordinal. Therefore the SGH, HH, and MGH will catch up to the EFGH at the same ordinals at which they catch up to the FGH. The MGH catches up to the FGH at every multiple of \(\omega^\omega\); the HH catches up to the FGH at every \(\varepsilon\)-number. Determining the ordinals at which the SGH catches up to the FGH is a very complicated problem, and the answer varies very much depending on how fundamental sequences are defined. However, the first paper on the subject in the literature gives the first ordinal at which the SGH catches up to the FGH as \(\psi_0(\Omega_\omega)\). Deedlit11 (talk) 19:13, December 19, 2017 (UTC)
 * Now I upgraded the definition: h appears \(h_\alpha^{h_\alpha^{\ddots^{h_\alpha(n)}\cdots}(n)}(n)\) (\(h_\alpha^{h_\alpha^{\ddots^{h_\alpha(n)}\cdots}(n)}(n)\) (... \(h_\alpha^{h_\alpha^{\ddots^{h_\alpha(n)}\cdots}(n)}(n)\) \(h\)s) \(h\)s)...) times, with \(h_\alpha(n)\) lines. And f catches up to h iff there's an ordinal \(\alpha\) such that \(f_\alpha\approx^*h_\alpha\), that is, neither function eventually dominates the other. Now find the smallest ordinal at which the FGH is on par with the EFGH. 80.98.179.160 07:17, December 20, 2017 (UTC)
 * As I have noticed, over the last few years a lot of people have tried to invent a super-fast-growing hierarchy (SFGH), like you are doing.
 * Let's define SFGH as follows:

\(F_\alpha\uparrow^{\beta} 0(n)=n\) if \(\beta=0\) or \(\beta=\gamma+1\)

\(F_\alpha\uparrow^0 (m+1)(n)=F_\alpha(F_\alpha\uparrow^0 m(n))\)

\(F_\alpha\uparrow^{\beta+1} (m+1)(n)=F_\alpha\uparrow^{\beta}(F_\alpha\uparrow^{\beta+1} m(n))(n)\)

\(F_\alpha\uparrow^{\beta} m (n)=F_\alpha\uparrow^{\beta[m]}m(n)\) if \(\beta\) is a limit ordinal

\(F_0(n)=10^n\)

\(F_{\alpha+1}(n)=F_\alpha\uparrow^{\alpha+1} n(n)\)

\(F_\alpha(n)=F_{\alpha[n]}\uparrow^{\alpha}n(n)\) if \(\alpha\) is a limit ordinal.

Under this definition:

\(F_\alpha\uparrow^0m(n)=F_\alpha^m(n)=\underbrace{F_\alpha(F_\alpha(...(F_\alpha(n))...))}_{m\quad F's}\)

\(F_\alpha\uparrow^1m(n)=\underbrace{F_\alpha^{F_\alpha^{...^{F_\alpha^n(n)}...}(n)}}_{m\quad F's}(n)\)

\(F_\alpha\uparrow^2m(n)=\left. \begin{matrix} \underbrace{F_\alpha^{F_\alpha^{...^{F_\alpha^n(n)}...}(n)}(n)} \\ \underbrace{F_\alpha^{F_\alpha^{...^{F_\alpha^n(n)}...}(n)}(n)}\\ \vdots\\ \underbrace{F_\alpha^{F_\alpha^{...^{F_\alpha^n(n)}...}(n)}(n)}_{n}\\ \end{matrix} \right \} \text{m lower braces}\)

and so on.

\(F_\alpha\uparrow^{\omega+1}m+1(n)=F_\alpha\uparrow^\omega(F_\alpha\uparrow^{\omega+1}m(n))(n)=F_\alpha\uparrow^{F_\alpha\uparrow^{\omega+1}m(n)}(F_\alpha\uparrow^{\omega+1}m(n))(n)\)

Hmm, seems cool. But does it really give significant acceleration over the FGH? No. I guess:

\(F_\alpha(n)\approx f_{\alpha^2}(n)\)

where \(F\) is the mentioned SFGH and \(f\) is the usual FGH --Denis Maksudov (talk) 15:42, December 20, 2017 (UTC)
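Denis's rules for finite subscripts can be sketched in Python. This is a minimal sketch under one substitution: it uses \(F_0(n)=n+1\) in place of his \(F_0(n)=10^n\) so that small cases stay computable, and the limit-ordinal clauses are omitted:

```python
def up(alpha, beta, m, n):
    """F_alpha ↑^beta m (n) for finite alpha, beta (limit cases omitted)."""
    if m == 0:
        return n                                  # F_a ↑^b 0 (n) = n
    if beta == 0:
        return F(alpha, up(alpha, 0, m - 1, n))   # plain iteration of F_alpha
    # F_a ↑^{b+1} (m+1) (n) = F_a ↑^b (F_a ↑^{b+1} m (n)) (n)
    return up(alpha, beta - 1, up(alpha, beta, m - 1, n), n)

def F(alpha, n):
    """SFGH with the substituted base case F_0(n) = n + 1."""
    if alpha == 0:
        return n + 1
    return up(alpha - 1, alpha, n, n)             # F_{a+1}(n) = F_a ↑^{a+1} n (n)

# With this base, F_1(n) works out to n^2 + n:
assert F(1, 2) == 6
assert F(1, 6) == 42
```

Already F(2, n) for n ≥ 2 is far too large to evaluate, which is the intended behavior; the point of Denis's estimate \(F_\alpha(n)\approx f_{\alpha^2}(n)\) is that even this blow-up only squares the FGH index.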
 * Now I have upgraded the definition even more: now \(h_\alpha\) is a 0D space (a point), a recursion tower is a 1D space (a line), a recursion tower of towers is a 2D space (a plane), and the definition now gives the polymension number as \(h_\alpha(n)\) for all \(n\in\mathbb{N},\alpha<\omega_1\). AND the limit rule now uses \(h_{\alpha[n]}(n)\) as the polymension number, where \(\alpha\) can be replaced with any limit ordinal \(<\omega_1\). Now what's your opinion? 80.98.179.160 09:33, December 21, 2017 (UTC)

Is BEAF extensible to ordinals?
Because {ω,ω,2}=ε0, and at MathSE, ordinal tetration gives {ω,ω,3}=φ(2,0); generally, {ω,ω,α}=φ(α-1,0) iff α is a successor, and I got {ω,ω,ω}={ω,3(1)2}=φ(ω,0), {ω,ω,ω+1}=Γ0. But this is only 3-entry BEAF extended to ordinals! Is there such a thing as {ω,ω(1)2}? Or {ω,ω(ω)ω}? Or {ω,ω((1)1)2}? Or, say, \(\omega^\omega\&\omega\)? Also, tetrational arrays are well-defined due to epsilon-zero, pentational due to Cantor's ordinal, hexational due to eta-zero; {X,X,X}&a is well-defined due to φ(ω,0), and {X,X,X+1}&a due to the Feferman–Schütte ordinal. But still, is BEAF extensible to ordinals? 80.98.179.160 17:03, December 23, 2017 (UTC)
 * Where on MathSE is it shown that {ω,ω,3}=φ(2,0)? LittlePeng9 (talk) 10:56, December 24, 2017 (UTC)
 * Nowhere, this was just quick intuition. \(\omega\uparrow^33=\varepsilon_{\varepsilon_0}\) was shown here.
 * UPDATE: I found 4-entry ordinal BEAF here, defining {ω,ω,1,2} as the Ackermann ordinal. Thus, expandal arrays are well-defined, having \(\vartheta(\Omega^3)\) entries, all defaulting to 1, so Cantor's Attic is wrong when it says that even tetrational arrays aren't! That would mean \(2^2\&3\) isn't well-defined due to it being tetrational, pentational, hexational, heptational, etc. 80.98.179.160 11:01, December 24, 2017 (UTC)

How large is Omega One Quantum 2048?
I give the notation for this: \(\omega_1^{\text{Q}_{2048}}\). I know two order-type lower bounds: \(\omega^5\), and \(\omega^6\) for the infinitary-superposition variant (\(\omega_1^{\text{Q}_{2048_\infty}}\)) and the 3D variant (\(\omega_1^{\text{Q}_{2048_{3\text{D}}}}\)).

Your Comments
On numerous blog posts and talk pages you have left comments which are often wrong, or in which you do not explain your reasoning, assuming instead that others will understand what you mean. It's okay to have a different opinion than others, but it is good to think through what you're posting before you post it, so you can be sure you won't confuse people or post erroneous content. In addition, many of your comments are salad extensions which don't contribute to a person's understanding of the post they are reading, and are usually pointless. If you keep this up I'll tell Cloudy.

Keep in mind also that it isn't necessary to post a reply to your own comment just to revise it: you can edit the first comment by clicking the blue link next to the "reply" button, make your edit, then click publish. Edwin Shade (talk) 17:22, December 31, 2017 (UTC)