User blog comment:Vel!/FGH Gripe/@comment-5982810-20150325004329/@comment-146.111.120.99-20150325235311

@Vel!

I'm going to respond to your points one at a time.

(1) "fairly safe bet that for sufficiently fast growing functions small fluxations in the fundamental sequences won't make a signficant dent"

"Why do you think so? Can you provide any concrete justification for this? The burden of proof is on you to show that FGH can reliably operate in the absense of a specified FS System"

It is commonly accepted that the Wainer hierarchy is standard up to e0, and that there are at least two commonly used divergent systems after this point. For example:

e1 = lim{e0+1, w^(e0+1) , w^w^(e0+1) , w^w^w^(e0+1) , w^w^w^w^(e0+1) , ... }

e1 = lim{1,e0,e0^e0,e0^e0^e0,e0^e0^e0^e0, ... }

Label the first FS e1[n]_1 and the second e1[n]_2. Which FS do you think leads to a faster FGH? By how much? Let's look at how we can convert the members of the second FS into the standard format:

e0^e0 = w^(e0*e0) = w^e0^2 = w^w^(e0*2)

e0^e0^e0 = w^(e0*w^w^(e0*2)) = w^w^w^(e0*2)

e0^e0^e0^e0 = w^(e0*w^w^w^(e0*2)) = w^w^w^w^(e0*2)

So we have:

w^w^w^ ... ^w^w^w^(e0+1) < w^w^w^ ... ^w^w^w^(e0*2)

Therefore:

e1[n]_1 < e1[n]_2 for n > 1

Notice how extremely close these are: e1[n]_1 NEVER overtakes e1[n]_2, but e1[n]_2 also NEVER overtakes e1[n+1]_1. Now consider two hierarchies, "f" and "g": "f" uses the first FS and "g" uses the second. It follows that:

f_e1(n) < g_e1(n)

but also that g_e1(n) < f_e1(n+1)

The sequences, despite being different, stabilize against each other, meaning the relation is preserved for all "n". Now consider the successor case:

f_e1+1(n) and g_e1+1(n).

We know that f_e1+1(n) = f_e1^n(n) < g_e1^n(n) = g_e1+1(n)

but also g_e1+1(n) = g_e1^n(n) < f_e1^n(n+1) < f_e1^(n+1)(n+1) = f_e1+1(n+1)
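The successor rule f_e1+1(n) = f_e1^n(n) used above is just the standard FGH recursion, and at finite levels it's easy to sketch. Here's a minimal Python illustration (the `fgh` name is my own, and it only works for tiny inputs, since even f_3 is already astronomical):

```python
def fgh(a, n):
    """Fast-growing hierarchy at finite levels:
    f_0(n) = n + 1, and f_{a+1}(n) = f_a applied n times to n."""
    if a == 0:
        return n + 1
    result = n
    for _ in range(n):          # f_{a+1}(n) = f_a^n(n)
        result = fgh(a - 1, result)
    return result

# f_1(n) = 2n and f_2(n) = 2^n * n, so:
print(fgh(1, 5))   # 10
print(fgh(2, 5))   # 160
```

The same n-fold-iteration shape is what makes the stabilization argument work: a bounded head start at level a gets absorbed by the iteration at level a+1.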

So once again the functions are stabilized. The effect of using a different fundamental sequence here is no more significant than the effect of saying:

G(n) = {3,3,G(n-1)}

and G'(n) = {3,3,G(n-1)+1}

In both cases the +1 can't provide any significant benefit that would cause one function to destabilize against the other. This "stabilization" effect has been observed many, many times in casual observation in googology and has even been proven in a few cases. The Knuth up-arrow theorem is an example of such stabilization:

4^^n = (2^^2)^^n < 2^^(n+2)

Here we have stabilization of the functions 2^^n and 4^^n with an offset of 2. When functions stabilize I say that they are in the same "growth class".

To destabilize the functions, the FSs would first have to not stabilize against each other. This in itself isn't really too likely for most of (practiced) googology. But even if they weren't stable it still wouldn't make much of a difference. Just to drive the point home, let's define a new FS:

e1[n]_3 = e0^^(n^^n)

So we have:

e1[1]_3 = e0

e1[2]_3 = e0^e0^e0^e0

e1[3]_3 = e0^e0^e0^e0^ ... ^e0^e0^e0^e0 w/7,625,597,484,987 e0s

e1[4]_3 = e0^e0^e0^e0^ ... ^e0^e0^e0^e0 w/4^^4 e0s

etc.
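The tower heights in those examples are just n^^n; a quick check of the arithmetic (reusing a tetration helper of my own):

```python
def tetr(b, h):
    """Tetration: a power tower of h copies of b, i.e. b^^h."""
    result = 1
    for _ in range(h):
        result = b ** result
    return result

# Heights of the e0-towers in e1[n]_3 = e0^^(n^^n):
print(tetr(1, 1))   # 1  -> e1[1]_3 = e0
print(tetr(2, 2))   # 4  -> e0^e0^e0^e0
print(tetr(3, 3))   # 7625597484987 e0s
```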

Now let g use this fundamental sequence. It is true that g_e1(n) grows "faster" than f_e1(n); they will definitely not stabilize. But what about g_e1+1(n) and f_e1+1(n)?

Well we know f_e1+1(n) < g_e1+1(n)

but we can say the following:

g_e1(n) ~ f_e1(n^^n) < f_e1(f_e1(n)), since f_e1(n) grows much, much faster than n^^n

therefore g_e1+1(n) << f_e1+1(2n)
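The counting behind that bound: one application of g costs at most two applications of f, so n applications of g cost at most 2n applications of f. Here's a toy Python check of that iteration-counting argument, using finite stand-ins of my own (g is literally f composed with itself here, not the actual e1-level functions):

```python
def f(n):
    return 2 * n          # finite stand-in for f_e1

def g(n):
    return f(f(n))        # one g step = two f steps

def iterate(fn, k, n):
    """Apply fn to n, k times."""
    for _ in range(k):
        n = fn(n)
    return n

# g^n(n) < f^(2n)(2n), mirroring g_e1+1(n) < f_e1+1(2n).
for n in range(1, 8):
    assert iterate(g, n, n) < iterate(f, 2 * n, 2 * n)
print("g^n(n) < f^(2n)(2n) for n = 1..7")
```

With these stand-ins the left side is 4^n * n and the right side is 4^n * 2n, so doubling the argument absorbs the head start, which is exactly the shape of the claim above.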

This doesn't prove they stabilize, but it does show how little of an effect it has even after one primitive recursion. The effect completely disappears by the second PR, since:

g_e1+2(n) = g_e1+1^n(n) < f_e1+1(2*f_e1+1(2*f_e1+1( ... (2*f_e1+1(2n)) ... ))) < f_e1+1^n(2n+1) < f_e1+1^n(f_e1+1(n+1)) = f_e1+1^(n+1)(n+1) = f_e1+2(n+1)

And there you have it: stabilization. But here's the point: no one would make such a f'ing ridiculous FS unless they were a complete amateur in googology.

You see Vel, you already implicitly understand stabilization, because you coined the term salad number. We know salad numbers don't work because we understand that the differences in growth rates of functions are so vast that, in a mishmash of such growth rates, the largest is the only one that matters. Guess what: FSs have relative growth rates too. We can compare those growth rates to the growth rates of ordinary functions, and these relative growth rates tend to be vanishingly small compared to anything we see in googology, which is why their effect can be treated as negligible. Really, the only thing standing between these conjectures and proof is tedium. They're actually not too bad to prove, precisely because googology gives us so much wiggle room.