
@Tetramur

The seven symbols you've mentioned, by themselves, are meaningless. The symbol "∈" on its own, for example, means nothing.

To have an actual mathematical language, you also need a set of rules. Rules that tell you how the symbols relate to one another and how to extract meaning from them. Otherwise, they are just a random string of squiggles on a page.

It's just like the situation with ordinary human languages. English is not just a set of 26 letters. The complexity and expressive power of English stem from its vocabulary, grammar, and common usage.

Also, as P-bot already stated, it is very easy to convert any n-symbol language to a 1-symbol language... at least, it is easy once you know the trick. And the trick is to do it in two steps:

Suppose we have a language with 7 symbols ("a", "b", "c", "d", "e", "f", "g"). The first step is to convert each letter into a digit:

1 instead of "a"

2 instead of "b"

3 instead of "c"

4 instead of "d"

5 instead of "e"

6 instead of "f"

7 instead of "g"

So "beef" (for example) would become 2556.

The second step (which is where the ingenious trick comes in) is to realize that the string of digits we've just written can be interpreted as a single number. Therefore, instead of writing 2556, we could just repeat a single symbol 2556 times:

XXXXXXXXXXXXXXXXXXXXXXXX...XXXX (with 2556 X's)
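
And here's a matching sketch of step two (again the function names are mine, and "X" is just the single symbol from the example):

```python
# Step two: read the digit string as one number and write that number in unary,
# i.e. as that many copies of the single symbol "X".
def to_unary(digits: str) -> str:
    return "X" * int(digits)

def from_unary(unary: str) -> str:
    """Recover the digit string by simply counting the X's."""
    return str(len(unary))

encoded = to_unary("2556")   # a string of 2556 X's
print(len(encoded))          # 2556
print(from_unary(encoded))   # prints "2556"
```

(Counting back works here because the digits run from 1 to 7, so there are no leading zeros to lose.)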

At this point, you might say "Aha! But the second expression is far longer than the first. We now need a whopping 2556 symbols to write a word that was originally 4 letters long".

That is true, but it is also irrelevant. The definition of K(n) doesn't care about how cumbersome your notation is. So we end up in a situation where any theory can be expressed with a single symbol, which means that K(1) is ill-defined.

You might be tempted to try and fix this situation by putting some artificial limit on the length of expressions in K(n), but that won't work either. The problem here is that mathematical systems derive their power from being potentially unbounded.

To define something as large as (say) a gongulus, you need a system that can manipulate strings that are roughly a gongulus characters long. The beauty and power of googology is that:

(1) We don't need to actually see that huge string with our own eyes to prove that it is a valid string.

(2) We can prove that a relatively short string (like {10,10 (100) 2}(*)) is equivalent in meaning to that huge string.

But that huge gongulus-character-long string must still be available for the system itself to fetch on demand.

(*) I'm simplifying here. The actual string representing a gongulus in any mathematical language will also have to include the rules of BEAF translated into that language. But compared to the full expansion of a gongulus, it would still be a very short string.