User blog:Alemagno12/idea

so i recently saw this blog post: http://karpathy.github.io/2015/05/21/rnn-effectiveness/

there's this thing called an LSTM
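i'm not going to code a whole trained LSTM here, but just to show what one step of an LSTM actually computes (the gate math from karpathy's post), here's a single LSTM cell step in plain python. the weights are random, not trained, and the hidden size of 3 is just something i picked for the demo:

```python
import math
import random

# one LSTM cell step in pure python (hidden size 3), just to show the
# gate equations; the weights here are random, NOT trained on anything.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    """one LSTM time step: gates i (input), f (forget), o (output), candidate g."""
    n = len(h)
    def gate(name, act):
        Wx, Wh, b = W[name]
        return [act(Wx[j] * x + sum(Wh[j][k] * h[k] for k in range(n)) + b[j])
                for j in range(n)]
    i = gate("i", sigmoid)    # how much new info to let in
    f = gate("f", sigmoid)    # how much old cell state to keep
    o = gate("o", sigmoid)    # how much of the cell state to expose
    g = gate("g", math.tanh)  # candidate cell values
    c_new = [f[j] * c[j] + i[j] * g[j] for j in range(n)]
    h_new = [o[j] * math.tanh(c_new[j]) for j in range(n)]
    return h_new, c_new

rng = random.Random(0)
n = 3
W = {name: ([rng.uniform(-1, 1) for _ in range(n)],                   # input weights
            [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)],  # recurrent weights
            [0.0] * n)                                                 # biases
     for name in "ifog"}
h, c = [0.0] * n, [0.0] * n
for x in [0.1, -0.3, 0.7]:  # a tiny made-up input sequence
    h, c = lstm_step(x, h, c, W)
print(h)
```

training is the part that makes it useful (backpropagation through time, which char-rnn handles for you), but the forward step above is the whole "memory cell with gates" idea.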

you can give it training data, like, some wikipedia articles

then you let the LSTM train for a certain amount of time

then the LSTM will learn from that data, and it will try to output something like the training data

so if we used, say, googology wiki articles as the training data and let the LSTM train for a long time, it would try to output something like a googology wiki article.
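to show the "train on text, then sample text that looks like it" loop without an LSTM, here's a toy i made up for this post: a character-level bigram sampler. it's way dumber than char-rnn (it only remembers one character back), but it's the same character-by-character generation idea:

```python
import random
from collections import defaultdict

# toy stand-in for char-rnn: a character-level bigram model.
# NOT an LSTM -- just the simplest possible "train on text, sample
# similar text" demo.

def train(text):
    """count, for each character, which characters were seen following it."""
    model = defaultdict(list)
    for a, b in zip(text, text[1:]):
        model[a].append(b)
    return model

def sample(model, start, length, seed=0):
    """generate text one character at a time, like char-rnn's sampler."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        nexts = model.get(out[-1])
        if not nexts:  # dead end: this character was never followed by anything
            break
        out.append(rng.choice(nexts))
    return "".join(out)

# a tiny made-up "corpus" standing in for googology wiki articles
corpus = ("omega is the first transfinite ordinal. "
          "epsilon zero is the first fixed point of a -> omega^a.")
model = train(corpus)
print(sample(model, "o", 40))
```

the output is gibberish that locally resembles the corpus, which is roughly what an undertrained char-rnn produces too; the LSTM's advantage is that it can remember much more than one character of context.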

and we could use the LSTM to generate other stuff too, like generating random ordinals between 0 and e0, or comparing two notations

even though it might not generate actual googology but just hallucinated googology, i would still like to see what the results could be

i could do this myself, but i have no idea how to code an LSTM. andrej karpathy did include the source code of an LSTM that takes text as input (https://github.com/karpathy/char-rnn), but it uses torch, and torch doesn't run on windows

so it's just an idea for now