I'm confused; surely there already exist classification models which, given multidimensional sensor data, output the most likely "activity" type? Why would a text modality be a better choice?
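For context, this is roughly the kind of model I have in mind: a classifier over fixed-length windows of multidimensional sensor data (accelerometer x/y/z here) that outputs an activity label. Rough Python sketch on synthetic data; a real version would train on something like a public HAR dataset rather than the fake windows below.

```python
# Sketch of a conventional activity classifier: fixed-length windows of 3-axis
# accelerometer data in, activity label out. Data is synthetic and only meant to
# show the shape of the problem, not to reflect any particular paper's setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_window(activity):
    # Fake 2-second window at 50 Hz, 3 axes, with per-activity noise levels.
    scale = {"sitting": 0.1, "walking": 1.0, "running": 2.5}[activity]
    return rng.normal(0.0, scale, size=(100, 3))

activities = ["sitting", "walking", "running"]
windows = [make_window(a) for a in activities for _ in range(200)]
labels = [a for a in activities for _ in range(200)]

# Simple hand-crafted features per window: mean and std of each axis.
X = np.array([np.concatenate([w.mean(axis=0), w.std(axis=0)]) for w in windows])

X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```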
I may be mistaken, but I seem to recall reading on numerous occasions that ejection of one of the bodies is inevitable in these kinds of systems.
Given the limited state space of the simulation, I'm not sure I see what the big discovery is here.
It's certainly a neat result to see visualized, though.
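If the simulation being visualized is a gravitational three-body system (which the talk of ejected bodies suggests, though I'm guessing), the ejection claim is easy to poke at numerically. A rough Python sketch with equal masses and a leapfrog integrator; run it long enough from most random initial conditions and one body tends to end up far from the other two:

```python
# Toy three-body integrator, only to illustrate the "one body gets ejected" tendency.
# Units, masses, and initial conditions are arbitrary; this is not the linked simulation.
import numpy as np

G = 1.0
m = np.ones(3)

def accelerations(pos):
    # Pairwise displacements r_ij = pos_j - pos_i, with a small softening term.
    diff = pos[None, :, :] - pos[:, None, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1) + 1e-6)
    return (G * m[None, :, None] * diff / dist[:, :, None] ** 3).sum(axis=1)

rng = np.random.default_rng(1)
pos = rng.normal(0.0, 1.0, size=(3, 2))
vel = rng.normal(0.0, 0.1, size=(3, 2))
vel -= vel.mean(axis=0)              # zero total momentum so the system doesn't drift

dt, steps = 1e-3, 100_000
acc = accelerations(pos)
for _ in range(steps):               # leapfrog (kick-drift-kick) integration
    vel += 0.5 * dt * acc
    pos += dt * vel
    acc = accelerations(pos)
    vel += 0.5 * dt * acc

dists = np.linalg.norm(pos - pos.mean(axis=0), axis=1)
print("final distances from the barycentre:", np.round(dists, 2))
print("most distant body (candidate escaper):", int(np.argmax(dists)))
```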
This is nice! I wonder what it would take to make it emulate a small DOS-like environment. The QT Py has an ESP32-S3, which AFAIK is quite a beast for a microcontroller.
> Good question. For efficiency, we try to keep commonly used terms such as letters as short as possible. Note that the words we use for letters are all one syllable, whereas the NATO phonetic alphabet's are mostly two syllables. The words we use for spelling are also chosen so that they can be chained together easily and quickly, e.g. “harp each look look odd”, which is easy to say quickly without slurring.
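To make the chaining idea concrete, here's a toy version of the scheme in Python. Only the H/E/L/O entries are taken from the quoted example (“harp each look look odd” spelling "hello"); they're illustrative, not the project's actual word list.

```python
# One-syllable words stand in for letters and are simply chained to spell a word.
# Entries below are inferred from the quoted example only; treat them as placeholders.
LETTER_WORDS = {"h": "harp", "e": "each", "l": "look", "o": "odd"}
WORD_LETTERS = {word: letter for letter, word in LETTER_WORDS.items()}

def spell(word):
    return " ".join(LETTER_WORDS[c] for c in word.lower())

def unspell(phrase):
    return "".join(WORD_LETTERS[w] for w in phrase.split())

print(spell("hello"))                       # harp each look look odd
print(unspell("harp each look look odd"))   # hello
```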
> The idea that a fluid is solving the Navier-Stokes equation seems like an obvious error to me - it cannot solve the equation, it simply acts as a fluid, which the equation was designed to approximate.
That's his "Principle of Computational Equivalence": that natural processes are themselves computations. It's easy to see this is true in a "made-up" universe, say the Game of Life, but whether our own universe is computational is still an open question, I think.
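For the Game of Life case the claim really is literal: the entire "physics" of that universe is a rule applied to a grid of bits. A minimal Python step function with periodic boundaries, stepping a glider:

```python
# Conway's Game of Life as a literal computation: the universe is a binary grid and
# its dynamics are one update rule (B3/S23), applied everywhere at once.
import numpy as np

def step(grid):
    # Count the eight neighbours of every cell with periodic (toroidal) boundaries.
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    # Alive next step if exactly 3 neighbours, or 2 neighbours and alive now.
    return ((n == 3) | ((n == 2) & (grid == 1))).astype(int)

grid = np.zeros((8, 8), dtype=int)
for r, c in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:   # a glider
    grid[r, c] = 1

for _ in range(4):   # after 4 steps the glider reappears shifted one cell diagonally
    grid = step(grid)
print(grid)
```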
"learns on its own" makes it sound like unsupervised learning, but it's labeled data: the researchers also input the correct "output voltage" which they want the system to learn.
It's still neat, though. I feel that an AGI will come out of an analog computer rather than a digital one.
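To make the "labeled data" point concrete, here's what the supervision amounts to in the abstract: the trainer supplies a target output voltage for each input, and the system's parameters are nudged to shrink the error against that target. A plain linear model stands in for the physical network below; it's an analogy, not the paper's actual training procedure.

```python
# Ordinary supervised learning, as an analogy for training against given output voltages:
# adjust parameters to reduce the error between the measured output and the provided label.
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.uniform(-1, 1, size=(256, 4))    # input voltages (hypothetical)
true_w = np.array([0.5, -1.0, 2.0, 0.3])
targets = inputs @ true_w                     # the "correct output voltage" labels

w = np.zeros(4)                               # trainable parameters (conductances, say)
lr = 0.3
for _ in range(300):
    pred = inputs @ w
    err = pred - targets                      # supervision: compare to the given label
    w -= lr * inputs.T @ err / len(inputs)    # gradient step on mean squared error

print("true parameters:   ", true_w)
print("learned parameters:", np.round(w, 3))
```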