
> Think of the absurdity of trying to understand the number Pi by looking at its first billion digits and trying to predict the next digit. And think of what it takes to advance from memorizing digits of such numbers and predicting their continuation with astrology-style logic to understanding the math behind the digits of Pi.

I'm prepared to believe that a sufficiently advanced LLM available today already has some "neural" representation of a generalization of a Taylor series, allowing it to "natively predict" digits of Pi.
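
For what it's worth, the "math behind the digits" the parent quote alludes to is pretty small. Here's a minimal sketch in Python (illustration only, not a claim about what any LLM does internally): Machin's formula, pi/4 = 4*arctan(1/5) - arctan(1/239), with each arctan evaluated via its Taylor series in fixed-point integer arithmetic. It produces digits from the definition rather than by extrapolating from the digits seen so far.

    # Illustration only: computing digits of Pi from the math, not from
    # prior digits. Machin's formula, pi/4 = 4*arctan(1/5) - arctan(1/239),
    # with each arctan evaluated via its Taylor series using fixed-point
    # integer arithmetic.

    def arctan_inv(x: int, digits: int) -> int:
        """arctan(1/x) scaled by 10**(digits + 10), via the series
        arctan(1/x) = 1/x - 1/(3*x^3) + 1/(5*x^5) - ..."""
        scale = 10 ** (digits + 10)      # 10 guard digits absorb truncation error
        term = scale // x
        total, n, sign = term, 1, 1
        while term:
            n += 2
            term //= x * x
            sign = -sign
            total += sign * (term // n)
        return total

    def pi_digits(digits: int) -> str:
        pi_scaled = 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))
        return str(pi_scaled // 10 ** 10)   # drop the guard digits

    print(pi_digits(50))   # 314159265358979323846...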



Anthropic had a recent paper on why LLMs can't even get simple arithmetic consistently correct, much less generalize the concept of an infinite series. The finding was that they don't learn to represent the mechanics of an operation; instead they build chains of heuristics that sometimes happen to work.
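
To make the distinction concrete, here's a toy sketch in Python (my own illustration, not the mechanism the paper describes): an exact carry-propagating adder next to a hypothetical bundle of heuristics that handles the leading digits and the last digit separately and agrees with the true sum only when no carry is needed.

    # Toy illustration (not Anthropic's actual findings): implementing the
    # mechanics of addition vs. stacking heuristics that only sometimes agree.

    def add_exact(a: int, b: int) -> int:
        """Grade-school addition for non-negative ints: digit by digit,
        propagating the carry."""
        result, carry, shift = 0, 0, 1
        while a or b or carry:
            s = a % 10 + b % 10 + carry
            result += (s % 10) * shift
            carry = s // 10
            a, b, shift = a // 10, b // 10, shift * 10
        return result

    def add_heuristic(a: int, b: int) -> int:
        """Hypothetical heuristic bundle: add the leading parts and the last
        digits separately and stitch them together -- no carry mechanics."""
        last_digit = (a % 10 + b % 10) % 10
        leading = (a // 10 + b // 10) * 10
        return leading + last_digit

    print(add_exact(31, 42), add_heuristic(31, 42))   # 73 73  (no carry: agrees)
    print(add_exact(36, 59), add_heuristic(36, 59))   # 95 85  (carry: heuristic misses)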


> I'm prepared to believe that a sufficiently advanced LLM

This is the opposite of engineering/science. This is animism.


I want to believe, man. Just two more layers and this thing finally becomes a real boy.


Sometimes I feel this website, very much like LLMs themselves, proves that skill with language in general, and purple prose in particular, has absolutely no (as in 0) correlation with intelligence.


I suspect your definition of "intelligence" differs from mine.



