
> No, it is not.

Yes, it is. You seem to have misunderstood what I wrote. The critique I was pointing to concerns the number of examples and the amount of energy needed during model training, which is what the "learning" in "machine learning" actually refers to. The paper uses GPT-3, which had already absorbed all that data and electricity. And the "learning" the paper talks about is arguably not real learning, since none of the acquired skills persist beyond the end of the session.

> So am I.

This is easy to settle. Go check any frontier model and see how far they get with multiplying numbers with tool calling disabled.

> Not a great time for you to rest on your intellectual laurels. Same goes for Penrose.

I'm not resting, and there aren't many laurels to rest on, at least compared to someone like Penrose. As for him, give the man a break: he's 94 years old and still sharp as a tack and intellectually productive. You're the one who's resting, imagining you've settled a question which is very much still open. Certainty is certainly intoxicating, so I understand where you're coming from, but claiming that anyone who doubts computationalism brings no arguments to the table is patently absurd.





> Yes, it is. You seem to have misunderstood what I wrote. The critique I was pointing to concerns the number of examples and the amount of energy needed during model training, which is what the "learning" in "machine learning" actually refers to. The paper uses GPT-3, which had already absorbed all that data and electricity. And the "learning" the paper talks about is arguably not real learning, since none of the acquired skills persist beyond the end of the session.

Nobody in this thread is arguing about power consumption (but see below), and in any case most of the power goes to one-time training and to serving millions of prompts concurrently. Processing an individual prompt costs almost nothing.

And it's already been stipulated that lack of long-term memory is a key difference between AI and human cognition. Give them some time, sheesh. This stuff's brand new.

> This is easy to settle. Go check any frontier model and see how far they get with multiplying numbers with tool calling disabled.

Yes, it is very easy to settle. I ran this session locally in Qwen3-Next-80B-A3B-Instruct-Q6_K: https://pastebin.com/G7Ewt5Tu

This is a 6-bit quantized version of a free model that is very far from frontier level. It traces its lineage through DeepSeek, which was likely RL-trained by GPT 4.something. So 2 out of 4 isn't bad at all, really. My GPU's power consumption went up by about 40 watts while running these queries, a bit more than a human brain uses.

If I ask the hardest of those questions on Gemini 3, it gets the right answer but definitely struggles: https://pastebin.com/MuVy9cNw
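The test above is easy to reproduce against any model. Here's a minimal sketch of the scoring side only; the problem generator, the prompt wording, and the regex-based answer extraction are my assumptions, and the actual model call is stubbed with a canned reply:

```python
import random
import re

def extract_answer(reply: str):
    """Pull the last integer out of a model's free-form reply.
    Spaces are dropped and commas stripped, so '2,985,984' parses."""
    matches = re.findall(r"-?[\d,]*\d", reply.replace(" ", ""))
    if not matches:
        return None
    return int(matches[-1].replace(",", ""))

def make_problem(digits: int, rng: random.Random):
    """Generate one n-digit by n-digit multiplication problem."""
    lo, hi = 10 ** (digits - 1), 10 ** digits - 1
    a, b = rng.randint(lo, hi), rng.randint(lo, hi)
    return a, b, f"What is {a} * {b}? Reply with the number only."

def score(a: int, b: int, reply: str) -> bool:
    """True iff the reply's final integer equals the true product."""
    return extract_answer(reply) == a * b

# Canned correct reply standing in for a real (tools-disabled) model call.
rng = random.Random(0)
a, b, prompt = make_problem(4, rng)
reply = f"The product is {a * b}."
print(score(a, b, reply))  # True
```

Scoring on the last integer in the reply lets chain-of-thought models show their working, as long as they finish with the final answer.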

> As for him, give the man a break, he's 94 years old and still sharp as a tack and intellectually productive.

(Shrug) As long as he chooses to contribute his views to public discourse, he's fair game for criticism. You don't have to invoke quantum woo to multiply numbers without specialized tools, as the tests above show. Consequently, I believe that a heavy burden of proof lies with anyone who invokes quantum woo to explain any other mental operations. It's a textbook violation of Occam's Razor.



