sh4rks's comments

> developers can use any tools they choose (primarily Cursor Pro with Claude 3.5/3.7 Sonnet—frontier models at the time of the study)

Sonnet 3.5 came out in mid-2024.


I want to use Gemini CLI with OpenClaw (formerly Clawdbot), but I'm too scared to hook it up to my primary Google account (where I have my Google AI subscription set up).

Gemini or not, a bot is liable to do some vague, arcane something that trips Google's automated abuse detection and gets you service-wide banned, with no recourse beyond pleading with the digital hand. Unless you're popular enough on X or HN and inclined to raise a shitstorm, good luck.

Touching anything Google is rightfully terrifying.
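
FWIW, the less scary route is to skip OAuth on your main account entirely and use a standalone API key minted in a throwaway Google Cloud project. A minimal sketch with the google-generativeai Python client (the model name and env var here are just assumptions, adjust to whatever you actually use):

    import os
    import google.generativeai as genai

    # Assumption: GEMINI_API_KEY is a key from a separate, disposable
    # Google Cloud project, so a ban there can't touch your primary account.
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])

    model = genai.GenerativeModel("gemini-1.5-flash")
    resp = model.generate_content("ping")
    print(resp.text)

Whether that isolation actually protects you from Google's account-linking heuristics is another question, but at least there's no OAuth grant on the account you care about.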


People are still falling for the "stochastic parrot" meme?

Until we have world models, that is exactly what they are. They literally only understand text, and what text is likely given previous text. They are very good at this because we've given them a metric ton of training data. Everything is "what does a response to this look like?"

This limitation is exactly why "reasoning models" work so well: if the "thinking" step is not persisted to text, it does not exist, and the LLM cannot act on it.
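
To make that concrete, here's a toy sketch in pure Python: a bigram Markov chain, which is about the crudest "stochastic parrot" possible. A transformer is vastly more expressive, but the generation loop has the same shape, and it shows the point about persistence: only tokens actually emitted to the output feed back in as context.

    import random
    from collections import defaultdict

    # Toy "stochastic parrot": a table of which token tends to follow which.
    # No world, no meaning, just text statistics.
    def train(corpus: str) -> dict:
        model = defaultdict(list)
        tokens = corpus.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            model[prev].append(nxt)
        return model

    def generate(model: dict, start: str, length: int = 10) -> str:
        out = [start]
        for _ in range(length):
            options = model.get(out[-1])
            if not options:
                break
            # "What text is likely given previous text": sample the next token.
            # Only text already emitted (out[-1]) feeds back in; anything not
            # persisted to the context simply doesn't exist for the model.
            out.append(random.choice(options))
        return " ".join(out)

    model = train("the cat sat on the mat and the cat ate the fish")
    print(generate(model, "the"))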


Text comes in, text goes out, but there's a lot of complexity in the middle. It's not a "world model", but there's definitely modeling of the world going on inside.

There is zero modeling of the world going on inside, for the very simple reason that it has never seen the world. It's only been given text, which means it has no idea why that text was written. This is the fundamental limitation of all LLMs: they are trained only on text that humans wrote after processing the world. You can't "uncompress" that text to recover the world state that led to it being written.

> They literally only understand text

I don't see why understanding only text automatically implies "stochastic-parrot"-ness. There are deafblind people around (mostly interacting through reading braille, I think) who are definitely not stochastic parrots.

Moreover, they do have a little bit of Reinforcement Learning on top of reproducing their training corpus.

I believe there has to be some form of thinking, even a very primitive one (and maybe something like creativity), just to do the usual non-RL, supervised LLM job of text continuation.

The most problematic thing is that humans tend to abhor middle grounds. Either it thinks or it doesn't. Either it's an unthinking dead machine, a stochastic parrot, or human-like AGI. The reality is probably somewhere in between (maybe still closer to the stochastic-parrot end, definitely with some genuine intelligence, but with some unknown, probably small, degree of sentience as of yet). Reminder that sentience, not intelligence, is what should give it rights.


Because deafblind people interact with the world directly. LLMs do not, cannot, and have never seen the actual world. A better analogy would be a deafblind person born in Plato's Cave, reading text all day: they have no idea why these things were written, or what they actually represent.

Dead Internet theory


Is that using nominal GDP or PPP?


Post model


Ah, the casino tactic


How is this different from the several other alternatives?


Wouldn't it be easier to just install a bolt lock on your door?


Easier, sure? More fun? Probably not.


Ha. I read this comment and found the exact same diagonal "Ape"

