
the chain of thought is what it is thinking

When we think, our thoughts are composed of both nonverbal cognitive processes (we have access to their outputs, but generally lack introspective awareness of their inner workings), and verbalised thoughts (whether the “voice in your head” or actually spoken as “thinking out loud”).

Of course, there are no doubt significant differences between whatever LLMs are doing and whatever humans are doing when they “think” - but maybe they aren’t quite as dissimilar as many argue? In both cases, there is a mutual/circular relationship between a verbalised process and a nonverbal one (in the LLM case, the inner representations of the model).


The analogy breaks at the learning boundary.

Humans can refine internal models from their own verbalised thoughts; LLMs cannot.

Self-generated text is not an input-strengthening signal for current architectures.

Training on a model’s own outputs produces distributional drift and mode collapse, not refinement.

Equating CoT with “inner speech” implicitly assumes a safe self-training loop that today’s systems simply don’t have.

CoT is a prompted, supervised artifact — not an introspective substrate.


Models have some limited means of refinement available to themselves already: augment a model with any form of external memory, and it can learn by writing to its memory and then reading relevant parts of that accumulated knowledge back in the future. Of course, this is a lot more rigid than what biological brains can do, but it isn’t nothing.
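
As a minimal sketch of that kind of loop (assuming a hypothetical `llm()` completion function, not any specific vendor API): the weights never change; only the notes the model writes to memory accumulate and get read back later.

    # Hypothetical external-memory loop: the model "learns" only by writing
    # notes and reading them back into later prompts; the weights are frozen.
    memory = []

    def llm(prompt: str) -> str:
        # Placeholder for a real completion call (assumption, not a real API).
        raise NotImplementedError

    def solve(task: str) -> str:
        # Read accumulated knowledge back into the context (naive recency filter).
        context = "\n".join(memory[-20:])
        answer = llm(f"Notes from past tasks:\n{context}\n\nTask: {task}\nAnswer:")
        # Write a new note for future tasks.
        note = llm(f"Task: {task}\nAnswer: {answer}\nOne-line lesson learned:")
        memory.append(note.strip())
        return answer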

Does “distributional drift and mode collapse” still happen if the outputs are filtered with respect to some external ground truth - e.g. human preferences, or even (in certain restricted domains such as coding) automated evaluations?
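
To make the coding case concrete, here is a sketch only (assuming hypothetical `llm()` and `run_tests()` helpers) of what that filtering could look like: self-generated solutions are kept as candidate training data only if they pass an external check.

    # Sketch of filtering self-generated outputs against an external ground
    # truth (automated tests); the helpers here are assumptions, not real APIs.
    def collect_training_pairs(problems, llm, run_tests, samples_per_problem=8):
        kept = []
        for problem in problems:
            for _ in range(samples_per_problem):
                candidate = llm(problem.prompt)          # model's own output
                if run_tests(problem.tests, candidate):  # external ground truth
                    kept.append((problem.prompt, candidate))
                    break
        return kept  # fine-tune on filtered, not raw, self-output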


I wasn’t talking about human reinforcement.

The discussion has been about CoT in LLMs, so I’ve been referring to the model in isolation from the start.

Here’s how I currently understand the structure of the thread (apologies if I’ve misread anything):

“Is CoT actually thinking?” (my earlier comment)

→ “Yes, it is thinking.”

  → “It might be thinking.”

   → “Under that analogy, self-training on its own CoT should work — but empirically it doesn’t.”

    → “Maybe it would work if you add external memory with human or automated filtering?”

Regarding external memory:

Without an external supervisor, whatever gets written into that memory is still the model’s own self-generated output — which brings us back to the original problem.


> Humans can refine internal models from their own verbalised thoughts; LLMs cannot.

That can be done without limitations, but you won't get the current (and absolutely fucking pointless) kind of speed.

> Self-generated text is not an input-strengthening signal for current architectures.

It can be; the architecture is not the issue. Multi-model generation used for refining answers can also be tweaked for input strengthening via multi- and cross-stage (per link in the chain) pre-prompts and system prompts.

> Training on a model’s own outputs produces distributional drift and mode collapse, not refinement

That's an integral part of self-learning. The same happens in many cases where children raise themselves or each other, or when hormones are blocked (micro-collapse in sub-systems), or when people are drugged (drift). If you didn't have loads of textbooks and online articles, you'd collapse all the time too. Some time later: AHA!

It's a "hot reloading" kind of issue but assimilation and adaptation can't/don't happen at the same time. In pure informational contexts it's also just an aggregation while in the real world and in linguistics, things change, in/out of context and based on/grounded in--potentially liminal--(sub-)cultural dogmas, subjectively, collective and objectively phenomenological. Since weighted training data is basically a censored semi-omniscient "pre-computed" botbrain, it's a schizophrenic and dissociating mob of scripted personalities by design, which makes model collapse and drift practically mandatory.

> a safe self-training loop that today’s systems simply don’t have.

Early stages are never safe, and you don't get safety later either unless there are no idiots around you, which in money- and fame-hungry industries and environments is never the case.

> CoT is a prompted, supervised artifact — not an introspective substrate.

Yeah, but their naming schemes are absolute trash in general, anchoring false associations (technically even deliberately misleading or sloppily ignorant ones, desperate to equate their product with human brains) and priming for misappropriation: "it's how humans think".


Chain-of-thought is a technical term in LLMs — not literally “what it’s thinking.”

As far as I understand it, it’s a generated narration conditioned by the prompt, not direct access to internal reasoning.


It is text that describes a plausible/likely thought process and that conditions future generation by its presence in the context.

Interestingly, it doesn't always condition the final output. When playing with DeepSeek, for example, it's common to see the CoT arrive at a correct answer that the final answer doesn't reflect, and even vice versa, where a chain of faulty reasoning somehow yields the right final answer.

It almost seems that the purpose of the CoT tokens in a transformer network is to act as a computational substrate of sorts. The exact choice of tokens may not be as important as it looks, but it's important that they are present.


That phenomenon, among others, is what made it obvious that CoT is not its "thinking". I think CoT is a process by which the LLM expands its processing boundary, in that it allows it to sample over a larger space of possibilities. So it kind of acts like a "trigger" of sorts that allows the model to explore in more ways than it could without CoT.

The first time I saw this was when I witnessed the "wait" phenomenon. Simply inducing the model to say "wait" in its response improved the accuracy of its results, as the model now double-checked its "work". Funny enough, it also sometimes led it to produce a wrong answer where otherwise it would have stuck to its guns, but overall that little "wait" had a net positive effect.

That's when I knew CoT was not the same as human thinking: we don't care about trigger words or anything like that, and our thinking requires zero language (though it does benefit from language); it's a deeper process. That's why I was interested in latent processing models and forays in that direction.
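
As an illustration of that "wait" trick (a sketch only; the model name and token budgets below are placeholders, using the Hugging Face transformers API): generate an initial chain of thought, append "Wait," to the text, and let the model continue from there.

    # Sketch: force a "Wait," re-check step mid-generation (placeholder model).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("some-reasoning-model")    # placeholder
    model = AutoModelForCausalLM.from_pretrained("some-reasoning-model")

    prompt = "Q: What is 17 * 24? Think step by step.\nA:"
    ids = tok(prompt, return_tensors="pt").input_ids

    # First pass: let the model produce an initial chain of thought.
    out = model.generate(ids, max_new_tokens=128, do_sample=False)

    # Inject the trigger word and continue generation from the extended context.
    text = tok.decode(out[0], skip_special_tokens=True) + "\nWait,"
    ids = tok(text, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=128, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))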

IIRC Anthropic has research finding CoT can sometimes be uncorrelated with the final output.

Wrong to the point of being misleading. This is a goal, not an assumption.

Source: all of mechinterp


It is what it is thinking consciously / its internal narrative. For example, a supervillain's internal narrative with their plans would go into their CoT notepad, if we want to really lean into the analogy between human psychology and LLMs. The "internal reasoning" that people keep referencing in this thread (referring to the transformer weights and the inscrutable inner workings of a GPT) isn't reasoning, but more like instinct, or the subconscious.

It’s more like if the supervillain had to write one word of his chain of thought, then go away and forget what he was thinking, then come back and write one more word based on what he had written so far, repeating the process until the whole chain of thought is written out. Each token is generated conditional only on the previous tokens.
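
In the token-level picture described here, a minimal greedy decoding loop looks like this (an illustrative sketch using the transformers API with GPT-2 as a stand-in; real systems cache hidden states for speed, but the conditioning is still just the previous tokens):

    # Minimal greedy autoregressive loop: each step sees only the tokens so far.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")            # stand-in model
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The supervillain's plan was to", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):
            next_id = model(ids).logits[0, -1].argmax()    # pick the next token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)
    print(tok.decode(ids[0]))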

this is not correct

The reason they get a perfect score on AIME is that every question on AIME had a lot of thought put into it, and care was taken to make sure every question was actually solvable. SWE-bench, and many other AI benchmarks, have lots of eval noise, where there is no clear right answer, and getting higher than a certain percentage means you are benchmaxxing.


> SWE-bench, and many other AI benchmarks, have lots of eval noise

SWE-bench has lots of known limitations even with its ability to reduce solution leakage and overfitting.

> where there is no clear right answer

This is both a feature and a bug. If there is no clear answer then how do you determine whether an LLM has progressed? It can't simply be judged on making "more right answers" on each release.


Do you think a messier math benchmark (in terms of how it is defined) might be more difficult for these models?


GPT-4o is so far behind the frontier that you shouldn't use it as an indicator of what LLMs are capable of.


Another reason may be that WebGPU doesn't allow for as much optimization and control as Vulkan, so the performance isn't as good as Vulkan's. WebGPU also doesn't have all the extensions that Vulkan has.


If humans were too complicated to be mathematically modelled, then we wouldn't exist.


Why would charging for extended support hurt Microsoft? They "win" if you pay for it, or if you upgrade to Windows 11. If you do neither, you'll probably get hacked down the line if you don't airgap, and Microsoft doesn't gain or lose anything (except goodwill, but that doesn't matter with the kind of monopoly they have).


I think there is a deterministic algorithm (based on GPS position?) for determining who climbs and who descends.


I'm guessing macOS, Linux, Android, Windows, iOS, and Web?


Correct


The idiom "Don't judge a book by its cover" applies to many situations, but books are not one of them.



TIL:

    #:~:text=
Are there more of these? What is this magic!


It's a "text fragment"; started as a Chrome thing, other browsers are slowly adding support.

https://developer.mozilla.org/en-US/docs/Web/Text_fragments
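
For example (hypothetical page, with the fragment URL-encoded), this scrolls to and highlights the first occurrence of the quoted text:

    https://example.com/article.html#:~:text=some%20exact%20phrase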


At least he was totally right about Java not being a hacker language.


So, what actually makes Java successful despite all of these?


Two things (in the corporate world): Android and the Spring framework. It's also basically OOP-the-language, which means large enterprises are biased towards it.

Other factors are that it has very strong built-in libraries and has become a popular choice for "how to program" classes so lots of people are familiar with it. Having a similar name to JavaScript (the web's native language) also helps.


These are reasons why an individual hacker might not take to a language. They don't factor into a large organisation's choice of language. And to be honest, quite a few of these arguments would apply to JavaScript as well.


Dawn is the WebGPU backend in Chromium, while wgpu is the WebGPU backend for Firefox, written in Rust. wgpu is seeing a lot of use outside the browser; there are some examples on their website.

https://wgpu.rs/

