> It's clear that the human language network is not like LLM in that sense.
Is it though? If rhythm or tone changes meaning, then just add symbols for rhythm and tone to the LLM's input and train on that. You'll get not just words out that differ based on those additional symbols wrapping the words; you'll also get the rhythm and tone symbols in the output.
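A minimal sketch of what that could look like: interleave hypothetical prosody markers with the words so they form a single token stream the model trains on. The tag names (`RISE`, `FALL`, `STRESS`) are illustrative, not from any real corpus or tokenizer.

```python
def annotate(words, prosody):
    """Interleave hypothetical prosody tags with words into one token stream.

    `prosody` holds one tag (or None) per word; a tag is emitted as a
    special token immediately before the word it modifies.
    """
    tokens = []
    for word, tag in zip(words, prosody):
        if tag:                      # e.g. "RISE", "FALL", "STRESS"
            tokens.append(f"<{tag}>")
        tokens.append(word)
    return tokens

# "You're going?" vs. "You're going." differ only in the tone marker:
question = annotate(["you're", "going"], [None, "RISE"])
statement = annotate(["you're", "going"], [None, "FALL"])

print(question)   # ["you're", '<RISE>', 'going']
print(statement)  # ["you're", '<FALL>', 'going']
```

Trained on streams like these, the model's vocabulary simply grows by a few special tokens, and prosody becomes something it can both condition on and predict.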
Just as enterprise software is proof positive of no intelligence under the hood.
I don't mean the code producers, I mean the enterprise itself is not intelligent yet it (the enterprise) is described as developing the software. And it behaves exactly like this, right down to deeply enjoying inflicting bad development/software metrics (aka BD/SM) on itself, inevitably resulting in:
Well, you're #24 in this article's hall of fame, and the LLM thinks your moderation views stood the test of time. Perhaps it can already retrieve them for you.
> we do both text and voice (roughly 70% of data collection is typed, 30% spoken). Partly this is to make sure the model is learning to decode semantic intent (rather than just planned motor movements)
Both of these modes are incredibly slow thinking. Consciously shifting from thinking in concepts to thinking in words is like slamming on the brakes for a school zone on an autobahn.
I've gathered that most people think in words they can "hear in their head", most people can "picture a red triangle" and literally see one, and so on. Many multilingual folks say they think or dream in a particular language, and know which one it is.
Meanwhile, some people think less verbally or less visually, perhaps not verbally or visually at all, with no language (no words) involved.
A blog post shared here last month discussed a person trying to access this conceptual mode, which he thinks is like "shower thoughts" or physicists solving things in their heads while staring into space, except "under executive function". He described most of his thoughts as words he can hear in his head, with these concepts more like vectors. I agree with that characterization.
I'm curious what % of folks you've scanned may be in this non-word mode, or if the text and voice requirement forces everyone into words.
I agree that thinking in words is much slower than thinking in concepts would be -- that's the point of training models like this, so that ideally people can always just think in concepts. That said, we do need to get some kind of ground truth of what they're thinking in order to train the model, so we do need them to communicate that (in words).
One thing that's particularly exciting here is that the model often gets the high-level idea correct, without getting any words correct (as in some of the examples above), which suggests that it is picking up the idea rather than the particular words.
> ideally people can always just think in concepts
Are you pursuing an idea of how to help people like this author* access this mode that some of us are always in unless kicked out of it by the need for words?
Very needed right now — the opposite of the YouTube-ization of idea transfer.
It doesn't seem clear this is accessible without other changes in wiring? The inability to "picture" things as visuals seems to swap out for "conceptualizing" things in -- well, I don't have words for this.
An attempt from that essay:
This is not what Hadamard is talking about when he describes the wordless thought of the mathematicians and researchers he has surveyed. Instead, what they seem to be doing is something similar to this subconscious, parallelized search, except they do it in a “tensely” focused way.
The impression I get is that Hadamard loads a question into his mind (either in a non-verbal way, or by reading a mathematical problem that has been written by himself or someone else), and then he holds the problem effortfully centered in his mind. Effortfully, but wordlessly, and without clear visualizations. Describing the mental image that filled his mind while working on a problem concerning infinite series for his thesis, Hadamard writes that his mind was occupied by an image of a ribbon which was thicker in certain places (corresponding to possibly important terms). He also saw something that looked like equations, but as if seen from a distance, without glasses on: he was unable to make out what they said.
A couple of this author's speculations aren't how I'd say it works when this is one's default mode, but most are in the neighborhood. Of everything I've read by people who think the way the author does (which seems to be most people), he comes the closest.
I'm pretty sure they not only show up with a PowerPoint file, but one with missing/nonembedded fonts, web images, perhaps even a video in there somewhere. At least that's been my experience with people sending me stuff to print.
When I did IT work for my university, I was in charge of a big plotter printer that the science students used to print posters with summaries of their research for conferences. The only format I ever got was PowerPoint. Based on the number of search results for "powerpoint research poster template", it looks like this PowerPoint is still the format of choice.
I never really thought about it, but it is kind of odd that the same community that loves using LaTeX for document formatting and typesetting research papers is also using PowerPoint as a desktop publishing substitute.
It would be neat if Valve would fund having Steam Client run on Apple Silicon without Rosetta 2 so arm games like Baldur's Gate 3 can be fully supported.
> Deep work with an open office? Dont make me laugh. Please for the love of god bring back cubicles.
Or doors.
25 years ago, Microsoft Redmond had a slogan: "Every dev a door".
In the early 2000s, it became two devs per room. We all know what happened since. Open offices save facilities concrete money per seat; the productivity lost to the lack of deep work is not a line item anyone knows how to track.
The "every dev a door" plus "pair programming" combination was shown by studies from groups like Pivotal Labs to be optimal for working code, but ... and it's a big but ...

Companies intentionally optimize for things other than working code. You get what you measure, and they measure what's easy instead of what matters.