Generally, we humans come up with new things by remixing old ideas. Where else would they come from? We synthesize priors into something novel. If you break the problem space apart enough, I don't see why an LLM can't do the same.
Yes it is. LLMs perform logical multi-step reasoning all the time; see math proofs, coding, etc. And whether you call it synthesis or statistical mixing is just semantics. Do LLMs truly understand? Who knows, probably not, but they do more than you give them credit for.
Novel problems are usually a composite of simpler and/or older problems that have been solved before. Decomposition means you can rip most novel problems apart and solve the chunks. LLMs do just fine with that.
It’s been a minute and a half, and I don’t see the evidence that you can task an agent swarm to produce useful software without your input or review. I’ve seen a few experiments that failed, and I’ve seen manic garbage, but nothing useful yet outside of the agent operator’s imagination.
Agent swarms are what, a couple of months old? What are you even talking about? Yes, people still drive this stuff, but if you think there isn't useful software out there that can be handily implemented with current-gen agents with very little or no review, then I don't know what to tell you, apart from "you're mistaken". And I say that as someone who uses three of these tools heavily but otherwise has no stake in them. The copium in this space is real. Everyone is special and irreplaceable, until another step change pushes them out.
The next thing after agent swarms will be swarm colonies, and people will go "it's been a month since agentic swarm colonies, give it a month or two". People have been moving the goalposts like that for a couple of years now, and it's starting to grow stale. This is self-driving cars all over again: they were going to be working in 2016 and replacing 80% of drivers by 2017. People fall for the hype instead of admitting that while this stuff appears somewhat useful, nobody has any clue whether it's 97% useful or just 3% useful; so far it's looking like the latter.
I can see this image shows an *aerial/satellite view of a coastline*. Here are the key features I can identify:
## Geographic Features
- *Ocean/Sea*: A large body of deep blue water dominates a significant portion of the image
- *Coastline*: A clearly defined boundary between land and water with what appears to be a rugged or natural shoreline
- *Beach/Shore*: Light-colored sandy or rocky coastal areas visible along the water's edge
## Terrain
- *Varied topography*: The land area shows a mix of greens and browns, suggesting:
  - Vegetated areas (green patches)
  - Arid or bare terrain (brown/tan areas)
- *Possible cliffs or elevated terrain* along portions of the coast
## Atmospheric Conditions
- *Cloud cover*: There appear to be some clouds or haze in parts of the image
- Generally clear conditions allowing good visibility of surface features
## Notable Observations
- The color contrast between the *turquoise/shallow nearshore waters* and the *deeper blue offshore waters* suggests varying ocean depths (bathymetry)
- The coastline geometry suggests this could be a *peninsula, island, or prominent headland*
- The landscape appears relatively *semi-arid* based on the vegetation patterns
---
Note: Without precise geolocation metadata, I'm providing a general analysis based on visible features. The image appears to capture a scenic coastal region, possibly in a Mediterranean, subtropical, or tropical climate zone.
Would you like me to focus on any specific aspect of this image?
I don't understand why "it's just predicting words, bro" is still seen as a valuable argument. A LOT has to happen to accurately predict the next word(s) for any given topic.
If that's supposed to be a dismissal, it's not a good one.
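To make that concrete, here's a minimal toy sketch (all names here are made up: `VOCAB`, `stand_in_model`; this is not any real LLM's API). It shows what "just predicting the next word" actually means: mapping the entire context to a probability distribution over the vocabulary, then sampling. The sampling step is trivial; all the hard work is hidden inside whatever produces the logits.

```python
import numpy as np

# Hypothetical stand-in for a real LLM: the only thing the "prediction"
# machinery does below is softmax + sample. Everything interesting would
# have to happen inside the model that produces the logits.

VOCAB = ["the", "cat", "sat", "on", "mat"]
rng = np.random.default_rng(0)

def stand_in_model(context):
    # Placeholder for a real model: returns one logit per vocabulary word.
    return rng.normal(size=len(VOCAB))

def next_token(context, temperature=1.0):
    logits = stand_in_model(context) / temperature
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return VOCAB[rng.choice(len(VOCAB), p=probs)]

print(next_token(["the", "cat"]))  # samples one plausible continuation
```

The point of the sketch is that "predicting words" is a description of the output interface, not of the capability required to do it well.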
LLMs have finally freed me from the shackles of yak shaving. Some dumb, inconsequential tooling thing doesn't work? An agent will take care of it in a background session, and I can get back to building the things I do care about.
I'm finding that in several kinds of projects ranging from spare-time amusements to serious work, LLMs have become useful to me by (1) engaging me in a conversation that elicits thoughts and ideas from me more quickly than I come up with them without the conversation, and (2) pointing me at where I can get answers to technical questions so that I get the research part of my work done more quickly.
Talking with other knowledgeable humans works just as well for the first thing, but suitable other humans are not as readily available all the time as an LLM, and suitably-chosen LLMs do a pretty good job of engaging whatever part of my brain or personality it is that is stimulated through conversation to think inventively.
For the second thing, LLMs can just answer most of the questions I ask, but I don't trust their answers for reasons we all know very well, so instead I ask them to point me at technical sources as well. That often gets me information more quickly than starting from a relatively uninformed Google search would (though Google is getting better at doing the same job, too).
It's not that complicated. 4o was RLHF'd to be sycophantic as hell, which was fine until someone had a psychotic episode fueled by it, and so they changed it with the next model.