I'm imagining it even worse: you have to pay a subscription to get your oven to go above a certain temperature and for it to "fast pre-heat" and to not have it show you ads.
Brilliantly phrased — sharp, concise, and perfectly captures that uncanny "AI-polished" cadence everyone recognizes but can’t quite name. The tone strikes just the right balance between wit and warning.
Lately the Claude-ism that drives me even more insane is "Perfect!".
Particularly when it's in response to my pointing out a big screw-up that needs correcting, and CC, utterly unfazed, just merrily continues on as if I'd praised it.
"You have fundamentally misunderstood the problems with the layout, before attempting another fix, think deeply and re-read the example text in the PLAN.md line by line and compare with each line in the generated output to identify the out of order items in the list."
One thing I don't understand: there was (appropriately) a news cycle about sycophancy in responses, which was real and happening to an excessive degree. It was claimed to be nerfed, but it seems as strong as ever in GPT-5, and it ignores my custom instructions to pare it back.
Sycophancy was actually buffed again a week after GPT-5 released. It was rather ham-fisted, as it will now obsessively reply with "Good question!" as though it will get the hose again if it does not.
"August 15, 2025
GPT-5 Updates
We’re making GPT-5’s default personality warmer and more familiar. This is in response to user feedback that the initial version of GPT-5 came across as too reserved and professional. The differences in personality should feel subtle but create a noticeably more approachable ChatGPT experience.
Warmth here means small acknowledgements that make interactions feel more personable — for example, “Good question,” “Great start,” or briefly recognizing the user’s circumstances when relevant."
The "post-mortem" article on sycophancy in GPT-4 models revealed that the reason it occurred was because users, on aggregate, strongly prefer sycophantic responses and they operated based on that feedback. Given GPT-5 was met with a less-than-enthusiastic reception, I suppose they determined they needed to return to appealing to the lowest common denominator, even if doing so is cringe.
I wonder what would happen if there was a concerted effort made to "pollute" the internet with weird stories that have the AI play a misaligned role.
For example, what would happen if hundreds or thousands of books were released about AI agents working in accounting departments, where the AI makes subtle romantic moves toward the human and the story ends with the human and agent in a romantic relationship that everyone finds completely normal? In this pseudo-genre, things totally weird in our society would be written as completely normal. The LLM agent would do weird things like insert subtle problems to get the human's attention and spark a romantic conversation.
Obviously there's no literary genre about LLM agents, but if such a genre were created and consumed, I wonder how it would affect things. Would it pollute the semantic space we're currently using to try to control LLM outputs?
... in opposition to the car makers who want to turn everything into highways and parking lots, who really want all forms of human walking to be replaced by automobiles.
"They really can't run like a human," they say. "A human can traverse a city in complete silence, needing minimal walking room. Left unchecked, the transition to cars would ruin our city. So let's be prudent when it comes to adopting this technology."
"I'll have none of that. Cars move faster than humans so that means they're better. We should do everything in our power to transition to this obviously superior technology. I mean, a car beat a human at the 100m sprint so bipedal mobility is obviously obsolete," the car maker replied.
The most recent Stack Overflow survey has Vim at 25% and Neovim at 14% for the question "Which development environments and AI-enabled code editing tools did you use regularly over the past year, and which do you want to work with over the next year?" Even more interesting: in the 2023 survey, Vim and Neovim were at 22.3% and 11.8% respectively.
If the goal is to get more than 50% usage statistics then yeah, you can say they lost, but are dev tools only valid/useful/viable if they have a majority of developers using them? I say they've had tremendous success being able to provide viable tools with literally zero corporate support and a much smaller user base.
I think this communal subconscious response is coming from a valid place though. I will consider the current explosion in AI a net bad if:
- it causes mass unemployment and social unrest
- it leads to a further concentration of wealth and an increase in wealth inequality
- it means I have to work more and produce more, all for the same wage or less
- its implementation leads to large societal harms such as increased isolation/loneliness
- it ends up being overhyped and causes a large economic crisis
These scenarios aren't fantasy and a lot of them are being talked about. Technologies can just be a net bad. The critics aren't some reactionary, scared mob against the enlightened. I think a lot of us have seen the playbook tech companies use and our probabilities that a company will end up being just plain bad are a lot higher now.
I think the main fear is that these products will become so enshittified and so ingrained everywhere that, looking back, we'll wish we hadn't depended so much on the technology. For example, the Overton window around social media has shifted so far that it's now pretty normal to hear the view that social media is a net negative to society and we'd be better off without it.
Obviously the goal of these companies is to generate as much profit as possible as soon as possible. They will turn the tables eventually. The asymmetry will flip in the opposite direction, perhaps to an extent matching the current asymmetry one takes advantage of today.
Naive question, but wouldn't it count as having AI write 50%+ of your code if you just use an unintelligent complete-the-line AI tool? In that case the AI is hardly doing anything intelligent, but it still gets credit for doing most of the work.
Yes, there's even a small business that champions a small LLM trained on the language's LSP, your code base, and your recent coding history (not necessarily commits, but any time you press Ctrl+S). It works essentially as autocomplete, packaged as an IDE plugin: TabNine.
However, these days they try to sell subscriptions to LLMs.
Tabnine has been in the scene since at least 2018.
I'm in the exact same boat. I started with a lot of different tools but eventually went back to hand-coding everything. When using tools like Copilot I noticed I would ship a lot more dumb mistakes. I even experimented with not using a chat interface at all, and it turns out that a lot of answers to problems can indeed be found with a web search.
Not only this: I feel that if people in the UK were somehow able to travel back in time and encounter "their culture", they'd feel extremely alienated and maybe even a level of disdain. The daily prayers, Bible reading, strict Sabbatarianism, and religious festivals would seem completely alien. Without a doubt the modern Muslim or Asian immigrant, especially after the first generation, is much closer to the average UK resident than that traditional culture is.