Haven't done much more with the boids yet (though I imagine if I continue, I will learn a thing or two!) but I have an example from another domain.
Shell scripting was something I failed to get any real traction with for decades, and since letting AI help me write dozens of shell scripts, I find that I have gained a basic level of proficiency (i.e., I found that I had become relatively fluent at writing them even without assistance, which surprised me!).
It ties in with the Input Hypothesis of language acquisition: more volume in = more opportunities for the brain to find the patterns and learn them naturally.
That doesn't work for the approach where you don't look at the code at all, though; which approach you take seems to depend on your goals for the project.
For the boids thing, the math isn't something I enjoy doing manually anyway (I've done that for the last 15 years and it always felt like pulling teeth), so the main learning I'd get there would be at the applied level: how to tweak the Perlin noise to get the result I want (see the sketch below), rather than how or why it works in the first place.
Obviously that matters, but how much does it matter? Does it matter if you don't learn anything about computer architecture because you only code in JS all day? Very situational.
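To make that "applied level" concrete, the kind of knob-twiddling I mean looks roughly like this. It's a minimal sketch, assuming the third-party `noise` package for Perlin noise; the boid fields and the tuning constants are invented for illustration, not taken from any particular boids implementation.

```python
# Minimal sketch of Perlin-noise wander for a boid.
# Assumes `pip install noise`; field names and constants are illustrative only.
import math
from dataclasses import dataclass

from noise import pnoise2


@dataclass
class Boid:
    x: float
    y: float
    heading: float  # radians
    seed: float     # per-boid noise offset so flockmates don't all turn in sync


# The knobs you end up tweaking "at the applied level":
NOISE_FREQ = 0.15      # how quickly the wander direction drifts over time
NOISE_STRENGTH = 0.4   # max turn (radians) per step
OCTAVES = 2            # more octaves = more jittery detail in the wander


def wander_step(boid: Boid, t: float, speed: float = 2.0, dt: float = 1.0) -> None:
    """Nudge the heading by a smoothly varying noise sample, then move."""
    turn = pnoise2(t * NOISE_FREQ, boid.seed, octaves=OCTAVES)  # roughly in [-1, 1]
    boid.heading += turn * NOISE_STRENGTH * dt
    boid.x += math.cos(boid.heading) * speed * dt
    boid.y += math.sin(boid.heading) * speed * dt
```

Getting a result I like is mostly about adjusting those three constants and watching the flock, not about understanding why gradient noise is smooth in the first place.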
There's a subset of people whose identity is grounded in the fact that they put in the hard work to learn things that most people are unable or unwilling to do. It's a badge of honor, and they resent anyone taking "shortcuts" to achieve their level of output. Kind of reminds me of lawyers who get bent out of shape when they lose a case to a pro se party. All those years of law school and studying for the bar exam, only to be bested by someone who got by with copying sample briefs and skimming Westlaw headnotes at a public library. :)
It's that, but it's also that the incentives are misaligned.
How many supposed "10x" coders actually produced unreadable code that no one else could maintain? And yet the effort to produce that code gets lauded, while the nightmare maintenance of said code is somehow regarded as unimpressive, despite being massively more difficult.
I worry that we're creating a world where it is becoming easy, even trivial, to be that dysfunctional "10x" coder, and dramatically harder to be the competent maintainer. And the existence of AI tools will reinforce the culture gap rather than reduce it.
It's a societal problem; we are just seeing its effects in computing now. People have given up, everything is too much, the sociopaths won, they can do what they want with my body, mind and soul. Give me convenience or give me death.
I think it's important in AI discussions to reason correctly from fundamentals and not disregard possibilities simply because they seem like fiction/absurd. If the reasoning is sound, it could well happen.
I use Claude Code a lot, but one thing that really made me concerned was when I asked it about some ideas I have had which I am very familiar with. Its response was to constantly steer me away from what I wanted to do towards something else that was fine but a mediocre way to do things. It made me question how many times I've let it go off and do stuff without checking it thoroughly.
I've had quite a bit of the "tell it to do something in a certain way" experience: it does that at first, then a few messages of corrections and pointers, it forgets that constraint.
> it does that at first, then a few messages of corrections and pointers, it forgets that constraint.
Yup, most models suffer from this. Everyone is raving about million-token context windows, but none of the models can actually get past 20% of that and still give responses as high quality as the very first message.
My whole workflow right now is basically composing prompts outside the agent, letting it run with them, and if something is wrong, restarting the conversation from zero with a rewritten prompt (rough sketch below). None of that "No, what I meant was ..."; instead I rewrite the prompt so the agent essentially solves it without the back and forth, just because of this issue that you mention.
Seems to happen in Codex, Claude Code, Qwen Coder and Gemini CLI as far as I've tested.
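For what it's worth, the "restart from zero" loop written out as a sketch. `run_agent` and `looks_good` are stand-ins for whatever CLI/API and review process you actually use; the names and signatures are invented for illustration, not any real tool's interface.

```python
# Sketch of the "rewrite the prompt, restart from zero" workflow.
# `run_agent` and `looks_good` are placeholders, not a real tool's API.
from typing import Callable


def iterate_on_prompt(
    prompt: str,
    run_agent: Callable[[str], str],    # sends ONE self-contained prompt, fresh context
    looks_good: Callable[[str], bool],  # your review: tests, reading the diff, etc.
) -> str:
    while True:
        result = run_agent(prompt)      # brand-new conversation every attempt
        if looks_good(result):
            return result
        # No "no, what I meant was..." follow-ups: fold the missing constraint
        # back into the prompt itself and start over with a clean context.
        prompt = input("Rewrite the full prompt with the fix baked in:\n> ")
```

The point is just that every attempt starts from the first message, where quality is highest, instead of accumulating a long correction history.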
LLMs do a cool parlour trick; all they do is predict “what should the next word be?” But they do it so convincingly that in the right circumstances they seem intelligent. But that’s all it is; a trick. It’s a cool trick, and it has utility, but it’s still just a trick.
All these people thinking that if only we add enough billions of parameters when the LLM is learning and add enough tokens of context, then eventually it’ll actually understand the code and make sensible decisions? These same people perhaps also believe if Penn and Teller cut enough ladies in half on stage they’ll eventually be great doctors.
Yes, agreed. I find it interesting that people say they're building these huge multi-agent workflows, since the projects I've tried this on are not necessarily huge in complexity. I've tried a variety of different things re: instruction files, etc. at this point.
So far, I haven't seen any demonstration of those kinds of multi-agent workflows ending up with code that won't fall over itself within days or weeks. Most efforts so far seem to have been focused on producing as much code as possible, as fast as possible, while what I'd like to see, if anything, is the opposite of that.
Anytime people start talking about their own "multi-agent orchestration platforms" (or whatever) and I ask for a demonstration of what the actual code looks like, they either haven't shared anything (yet), don't care at all about what the code actually looks like, and/or the code is a horrible vibeslopped mess that contains mostly nonsense.
I've been experimenting with the same flow as well; it's sort of the motivation behind this project: to streamline the generate code -> detect gaps -> update spec -> implement flow.
Curious to hear if you are still seeing code degradation over time?
That's a strange name; why? It's more like an "iterate and improve" loop. "Groundhog Day" to me would imply "the same thing over and over", but then you're really doing something wrong if that's your experience. You need to iterate on the initial prompt if you want something better/different.
Create an AGENTS.md that says something like, "when I tell you to do something in a certain way, make a note of this here".
The only catch is that you need to periodically review it because it'll accumulate things that are not important, or that were important but aren't anymore.
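A minimal sketch of what that AGENTS.md instruction could look like; the wording and the example entries are made up, adjust to taste:

```markdown
# AGENTS.md (excerpt)

When I tell you to do something in a certain way, record that constraint under
"Standing constraints" below so future sessions keep honoring it.

## Standing constraints

<!-- example entries only -->
- Use the existing logging helper instead of `print`.
- Never reformat files you aren't otherwise touching.
```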
Call me a conspiracy theorist, and granted much of this could be attributed to the fact that the majority of code in existence is shit, but I'm convinced that these models are trained and encouraged to produce code that is difficult for humans to work on, further driving and cementing the usage of them when you inevitably have to come back and fix it.
I don't think they would be able to have an LLM without these flaws. The problem is that an LLM cannot make a distinction between sense and nonsense in a logical way. If you train an LLM on a lot of sensible material, it will try to reproduce it by matching training material context and prompt context. The system does not work on the basis of logical principles, but it can sound intelligent.
I think LLM producers can improve their models by quite a margin if customers train the LLM for free, meaning: if people correct the LLM, the companies can use the session context + feedback as training data. This enables more convincing responses for finer nuances of context, but it still does not work on logical principles.
LLM interaction with customers might become the real learning phase. This doesn't bode well for players late in the game.
This could be the case even without an intentional conspiracy. It's harder to give negative feedback on poor-quality code that's complicated than on poor-quality code that's simple.
Hence the feedback these models get could theoretically funnel them toward unnecessarily complicated solutions.
No clue whether any research has been done on this; just a thought off the top of my head.
Mediocre is fine for many tasks. What makes a good software engineer is that he spots the few places in every piece of software where mediocre is not good enough.
Besides the general awfulness of Windows that you describe, have you looked at C:\Windows recently? It is an unorganised mess, with multiple different case styles all over the place. I get that this is not that important, but I can't help feeling it illustrates just how little care is taken behind the scenes. The whole thing seems like a nightmare to deal with.
I had a fresh install of Windows on a new computer which refused to install updates until I ran a bunch of commands in the "terminal". The whole thing is beyond fixing at this point.
I've used Claude Code to do the same (a large refactor). It has worked fairly well, but it tends to introduce really subtle changes in behaviour (almost always negative) which are very difficult to identify. Even worse, if you use it to fix those issues it can get stuck in a loop of constantly reintroducing slightly different issues, leading to fixing things over and over again.
Overall I still like using it, but I can also see that my mental model of the codebase has significantly degraded, which means I am no longer as effective at stopping it from doing silly things. That in itself is a serious problem, I think.
Yes, if you don't stay on top of things and rule with an iron fist, you will take on tons of hidden tech debt even using Opus 4.5. But if you manage to review carefully and intercede often, it absolutely is an insane multiplier, especially in unfamiliar domains.
There's never been a case in my long programming career so far where knowing the low level details has not benefited me. The level of value varies but it is always positive.
When you use LLMs to write all your code you will lose (or never learn) the details. Your decision making will not be as good.
I think there is a big difference. You can and should have both kinds of knowledge. This applies whether you're a lowly programmer or a CEO. Knowing the details will always help you make better decisions.
That’s the credo I’ve lived my life by, but I’ve come to believe it’s not entirely true: knowing the details can lead to ratholes and blurring requirements / solutions / etc. Some of the best execs I’ve met are good precisely because they focus on the business layer, and delegate / rely on others to abstract out the details.
I can’t do that. But I’m coming around to the value in it.
I've seen cases in my career where people knowing the low level things is actually a hindrance.
They start to fight the system, trying to optimise things by hand for an extra 2% of performance while adding 100% of extra maintenance cost because nobody understands their hand-crafted assembler or C code.
There will always be a place for people who do that, but in the modern world in most cases it's cheaper to just throw more money at hardware instead of spending time optimising - if you control the hardware.
If things run on customer's devices, then you need the low level gurus again.