I use LLMs daily and agents occasionally. They are useful, but there is no need to move any goalposts; they still easily do shit work in 2026.
All my coworkers use agents extensively in the backend and the amount of shit code, bad tests and bugs has skyrocketed.
Couple that with a domain (medicine) where our customer in some cases needs to validate the application’s behaviour extensively, and it’s a fucking disaster: very expensive iteration instead of doing it well upfront.
I think we have some pretty good power tools now, but using them appropriately is a skill issue, and some people are learning to use them in a very expensive way.
I find that chat is pretty good when you're describing what you want to do, for saying "actually, I wanted something different," or for giving it a bug report. For making fine adjustments to CSS, it would be nice if you could ask the bot for a slider or a color picker that makes live updates.
I just don’t understand how anyone can go "yeah okay, let’s install these guys’ rootkit complete with a keylogger and who knows what, totally legit!" And for what? To play a game.
What I’ve seen also happen is senior devs suddenly starting to put out garbage code and PRs. One senior dev in our project has become a menace and the quality of his work has dramatically dropped.
I graduated literally 3 months ago so that's my skill level.
I also have no idea what the social norms are for AI. I posted the comment after a friend on Discord said I should disclose my use of AI.
The underlying purpose of the PR is ironically because Cline and Copilot keep trying to use `int` when modern C++ coding standards suggest `size_t` (or something similar).
Those lazy employees need that strict supervision!
Maybe these C-suites and other employee-hating assholes are projecting their own laziness. Or maybe they think they are so superior compared to "common" people that the "common" people must be lazy trash.
I don’t know, but it is weird to assume most people won’t do their job without "strict supervision". Like super weird.
(Btw, anecdotally, most people I know work more efficiently from home with fewer breaks)
> Those lazy employees need that strict supervision!
This comment is a bit reactionary. It would be more balanced to say that lower-motivation employees will benefit from a more structured working environment.
Since you are only willing to go for 1:1 odds with a 3-year timeframe, I assume you agree that it might happen? Otherwise I’m sure you would give him better odds with a larger timeframe :)
Mikko Hyppönen, who holds at least some level of authority on the subject, said in a recent interview that he believes the defenders currently have the advantage. He claimed there are currently zero known large incidents in which attackers utilized LLMs. (Apart from social engineering.)
To be fair, he also said that the defenders having the advantage is going to change.
To be perfectly fair, saying "it’s aware of the best practices, context and internal usage" is very misleading. It is aware of none of those (as it is not "aware" of anything), and that is perfectly clear when it produces nonsensical results. Often the results are fine, but I see nonsensical results often enough in my more LLM-dependent coworkers’ PRs.
I’m not saying not to use them, but putting it like that is very dishonest and doesn’t represent the actual reality. It doesn’t serve anyone but the vendors to shill for LLMs.