It’s not a measure of productivity, but some number of new lines is generally necessary for new functionality. And in my experience, AI tends to produce more lines of code than a decent human would for similar functionality. So I’d be very surprised if an agent completing a feature didn’t crank out 1500 lines or more.
I think a lot of people have a fear of AI coding because they're worried that we will move from a world where nobody understands how the whole system works, to a world where nobody knows how any of it works.
This comment on the article sums it up for me, at least in part:
“Nobody knows how the whole system works, but at least everybody should know the part of the system they are contributing to.
Being an engineer, I am used to being an expert in the very layer of the stack I work on, knowing something of the adjacent layers, and mostly ignoring how the rest works.
Now that LLMs write my code, what is the part that I’m supposed to master? I think the ground is still shifting and everybody is failing to grasp where it will stabilize. Analogies with past shifts aren’t helping either.”
While that may very well be true, it's a valid reply to the GP who made this claim, not to my comment explaining to the parent why their argument was logically flawed.
Just because you disagree with me doesn’t mean my argument is “logically flawed.” And as the other commenter said, I never said AI had no value. I have used various AI tools for probably 4 years now.
If you’re going to talk to and about people in such a condescending way, then you should at least ask clarifying questions before jumping to the starkest, least charitable interpretation of their point.