Hacker News

It amazes me how immature our field can be. Anyone who has worked for big corporations and in humongous codebases knows how 'generating new code' is a small part of the job.

AI blew up and suddenly I'm seeing seasoned people talking about KLOC like it's the 90s.



> 'generating new code' is a small part of the job

I think this attitude has been taken too far to the point that people (especially senior+ engineers) end up spending massive amounts of time debating and aligning things that can just be done in far less time (especially with AI, but even without it). And these big companies need to change that if they want to get their productivity back. From the article:

> One engineer said that building a feature for the website used to take a few weeks; now it must frequently be done within a few days. He said this is possible only by using A.I. to help automate the coding and by cutting down on meetings with colleagues to solicit feedback and explore alternative ideas.


How is the shorter deadline better for the worker? Ultimately, that devolves into a race to the bottom, with people choosing between overworking and being laid off. Surely AWS is profitable enough by now that all those employees could get their work hours reduced, receive a raise, and have both the organization and the product keep existing just fine.


I could argue that a company that did that could, in the long run, be out-competed by a company that did layoffs and pushed its employees toward higher productivity by utilizing AI.


> and by cutting down on meetings with colleagues to solicit feedback and explore alternative ideas.

At least something good comes out of this.


What used to take a week now can be done in just 5 days.


Yes, because from my observations these are the people who cannot write code anymore today - they really don't understand the newer paradigms or technologies. So Copilot enabled a lot of people to write POCs and ship them to production fast, without thinking too much about edge cases, HA, etc.

And these people have become advocates in their respective companies, so everyone ends up following inaccurate claims about productivity improvements. These are the same people quoting Google's CEO, who says that 30% of newly generated code at Google is written by AI, with no way to confirm or deny it. Just blindly quote a company that has a huge conflict of interest in this field and you'll look smarter than you are.

This is where we're at today. I understand these are great tools, but all I see is madness around them. And anyone who works with these tools on a daily basis knows it. Knows what it means, how misleading they can be, etc.

But hey, everyone says we must use them ...


Not that I disagree, but Anthropic put out some usage statistics for their products a few months ago, and IIRC something like 40% of usage was for software engineering.


Agreed. I spend maybe 20% of my time writing code. The rest is gathering requirements, design, testing, and scheduling. Maybe if that 20% now takes half as long, I might have time to actually write some tests and documentation.


> It amazes me how immature our field can be. Anyone that worked for big corporations and in humongous codebases know how 'generating new code' is a small part of the job.

Exactly, but I would go further: anyone who has worked in big corps knows that the other, non-'generating new code' part is usually pretty inefficient, and I would argue AI is going to change that too. So there will be much less abstract yapping in endless meetings, or fewer people involved in it.


If that yapping went away, what are mid to senior managers going to do all day?


We don't have meaningful metrics that are objective. KLOC is bad, but at least it means something concrete; issues closed or PRs submitted are essentially meaningless. This is a software engineering problem, not an AI problem.


KLOC is as objective as your other examples and as meaningless as well IMO.


With "standard" formatting, and considering only legitimate source files, it at least means the same thing from one company or project to another. Sure, it can be gamed by writing bad, verbose code, but in the context of AI generated code, the distribution is squashed quite a bit, so it ends up being more meaningful. Perhaps when we have AI generated issues that will become a better metric as well.




