tjr's comments | Hacker News

I see AI coding as something like project management. You could delegate all of the tasks to an LLM, or you could assign some to yourself.

If you keep some for yourself, there’s a possibility that you might not churn out as much code as quickly as someone delegating all programming to AI. But maybe shipping 45,000 lines a day instead of 50,000 isn’t that bad.


You need to understand the frustration behind these kinds of posts.

The people at the start of the curve are the ones who swear off LLMs for engineering entirely, and they are the loudest in the comments.

The people at the end of the curve are the ones who spam about vibe coding only, never looking at the code, and who are trying to establish a new expectation that the interaction layer for software should be exclusively LLMs. They are the loudest in posts and blogs.

The ones in the middle are the people who accept LLMs as a tool, and like with all tools they exercise restraint and caution. Because waiting 5 to 10 seconds for an LLM to change the color of your font, and then watching it get it wrong, is slower than just making these tiny adjustments yourself.

It's the engineers at both ends that have made me lose my will to live.


I can't believe we're back to using LoC as a metric for being productive again.

Go enough shoulders down, and someone had to have been the first giant.

Probably not Homo sapiens. Other hominids older than us developed a lot of technology.

A discovery by a giant is in some sense a new basis vector in the space of discoveries. The interesting question is whether a statistical machine can only perform linear combinations in the space of discoveries, or whether it can discover a new basis vector in that space... whatever that is.

For sure we know modern LLMs and AIs are not constrained by anything particularly close to simple linear combinations, by virtue of their depth and non-linear activation functions.
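To make the non-linearity point concrete, here is a toy sketch (my own illustration, not from the thread): even a single ReLU neuron fails the linearity test f(a + c) = f(a) + f(c), so a deep stack of them is certainly not restricted to linear combinations of its inputs.

```python
def relu(x: float) -> float:
    """The standard rectified linear activation."""
    return max(0.0, x)

def net(x: float, w: float = 1.0, b: float = -1.0) -> float:
    """A one-neuron 'network': y = relu(w*x + b)."""
    return relu(w * x + b)

# A linear map would satisfy net(a + c) == net(a) + net(c).
# The ReLU's kink at zero breaks this:
a, c = 2.0, 2.0
print(net(a + c))           # relu(4 - 1) = 3.0
print(net(a) + net(c))      # relu(1) + relu(1) = 2.0
```

The weights and bias here are arbitrary; any setting that puts the kink between the inputs and their sum shows the same failure of superposition.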

But yes, it is not yet clear to what degree there can be (non-linear) extrapolation in the learned semantic spaces here.


Pythagoras is the turtle.

Pythagoras learned from the Egyptians, whose contributions have been largely erased by Euro/Western narratives of superiority.

There has long been debate that software development is not even an engineering discipline at all, because it lacks certain characteristics from "real world engineering". I have worked with software mostly in aerospace, and I believe that what is typically done in that industry counts as "engineering". Reams of requirements, boatloads of tests (including simulated testing, testing in hardware labs, and testing on the plane), and sign-offs from multiple people who attest to software quality.

I would further think that the same practices could be applied to any software, whether it is safety-critical or not. If software development isn't engineering, it's not because it can't be, but because not every project is critical enough to warrant the extra time and expense.

I think a similar train of thought applies here. As the article points out, skipping reading the code is probably not a good idea for safety-critical software, but for less critical things, it may be fine.

If someone told me that they applied avionics-level rigor to an iOS puzzle game, I would think that (a) it's probably very solid software, but also (b) they were probably wasting their time. But on the flip side, if someone wanted to spend their time making their puzzle game rock-solid, I don't think that's necessarily a bad thing to do. It's not harmful to have especially robust puzzle games.

Is it worth it to review LLM-generated code? For some projects, maybe not. Even for many projects, maybe not. But I'm not sure that it should be frowned upon either. It might turn up something interesting. Put in whatever level of rigor matches your project needs, personal interest, and schedule!


> Put in whatever level of rigor matches your project needs, personal interest, and schedule!

This is the most refreshing, grounded response I've gotten in a while <3


Well, when you clickbait/lie about your own premise you can’t really expect a decent conversation lol

?? What

You claim you don’t read the code. People believe you. Later you reveal that actually you do read the code, as well as metrics about the code; you just don’t read line by line and scrutinize them individually. Then you want to say their opinions weren’t grounded, but all that happened is that you misrepresented your own argument.

In certain, extenuating circumstances, I will read the code. It is not in my common / critical path. It's not how I'd describe my workflow.

All I’m saying is that

by ‘I don’t read code,’ I mean: I don’t do line-by-line review as my primary verification method for most product code. I do read specs, tests, diffs selectively, and production signals - and I would advocate escalating to code-reading for specific classes of risk.

is not at all what people consider “not reading the code” to be.


One of the main points of the GPL was to prevent software from being siphoned up and made part of proprietary systems.

I personally disagree with the rulings thus far that AI training on copyrighted information is "fair use", not because it's not true for human training, but because I think that the laws were neither written nor wielded with anyone but humans in mind.

As a comment upstream said, some people are now rethinking releasing material to the public at all, out of not wanting it to be trained on by AI. Until a couple of years ago, nearly nobody was even remotely thinking about that; there could be decades of copyrighted material out there that, had the authors understood present-day AI, they would never have released.


Is your proposition that programmers are now incapable of writing code?

Eventually yes, when being incapable becomes synonymous with not finding a job in an AI-dominated software factory industry.

Enterprise CMS deployment projects have already shed most of their asset teams, translators, integration teams, and backend devs, replaced by a mix of AI, SaaS, and iPaaS tools.

Now the teams are a fraction of the size they were five years ago.

Fear not, there will always be a place for the few who can invert a tree, calculate how many golf balls fit into a plane, and are elected to work in the AI dungeons as the new druids.


While I don't share this cynical worldview, I am mildly amused by the concept of a future where, Warhammer 40,000 style, us code monkeys get replaced by tech priests who appease the machine gods by burning incense and invoking hymns.

Same for ERP/CRM/HRM and some financial systems; all systems that were heavily 'no-code' (or a lot of configuration with knobs and switches rather than code) before AI are now just going to lose their programmers (and the other roles). The business logic, financial calcs, etc. were already done by other people upfront in Excel, Visio, etc.; now you can just throw that into Claude Code. These systems have decades of rigid code practices, so there is not a lot of architecting/design to be done in the first place.

Nick Offerman wants to have a word with you. Given the choice between building my own furniture and buying IKEA, if I had the skills I’d go the build-it-myself route. It’s doable. It was before, and it still is. All we have now is super duper capable autocorrect and text completion. Use it for what it is. Don’t let it replace you.

An actor who happens to do carpentry as a hobby.

s/Nick Offerman/any talented carpenter/ ... you missed the point, though

I don't see why it's not possible.

In your example scenario, it sounds like Bob is vibe-coding a project in assembly language, and then using an existing assembler to create the final binary?

I have not tried it, but I would imagine that current LLMs are relatively weak on generating assembly language, just due to less thorough training compared to higher-level languages, but even if so, that's surmountable.

As for what I think you are suggesting, having the LLM also do the assembly step: again, in theory, sure, but I would think just using the existing known-good assembler would be preferable to training an LLM to convert assembly language to binary. I'm not sure what you would gain in terms of speed or overall productivity by having the LLM itself do the assembler step.


I didn't think there would be a gain of speed or productivity, but just a (possibly ominous) idea of 'cutting out the middleman'. Granted, that middleman is very important.

Natural language is not the best language for formal descriptions of projects, but it is the one that humans use most day to day. Perhaps a chain like this would be the start of on-demand programs.


Most of my career has been as an individual engineer, but the past few years I have been a project manager. I find this to be very much like using AI for coding.

Which also makes me refute the idea that AI coding is just another rung up on the programming abstraction ladder. Depending on how much you delegate to AI, I don't think it's really programming at all. It's project management. That's not a bad thing! But it's not really still programming.

Even just in the context of my human team, I feel less mentally engaged with the code. I don't know what everything does. (In principle, I could know, but I don't.) I see some code written in a way that differs from how I would have done it. But I'm not the one working day-in, day-out with the code. I'll ask questions, make suggestions, but I'm not going to force something unless I think it's really super important.

That said, I don't 100% like this. I enjoy programming. I enjoy computer science. I especially enjoy things more down the paths of algorithm design, Lisp, and the intersection of programming with mathematics. On my team, I do still do some programming. I could delegate it entirely, but I indulge myself and do a little bit.

I personally think that's a good path with AI too. I think we're at the point where, for many software application tasks, the programming could be entirely hands-off. Let AI do it all. But if I wish to, why not indulge in doing some myself also? Yeah, I know, I know, I'll get "left behind in the dust" and all of that. I'm not sure that I'm in that much of a hurry to churn out 50,000 lines of code a day; I'm cool with 45,100.


I find that AI allows me to get into algorithm design more, and the intersection of math and programming more, by avoiding boilerplate.

You can indulge even more by letting AI take care of the easy stuff so you can focus on the hard stuff.


What happens when the AI does the hard stuff as well?

As described above, I think with AI coding our role shifts from "programmer" to "project manager", but even as a project manager, you can still choose to delegate some tasks to yourself. Whether you want to do the hard stuff yourself, or the easy stuff, or the stuff that happens on Thursdays. It's not about what AI is capable of doing, but rather what you choose to have it do.

SkyNet. When it can do the hard stuff, why do you think we'll still be around for project management and prompting? At that point, we are livestock.

Look around. We have been livestock for at least a decade now.

In fact, we are worse. At least livestock are cared for.


Here's an example from my recent experience: I've been building a bunch of mostly throwaway TUIs using AI (using Python and Rich), and a lot of the stuff just works trivially.

But there are some things where the AI just does not understand how to do proper boundary checks to prevent busted layouts, so I can either argue with it for an hour while it goes back and forth, breaking the code in the process of trying to fix my layout issues, or I can just go in and fix it myself.
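To give a concrete flavor of the kind of boundary check involved, here is a minimal plain-Python sketch (the actual TUIs use Rich, but the clamping logic is library-agnostic, and `clamp_lines` is an illustrative name, not anything from a real codebase):

```python
import textwrap

def clamp_lines(text: str, width: int, height: int) -> list[str]:
    """Wrap and truncate text so it never exceeds a width x height box."""
    lines: list[str] = []
    for raw in text.splitlines() or [""]:
        # textwrap.wrap breaks long words by default, so no line exceeds width.
        lines.extend(textwrap.wrap(raw, width=width) or [""])
    # Truncate to the available height, marking the cut with an ellipsis row.
    if len(lines) > height:
        lines = lines[: height - 1] + ["..."]
    return lines

print(clamp_lines("a" * 25, width=10, height=2))
```

The point is that every rendered line must respect both dimensions of the region it lands in; it's exactly this clamp-before-render step that, in my experience, the model keeps getting wrong.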


I wonder how much more it would take Anthropic to make CCC on par with, or even better than, GCC.

At the current state of AI, some centuries.

As far as I understand from the comments, Anthropic released a "compiler" that translates C code into some assembly, which may or may not be valid input for a linker.

They claimed they were able to compile the Linux kernel (which version? which config?) and boot it (was the boot successful? were all devices correctly initialized? is userland running without problems?).

At the moment it really looks like a political farce with no real outcome except some promises.

Is there a code repository so I can test it?

What is the licence of this compiler ?


I’m glad cheap stuff exists. Sometimes I really do need something quickly, and borderline-disposable quality is good enough. But I also want the option to buy better than that.

I installed some drywall a few years ago. I plan to install a room of drywall exactly never again. Not worth it for me to buy the best drywall tools.

But I have installed multiple wood floors, replacing old carpet, and would do so again if needed. I’d rather get higher quality tools there so I can keep them and reuse them for years.


Will the agents also be the ones using the software?

Yes, so the thought experiment here is: how far can agents (owned by different entities) get in being productive without a human stepping in?
