
I believe that quote in Thomas’ blog can be attributed to me. I’ve at least said something near enough to him that I don’t mind claiming it.

I _never_ made the claim that you could call that a 10x productivity improvement. I’m hesitant to categorize productivity in software in numeric terms, as it’s such a nuanced concept.

But I’ll stand by my impression that a developer using ai tools will generate code at a perceptibly faster pace than one who isn’t.

I mentioned in another comment the major flaw in your productivity calculation: you aren’t accounting for the work that wouldn’t have gotten done otherwise. That’s where my improvements are almost universally coming from. I can improve the codebase in ways that weren’t justifiable before, in places that don’t suffer from the coordination costs you rightly point out.

I no longer feel like my peers are standing still, because they’ve nearly uniformly adopted ai tools. And again, as you rightly point out, there isn’t much of a learning curve. If you could develop before, you can figure out how to improve with them. I found it easier than learning vim.

As for hallucinations, I effectively _never_ experience them. And I do let agents mess with terraform code (in code bases where I can prevent state manipulation or infrastructure changes outside of the agent’s control).

I don’t have any hints on how. I’m using a pretty vanilla Claude Code setup. But I’m not sure how an agent that can write and run compile/test loops could hallucinate.
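
To be concrete, here is a rough sketch of the kind of loop I mean (not my actual setup; propose_patch is just a placeholder for the model call, and cargo test an assumed test command):

    use std::process::Command;

    // Placeholder for whatever produces the next patch attempt
    // (the model call in a real agent); here it only receives the last error.
    fn propose_patch(last_error: Option<&str>) {
        let _ = last_error; // a real agent would edit files based on this
    }

    fn main() {
        let mut last_error: Option<String> = None;

        // The core loop: propose a change, build and test, feed failures back.
        for attempt in 1..=5 {
            propose_patch(last_error.as_deref());

            let output = Command::new("cargo")
                .args(["test", "--quiet"])
                .output()
                .expect("failed to run cargo test");

            if output.status.success() {
                println!("tests passed on attempt {attempt}");
                return;
            }

            // Hallucinated names surface here as compile or test failures,
            // which is why they rarely survive to the final output.
            last_error = Some(String::from_utf8_lossy(&output.stderr).into_owned());
        }

        eprintln!("agent is stuck; delete and reprompt");
    }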



Appreciate the comment!

> I mentioned in another comment the major flaw in your productivity calculation: you aren’t accounting for the work that wouldn’t have gotten done otherwise. That’s where my improvements are almost universally coming from. I can improve the codebase in ways that weren’t justifiable before, in places that don’t suffer from the coordination costs you rightly point out.

I'm a bit confused by this. There is work that apparently is unlocking big productivity boosts but was somehow not justified before? Are you referring to places like my ESLint rule example, where eliminating the startup cost of learning how to write one lets you do things you wouldn't have previously bothered with? If so, I feel like I covered this pretty well in the article, and we probably largely agree on the value of that productivity boost. My point still stands that this doesn't scale. If that's not what you mean, feel free to correct me.

Appreciate your thoughts on hallucinations. My guess is that the difference between our experiences is that in your code the hallucinations are still happening but get corrected after the tests run, whereas my agents typically get stuck in these write-and-test loops and can't figure out how to solve the problem, or they "solve" it by deleting the tests or something like that. I've seen videos and viewed open source AI PRs that end up in loops similar to what I've experienced, so I think what I see is common.

Perhaps that's an indication that we're trying to solve different problems with agents, or using different languages/libraries, and that explains the divergence in our experiences. Either way, I still contend that this kind of productivity boost is likely going to be hard to scale and will get tougher to realize as time goes on. If you keep seeing it, I'd really love to hear more about your methods to see what I'm missing. One thing that has been frustrating me is that people rarely share their workflows after making big claims. This is unlike previous hype cycles, where people would share descriptions of exactly what they did ("we rewrote in Rust, here's how we did it", etc.). Feel free to email me at the address on my about page[1] or send me a request on LinkedIn or whatever. I'm being 100% genuine that I'd love to learn from you!

[1] https://colton.dev/about/


> but getting corrected after tests are run, whereas my agents typically get stuck in these write-and-test loops

This may be a definitional problem then. I don’t think “the agent did a dumb thing that it can’t reason its way out of” is a hallucination. To me a hallucination is a pretty specific failure mode: the model invents something that doesn’t exist. Models still do that for me, but the build/test loop sets them right nearly perfectly. So I guess the model is still hallucinating but the agent isn’t, so the output is unimpacted. So I don’t care.

For the agent-is-dumb scenario, I aggressively delete and reprompt. This is something I’ve actually gotten much better at with time and experience, both so it doesn’t happen often and so I can course correct quickly. I find it works nearly as well for teaching me about the problem domain as my own mistakes do, but it’s much faster to get to.

But if I were going to be pithy: aggressively deleting work output from an agent is part of their value proposition. They don’t get offended and they don’t need explanations why. Of course, they don’t learn well either; that’s on you.


What I'm saying is that the model will get into one of these loops where it needs to be killed, and when I look at some of the intermediate states and the reasons for failure, it's because it hallucinated something, ran the tests, and got an error. Does that make sense?

Deleting and re-prompting is fine. I do that too. But even one cycle of that often means the whole prompting exercise takes me longer than if I had just written the code myself.


I think maybe this is another disconnect. A lot of the advantage I get does not come from the agent doing things faster than me, though for most tasks it certainly can.

A lot of the advantage is that it can make forward progress when I can’t. I can check to see if an agent is stuck, and sometimes reprompt it, in the downtime between meetings or after lunch before I start whatever deep thinking session I need to do. That’s pure time recovered for me. I wouldn’t have finished _any_ work with that time previously.

I don’t need to optimize my time around babysitting the agent. I can do that in the margins. Watching the agents is low-context work. That adds the capability to generate working solutions during time that previously couldn’t produce any.


I've done a few of these hands-off, go-to-a-meeting style interactions. It has worked a few times, but I tend to find that they overdo it or cause issues. Like you ask them to fix an error and they add a try/catch, swallow the error, and call it a day. Or the PR has 1,000 lines of changes when it should have two.

Either way, I'm happy that you are getting so much out of the tools. Perhaps I need to prompt harder, or the codebase I work on has just deviated too much from the stuff the LLMs like and simply isn't a good candidate. Either way, appreciate talking to you!


> One thing that has been frustrating me is that people rarely share their workflows after making big claims

Good luck ever getting that. I've asked that about a dozen times on here from people making these claims and have never received a response. And I'm genuinely curious as well, so I will continue asking.


People share this stuff all the time. Kenton Varda published a whole walkthrough[1], prompts and all. Stories about people's personal LLM workflows have been on the front page here repeatedly over the last few months.

What people aren't doing is proving to you that their workflows work as well as they say they do. You want proof, you can DM people for their rate card and see what that costs.

[1] https://news.ycombinator.com/item?id=44159166


Thanks for sharing and that is interesting to read through. But it's still just a demo, not live production code. From the readme:

> As of March, 2025, this library is very new, prerelease software.

I'm not looking for personal proof that their workflows work as well as they say they do.

I just want an example of a project in production with active users depending on the service for business functions that has been written 1.5/2/5/10/whatever x faster than it otherwise would have without AI.

Anyone can vibe code a side project with 10 users or a demo meant to generate hype/sales interest. But I want someone to actually have put their money where their mouth is and give an example of a project that would have legal, security, or monetary consequences if bad code was put in production. Because those are the types of projects that matter to me when trying to evaluate people's claims (since those are what my paycheck actually depends on).

Do you have any examples like that?


Dude.

That code tptacek linked you to? It's part of our (Cloudflare's) MCP framework. Which means all of the companies mentioned in this blog post are using this code in production today: https://blog.cloudflare.com/mcp-demo-day/

There you go. This is what you are looking for. Why are you refusing to believe it?

(OK fine. I guess I should probably update the readme to remove that "prerelease" line.)


Lol, misunderstanding a disclaimer in a readme is not refusing to believe something. But my apologies, and I appreciate the clarification.


Yeah OK fair that line in the readme is more prominent than I remember it being.

I never look at my own readmes so they tend to get outdated. :/

Fixing: https://github.com/cloudflare/workers-oauth-provider/pull/59


See, I just shared Kenton Varda describing his entire workflow, and you came back asking that I please show you a workflow that you would find more credible. Do you want to learn about people's workflows, or do you want to argue with them that their workflows don't work? Nobody is interested in doing the latter with you.


I don't think you understood me at all. I don't care about the actual workflow. I just want an example of a project that:

1. Would have legal, security, or monetary consequences if bad code was put in production

2. Was developed using an AI/LLM/Agent/etc that made the development many times faster than it otherwise would have (as so many people claim)

I would love to hear an example like "I used Claude to develop this hosting/ecommerce/analytics/inventory management service that is used in production by 50 paying companies. Using an LLM we deployed the project in 4 weeks when it would normally take us 4 months." Or "We updated an out-of-date code base for a client in half the time it would normally take and have not seen any issues since launch."

At the end of the day I code to get paid. And it would really help to be able to point to actual cases where both money and negative consequences of failure are on the line.

So if you have any examples please share. But the more people deflect the more skeptical I get about their claims.


Seems like I understand you pretty well! If you wanted to talk about workflows in a curious and open way, your best bet would have been finishing that comment with something other than "the more people deflect the more skeptical I get". Stay skeptical! You do you.


Sorry if I came off as prickly, but it's not like your parent comment was much kinder.

I mean, it's pretty simple: there are a lot of big claims that I read but very few tangible examples that people share where the project has consequences for failure. Someone else replied with some helpful examples in another thread. If you want to add another one feel free; if not, that's cool too.


It almost feels like sealioning. People say nobody shares their workflow, so I share it. They say well that's not production code, so I point to PRs in active projects I'm using, and they say well that doesn't demonstrate your interactive flow. I point out the design documents and prompts and they say yes but what kind of setup do you do, which MCP servers are you running, and I point them at my MCP repo.

At some point you have to accept that no amount of proof will convince someone who refuses to be swayed. It's very frustrating because, while these are wonderful tools already, it's clear that the biggest thing that makes a positive difference is people using and improving them. They're still in relative infancy.

I want to have the kind of conversations we had back at the beginning of web development, when people were delighted at what was possible despite everything being relatively awful.


I don't care about your workflow, that can be figured out from the 10,000 blog posts all describing the same thing. My issue is with people claiming this huge boost in productivity only to find out that they are working on code bases that have no real consequence if something fails, breaks, or doesn't work as intended.

Since my day job is creating systems that need to be operational and predictable for paying clients, examples of front-end mockups, demos, apps with no users, etc. don't really matter that much at the end of the day. It's like the difference between being a great speaker in a group of 3 friends vs. standing up in front of a 30-person audience with your job on the line.

If you have some examples, I'd love to hear about them because I am genuinely curious.


Sure, I'm working on a database proxy in Rust at the moment; if you hop on GitHub, same username. It's not pure AI in the PRs, but I know approximately no Rust, so AI support has been absolutely critical. I added support for parsing binary timestamps from PG's wire format, as an example.
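
For context on the format, PG sends a binary timestamp as a big-endian i64 of microseconds since the Postgres epoch (2000-01-01 00:00:00 UTC). A rough sketch of that decoding in Rust (not the code from my repo, just the idea):

    use std::convert::TryInto;

    // PG's binary TIMESTAMP/TIMESTAMPTZ is a big-endian i64 of microseconds
    // since the Postgres epoch, 2000-01-01 00:00:00 UTC.
    const PG_EPOCH_UNIX_MICROS: i64 = 946_684_800_000_000;

    // Decode an 8-byte binary timestamp field into Unix microseconds.
    // (Real code would also handle PG's +/-infinity sentinel values.)
    fn pg_binary_timestamp_to_unix_micros(buf: &[u8]) -> Result<i64, String> {
        let bytes: [u8; 8] = buf
            .try_into()
            .map_err(|_| format!("expected 8 bytes, got {}", buf.len()))?;
        Ok(i64::from_be_bytes(bytes) + PG_EPOCH_UNIX_MICROS)
    }

    fn main() {
        // Zero microseconds is exactly the Postgres epoch.
        assert_eq!(
            pg_binary_timestamp_to_unix_micros(&[0u8; 8]).unwrap(),
            PG_EPOCH_UNIX_MICROS
        );
    }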

I spent probably a day building prompts and tests and getting an example of the failing behavior in Python, and then I wrote pseudocode and had it implement and write comprehensive unit tests in Rust. About three passes and manual review of every line. I also have an MCP that calls out to O3 for a second-opinion code review and passes the feedback back in.

Very fun stuff


I use agentic flows writing code that deals with millions of pieces of financial data every day.

Yesterday I rolled out a PR that was a one-shot change to our fundamental storage layer on our hot path. It was part of a large codebase, and that file has existed for four years; it hadn’t been touched in two. I literally didn’t touch a text editor on that change.

I have first hand experience watching devs do this with payment processing code that handles over a billion dollars on a given day.


Thanks, it's quite helpful to hear examples like that.

When you say you didn't touch a text editor, do you mean you didn't review the code change or did you just look at the diff in the terminal/git?


I reviewed that PR in the GitHub web gui and in our CI/CD gui. It was one of several PRs that I was reviewing at the time, some by agents, some by people and some by a mix.

Because I was the instigator of that change a second code owner was required to approve the PR as well. That PR didn't require any changes, which is uncommon but not particularly rare.

It is _common_ for me to only give feedback to the agents via the GitHub gui, the same way I do with humans. Occasionally I have to pull the PR down locally and use the full powers of my dev environment to review, but I don't think that is any more common than with people. If anything it's less common: given the tasks the agents typically get, they either do well or I kill the PR without much review.


> But I’ll stand by my impression that a developer using ai tools will generate code at a perceptibly faster pace than one who isn’t.

And this is the problem.

Masterful developers are the ones you pay to reduce lines of code, not create them.


Every. Single. Time. If you say you get productivity gains from ai tools on the internet, someone will tell you that you weren’t good at your job before the ai tooling.

Perhaps start from the assumption that I have in fact spent a fair bit of time doing this job at a high level. Where does that mental exercise take you with regard to your own position on ai tools?

In fact, you don’t have to assume I’m qualified to speak on the subject. Your retort assumes that _everyone_ who gets improvement is bad at this. Assume any random proponent isn’t.


I think what GP is saying is that in most cases generating a lot of code is not a good thing. Every line of LLM-generated code has to be audited because LLMs are prone to hallucinations, and auditing someone else's code is much more difficult and time-consuming than auditing your own. A lot of code also means more maintenance.


The comment is premised on the idea that Kasey either doesn't know what a "masterful developer" is or needs to be corrected back to it.


It's a commentary on one of the things I perceive as a flaw with LLMs, not you.

One of the most valuable qualities of humans is laziness.

We're constantly seeking efficiency gains, because who wants to carry buckets of water, or take laundry down to the river?

Skilled developers excel at this. They are "lazy" when they code - they plan for the future, they construct code in a way that will make their life better, and easier.

LLMs don't have this motivation. They will gleefully spit out 1000 lines of code when 10 will do.

It's a fundamental flaw.


Now, go back and contemplate what my feedback means if I am well versed in Larry Wall-isms.



