I can tell you from reading the code in the 90s, no LLM will save you. It’s well written, but it’s not structured like modern programs. IIRC he invented his own trampoline system using goto that will leave you scratching your head for days, just trying to figure out how it works. An LLM might be able to guess, but it definitely isn’t going to one-shot it, and that means you will need to be able to understand it as well.
I do think it is possible with the advent of Claude Agent to transpile the code. First I would refactor the trampoline system to be functional and unit test everything. Then I would use those tests to validate the transpilation. It's something I would consider doing for a Wall Street Raider 2: overhaul the engine and deliver massive improvements. I do want to do this to a certain extent to implement automated e2e testing. But I don't mind BASIC at all, prefer it actually, I just want automated testing set up. A lot of this is beyond the scope of my goals for Early Access release, though.
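For anyone who hasn't seen the pattern: a functional trampoline replaces goto-style jumps with steps that return either a final value or the next step to run, and a driver loop does the "jumping". Here's a minimal sketch in Python (names and structure are mine, not anything from the actual codebase):

```python
def trampoline(step, *args):
    """Drive a chain of steps. Each step returns either ('done', value)
    or ('next', fn, args). The while loop replaces the goto jumps, and
    each step becomes an ordinary, unit-testable function."""
    result = ('next', step, args)
    while result[0] == 'next':
        _, fn, args = result
        result = fn(*args)
    return result[1]

# Toy example: mutual "jumps" between two states, with no deep call stack.
def is_even(n):
    return ('done', True) if n == 0 else ('next', is_odd, (n - 1,))

def is_odd(n):
    return ('done', False) if n == 0 else ('next', is_even, (n - 1,))

print(trampoline(is_even, 100000))  # True
```

Once the jumps are data like this, you can unit test each step in isolation and then use those tests as the oracle for a transpilation.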
Which, IMHO, is why we should be able to change them freely or make our own. Being locked into a specific harness because you pay 20 bucks per month vs. pay-per-use ... is kinda dumb.
The reason Anthropic is pushing on the closed harness is that they're not confident with their ability to win on model quality long term, so they're trying to build lock-in. They can capture some additional telemetry owning the harness as well, but given the amount of data the agent loop already transmits, that borders on unethical spyware (which might be part of the reason they're afraid to open source).
Ultimately the market is going to force them to open up and let people flex their subs.
> Being locked into a specific harness because you pay 20 bucks per month vs. pay-per-use ... is kinda dumb.
I’ll probably get downvoted for this, but am I the only one who thinks it’s kind of wild how much anger is generated by these companies offering discounted plans for use with their tools?
At this point, there would be less anger and outrage on HN if they all just charged us the same high per-token rate and offered no discounts or flat rate plans.
Okay, but why on earth should I as an OpenCode user accept that limitation when OpenAI explicitly supports 3rd party clients? That's how competition works in a healthy market.
I certainly haven't built up enough brand loyalty to tolerate Anthropic's behavior as they tightened usage quotas on the Pro plan to the point of becoming unusable for actual development.
(And sure, they probably don't care because they're losing money on that plan but again, OpenAI offers a plan at the same price point with vastly superior usage limits so I just canceled Claude, subscribed to Codex and moved on with my life. Anthropic's profit margin or lack thereof isn't my problem as a consumer when alternatives exist.)
> Anthropic's profit margin or lack thereof isn't my problem as a consumer when alternatives exist.
For now. When a company is deliberately trying to be profitable and not subsidized by VC money, I'm more likely to buy their product. I have no desire to live further in a world run by monopolies.
But Anthropic very much is subsidized by VC money, just like OpenAI. They just raised another $30 billion this week. I'm not sure how Anthropic managed to position themselves as the "good guy" in many people's minds, but from my vantage they're much more similar to OpenAI than they are different.
So for the time being I'll stick with the option that isn't trying to profit from long-term lock-in vs. competing on the technical merits of their product. The great thing about basing a workflow on a tool like OpenCode is that if OpenAI enshittifies Codex, I don't have to worry about being trapped and can easily pivot to an open source model, or Anthropic via the API, etc., depending on how the future turns out.
I've been paying for the ultra cheap Z.ai plan as my fallback for a while now anyway, or I can use my Github Copilot or Gemini AI Pro plan via OpenCode integrations (though the Gemini integration is probably the least stable of the four), so I certainly won't hesitate to drop OpenAI too if they give me sufficient cause.
No, you're not the only one. The outraged entitlement is pretty funny tbh. How dare they dictate that they'll only subsidize your usage if you use their software!!
Yes, I'm entitled because I didn't stick around paying for a subpar plan compared to their direct competitor OpenAI who supports my use case at the same price point.
Any reasonable person should be thanking Dario for lock-in that protects us from nefarious alternative clients and pledging to pay even more for the privilege of lower usage limits!
At this point subsidizing Chinese open-weights vendors by paying for them is just the right thing to do. Maybe they too might go closed-weights when they become SotA, but they're now pretty close and haven't done it.
The harness is effectively the agent's 'body'. Swapping the brain (model) is good, but if the body (tools/environment) is locked down or inefficient, the brain can't compensate. Local execution environments that standardize the tool interface are going to be critical for avoiding that lock-in.
> Like most things - assume the "20/100/200" dollar deals that are great now are going to go down the enshitification route very rapidly.
I don’t assume this at all. In fact, the opposite has been happening in my experience: I try multiple providers at the same time and the $20/month plans have only been getting better with the model improvements and changes. The current ChatGPT $20/month plan goes a very long way even when I set it to “Extra High” whereas just 6 months ago I felt like the $20/month plans from major providers were an exercise in bouncing off rate limits for anything non-trivial.
Inference costs are only going to go down from here and models will only improve. I’ve been reading these warnings about the coming demise of AI plans for 1-2 years now, but the opposite keeps happening.
> Inference costs are only going to go down from here and models will only improve. I’ve been reading these warnings about the coming demise of AI plans for 1-2 years now, but the opposite keeps happening.
This period also coincides with the frontier labs raising ever larger rounds. If Anthropic IPOs (which I honestly doubt), then we may get a better sense of actual prices in the market, as it's unlikely the markets will continue letting them spend more and more money each year without a return.
> The current ChatGPT $20/month plan goes a very long way
It sure does and Codex is great, but do you think they'll maintain the current prices after/if it eventually dominates Claude Code in terms of marketshare and mindshare?
I think we'll always have multiple options providing similar levels of service, like we do with Uber and Lyft.
Unlike Uber and Lyft, the price of inference continues to go down as datacenter capacity comes online and compute hardware gets more powerful.
So I think we'll always have affordable LLM services.
I do think the obsession with prices of the entry-level plans is a little odd. $20/month is nothing relative to the salaries people using these tools receive. HN is full of warnings that prices are going to go up in the future, but what's that going to change for software developers? Okay, so my $20/month plan goes to $40/month? $60/month? That's still less than I pay for internet access at home.
The issue is when the file changes between when the LLM reads it and when it writes to it. Using line numbers alone will clobber the file if that happens. The hashes prevent that from being an issue.
Blacksmiths pretty much existed until the ‘50s and ‘60s for most of the world, making bespoke tools and things. Then they just vanished, for the most part.
> Governments should be working on multi-generational scales. Not "fads" of what people want because they saw it in a movie or they grew up with it.
If the people disagree with you, then you're not talking about democracy, you're talking about "benevolent" authoritarianism ("we know what's good for you, and that's what you're going to get, like it or not").
No, what we need is not "democracy" as in "we get what every idiot thinks is good off the top of their head".
What we need is a representative democracy, where our representatives genuinely care about getting the best outcomes, so they enlist experts who actually know what they're talking about, and make policy based on that.
Yes, sometimes that will disagree with what the masses want—and in most of those cases, that means that our representatives need to enlist some communication experts to explain why it's actually best.
Democracy isn't an end in itself. It's supposed to be the means to an end of better governance for all. We don't have to accept things that are actively worse for us just because 50%+1 of the relevant voters think they're better right this second.
Since when is government a democracy? Roman times or something like that? Most, some, or at least a few government officials are elected. Pretty sure most are hired.
Since today. We elect our representatives and they are supposed to reflect the people's wishes as they go about their duties. Some city government staff might be hired employees, even most. But they are still fundamentally accountable to the elected representatives, and thus to the people.
They run an election based on a platform. You are voting for the person and the platform. They aren’t there to do your wishes, but to accomplish their agenda the people “agreed” was the best of all options that election cycle.
Sometimes this agenda is altruistic, like reducing crime. Sometimes it is populist, or social, or even fascist. Even then, elected officials are supposed to have limited power, not unlimited power. In some (many, depending on where you live) cases, they’re not even accountable to the people — the people can’t recall them, to remove them is a political act by other parts of government.
Yes. But a snowball is easy to push uphill when it’s small and sticks to the snow. As it gets bigger, you can still go uphill, you just have to be strategic about it (as mentioned in the story). But small snowballs can go uphill all day long, they just have to make it to the top of the hill before they get too big.
And to add to this: virtually every programming language allows you to define multiple entry points. So you can have your workers in the exact same codebase as your API, and even multiple API services. They can share code and data structures or whatever you need. So, if you do need this kind of complexity with multiple services, you don’t need separate repos, elaborate build systems, or dependency hell.
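A toy sketch of what that looks like in Python (all names hypothetical; in a real project `api_main` and `worker_main` would live in separate modules, each runnable via `python -m`, both importing the same shared code):

```python
from dataclasses import dataclass

@dataclass
class Job:
    """Shared data structure used by both services."""
    id: int
    payload: str

def handle(job: Job) -> str:
    """Shared business logic; both entry points call this."""
    return job.payload.upper()

def api_main() -> str:
    """Entry point 1: the API service."""
    return handle(Job(1, "from api"))

def worker_main() -> str:
    """Entry point 2: the background worker."""
    return handle(Job(2, "from worker"))

if __name__ == "__main__":
    print(api_main(), worker_main())
```

One repo, one build, two deployable processes; a change to `Job` or `handle` is picked up by both without any cross-repo versioning.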