
Because of unprofitability? ARR and growth are very high, and margins are either good or can soon become good.

Is the claim that coding agents can't be profitable?


> margins are either good or can soon become good.

Their margins are negative and every increase in usage results in more cost. They have a whole leaderboard of people who pay $20 a month and then use $60,000 of compute.

https://www.viberank.app


That site seems to date from the days before there were real usage limits on Claude Code. Note that none of the submissions are recent. As such, I think it's basically irrelevant - the general observation is that Claude Code will rate limit you long, long before you can pull off the usage depicted, so it's unlikely you can be massively net-profit-negative on Claude Code.


Do you mind giving a bit more detail in layman's terms about this, assuming the $60k per subscriber isn't hyperbole? Is that the total cost of the latest training run amortized per existing subscriber, plus the inference cost to serve that one subscriber?

If you tell me to click the link, I did, but backed out because I thought you'd actually be willing to break it down here instead. I could also ask Claude about it I guess.


It counted up the tokens that users on “unlimited” Max/Pro plans consumed through CC, and calculated what it would cost to buy that number of tokens through the API.
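
The arithmetic is just (tokens consumed) x (API price). A rough sketch in Python, with illustrative per-million-token prices (not Anthropic's actual rates) and a made-up heavy month:

    # Viberank-style calculation: the API-equivalent cost of tokens
    # consumed on a flat-rate plan. Prices per million tokens are
    # illustrative, not Anthropic's actual rates.
    INPUT_PER_MTOK = 3.00
    OUTPUT_PER_MTOK = 15.00

    def api_equivalent_cost(input_tokens, output_tokens):
        """What the same tokens would cost if bought through the API."""
        return ((input_tokens / 1e6) * INPUT_PER_MTOK
                + (output_tokens / 1e6) * OUTPUT_PER_MTOK)

    # A hypothetical heavy month: 2B input tokens, 300M output tokens
    cost = api_equivalent_cost(2_000_000_000, 300_000_000)
    print(f"${cost:,.0f} of API-equivalent usage vs. a $200 Max plan")
    # -> $10,500 of API-equivalent usage vs. a $200 Max plan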

$60K in a month was unusual (and possibly exaggerated); amounts in the $Ks were not. For which people would pay $200 on their Max plan.

Since that bonanza period Anthropic seem to have reined things in, largely through (obnoxiously tight) weekly consumption limits for their subscription plans.

It’s a strange feeling to be talking about this as if it were ancient history, when it was only a few months ago… strange times.


So they're now putting in aggressive caps, and the other two paths they have to close the gap are driving their cost per token way down and/or getting the user to pay many multiples of their current subscription. That's not to say it's odd for any business to expect its costs to decrease substantially and its pricing power to increase, but even if the gap is "only" low thousands against $200, that's... significant. Thanks for the insight.


> margins are either good or can soon become good

This is always the pitch for money-losing IPOs. Occasionally, it is true.


let's see them then


This whole blog post is seemingly about Google, not about the user. "Why We Built Antigravity" etc. "We want Antigravity to be the home base for software development in the era of agents" - cool, why would I as the user care about that?


You wouldn’t. It’s made to suck up investor money and show that Google is doing something, not to actually bring value.

My crystal ball says it will be shut down next year.


Most AI products are not for the end user; they are just signals to shareholders and potential investors that the company is on top of the hype.


There is also a mechanism inside Google that rewards teams that launch new products more than teams that actually maintain existing ones.


This kind of cynicism is wild to me. Of course most AI products (and products in general) are for end users. Especially for a company like Google: they need to do everything they can to win the AI wars, and that means winning adoption for their AI models.


http://killedbygoogle.com/ - most Google products are for the temporary career advancement of some exec or product lead.

Their only real product is advertising; everything else is a pretense to capture users' attention and behaviors that they can auction off.


This is different. AI is an existential threat to Google. I've almost stopped using Google entirely since ChatGPT came out. Why search for a list of webpages which might have the answer to your question and then manually read them one at a time when I can instead just ask an AI to tell me the answer?

If Google doesn't adapt, they could easily be dead in a decade.


That's funny. I stopped using ChatGPT completely and use Gemini to search, because it actually integrates nicely with Google, as opposed to ChatGPT, which for some reason messes up sometimes (likely due to being blocked by websites, while no one dares block Google's crawler lest they be wiped off the face of the internet). For coding it's Claude (and maybe now Gemini as well). I see no need to use any other LLMs these days. Sometimes I test the open-source ones like DeepSeek or Kimi, but just as a curiosity.


If web pages don't contain the answer, the AI likely won't either. But the AI will confidently tell me "the answer" anyway. I've had atrocious issues with wrong or straight-up invented information, to the point that I must search up every single claim it makes on a website.

My primary workflow is asking the AI vague questions to see whether it successfully explains information I already know or starts to guess. My average chat length is around 3 messages, since I create new chats with a rephrased version of the question to avoid context poisoning. Asking three separate instances the same question in slightly different ways regularly gives me 2 different answers.

This is still faster than my old approach of finding a dry ground-truth source like a standards document, book, reference, or datasheet, and chewing through it for everything. Now I can sift through 50 secondary sources for the same information much faster, because the AI gives me hunches and keywords to Google. But I will not take a single claim from an AI seriously without a link to something that says the same thing.


Given how embracing AI is an imperative in tech companies, "a link to something" is itself likely to be a product of LLM-assisted writing. The entire concept of checking against the internet becomes more and more recursive with every passing moment.


Google is still looking for investors?


Of course, Alphabet exists to give returns to their shareholders.


I know Google is quick to shut things down but these ultra-cynical ultra-skeptical HN takes are so tiresome at this point.


I wonder why that is.


This is what Google is like though. It is practically part of their corporate DNA.


I do not believe that Google Antigravity is aimed at wooing investors. I believe it is intended to be a genuine superior alternative to Cursor and Kiro etc. and is attempting to provide the best AI coding experience for the average developer.

Most of the other people (so far) in this sub-thread do not think this. They essentially have a conspiratorial view of it.


> I do not believe that Google Antigravity is aimed at wooing investors.

There is no evidence to support any other motive.

Any experienced (as in, 10+ years) developer knows better than to trust google with dev tools.


What dev tools has Google shut down?

Colab is still going strong. Chrome inspector is still going strong.

They've never released a full-fledged IDE before, have they? (I don't count the Apps Script editor as one, but that's been around for a long time as well.)

I think it's much more likely that Google believes this is the future of development and wants to get in on the ground floor. As they should.


> Google believes this is the future of development

This is hardly possible, as this is definitely not the future of development, which is obvious to the developers who created it. Or to any developer, for that matter.

This is a stakeholders' feature.


This worldview is so bizarre and uncharitable that I'd be rather concerned to hear what any of your takes on politics might be.

I've played with Antigravity for the past 48 hours for lots of different tasks. Is it revolutionizing development for me? No. Do I think they want it to do that and are working extremely hard to try to achieve that? I think the answer is very obviously: of course. Will it maybe get closer to that within a few months or a year? Maybe.


Agree to disagree, I guess. What you think is obvious, I think is false. And I think the rapidly growing success of Cursor is the proof of that. But I guess you must think Cursor is just a fad or something, since you don't see why Google would want to legitimately compete with it?


Cursor is obviously a fad (unlike Copilot - I'm not at all an AI hater, quite the opposite) and perhaps Google needs to present something to shareholders that will pretend to be competing.

None of that matters for actual development work.


Well, just so you know, there are lots of us who think Cursor is not a fad, and see that Google realizes this as well, and is genuinely competing with it.

A lot of people find it's actually quite valuable for "actual development work". If you want to ignore all that, then I guess go ahead.

But just know that what you're claiming is "obvious", is clearly not. There seems to be large disagreement over it, so it is objectively not obvious, but rather quite debatable.


Isn't it great that in this case we don't have to fight, because the judgment of history will reveal itself quite soon on this matter? :-)


I wish I could save this comment in a way that we would both come back to it in 10 years ;)


!remindme 5 years


Cursor is very obviously not a fad (Copilot I'd say actually is!), nor are any of the Cursor clones/competitors. Of course it matters very much for actual development work. I feel like everything you're writing in this sub-thread is essentially the opposite of what exists in reality.


I would say it's the opposite. I believe there is zero evidence to support your allegation.


> I do not believe that Google Antigravity is aimed at wooing investors.

I think the comment you’re replying to was addressing the “shutting down” part, not the “investors” part.


Exactly.

Also, I was alluding to the way their promotion policy encourages people to start rather than maintain projects.


Investor money?

Google is highly profitable. It's not looking for investment, it's the one investing.

Maybe you are confusing it with OpenAI?


I think they are referring to public market investors vs private investors. Meaning, their stock valuation.


funny my magic 8 ball says the same thing!


After using Google AI studio, Google Vertex, and Google Gemini Chat I honestly can't wait to use Google Antigravity!

edit: Also Jules...

snark off:

I think the Google PMs should have coffee together and see if all of this sprawl makes any sense.


It does?

Google AI studio is their developer dashboard.

Google Vertex is their equivalent of Amazon Bedrock.

Google Gemini Chat is their ChatGPT app for normies.

Google Antigravity is their Cursor equivalent.


I agree that what you’ve listed makes sense as a product portfolio.

But AI Studio is getting vibe coding tools. AI Studio also has an API that competes with Vertex. They have plugins for existing IDEs that expose Chat, Agents, etc. They also have Gemini CLI for when those don't work. There is also Firebase Studio, a browser-based IDE for vibe coding. Jules, a browser-based code-agent orchestration tool. Opal, a node-based tool to build AI… things? Stitch, a tool to build UIs. Colab with AI for a different type of coding. NotebookLM for AI research (many features now available in the Gemini app). AI Overviews and AI Mode in Search, which now feature a generic chat interface.

That's just the new stuff, not including all the existing products (Gmail, Home) that have Gemini added.

This is the benefit of a big company vs startups. They can build out a product for every type of user and every user journey, at once.


Don't forget Gemini CLI

In another 2 years we'll probably be back to just "Google" as a digital agent that can do any research, creative, or coding task you can imagine.


I concede.


> Google Vertex is their equivalent of Amazon Bedrock

Well, that clears that up.


In "real world" you don't use OpenAI or Anthropic API directly—you are forced to use AWS, GCP, or Azure. Each of these has its own service for running LLMs, which is conceptually the same as using OpenAI or Anthropic API directly, but with much worse DX. For AWS it's called Bedrock, for GCP—Vertex, and for Azure it's AI Foundry I believe. They also may offer complementary features like prompt management, evals, etc, but from what I've seen so far it's all crap.


And in practice, when I needed to use one of their models for a small project, it turned out that the only sane way was to go via OpenRouter…


Also gemini-cli (terrible)

Google ADK (agent development kit, awesome)


They also launched a coding agent, Jules: https://jules.google/


Jules is the first and only one to add a full API, which I've found very beneficial. It lets you integrate agentic coding features into web apps quite nicely. (In theory you could always hack your own thing together with Claude Code or Codex to achieve a similar effect but a cloud agent with an API saves a lot of effort.)


Google ADK is real nice and gives you an API as well (also web browser and terminal prompt)


Jules is nifty. Weirdly heavy on the browser CPU.


Wasn't there something called Bard at some point?


Bard is the old name of Gemini


you forgot jules


Everybody forgets Jules.


Great point!

Remember it took me a while early in my career to change my resume away from saying "I want to do this at my next job and make a lot of money" and toward "here is how I can make money and save costs for your company".

Google didn't learn that lesson here. They are describing why our using Antigravity is good for Google, not why using Antigravity is good for us.


More accurately, it should be neither about Google nor about the user, but about the product. Describe what the product is and does, don’t make assumptions about the user, and let the user be the judge of it.


I swear, most of these pages exist to sell this to companies so they can force it onto developers.

The whole webpage looks like something from Apple.


All these companies were built by self-referential narcissists, and it seems to be their culture at the core.


One interesting finding in the Stockfisher data is that a lot of these business pivots are actually laid out by managers years in advance in their 10-K and 10-Q filings.

Yes, managers are not good forecasters. But they do get certain things right. And if you figure out the patterns of what types of manager promises tend to play out, and assess them individually for their reliability, you can reason about these business model changes decently well.


Constant iteration, mostly!

The most interesting aspect of this is backtesting. Quant models get run on past data to see if their predictions work.

When you use LLM agents, though, you run into their memorized knowledge of the world. And then there's the fact that they do their research on the open internet. That makes backtesting hard - but not impossible.

We wrote about how we do our pastcasting validation here: https://stockfisher.app/backtesting-forecasts-that-use-llms
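
For a flavor of what pastcasting looks like in the abstract, here's a generic sketch (an illustration of the concept, not our actual pipeline; every name in it is hypothetical):

    # Generic pastcasting harness: the model only sees material
    # published before a cutoff date, then its forecast is scored
    # against what actually happened. All names are hypothetical.
    def pastcast(question, corpus, cutoff, outcome, ask_llm):
        visible = [d for d in corpus if d["published"] < cutoff]
        prompt = (f"Today is {cutoff}. Using only the sources below, "
                  f"give a probability for: {question}\n\n"
                  + "\n---\n".join(d["text"] for d in visible))
        p = ask_llm(prompt)  # probability in [0, 1]
        return (p - (1.0 if outcome else 0.0)) ** 2  # Brier score

The catch, as above: the model may already "know" the outcome from pretraining, so date-filtering the sources isn't sufficient on its own.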


I agree, but it's funny to think that Project Chauffeur (as it was known then) was doing completely driverless freeway circuits in the Bay Area as far back as 2012! Back when they couldn't do the simplest things with traffic lights.

I think anyone back then would be totally shocked that urban and suburban driving launched to the public before freeway driving.


When it started, from what I've heard, the design goal was part-time self-driving: let the human driver do the more variable things on surface streets, let the computer do the consistent things on highways, and prompt the user to pay attention 5 miles before the exit. They found that the model of part-time automation wasn't feasible, because humans couldn't consistently take control in the timeframes needed.

So then they pivoted to full-time automation with a safe stop for exceptions. That model isn't well suited to starting with highway driving. There are some freeway-routed mass transit lines, but for the most part people don't want to be picked up and dropped off at the freeway. On many stretches of freeway there's not a good place to stop and wait for assistance, and automated driving will need more assistance than normal driving. So it made a lot of sense to reduce scope to surface-street driving.


If you understand physics, it's easy. When you double the speed, you quadruple the kinetic energy. So you're definitely going to do slower speeds first, even if it's harder to compute.
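
Concretely (basic kinematics, nothing specific to self-driving):

    E = ½·m·v²  →  E(2v) = ½·m·(2v)² = 4·E(v)

so a 70 mph impact carries roughly (70/30)² ≈ 5.4 times the energy of a 30 mph one.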


Hi Parag, congrats on the launch. We'll try this out at FutureSearch.

I agree there is a need for such APIs. Using Google or Bing isn't enough, and Exa and Brave haven't clearly solved this yet.


Hi, author here. Sorry I was unclear; this article does make more sense if you've listened to the Dwarkesh podcast linked and read AI 2027, also linked.

I realize now that it was presumptuous to assume people had done both of these things.


And to actually answer your question:

> Why would Karpathy's view be different for AI and non-AI-experts?

People who understand AI can engage with the substance of his claims about reinforcement learning, continuous learning, and the 9s of reliability.

For people who don't, the article suggests thinking about AI as some black-box technology, and asking questions about base rates: how long does adoption normally take? What do the companies developing the technology normally do?

> It does not even give a statement about the reasoning behind why Karpathy said getting to https://ai-2027.com is unlikely.

That's the substance of the podcast; Karpathy justifies his views fairly well and at length.

> It also does not clearly define what AI 2027 is?

Dwarkesh covered AI 2027 when it came out, but for those who don't know, it's a deeply researched scenario of runaway AI that effectively destroys humanity within 2-3 years of publication. This is what I mean by "short timelines".


Thanks a lot for helping to explain rather than taking my comments personally!


This seems as good a place as any for a mini obituary.

I'm 6 years older than Danya, and we shared the same beloved chess coach in the Bay Area. I played him in a tournament game when I was 17 and he was 11, at the Mechanics Club in SF. I was an NM, and he held me to a draw. (Afterward he told me my position was better when we agreed to a draw, which was news to me!)

Around that time Danya won the World Under 12 Championship. Americans almost never win those events, and it was a big big deal in the American chess community.

But to me, most impressive was when in 2007, at age 12, in 6th or 7th grade, he won a much easier tournament, the California High School championship. I had won it the previous year, as an 11th grader - my crowning achievement. We all knew then that Naroditsky was a generational talent, but it was something special that this child - very tall for his age, but still oh so young - beat up all the serious high schooler competitors.

He then went to Stanford, and took an introductory CS course taught by my brother. Everything I heard indicated he was an exceptional contribution to Stanford's culture. He had such wide interests and curiosity, and became a history major. He probably was the most erudite chess player of his generation, reading (and writing!) books at a huge clip.

I remember vividly in his early streaming days, long before Danya became an internet chess celebrity, he was taking challenges while I was watching, so I logged in to the site and played him. I managed to beat him in a blitz game in front of all of his viewers. He was mad! I'm a strong blitz player but he is world-class, consistently a top ~10 blitz player in the world for the last 10 years. (I used to watch him on the old terminal-like chess server, the Internet Chess Club, under the handle "Danya", as he destroyed everyone while still a preteen and largely unknown.)

I don't want to add to the speculation to what happened to him. Suffice to say, I am not convinced by the story people are jumping to.

He will be deeply missed, and he will not be forgotten. He was absolutely unique and a gem of the chess world. Farewell, Danya.


Predicted by the AI 2027 team in early April:

> Mid 2025: Stumbling Agents. The world sees its first glimpse of AI agents.

> Advertisements for computer-using agents emphasize the term “personal assistant”: you can prompt them with tasks like “order me a burrito on DoorDash” or “open my budget spreadsheet and sum this month’s expenses.” They will check in with you as needed: for example, to ask you to confirm purchases. Though more advanced than previous iterations like Operator, they struggle to get widespread usage.


Predicting 4 months into the future is not really that impressive.


Especially when the author personally knows the engineers working on the features, and routinely goes to parties with them. And when you consider that Altman said last year that “2025 will be the agentic year”


It was common knowledge that big corps were working on agent-type products when that report was written. Hardly much of a prediction, let alone any sort of technical revolution.


The big crux of AI 2027 is its claims about exponential technological improvement. "Agents" are mostly a new frontend to the same technology OpenAI has been selling for a while. Let's see if we're on track at the start of 2026.


The same technology that *checks notes* has been in the wild for 7 months?


What is your point? We’re talking about the ai 2027 predictions here, which were made 4 months ago. 4 is “checks notes” less than 7


The point is that dismissing 4 months as an insufficient timeline along which to prove out one's ability to predict a sequence of events is dumb when the rate of progress is incredibly fast.


They predicted something completely unsurprising. Like "we'll see clouds next week".


You're calling me dumb? Kind of rude


They aren't predicting any new capabilities here: all the things they mentioned already existed in various demos. They are basically saying that the next iteration of Operator is unlikely to be groundbreaking, which is rather obvious. I.e., "a sudden breakthrough is unlikely" is just common sense.

Calling it "The world sees its first glimpse of AI agents" is just bad writing, in my opinion. People have been making basic agents for years; e.g., Auto-GPT and Baby-AGI were published in 2023: https://www.reddit.com/r/singularity/comments/12by8mj/i_used...

Yeah, those had a much higher error rate, but what's the principal difference here?

It seems a rather weird "it's an agent when OpenAI calls it an agent" appeal to authority.


What's the base rate of human therapists giving dangerous advice? Whole schools, e.g. psychotherapy, are possibly net dangerous.

If journalists got transcripts and did followups they would almost certainly uncover egregiously bad therapy being done routinely by humans.


Therapists have professional standards that include a graduate degree and thousands of hours of supervised practice. Maybe a few bad ones fall through the cracks, but given those standards I would be willing to bet that most therapists are professional and do not give 'dangerous' advice, or really any advice at all, if they are following their professional standards.


Therapy gone wrong led to wide-scale witch hunts across the U.S. in the 1980s that dwarfed the Salem witch trials. A huge number of therapists had come to believe the now mostly debunked "recovered memory" theory and used it to construct the idea that there were networks of secret Satanists across the U.S. that needed to be weeded out. Countless lives were destroyed. I've yet to see therapy as a profession come to terms with the damage it did.

"These people are credentialed professionals so I'm sure they're fine" is an extremely dangerous and ahistorical position to take.


As somebody who has been through various forms of psychotherapy, knows trained professional psychotherapists, knows highly educated personnel at the relevant educational institutions, etc., my very mild summary when reading generalized statements like "Whole schools, e.g. psychotherapy, are possibly net dangerous." is:

Citation needed.

Also: psychotherapy is not a school but is divided into many different schools.


human therapists don't give advice


Well if she ain't human, what is she?


Someone raises safety concerns about LLMs' interactions with people with delusions, and your takeaway is that maybe the field of therapy is actually net harmful?

