Hacker News

If that's the case, then why are AI companies bleeding money?

Or: what are they bleeding money on?



They lose money on research and training and on offering model trials for free (a marketing expense).

That doesn't mean that when they do charge for the models - especially via their APIs - they are serving them at a unit-cost loss.


Depends on the vendor and how they charge. OpenAI loses money on subscriptions [1]. Maybe the people who pay 200 bucks for a subscription are exactly the kind of people who will try to get the maximum out of it, and if you go down to the 20 bucks tier you will find more of the type of user who pays but doesn't use it all that much?

I would presume that companies selling compute for AI inference either make some money or at least break even when they serve a request. But I wouldn't be surprised if they are subsidizing this cost for the time being.

[1]: https://finance.yahoo.com/news/sam-altman-says-losing-money-...


That "losing money on subscriptions" story is a one-off Sam Altman tweet from January 2025, when they were promoting their brand new $200 account and the first version of Sora. I wouldn't treat that as a universal truth.

https://twitter.com/sama/status/1876104315296968813

"insane thing: we are currently losing money on openai pro subscriptions!

people use it much more than we expected"


Sam Altman is a bullshitter. A liar cares about the truth and attempts to hide it. A bullshitter doesn't care if something is true or false, and is just using rhetoric to convince you of something.

I don't doubt that it is true that they lose money on a $200 subscription, because the people who pay $200 are probably the same people who will max out usage over time, no matter how wasteful. Sam Altman was framing it in a way to say "it's so useful people are using it more than we expected!", because he is interested in having everyone believe that LLMs are the future. It's all bullshit.

If I had to guess, they probably at least break even on API calls, and might make some money on lower tier subscriptions (i.e.: people that pay for it but use it sparingly on an as-needed basis).

But that is boring, and hints at limited usability. Investors won't want to burn hundreds of billions in cash for something that may be sort of useful. They want destructive amounts of money in return.


Ok, fine, but I think it's disingenuous to only mention energy expenditure. There's also infrastructure, necessary re-training and R&D - of which we don't know how much must be spent just to stay in the market.


Competitive, venture-backed companies in a high-growth market losing money once you take R&D into account is how the tech industry has worked for decades.

Shopify, Uber and Airbnb all hit profitability after 14 years. Amazon took 9.


The companies mentioned didn't require the sort of R&D AI does.

And this isn't something that will go away anytime soon. OpenAI for instance is projecting that in 2030 R&D will still account for 45% of their costs. They think they'll be profitable by that time, or so they're telling investors.


And none of those companies lost anywhere near as much money as "AI" is currently losing, and will continue to lose. Just because they become profitable 5 or 10 or 15 years from now does not mean that they will be able to pay off the hundreds of billions to trillions spent getting them there anytime soon. And for what? AI slop ruining every fucking thing while heating the planet ever faster? Sounds like a great future we have ahead with "AI".



On building the next new feature/integration/whatever? I feel like this should be a rhetorical question, but the fact that it was asked makes me feel it is not...


btw this was DeepSeek-V3.2. If I'd been using Claude Sonnet 4.5, we'd be looking at a $2000 bill instead.
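For a sense of where a gap like that could come from, here's a back-of-envelope sketch. The per-million-token prices and the token counts below are illustrative placeholders I'm assuming, not actual quotes from either provider; check their pricing pages for real numbers.

```python
# Rough agent-run cost comparison. All prices and token counts are
# ASSUMPTIONS for illustration, not real provider quotes.
PRICES_PER_M_TOKENS = {
    # model: (input $/1M tokens, output $/1M tokens) -- placeholders
    "deepseek-v3.2": (0.28, 0.42),
    "claude-sonnet-4.5": (3.00, 15.00),
}

def run_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one run at the assumed per-million-token rates."""
    inp, out = PRICES_PER_M_TOKENS[model]
    return (input_tokens * inp + output_tokens * out) / 1_000_000

# Long agentic sessions re-send the growing context on every step,
# so input tokens dominate the bill.
usage = dict(input_tokens=500_000_000, output_tokens=20_000_000)
for model in PRICES_PER_M_TOKENS:
    print(f"{model}: ${run_cost(model, **usage):,.2f}")
```

Under these made-up rates the same workload costs roughly an order of magnitude more on the pricier model, which is the kind of spread the comment above is describing.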


Okay, yikes. Good thing that you even can set up those controls, unlike with that other company in the compute infrastructure business.



