A lot of attention goes to the huge investment rounds and the cash burned training foundation models (the trillions Sam Altman has mentioned), but not enough analysts explain the unit economics you need to understand the business.
If you wonder why investors still think it's a good idea to part with their money, I tried to break down the unit economics and the long-term potential to show how all this could make sense.
Partial TL;DR
- Cash burn is not a fair approximation of COGS. OpenAI spends mostly on R&D, much like a pharmaceutical company.
- GPT-4o could be making a gross margin above 12.8%.
- GPT-OSS 120B could be making an 89% gross margin. It is 90% cheaper than GPT-4o mini with equivalent reasoning and 3x faster inference.
- GPT-5's gross margin most likely falls somewhere between 12.8% and 89%.
Full breakdown: https://medium.com/@brenoca/openais-road-to-profitability-8c7231f8494b
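For intuition, here's a minimal back-of-the-envelope sketch of what those gross margin bounds mean. The per-million-token prices and serving costs below are made-up assumptions chosen to land on the article's two figures, not OpenAI's actual numbers:

```python
# Back-of-the-envelope gross margin per million tokens served.
# All figures are illustrative assumptions, not OpenAI's real numbers.

def gross_margin(price_per_mtok: float, cogs_per_mtok: float) -> float:
    """Gross margin = (revenue - cost of serving) / revenue."""
    return (price_per_mtok - cogs_per_mtok) / price_per_mtok

# Hypothetical closed model: $10.00 billed per million tokens,
# $8.72 of inference compute to serve them -> the low bound.
print(f"{gross_margin(10.00, 8.72):.1%}")  # 12.8%

# Hypothetical open-weights model served cheaply: $1.00 billed,
# $0.11 to serve -> the high bound.
print(f"{gross_margin(1.00, 0.11):.1%}")   # 89.0%
```

The key point is that COGS here is only the marginal cost of inference; the training runs that dominate the cash burn sit in R&D, which is how the margins can look healthy while the company loses money overall.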
This stood out to me:
> ChatGPT 5 and ChatGPT OSS are here with the purpose of profitability
This is economically good, but it's also a signal that their capacity for moonshots is stalling, whether through lack of funding or lack of innovation. They're now pivoting to a more sustainable model.
Models have shown diminishing returns over the last two generations: GPT-3.5 to GPT-4o to GPT-5.
Doubling parameter size does not double model ability/quality.
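As a rough illustration, the scaling laws reported in the literature (Kaplan et al., 2020) have loss falling as a power law in parameter count with a small exponent; the sketch below uses that published ballpark exponent with an arbitrary constant, purely to show the shape of the curve:

```python
# Illustrative power-law scaling: loss(N) = C * N**(-alpha).
# alpha ~ 0.076 is the rough parameter-scaling exponent from
# Kaplan et al. (2020); C = 1.0 is an arbitrary constant.
C, ALPHA = 1.0, 0.076

def loss(n_params: float) -> float:
    return C * n_params ** (-ALPHA)

for n in (1e9, 2e9, 4e9):  # doubling parameter count twice
    print(f"{n:.0e} params -> loss {loss(n):.4f}")

# Each doubling only cuts loss by ~5% (2**-0.076 ~ 0.95),
# which is why twice the parameters is nowhere near twice the quality.
```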
In the long term, models will become commodities, interchangeable with competitors' and open-source models. There's no moat; it's unlikely anyone will sustainably have a hugely better model than the next company.
Claude Code is already showing that you can win in a niche with specialization.
I expect 3 things:
1. We won't see massive jumps in model performance again for a while without new techniques.
2. Model makers will specialize in specific use cases, the way Claude Code has.
3. Moonshot projects like Stargate won't deliver outsized returns; the step change from the o3/o4 models to whatever comes next won't be groundbreaking, partly because of diminishing returns and partly because the average person is bad at explaining what they want an LLM to do.