
Playing word games by defining "inference" narrowly as the cost per token, rather than as the dollars going to your LLM API provider per customer/user/use/whatever, is kinda silly?

The cost of inference -- i.e., the dollars that go to your LLM API provider -- has increased, and it appears likely to keep increasing.

see also https://ethanding.substack.com/p/ai-subscriptions-get-short-...



> The cost of inference -- ie $ that go to your llm api provider

This is the crux of it: when talking about "the cost of inference" for the purposes of the unit economics of the business, what's being discussed is not what they charge you. It's their COGS.

That's not word games. It's being clear about what's actually under discussion.

Rising prices are worth talking about too! But they're a different thing. And what you're describing here is total spend, not individual prices going up or down. That's a third thing entirely!

You can't come to agreement unless you agree on what's being discussed.
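To make the three quantities concrete, here is a small sketch with entirely made-up, illustrative numbers (none come from the thread): the provider's per-token COGS and the per-token price can both fall while a customer's total spend rises, simply because token consumption grows faster than prices drop.

```python
# Hypothetical figures, purely for illustration -- not real provider numbers.

# 1) Provider's cost of goods sold per million tokens (their unit economics).
cogs_per_mtok = {"2023": 30.00, "2025": 3.00}    # falling

# 2) Price the provider charges per million tokens (what you pay per unit).
price_per_mtok = {"2023": 60.00, "2025": 10.00}  # also falling

# 3) Tokens a typical customer consumes per month, in millions
#    (rising fast with agents and long contexts).
tokens_per_user_mtok = {"2023": 1, "2025": 50}

def total_spend(year: str) -> float:
    """Total dollars per user per month paid to the API provider."""
    return price_per_mtok[year] * tokens_per_user_mtok[year]

for year in ("2023", "2025"):
    print(year,
          "COGS/Mtok:", cogs_per_mtok[year],
          "price/Mtok:", price_per_mtok[year],
          "total spend/user:", total_spend(year))
```

Under these assumed numbers, the per-token price drops 6x while total spend per user rises from $60 to $500 a month -- which is why the three quantities have to be kept separate before anyone can agree on whether "inference" got cheaper.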



