If that were genuinely happening here - if Anthropic were selling inference for less than the power and data center costs needed to serve those tokens - it would indeed be a very bad sign for their health.
Those are estimates. Notice they didn’t assume 0% or a million %. They chose numbers that are a plausible approximation of the true unknown values - which is exactly what an estimate is.
This is a pretty silly thing to say. Investment banks suffer zero reputational damage when their analysts get this sort of thing wrong. They don’t even have to care about accuracy, because there will never be a way to check this number - even if anyone wanted to go back and grade their assumptions, which also never happens.
I've seen a bunch of other estimates / claims of a 50-60% margin for Anthropic on serving. This was just the first credible-looking link I found that I could drop into this discussion.
The leaders of Anthropic, OpenAI and DeepMind all hope to create models that are much more powerful than the ones they have now.
A large portion of the many tens of billions of dollars they have at their disposal (OpenAI alone raised $40 billion in April) is probably going toward this ambition - basically a huge science experiment. For example, when an AI lab offers an individual researcher a $250 million pay package, it can only be because they hope that researcher can help them with something very ambitious: there's no need to pay that much for a single employee to help reduce the cost of serving the paying customers they have now.
The point is that you can be right that Anthropic is making money on the marginal new user of Claude, but Anthropic's investors might still get soaked if the huge science experiment does not bear fruit.
> their investors might still take a bath if the very-ambitious aspect of their operations do not bear fruit
Not really. If the technology stalls where it is, AI still captures a sizable chunk of the dollars previously paid to coders, transcribers, translators and the like.
They had pretty drastic price cuts on Opus 4.5. It's possible they're now selling inference at a loss to gain market share, or at least that their margins are much lower. Dario claims that all their previous models were profitable (even after accounting for research costs), but it's unclear that there's a path to keeping their previous margins and expanding revenue as fast or faster than their costs (each model has been substantially more expensive than the previous model).
It wouldn't surprise me if they found ways to reduce the cost of serving Opus 4.5. All of the model vendors have been consistently finding new optimizations over the last few years.
I've been wondering about this generally... Are the per-request API prices I'm paying profitable for them, or served at a loss? My billing would suggest they are not making a profit on the monthly fees (unless there are a bunch of enterprise accounts in group deals going unused - I think I'm one of those).
But those AI/ML researchers (a.k.a. LLM optimization staff) are not cheap. Their salaries have skyrocketed, and some are being fought over like top-tier soccer stars and actors/actresses.
The bet, (I would have thought) obviously, is that AI will be a huge part of humanity’s future, and that Anthropic will be able to get a big piece of that pie.
This is (I would have thought) obviously different from selling dollars for $0.50, which is a plan with zero probability of profit.
Edit: perhaps the question was meant to be about how Bun fits in? But the context of this sub-thread has veered toward how to reach $7 billion in revenue.
The bet is that revenue keeps growing and unit economics turn positive (which you can't do if you sell a dollar, since no one will give you more than a dollar for it)
You are saying that you can raise $7b of debt at a double-digit interest rate. I am doubtful. And while $7b is not a big number, the entire Madoff scam was only ~$70b, accumulated over many years.
People here are making fun of this statement / being sarcastic, but it's a totally legitimate suggestion. If you know in advance that you are going to build something where performance matters, strongly consider using something other than one of the slowest languages of them all.
It's kinda funny how uv is written in Rust and many Python libraries where performance is expected to matter (NumPy, Pandas, PyTorch, re, etc.) are implemented in C or C++. Even if you call into fast code from Python you still have to contend with the GIL which I find very limiting for anything resembling performance.
Python's strong native story has always been one of its biggest draws: people find it ironic that so much of the Python ecosystem is native code, but it plays to Python's strength (native code where performance matters, Python for developer joy/ergonomics/velocity).
> Even if you call into fast code from Python you still have to contend with the GIL which I find very limiting for anything resembling performance.
It depends. A lot of native extension code can run without the GIL; the normal trick is to "detach" from the GIL for critical sections and only reacquire it once Python needs to see your work. PyO3 has a nice collection of APIs for holding/releasing the GIL and for detaching from it entirely[1].
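You can see the effect from the Python side without writing any Rust (a toy sketch, not PyO3 itself): hashlib is one of the stdlib's native modules that releases the GIL while hashing large buffers, so plain threads actually scale:

```python
import hashlib
import time
from concurrent.futures import ThreadPoolExecutor

# Eight 32 MiB buffers; CPython's hashlib releases the GIL while
# hashing large inputs, so ordinary threads can hash in parallel.
chunks = [bytes(32 * 1024 * 1024) for _ in range(8)]

def digest(buf: bytes) -> str:
    return hashlib.sha256(buf).hexdigest()

start = time.perf_counter()
for c in chunks:
    digest(c)
print(f"serial:   {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    list(pool.map(digest, chunks))
print(f"threaded: {time.perf_counter() - start:.2f}s")
```

A Rust extension that detaches via PyO3 for its critical section gets the same kind of threaded speedup.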
I didn't know about detaching from the GIL... I'll look into that.
> native code where performance matters, Python for developer joy/ergonomics/velocity
Makes sense, but I guess I just feel like you can eat your cake and have it too by using another language. Maybe in the past there was a serious argument to be made for the productivity benefits of Python, but I feel like that is less and less the case. People may slow down (a lot) writing Rust for the first time, but writing JavaScript or Groovy or something should be just as simple, yet more performant, with multi-threading out of the box, and generally shouldn't require you to reach for other languages for performance-critical sections as much. The primary advantage Python has in my eyes is that there are a lot of libraries. And why are there a lot of libraries written in Python? I think it's because Python is the number 1 language taught to people who aren't specifically pursuing computer science / engineering or a closely related field.
Yes, I think Python is excellent evidence that developer ecosystems (libraries, etc.) are paramount. Developer ergonomics are important, but I think one of the most interesting lessons from the last decade is that popular languages/ecosystems will converge onto desirable ergonomics.
Python is the ultimate (for now) glue language. I'd much rather write a Python script to glue together a CLI utility & a C library with a remote database than try to do that all in C or Rust or BASH.
In my analysis, the lion's share of uv's performance improvement over pip is not due to being written in Rust. Pip just has horrible internal architecture that can't be readily fixed because of all the legacy cruft.
And for numerical stuff it's absolutely possible to completely trash performance by naively assuming that C/Rust/Fortran etc. will magically improve everything. I saw an example in a talk once where it superficially seemed obvious that the Rust code would implement a much more efficient (IIRC) binary search (at any rate, some sub-linear algorithm on an array), but making the data available to Rust, as a native Rust data structure, required O(N) serialization work.
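A toy version of the same trap (not the example from the talk, just the same shape) is easy to reproduce with NumPy's binary search:

```python
import numpy as np

haystack = list(range(1_000_000))  # plain Python list, already sorted
needles = range(1_000)

# Anti-pattern: each call converts the entire list to an array first,
# so the O(log N) binary search is swamped by O(N) conversion work.
for n in needles:
    np.searchsorted(haystack, n)

# Convert once, search many times: the O(N) cost is paid a single time.
arr = np.asarray(haystack)
for n in needles:
    np.searchsorted(arr, n)
```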
> So they should be able to get similar results in Python then?
I'm making PAPER (https://github.com/zahlman/paper) which is intended to prove as much, while also filling some under-served niches (and ignoring or at least postponing some legacy features to stay small and simple). Although I procrastinated on it for a while and have recently been distracted with factoring out a dependency... I don't want to give too much detail until I have a reasonable Show HN ready.
But yeah, a big deal with uv is the caching it does. It can look up wheels by name and find already-unpacked data, which it hard-links into the target environment. Pip unpacks from the wheel each time (which also entails copying the data rather than doing fast filesystem operations), and its cache is an HTTP cache, which just intercepts the attempt to contact PyPI (or whatever other index is specified).
Python offers access to hard links (on systems that support them) in the standard library. All the filesystem-related stuff is already implemented in C under the hood, and a lot of the remaining slowness of I/O is due to unavoidable system calls.
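The uv-style trick is only a few lines of stdlib Python (a sketch; `install_file` is a made-up helper, not anything uv or pip actually exposes):

```python
import os
import shutil

def install_file(cached: str, target: str) -> None:
    """Link a cached file into the environment, copying only as a fallback."""
    os.makedirs(os.path.dirname(target), exist_ok=True)
    try:
        os.link(cached, target)  # instant: no file data is copied
    except OSError:
        # cross-device link, or a filesystem without hard link support
        shutil.copy2(cached, target)
```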
Another big deal is that when uv is asked to precompile .pyc files for the installation, it uses multiple cores. The standard library also has support for this (and, of course, all of the creation of .pyc files in CPython is done at the C level); it's somewhat naive, but can still get most of the benefit. Plus, for the most part the precompiled files are also eligible for caching, and last time I checked even uv didn't do that. (I would not be at all surprised to hear that it does now!)
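For example (the site-packages path here is hypothetical):

```python
import compileall

# compileall has supported multi-process byte-compilation since 3.5;
# workers=0 means "use one worker per CPU core".
compileall.compile_dir(
    "venv/lib/python3.12/site-packages",
    workers=0,
    quiet=1,
)
```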
> It totally depends on the problem that you're trying to solve.
My point was more that even when you have a reasonable problem, you have to be careful about how you interface to the compiled code. It's better to avoid "crossing the boundary" any more than absolutely necessary, which often means designing an API explicitly around batch requests. And even then your users will mess it up. See: explicit iteration over Numpy/Pandas data in a Python loop, iterative `putpixel` with PIL, any number of bad ways to use OpenGL bindings....
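The NumPy flavor of that mistake, concretely:

```python
import numpy as np

a = np.random.rand(1_000_000)

# Anti-pattern: a million boundary crossings, one tiny request each
# (every iteration boxes a C double back into a Python float).
total = 0.0
for x in a:
    total += x * x

# Batch request: one boundary crossing, all the arithmetic stays in C.
total = float(np.dot(a, a))
```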
> explicit iteration over Numpy/Pandas data in a Python loop
Yeah, I get it. I see the same thing pretty often... The loop itself is slow in Python so you have APIs that do batch processing all in C. Eventually I think to myself, "All this glue code is really slowing down my C." haha
If it's a module doing cryptography, maybe... I'm not suggesting C or Rust in places where it doesn't make sense. Python itself makes choices about whether the things it ships with are pure Python or not. They often are not.
I love Perl. I think, though, that the crazy mix of sigils and braces/brackets needed to work with complex data structures was one of the main reasons it fell out of favor.
Especially for the type of users where Perl had gained some ground in the past... data science, DNA-related stuff, etc. Non-programmers.
If you look at just about any other language and how you would pull data in and out of JSON or YAML, manipulate individual elements, etc., the Perl is just hard to decipher if you don't have immediate recall of all the crazy syntax for dereferencing things.
Pretty sure they went down for a while, because I have 4xx errors they returned, but apparently it was short-lived. I wonder if their workers infra failed for a moment and that led to a total collapse of all of their products?
It looks like rather than hiring a designer, they let one of their engineers (or worse, the CEO) design the Transmeta logo. I don’t know what that font is, but it might be even worse than Papyrus.
Sometimes it goes the other direction. PCI SSL-accelerator cards were a thing for a long time, until CPUs got faster, gained various crypto-acceleration opcodes, web servers rewrote their SSL logic in ASM, etc.
The fancy Mellanox/NVIDIA ConnectX cards have kTLS support, which offloads encryption to the NIC. Netflix has blogged about how they use it to push 100 Gbps of encrypted traffic off a single box (their Open Connect infrastructure is really cool).