A Hollywood production often involves a huge number of distinct business entities in a complicated network of relationships. Typically, a dedicated entity is set up to coordinate a single production, and that is the entity that reports the profit/loss for the entire project: the project's revenues can be spread across lots of other entities while its expenses are concentrated in that central entity, producing a nominal loss in its accounting records even if the movie as a whole was extremely profitable.
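A toy calculation makes the mechanism concrete. All figures, entity names, and fee percentages below are hypothetical, chosen only to show how fees billed by affiliated entities can turn a profitable project into a paper loss at the production entity:

```python
# Hypothetical "Hollywood accounting" sketch: a profitable movie whose
# production entity still books a loss, because affiliated entities under
# the same parent bill it for distribution and overhead.

gross_revenue = 500_000_000        # total box office (hypothetical)
theater_share = 0.45               # portion kept by theaters (hypothetical)
studio_receipts = gross_revenue * (1 - theater_share)

# Charges billed to the production entity by sibling entities:
distribution_fee = 0.30 * studio_receipts   # paid to the parent's distribution arm
interest_and_overhead = 40_000_000          # charged back by the parent company

# The production entity's own real costs:
production_cost = 120_000_000
marketing_cost = 150_000_000

entity_profit = (studio_receipts - distribution_fee - marketing_cost
                 - production_cost - interest_and_overhead)

print(f"production entity P/L: {entity_profit / 1e6:+.1f}M")
print(f"fees retained inside the parent group: "
      f"{(distribution_fee + interest_and_overhead) / 1e6:.1f}M")
```

With these numbers the production entity reports roughly a $117M loss, while over $120M in fees never left the parent group; the "loss" is an artifact of where the intra-group charges land.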
Production budgets don't include advertising or distribution; advertising spend can often equal production costs. And box office revenue is split among several parties, not just the people who own the movie.
Presumably there's a fair amount of additional cost involved in marketing and distribution that isn't accounted for in the production figures, and these figures end up being a good baseline for a fuller accounting based on industry trends.
> This photonic-architectured cement achieved a temperature drop of 5.4°C during midday conditions with a solar intensity of 850 watts per square meter. This supercool cement featured intrinsic high strength, armored abrasive resistance, and optical stability, even when exposed to harsh conditions, such as corrosive liquids, ultraviolet radiation, and freeze-thaw cycles. A machine learning–guided life-cycle assessment indicated its potential to achieve a net-negative carbon emission profile.
Of course this is not indicative of actual performance or quality per dollar spent, but in my own testing their performance does seem to scale in line with their cost.
o5-pro is available through the ChatGPT UI on the "Pro" plan. My understanding is that, like o3-pro, it is a high-compute, large-context invocation of the underlying model.
I think it's more appropriate to compare GPT-5 Thinking to o3. You will find that the response times are actually quite similar (at least in my experience across hundreds of identical prompts with each model).
The project isn't AI at all, but the writeup is definitely AI. It overuses clickbait/hijacking/hook patterns that make it really jarring:
- poses a lot of questions: "Me? I turned mine into a server that saves me money" / "Could I have just run this on my Mac like a normal person? Absolutely. But where’s the fun in that?"
- it's not just X, it's Y: "it’s not just dumping power into devices; it’s managing charging curves properly"
- creates scenarios and juxtapositions: "The workflow is beautifully simple: My image processing service sends images to the phone for OCR processing using Apple’s Vision framework. The phone processes the text, sends it back, and updates its dashboard with processing stats. All while I watch birds outside my window and feel smug about my setup."
I think this kind of writing borrows from Twitter threads and YouTube videos, and I think we're going to be so sick of these patterns soon. I also don't think this is necessarily what the LLMs do natively; it might just come from bad RLHF.