rbinv's comments

How does that work?


A Hollywood production often involves a huge number of distinct business entities in a complicated network of relationships with each other. Oftentimes, a specific business entity is set up to coordinate a single production, and that is the entity that reports the profit/loss for the entire project -- the project's revenues can be spread across lots of other entities, while its expenses are concentrated in that central entity, producing a nominal loss in their accounting records even if the movie as a whole was extremely profitable.
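
To make the mechanics concrete, here's a toy back-of-the-envelope sketch with entirely made-up figures (a real production's entity structure is far messier than this):

    # Hypothetical numbers to illustrate how "Hollywood accounting" can
    # show a loss on a profitable film. Not real figures.
    box_office = 400_000_000

    # Revenue is booked by other entities before it reaches the
    # single-purpose production company.
    theater_share = 0.50 * box_office       # kept by exhibitors
    distribution_fee = 0.30 * box_office    # kept by the studio's own distributor
    revenue_reaching_production_entity = box_office - theater_share - distribution_fee

    # Expenses are concentrated in that one entity, including charges
    # billed by the studio's affiliates.
    production_cost = 150_000_000
    marketing_charge = 100_000_000          # billed by an affiliated marketing arm
    overhead_charge = 20_000_000            # studio "overhead" allocation

    net = revenue_reaching_production_entity - (
        production_cost + marketing_charge + overhead_charge
    )
    print(net)  # negative: the entity books a loss even though the film made money overall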


Production doesn't include advertising or distribution. Advertising can often equal production costs. And box office revenue gets distributed to several groups, not just going to the people who own the movie.


Presumably there are a fair number of additional costs involved in marketing and distribution that aren't accounted for in the production figures, and the production figure ends up being a good baseline for a fuller accounting based on industry trends.


Cost doesn't include advertising, which Tron had a lot of, and the studio has to split box office revenue with theaters.


The editing and added audio certainly add to the realism, but this is really impressive nonetheless.


Agreed. Big tech is trying to become the media industry.


From the abstract:

> This photonic-architectured cement achieved a temperature drop of 5.4°C during midday conditions with a solar intensity of 850 watts per square meter. This supercool cement featured intrinsic high strength, armored abrasive resistance, and optical stability, even when exposed to harsh conditions, such as corrosive liquids, ultraviolet radiation, and freeze-thaw cycles. A machine learning–guided life-cycle assessment indicated its potential to achieve a net-negative carbon emission profile.


Afaik, there is currently no "GPT-5 Pro". Did you mean o3-pro or o1-pro (via API)?

Currently, GPT-5 sits at $10/1M output tokens, o3-pro at $80, and o1-pro at a whopping $600: https://platform.openai.com/docs/pricing

Of course this is not indicative of actual performance or quality per $ spent, but according to my own testing, their performance does seem to scale in line with their cost.
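
For a rough sense of scale at those list prices, a quick back-of-the-envelope (output tokens only, response size hypothetical):

    # Cost comparison at the listed output prices ($ per 1M output tokens).
    # Input tokens and caching ignored; 50k-token response is a made-up size.
    prices = {"gpt-5": 10, "o3-pro": 80, "o1-pro": 600}
    output_tokens = 50_000

    for model, per_million in prices.items():
        cost = output_tokens / 1_000_000 * per_million
        print(f"{model}: ${cost:.2f}")
    # gpt-5: $0.50, o3-pro: $4.00, o1-pro: $30.00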


GPT-5 Pro is only available on ChatGPT with a ChatGPT Pro subscription.

Supposedly it fires off multiple parallel thinking chains and then essentially debates with itself to net a final answer.
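
If that description is roughly right, the general shape would be a parallel-sample-then-critique pattern like the sketch below. This is pure speculation with hypothetical function names, not OpenAI's actual implementation:

    # Speculative sketch of a "parallel chains + self-critique" pattern.
    # ask_model is a placeholder; in reality it would call the model's API.
    from concurrent.futures import ThreadPoolExecutor

    def ask_model(prompt: str) -> str:
        return f"(model response to: {prompt[:40]}...)"  # stub

    def answer_with_debate(question: str, n_chains: int = 4) -> str:
        # 1. Sample several independent reasoning chains in parallel.
        with ThreadPoolExecutor(max_workers=n_chains) as pool:
            drafts = list(pool.map(ask_model, [question] * n_chains))

        # 2. Feed the drafts back and ask the model to critique them
        #    and produce a single final answer.
        critique_prompt = (
            f"Question: {question}\n\n"
            + "\n\n".join(f"Candidate {i + 1}: {d}" for i, d in enumerate(drafts))
            + "\n\nCompare the candidates, point out mistakes, and give one final answer."
        )
        return ask_model(critique_prompt)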


GPT-5 Pro is available through the ChatGPT UI with a “Pro” plan. I understand that, like o3-pro, it is a high-compute, large-context invocation of the underlying models.


Thanks, I was not aware! I thought they offered all their models via their API.


o3 can be re-enabled in the settings ("Show additional models") if you're a paid (Plus) user.


I think it's more appropriate to compare GPT-5 Thinking to o3. You will find that the response times are actually quite similar (at least in my experience over hundreds of identical prompts with each model).


Yup, ASP's "__VIEWSTATE" hidden form parameter comes to mind. It was base64-encoded and POSTed because it could get loooong (hundreds of KB).

Terrible for browser navigation/refresh though, because pretty much everything was a form POST. Thus no URL state sharing, either.
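
For anyone who never had the pleasure: the value was just base64 over .NET-serialized page state, so you could at least poke at the raw bytes. A minimal sketch with a made-up (and heavily truncated) value:

    # Hypothetical example: decoding a made-up, truncated __VIEWSTATE value.
    # Real values are .NET LosFormatter-serialized page state, often huge.
    import base64

    hidden_field_value = "/wEPDwUKMTIzNDU2Nzg5OWRk"  # placeholder, not a real page's state
    raw = base64.b64decode(hidden_field_value)
    print(len(raw), raw[:16])  # opaque binary blob; on real pages this could be hundreds of KB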


Also a terrible idea to execute code from the client, even if it's supposedly signed.

https://darkatlas.io/blog/critical-sharepoint-vulnerability-...


> merdeitocracy

Not sure if typo or intentional (likely?), but that's an amazing new word.


It's AI slop. In fact, most (if not all) of this blog's recent posts are AI slop.


That's not what slop means. This is anything but low-effort or low-quality.


The project isn't AI at all, but the writeup is definitely AI. It overuses clickbait / hijacking / hook patterns that make it really jarring:

- Poses a lot of questions: "Me? I turned mine into a server that saves me money" / "Could I have just run this on my Mac like a normal person? Absolutely. But where’s the fun in that?"

- It's not just X, it's Y: "it’s not just dumping power into devices; it’s managing charging curves properly"

- Creates scenarios and juxtapositions: "The workflow is beautifully simple: My image processing service sends images to the phone for OCR processing using Apple’s Vision framework. The phone processes the text, sends it back, and updates its dashboard with processing stats. All while I watch birds outside my window and feel smug about my setup."

I think this kind of writing borrows from Twitter threads and YouTube videos, and that we're going to be so sick of these patterns soon. Also, I don't think this is necessarily what the LLMs do natively; it might just come from bad RLHF.

