Hacker News | new | past | comments | ask | show | jobs | submit | lumost's comments

Curious if part of this was the overall decline in government compensation relative to the private sector. The president makes roughly what the typical SV engineer makes after 5 years in big tech or as a fresh grad from a top PhD program. Meanwhile the people the president deals with have become unfathomably wealthy.

In 1909, the US president made $75k, roughly $2.76 million in today's dollars, compared with the current $400k salary. Since the president is, by law and custom, the highest-paid government employee, this applies downward pressure on the rest of the government's payroll.
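A back-of-envelope check of the figure above. The CPI multiplier here is an assumption (a rough cumulative 1909-to-today inflation factor), not an official number:

```python
# Rough inflation adjustment; CPI_MULTIPLIER is an assumed factor.
SALARY_1909 = 75_000
CPI_MULTIPLIER = 36.8  # assumed cumulative 1909 -> today inflation

adjusted = SALARY_1909 * CPI_MULTIPLIER
print(f"${adjusted:,.0f}")  # ~$2.76M, vs. the current $400k salary
```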

I see no reason why the president shouldn't be modestly wealthy, given the requirements of the role and the skill required to do it well. Cutting the pay scale below what some new grads make seems like a recipe for corruption.


Since 1958, with the Former Presidents Act [1], the presidency guarantees you'll live very comfortably for the rest of your life: a lifetime pension (and even a small pension for your spouse), funding for an office and staff, lifetime Secret Service protection, funded travel, and more. It was passed precisely because the scenario you describe played out with Truman, who was rather broke and ran into financial difficulties after leaving office.

[1] - https://en.wikipedia.org/wiki/Former_Presidents_Act


> Truman who was rather broke, and ran into financial difficulties after leaving office.

Nope[0]. He was a shameless grifter just like Trump.

[0] https://www.lawyersgunsmoneyblog.com/2026/01/the-immortal-le...


Are most fresh grads from a top PhD program really making $400k/year? Sure, the ones hired by OpenAI are making at least that much, but the vast majority are not. However the broader point remains, that the president’s (and the rest of government’s) pay structure has not kept up with the private sector.

It's quite plausible to me that the difference is inference configuration. This could be done through configurable depth, number of active MoE experts, layer skipping, etc. Even beam-decoding changes can make substantial performance differences.

Train one large model, then down-configure it for different pricing tiers.
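A hypothetical sketch of what tiered down-configuration could look like. All names and values here are invented for illustration; no provider's actual serving setup is implied:

```python
# One trained model, served under different inference configurations
# per pricing tier. Values are illustrative assumptions only.
from dataclasses import dataclass

@dataclass(frozen=True)
class InferenceConfig:
    num_layers: int      # early-exit depth (fewer layers = cheaper)
    active_experts: int  # MoE experts routed per token
    beam_width: int      # 1 = greedy decoding

TIERS = {
    "cheap":    InferenceConfig(num_layers=24, active_experts=2, beam_width=1),
    "standard": InferenceConfig(num_layers=48, active_experts=4, beam_width=1),
    "premium":  InferenceConfig(num_layers=48, active_experts=8, beam_width=4),
}

def relative_cost(cfg: InferenceConfig) -> int:
    # Crude proxy: compute scales with depth, routed experts, and beams.
    return cfg.num_layers * cfg.active_experts * cfg.beam_width
```

Under this toy cost model the same weights serve all three tiers; only the serving-time knobs differ.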


I don't think that's plausible, because they also just launched a high-speed variant, which presumably has the inference optimizations and smaller batching, and costs about 10x.

Also, if you have inference optimizations, why not apply them to all models?


This really points to a world where all services are too cheap to meter. The compute side of AI is a commodity, the usage of AI is a commodity, the model development of AI is a commodity. So far there is no evidence that a provider with heavy usage has any long-term advantage over a vendor with no usage. New top tier models come out every week from relative unknowns.

Other than a vast consolidation of what parts of the economy are "digital", what is going to have margin other than orphaned capital and "creative" efforts within 10 years?

EDIT: the top-ranked model on OpenRouter by traffic changes almost weekly now; I can't see how any claim of "stickiness" exists in this space.

https://openrouter.ai/rankings


It's a good way for a grocer to minimize waste: when raw chicken gets close to its sell-by date, turn it into rotisserie chicken; when that doesn't sell, turn it into sandwiches and other products.

Personal agents disrupt OpenAI's revenue plan. They had been planning to put ads in ChatGPT to generate revenue. If users rapidly move to personal agents, which are more resistant to ads and run on a blend of multiple model/compute providers, then OpenAI won't be able to deliver on its revenue promises.

Firstly, OpenAI has lacked focus, so they're pursuing lots of different paths despite the obvious one (ads in ChatGPT), like hiring Jony Ive, a move that feels more WeWork than anything.

But secondly, personal agents can be great for OpenAI: if the user isn't even interacting with the AI and is just letting it go off autonomously, then you're basically handing your wallet to the AI, and if the model underlying that agent is OpenAI's, you're handing your wallet to them.

Imagine for a second that a load of stuff is now being done through personal agents, and suddenly OpenAI releases an API where vendors can integrate directly with the OpenAI agent. If OpenAI controls that API and how people integrate with it, OpenAI could become the App Store for AI, capturing a slice of every penny spent through agents. There's massive upside in that possibility.


> had been planning to put ads in ChatGPT

As per the new terms of service, the ads are already in.


Anthropic's primary capex partner is AMZN. AMZN is presently willing to drop $200 billion a year into capex for compute to rent to Anthropic and others. This $30 billion only needs to fund their rental rates, unlike OpenAI and Google, who need to put in the upfront capex for their compute, as MSFT stopped footing the bills.

An interesting question is whether Anthropic's capex needs may grow to the point that they could take down AMZN with them should they fail.


Does it really? Or does it become a placebo that makes people feel like they've done something when they haven't?

Perhaps if these non-solutions didn’t exist to appease our fears then there would be more pressure for real solutions.


I don’t understand why most cloud backend designs seem to strive for maximizing the number of services used.

My biggest gripe is async tasks where the app performs numerous hijinks to avoid a 10-minute Lambda processing timeout. Rather than structuring the process as many small, independent batches, or simply using a modest container to do the job in a single shot, a myriad of intermediate steps is introduced to write data to DynamoDB/S3/Kinesis, plus SQS and coordination logic.

A dynamically provisioned, serverless container with 24 cores and 64 GB of memory can happily process GBs of data transformations.
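A minimal sketch of the single-container approach described above, assuming the workload decomposes into independent batches. The batch size and transform are placeholders, not a real pipeline:

```python
# One long-running process streams the work in bounded batches,
# with no external queue or state store needed for coordination.
from typing import Iterable, Iterator, List

def batches(items: Iterable[int], size: int) -> Iterator[List[int]]:
    """Yield fixed-size batches so memory stays bounded."""
    batch: List[int] = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def transform(batch: List[int]) -> List[int]:
    # Stand-in for the real per-record transformation.
    return [x * 2 for x in batch]

def run_job(records: Iterable[int], batch_size: int = 1000) -> int:
    """Process everything in one pass; returns the record count."""
    processed = 0
    for batch in batches(records, batch_size):
        transform(batch)
        processed += len(batch)
    return processed
```

Because each batch is independent, the same loop parallelizes trivially across the container's cores if one process isn't fast enough.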


The history of software has been that once building software becomes cheap enough, teams flood the market with "existing product" + x feature for y users, and the market then consolidates around a leader who does all features for all customers.

I’d bet that we skip SaaS entirely and go to Anthropic directly. This means the AI has to understand that there are different users with conflicting requirements, and that we all need the exact same copy of the burn-rate report.


No known mechanism, but cross-species checks would imply that the schedule was evolved and has some control mechanism.

Species that evolved before the Devonian period tend not to age and instead grow throughout their entire lives. There is no mechanistic understanding of the wild variation in species lifespans.

So the natural question in these studies is what would happen if we simply told the muscles not to age this way. It’s plausible that this aging schedule evolved due to other factors independent of the biological constraints. It’s also plausible that evolution removed some other important components for longer lived stem cells.


Interesting, the Devonian also appears to be the period at which fish started sporting limb like appendages and muscle structures, and other animals started to explore land. Perhaps unlimited body growth doesn't work well for animals not entirely supported by water.


Interestingly, the limbless boa constrictor's growth slows, but never stops.


I misread that as the "Denisovan period" and found it interesting that in addition to Homo Floresiensis Hobbits, there might have been arbitrarily large Denisova Hominin giants. Oh well.

I will have to be satisfied with Andre the Giant.

