Yes, they can 'prove' what is in something to the limits of the physics.
However, the human in the loop is quite frail for most operators. In practice, these very fancy and very expensive instruments are mostly run by high school grads and serviced by field engineers with a huge backlog.
For a first-time, one-off test of something's composition, I'd go with at least 3 companies, and preferably ones you have a history with. This stuff is terribly complicated and misinterpretation is shockingly common. If the tech hasn't used the standards before, then you're at the mercy of fate.
Like, we have 5 (!) places on the home screen that do the exact same function of ending a run, because when we try to consolidate it down to just 1, our customers freak out and can't find where the button went. Granted, they pay $100k+ per instrument plus a service plan, so we add it back in, no question (and this is life-critical equipment in many cases), but I hope that shows how embedded in routine these operators get.
Janoshik is the primary company people use here, and their business is basically entirely peptides and anabolic steroids, and the GLP-1 stuff exploded their business from gym bros to soccer moms everywhere. https://janoshik.com/
But they basically test the same 12ish compounds day in and day out, with another couple of dozen making up the remainder. They don't have most of the worries that you are referring to - first time for a tech running a specific set of standards, limited experience interpreting them, etc., and when people head to head their results against different labs, they are consistent.
Had a family member pass away and was part of their long term care team. The exact same thoughts came through my head too.
The 'funny' thing is that for the first few days, you can do a lot, but with medical stuff, it's mostly just waiting anyway. Even the first month, you can power through a lot. You become an expert fairly quickly at the little health thing. And then find that we know next to nothing about biology.
But after weeks, it's surprising how little you can do that is 'extra'. The grind really gets to you fast. And putting your own needs aside for just that little while catches up with you. You end up needing support quickly too. Not wanting support, needing it.
In the end I was able to hold my head high and say I absolutely did everything I possibly could, even to the point of needing help myself. I was just surprised at how little that did to affect the outcome.
Range goes into this. Epstein talks about Kind and Unkind learning environments.
In Kind environments, the feedback is quick and rankings are easy to know. So the evidence says that the optimal strategy is drill and kill.
In Unkind learning environments, the feedback is slow and ranking is difficult and untimely. So the optimal strategy is to learn as much as you can in as many very different disciplines as possible.
The paper that the Economist talks about extends this and (paraphrasing) says that the very top elite level, even Kind learning environments turn back into Unkind ones again as you try to push the field more.
> Even if "only" 10% of elite kids go on to become elite adults, 10% is orders of magnitude larger than the base percentage of adults who are elite athletes, musicians, etc. This doesn't sound "uncorrelated" to me so much as "not as strongly correlated as one might expect."
The way that I read the original study was that only 10% of elite adults were also elite youth.
Not that 10% of elite youth become elite adults.
That distinction is key, and surprising. Elite-level talent, training, and dollars spent in youth are not well correlated with elite-level performance in adults, across many disciplines.
As in your country's elite youth training centers (science, music, futbol, Olympic sports, etc) are mostly wasting money.
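To make the P(A|B) vs P(B|A) distinction concrete, here's a toy calculation. Every number below is made up purely for the arithmetic; none of them come from the actual study:

```python
# Hypothetical illustration: "10% of elite adults were elite youth"
# is NOT the same claim as "10% of elite youth become elite adults".
# All figures below are invented for the sake of the example.

population = 1_000_000
elite_youth = 5_000    # kids in elite programs (0.5% of population)
elite_adults = 500     # adults who reach elite level (0.05%)
overlap = 50           # elite adults who were also elite youth

p_youth_given_adult = overlap / elite_adults   # what the study reports
p_adult_given_youth = overlap / elite_youth    # what people assume it says
base_rate = elite_adults / population

print(f"P(elite youth | elite adult) = {p_youth_given_adult:.2%}")  # 10.00%
print(f"P(elite adult | elite youth) = {p_adult_given_youth:.2%}")  # 1.00%
print(f"base rate of elite adults    = {base_rate:.3%}")            # 0.050%
```

Note that with these made-up numbers both readings can be true at once: only 10% of elite adults came through elite youth programs, yet an elite-youth kid is still 20x more likely than average to make it, which is the sibling comment's point about base rates.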
I'll double up here. For me, audiobooks are 'the same' as reading a book. Yes, I know they are not exactly the same, but the experience for me is pretty darn close enough.
Now, I use audiobooks because I can then take the dog for a walk and do other chores while listening to them. For me, it's a good way to get my mind working while my body is too. Plus, you can speed up the narration to some multiple so you're at the same pace as if you were reading anyway.
For me, it's no different than if I were a cigar roller with someone reading out a book to the lot of us.
Is it exactly the same? No, of course not. But if the alternatives while doing chores are 1) no auditory enjoyment, 2) some podcast/radio station blaring topical news, or 3) a classic book,
then I'm going for 3. It's just the best use of the time.
Obesity drugs are in the top 25 for 2025, but they're not the largest category. That goes to oncology drugs at ~1/3rd. Obesity drugs are at ~14%.
I want to mention here that these oncology drugs are mostly antibody methods. Which, what the hell? We're making antibody drugs at scale now?! And that's like some of the highest selling drugs out there?
For comparison, though not in the linked article here, Acetaminophen (Tylenol) only comes in at ~$4.3B, which would put it way down in 13th place, out of the top 10.
Granted, this is sales numbers, and in the US, that's practically taking the savings of very sick people and turning it into stocks. Something that elicits no small reaction here on HN or just about anywhere.
Still, to the point of the main article, yes, we live in an age of medical miracles, and it arrived quite suddenly, only in the last 7 years or so, and we have a lot of gas in this tank.
Initial thought about 1/5th of the way through: Wow, that's a lot of em-dashes! I wonder how much of this he actually wrote?
Edit:
Okay, section 3 has some interesting bits in it. It reminds me of all those gun start-ups in Texas that use gyros and image recognition to turn a C- shooter into an A- shooter. They all typically get bought up quite fast by the government and the tech shushed away. But the ideas are just too easy now to implement these days. Especially with robots and garage level manufacturing, people can pretty much do what they want. I think that means we have to make people better people then? Is that even a thing?
Edit 2:
Wow, section 4 on the abuse by organizations with AI is the scariest. Yikes, I feel that these days with Minneapolis. They're already using Palantir to try some of it out, but are being hampered by, well, themselves. Not a good fallback strat for anyone that is not the government. The thing about the companies just doing it before releasing it, that I think is underrated. What's to stop sama from just, you know, taking one of these models and taking over the world? Like, is this paper saying that nothing is stopping him?
The big one that should send huge chills down the spines of any country is this bit:
"My worry is that I’m not totally sure we can be confident in the nuclear deterrent against a country of geniuses in a datacenter: it is possible that powerful AI could devise ways to detect and strike nuclear submarines, conduct influence operations against the operators of nuclear weapons infrastructure, or use AI’s cyber capabilities to launch a cyberattack against satellites used to detect nuclear launches"
What. The. Fuck. Is he saying that the nuclear triad is under threat here from AI? Am I reading this right? That alone is reason to abolish the whole thing in the eyes of nuclear nations. This, I think, is the most important part of the whole essay. Holy shit.
Edit 3:
Okay, section 4 on the economy is likely the most relevant for all of us readers. And um, yeah, no, this is some shit. Okay, okay, even if you take the premise as truth, then I want no part of AI (and I don't take his premise as truth). He's saying that the wealth concentration will be so extreme that the entire idea of democracy will break down (oligarchies and tyrants, of course, will be fine. Ignoring that they will probably just massacre their peoples when the time is right). So, combined with the end of nuclear deterrence, we'll have Elon (let's be real here, he means sama and Elon and those people that we already know the names of) taking all of the money. And everyone will then be out of a job as the robots do all the work that is left. So, just, like if you're not already well invested in a 401k, then you're just useless. Yeah, again, I don't buy this, but I can't see how the intermediate steps aren't just going to tank the whole thought exercise. Like, I get that this is a warning, but my man, no, this is unreasonable.
Edit 4:
Section 5 is likely the most interesting here. It's the wild cards, the cross products, that you don't see coming. I think he undersells this. The previous portions are all about 'faster horses' in the world where the car is coming. It's the stuff we know. This part is the best, I feel. His point about robot romances is really troubling, because, like, yeah, I can't compete with an algorithmically perfect robo-john/jane. It's just not possible, especially if I live in a world where I never actually dated anyone either. Then add in an artificial womb, and there goes the whole thing, we're just pets for the AI.
One thing that I think is an undercurrent in this whole piece is the use of AI for propaganda. Like, we all feel that's already happening, right? Like, I know that the crap my family sees online about black women assaulting ICE officers is just AI garbage like the shrimp jesus stuff they choke down. But I kinda look at reddit the same way. I've no idea if any of that is AI generated now or manipulated. I already index the reddit comments as total Russian/CCP/IRG/Mossad/Visa/Coca-Cola/Pfizer garbage. But the images and the posts themselves, it just feels increasingly clear that it's all just nonsense and bots. So, like Rao said, it's time for the cozy web of Discord servers, and Signal groups, and Whatsapp, and people I can actually share private keys with (not that we do). It's already just so untrustworthy.
The other undercurrent here, that he can't name for obvious reasons, is Donny and his rapid mental and physical deterioration. Dude clearly is unfit at this point, regardless of the politics. So the 'free world' is splintering at the exact wrong time to make any rational decisions. It's all going to be panic mode after panic mode. Meaning that the people in charge are going to fall to their training and not rise to the occasion. And that training is from like 1970/80 for the US now. So, in a way, it's not going to be AI based, as they won't trust it or really use it at all. Go gen-z I think?
Edit 5:
Okay, last bit and wrap up. I think this is a good wrap up, but overall, not tonally consistent. He wants to end on a high note, and so he does. The essay says that he should end on the note of 'Fuck me, no idea here guys', but he doesn't.
Like, he wants 3 things here, and I'll speak to them in turn:
Honesty from those closest to the technology - Clearly not happening already, even in this essay. He's obviously worried about Donny and propaganda. He's clearly trying, but still trying to be 'neutral' and 'above it all.' Bud, if you're saying that the nuclear fucking triad is at stake, then you can't be hedging bets here. You have to come out and call balls and strikes. If you're worried about things like MAGA coming after you, you already have 'fuck you' money. Go to New Zealand or get a security detail or something. You're saying that now is the time, that we have so little of it left, and then you pull punches. Fuck that.
Urgent prioritization by policymakers, leaders, and the public - Clearly also not going to happen. Most of my life, the presidents have been born before 1950. They are too fucking old to have any clue what you're talking about. Again, this is about Donny and the Senate. He's actually talking about like 10 people here, max. Sure, Europe and Canada and yadda yadda yadda. We all know what the roadblocks are, and they clearly are not going anywhere. Maybe Vance gets in, but he's already on board with all this. And if the author is not already clear on this: You have 'fuck you' money, go get a damn hour of their time. You have the cash already, you say we need to do this, so go do it.
Courage to act on principle despite economic and political pressure - Buddy, show us the way. This is a matter of doing what you said you would do. This essay is a damn good start towards it. I'm expecting you on Dwarkesh any day this week now. But you have to go on Good Morning America too, and Joe Rogan, and whatever they do in Germany and Canada too. It's a problem for all of us.
Overall: Good essay, too long, should be good fodder for the AstralCodexTen folks. Unless you get out on mainstream channels, I'll assume this is some hype for your product to say 'invest in me!' as things start to hit walls/sigmoids internally.
Dario and Anthropic's strategy has been to exaggerate the harmful capabilities of LLMs and systems driven by LLMs, positioning Anthropic themselves as the "safest" option. Take from this what you will.
As an ordinary human with no investment in the game, I would not expect LLMs to magically work around the well-known physical phenomena that make submarines hard to track. I think there could be some ability to augment cybersecurity skill just through improved pattern-matching and search, hence real teams using it at Google and the like, but I don't think this translates well to attacks on real-world targets such as satellites or launch facilities. Maybe if someone hooked up Claude to a Ralph Wiggum loop and dumped cash into a prompt to try and "fire ze missiles", and it actually worked or got farther than the existing state-sponsored black-hat groups at doing the same thing to existing infrastructure, then I could be convinced otherwise.
> Dario and Anthropic's strategy has been to exaggerate the harmful capabilities of LLMs and systems driven by LLMs, positioning Anthropic themselves as the "safest" option. Take from this what you will.
Yeah, I've been feeling that as well. It's not a bad strategy at all, makes sense, good for business.
But on the nuclear issue, it's not a good sign that he's explicitly saying that this AGI future is a threat to nuclear deterrence and the triad. Like, where do you go up from there? That's the highest level of alarm that any government can have. This isn't a boy crying wolf, it's the loudest klaxon you can possibly make.
If this is a way to scare up dollars (like any tyre commercial), then he has no ceiling left. And that's a sign that it really is sigmoiding internally.
> But on the nuclear issue, it's not a good sign that he's explicitly saying that this AGI future is a threat to nuclear deterrence and the triad. Like, where do you go up from there? That's the highest level of alarm that any government can have. This isn't a boy crying wolf, it's the loudest klaxon you can possibly make.
This is not new. Anthropic has raised these concerns in their system cards for previous versions of Opus/Sonnet. Maybe in slightly drier terms, and buried in a 100+ page PDF, but they have raised the risk of either
a) a small group of bad actors with access to frontier models and the technical know-how (both bypassing LLM restrictions and making and sourcing weapons) turning that into dirty bombs / small nuclear devices, and knowing where to deploy them, or
b) the bigger, more scifi threat, of a fleet of agents going rogue, maybe on orders of a nation state, to do the same
I think option a is much more frightening and likely. option b makes for better scifi thrillers, and still could happen in 5-30ish(??) years.
I agree that it is not a good sign, but I think what is a worse sign is that CEOs and American leaders are not recognizing the biggest deterrent to nuclear engagement and war in general, which is globalism and economic interdependence. And hoarding AI like a weapons stockpile is not going to help.
The reality is, LLMs to date have not significantly impacted the economy nor been the driver of extensive job destruction. They don't want to believe that, and they don't want you to believe it either. So they'll keep saying "it's coming, it's coming" under the guise of fear mongering.
For your Edit 2 - yes. Being actively discussed in open communities, and presumably in closed ones too. Open communities being, for example: https://ssp.mit.edu/cnsp/about. They just published a series of lectures with open attendance if you wanted to listen in via Zoom, but yep, that's the gist of it. Spawned a huge discussion :)
This was pretty much an open-conference deep dive into the causes and implications of what you, and some sibling threads, are saying: submarine localization, TEL localization, etc.
If AI makes humans economically irrelevant, nuclear deterrents may no longer be effective even if they remain mechanically intact. Would governments even try to keep their people and cities intact once they are useless?
Is that paper in print? I can't seem to find if it was peer reviewed.
If the paper is true, then, yeesh! That's a pretty big miss on the part of Güllich et al.
Reading through the very short paper there, it seems not to have gone through review yet (typos, misspellings, etc.). Also, it's not clear whether the data in the tables or the figure are from Güllich's work or are simulations meant to illustrate their idea ("True and estimated covariate effects in the presence of simulated collider bias in the full and selected samples"). Being clearer about where the data comes from would help the argument, but I likely just missed a sentence or something.
I'll be interested to see where this goes. That Güllich managed to get the paper into Science in the first place lends credence to their having considered something as simple as Berkson's Paradox and accounted for it. It's not every day you get something as 'soft' as that paper into Science, after all. If not, then wow, standards for review really have slipped!
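For anyone unfamiliar with the collider-bias objection: here's a quick self-contained simulation of Berkson's Paradox. The model and all numbers are mine, purely illustrative, not taken from either paper:

```python
# Toy demo of collider bias (Berkson's Paradox): selecting only
# 'elite adults' distorts the apparent effect of youth training.
# The data-generating model below is invented for illustration.
import random

random.seed(0)
n = 100_000

pop = []
for _ in range(n):
    youth_training = random.gauss(0, 1)  # early elite training
    other_factors = random.gauss(0, 1)   # genetics, luck, late development, ...
    adult_skill = youth_training + other_factors  # the collider: both causes feed it
    pop.append((youth_training, other_factors, adult_skill))

def corr(xs, ys):
    # plain Pearson correlation, no external libraries needed
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

t_all, o_all, s_all = zip(*pop)
print(f"everyone:   corr(training, skill) = {corr(t_all, s_all):+.2f}")  # strongly positive

# Condition on the collider: keep only the top 1% of adult skill
# ('elite adults'). Within that selected group, training and the other
# factors look negatively related, and training's apparent link to
# skill shrinks -- which can masquerade as 'youth training doesn't matter'.
cutoff = sorted(s_all)[int(0.99 * n)]
elite = [(t, o, s) for t, o, s in pop if s >= cutoff]
t_e, o_e, s_e = zip(*elite)
print(f"elite only: corr(training, skill) = {corr(t_e, s_e):+.2f}")
print(f"elite only: corr(training, other) = {corr(t_e, o_e):+.2f}")  # negative
```

If the response paper's figures are simulations like this rather than Güllich's actual data, that would explain the wording in the caption they quote.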
I'd be interested to know what the controls were for those studies. Were the participants already addicted, or was the 30mg+ dosing done on non-addicted people? It's a lot of studies to pore through.
Also, that is a lot of metrics!
And it seems that the dose needed for a statistically validated athletic performance increase (for any person) is in the grams range. I ... I just can't see any reason to take that much caffeine unless I'm at the Olympics. I'd be jumping out of my skin!