I can honestly understand both positions. The U.S. military must be able to use technology as it sees fit; it cannot allow private companies to control the use of military equipment. Anthropic must prevent a future where AIs make autonomous life-and-death decisions without humans in the loop. Living in that future is completely untenable.
What I don’t understand is why the two parties couldn’t reach agreement. Surely autonomous murder robots are something the U.S. government has an interest in preventing.
> it cannot allow private companies to control the use of military equipment.
The big difference here is that Claude is not military equipment. It's a public, general purpose model. The terms of use/service were part of the contract with the DoD. The DoD is trying to forcibly alter the deal, and Anthropic is 100% in the clear to say "no, a contract is a contract, suck it up buttercup."
We aren't talking about Lockheed here making an F-35 and then telling the DoD "oh, but you can't use our very obvious weapon to kill people."
> Surely autonomous murder robots are something the U.S. government has an interest in preventing
After this fiasco, obviously not. It's quite clear the DoD most definitely wants autonomous murder robots, and also wants mass domestic surveillance.
Because the current government wants unquestioning obedience, not a discussion (assuming they were capable of that level of nuanced thought in the first place). The position of this government is "just do what I say or I will hit you with the first stick that comes to hand".
If the government doesn't want to sign a deal on Anthropic's terms, they can just not sign the deal. Abusing their powers to try to kill Anthropic's ability to do business with other companies is 10000% bullshit.
> What I don’t understand is why the two parties couldn’t reach agreement. Surely autonomous murder robots are something the U.S. government has an interest in preventing.
Consider the government. It’s Hegseth making this decision, and he considers the US military’s adherence to law to be a risk to his plans.
I can see both sides as pertains to Trump's initial decision to stop working with Claude, but now, this over-the-top "supply chain risk" designation from Hegseth is something else. It's hard to square it with any real principle that I've seen the admin articulate.
> What I don’t understand is why the two parties couldn’t reach agreement.
Someday we'll have to elect a POTUS who is known for his negotiation and dealmaking skills.
Your comment reminds me of a story. John Adams and Lafayette met in Massachusetts something like ~49 years after the revolution. (Lafayette went on a US tour to celebrate the upcoming 50 year anniversary of independence.) Supposedly after the meeting Adams said "this was not the Lafayette I knew" and Lafayette said "this was not the Adams I knew".
I've been using Racket to work through The Little Learner[1] and it's been a good experience. You need minimal Racket to work through the book (lambda, let, define, map; I think that's about it). But I branched out to learn more about the language and the standard library, and it's a fun and surprisingly powerful system to explore.
The biggest downside of Racket is that you can't build up your environment incrementally the way you can with Common Lisp/Sly. When you change anything in your source you reload REPL state from scratch. After CL it feels incredibly limiting in a Lisp. Incremental buildup is so valuable that if I wanted to do any Lisp work again, I'd reach for CL over Racket for this reason alone.
BTW, the book is _great_. Quick, easy to get through, very easy to understand, and teaches you everything from soup to nuts. If you're familiar with lisps you can get through the book in two weeks. It's then easy to get into any deep learning tutorial or project you want, or even start implementing some papers. The book manages not to water down the material despite not using any math at all. Although if you know some linear algebra or multivariable calculus you'll appreciate the beauty of the field more.
> The biggest downside of Racket is that you can't build up your environment incrementally the way you can with Common Lisp/Sly. When you change anything in your source you reload REPL state from scratch.
I don’t quite understand… I’m using Racket in emacs/SLIME and I can eval-last-sexp, regions, etc.
Ah, I'm using racket-mode, which doesn't support live state buildup (and the built-in GUI doesn't either). What exactly is your setup? SLIME only has a Common Lisp backend; it doesn't support Racket to my knowledge.
EDIT: ok with geiser and geiser-racket incremental state buildup works really well. I rescind my objection!
I think that should work in racket-mode as well. You can easily send individual sexps to the repl and add to the live state. However, one thing that CL does that Racket doesn't do (afaik) is when you change a data type (e.g. alter a struct), it automatically ensures live code uses the new types. In Racket by contrast I have to either carefully go through all affected forms and send them to the repl, or send the whole buffer to the repl. This does make the whole experience feel more static than in CL.
> The biggest downside of Racket is that you can't build up your environment incrementally the way you can with Common Lisp/Sly. When you change anything in your source you reload REPL state from scratch.
I think no Lisp is a "true" Lisp if it doesn't provide two critical components of the Lisp experience:
- Live Images
- REPL-driven development
That's why Clojure/Racket and even Scheme are Lisp-y but not a true Lisp. The only true Lisp languages I've found are CL and Janet.
Is this not ultimately a late-binding issue? Maybe I'm missing something, but I've absolutely been able to incrementally build up an environment without resetting using nrepl and Clojure
The Little Learner is a great book. I tried rewriting all the code in Python/JAX while following the Scheme code style as closely as possible, and it worked out great.
The appendix on autodiff is a bit rushed, in my opinion. But in all fairness, the number of pages would probably need to be doubled to give a proper exposition of autodiff.
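For anyone who found the appendix too compressed, the core idea can be sketched in a few lines. Here's a minimal forward-mode autodiff via dual numbers in plain Python (illustrative only; the book builds a different, more complete machinery):

```python
class Dual:
    """A number paired with its derivative (a 'dual number')."""
    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at x with derivative seed 1.0 and read off df/dx."""
    return f(Dual(x, 1.0)).deriv

# d/dx (3x^2 + 2x) = 6x + 2, so at x = 4 the derivative is 26.
print(derivative(lambda x: 3 * x * x + 2 * x, 4.0))  # → 26.0
```

Reverse mode (what JAX and the deep learning frameworks actually use for gradients over many parameters) takes more bookkeeping, which is exactly the part the appendix rushes.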
Yeah, it's the easiest way to get a beachhead in deep learning and then expand from there. I dislike their heavy use of currying, it's elegant in theory but bad error messages make it confusing and inconvenient in practice. But it's a small tradeoff for an otherwise excellent book.
Also ex-Stripe. This suggests an opportunity to build an exchange that addresses these problems. Could one build an exchange with deliberate "turn-based" liquidity to avoid the problem of daily stock price distraction, for example? (This is hard because there will always be secondary markets, but presumably this is already the case.)
I thought about it for only a few seconds, but here is one way to do it. Have users self-report an "addiction factor", then fine the company based on the aggregate score using a progressive scale.
There is obviously a lot of detail to work out here: which specific question do you ask users, who administers the survey, what function do you use to scale the fines, etc. But this would force the companies to pay for the addiction externality without prescribing any specific feature changes they'd need to make.
Specifying the requirement in terms of measured impact is a good strategy because it motivates the app companies to do the research and find effective ways to address addiction, not just replace specific addictive UI patterns with different addictive UI patterns.
Building measurement into the law also produces a metric for how well the law is working and helps inform improvements to the law.
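To make the "progressive scale" idea concrete, here's a hypothetical sketch. The bracket thresholds and rates are invented for illustration; the real values, the survey question, and who administers it are all open design questions from the comment above:

```python
# Tax-bracket-style progressive fine on an aggregate addiction score in [0, 1].
# All numbers here are made up for illustration.
BRACKETS = [
    (0.2, 0.00),   # score slice up to 0.2: no fine
    (0.5, 0.01),   # slice 0.2-0.5 fined at 1% of revenue per unit of score
    (0.8, 0.05),
    (1.0, 0.20),
]

def addiction_fine(aggregate_score: float, annual_revenue: float) -> float:
    """Each slice of the score above a threshold is charged at that
    bracket's (higher) rate, like income tax brackets."""
    fine, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        slice_width = max(0.0, min(aggregate_score, upper) - lower)
        fine += slice_width * rate * annual_revenue
        lower = upper
    return fine

# A company with a 0.9 aggregate score and $1B revenue:
print(addiction_fine(0.9, 1_000_000_000))  # → ~$38M
```

The marginal-rate structure matters: it keeps the incentive to reduce addiction alive at every score level, rather than only near a single cliff threshold.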
From an execution standpoint you can't work on experimental mobility due to path dependence. How are they going to convince municipal governments to open golf cart lanes? That would require solving two problems (autonomy and overcoming path dependence), and solving just one is hard enough. Once they saturate the market as it is with autonomous driving, then everything will change and opportunities to experiment will open up.
In the Midwest, golf carts are exactly what people use to get around in small towns. It's not unreasonable that neighborhoods might be closed to large vehicles and use other forms of transit within their boundaries.
I use an electric scooter to get around areas where a car would be inappropriate or undesired. I keep it in the back of my car always (along with my helmet, gloves and goggles) so that I can pull it out when needed.
Pretty convenient when I unexpectedly find myself needing to use a parking garage and such. The scooter can take me out of the parking garage and into the building with no issue. And then I can keep it with me in the building until it's time to get back to the car.
It's also probably cheaper than a golf cart - mine was just about $3,600 brand new. Though used carts are probably cheaper still, and there are also much cheaper scooters.
I actually used to use only an electric scooter for transit, but then I got hit by a pickup truck whose driver didn't check the bike lane before turning. So I did driver's ed, got my license and leased a BEV.
Cool!
I thought this was more of a thing at elderly care centers. I like the driving feel of golf carts, so I would definitely do this as well if it were allowed on public streets.
Though on most streets, with all these SUVs around, it would feel unsafe to me.
The state reduced regulation around vehicle registration so farmers can drive their SxSs and ATVs on the street (with some restrictions; obviously they don't go on the interstate), and now people in town are registering their golf carts or whatever as second cars for around-town stuff.
Oh, golf carts were awesome in small lake communities in PA. They were much better than driving cars down those narrow roads and made much more sense for shorter distances. Plus, kids got more freedom, since we were allowed to drive the carts well before we could get driver's licenses. (It might not be good to be as lax in a larger city, though.)
yeah, I can say that except for elder areas (not necessarily dedicated facilities, but there are things like "RV parks" which cater mostly to older folks but also families; they usually have 10mph speed limits), I've never seen someone driving a golf cart around town while I've lived in MI, OH, or PA.
I do see people driving horse-drawn carriages, ATVs (probably illegally), snowmobiles (legally in some parts of MI during Winter or condition-dependent), and riding mowers (probably illegally) in and around towns, though. Very rarely, I see someone driving an e-bike; this rareness is mostly because they aren't allowed on the sidewalks here and there's no bike lane, so you need to drive and signal like a car, which is pretty awkward given how many e-bikes don't even come with real brake lights (though many falsely advertise red rear running lights as a brake light, which'd be illegal to drive unless you hand-signal whenever you brake).
Coronado Island, near San Diego, California, for one.
Sun City, Arizona, though these are golf communities/mega-master-planned communities. Coronado is a better example of a mixed vehicle environment with golf carts bopping around all the time on the same streets.
Coronado isn't a good example, or at least not one that scales; that's a VERY affluent neighborhood.
The golf cart isn't a replacement for a car there, it's one you have on the side. I would argue that it's partly because they're easier to park in a very touristy environment.
Interesting that this reads like a slam-dunk argument for why reusable rockets and other improvements are practically impossible (e.g. "we might be able to achieve a microscopic improvement in efficiency or reliability, but to make any game-changing improvements is not merely expensive; it's a physical impossibility") and wouldn't matter anyway for structural reasons (e.g. "market inelasticity (cutting launch cost in half wouldn't make much of a difference)"). Yet in the fifteen years since it was written, launch costs have fallen to a third of what they were and continue to fall, and the number of payloads to orbit has gone up by an order of magnitude or more. So much for "market inelasticity."
To be intellectually honest about it, you have to answer a bunch of questions:
1. Awful compared to what?
2. Was there an equivalent transfer outside America?
3. What is the cause? What is the ratio of rent-seeking/shady activity to natural forces (e.g. technological change)?
Azure revenue is growing at 39% year over year. If Microsoft can sustain this growth, in four years Azure will be ~3.73x its current size. This is of course very difficult, but you really don’t need a deus ex machina to hit 4x growth under your assumptions.
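The compounding arithmetic checks out (the 39% rate is from the comment; sustaining it is the hard part):

```python
# 39% year-over-year growth sustained for four years compounds to ~3.73x.
growth = 1.39
print(growth ** 4)  # → ~3.73
```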
The issue in the late-90s was all the investment created a lot of real revenue for telecoms and other companies. Even though there were a lot of shenanigans with revenue, a lot of real money was spent on fiber and tech generally.
But the real money was investment that didn't see a return for the investors. The investments needed higher final consumption (such as through better productivity or through displacing other costs) to pay themselves back.
I've recently switched to Claude for chat. GPT 5.2 feels very engagement-maxxed for me, like I'm reading a bad LinkedIn post. Claude does a tiny bit of this too, but an order of magnitude less in my experience. I never thought I'd switch from ChatGPT, but there is only so much "here's the brutal truth, it's not x it's y" I can take.
GPT likes to argue, and most of its arguments are straw men, usually conflating priors. It's ... exhausting; akin to arguing on the internet. (What am I even saying, here!?) Claude does a lot less of that. I don't know if it tracks discussion/conversation better, but, for damn sure, it's got way less verbal diarrhea than GPT.
Yes, GPT5-series thinking models are extremely pedantic and tedious. Any conversation with them is derailed because they start nitpicking something random.
But Codex/5.2 was substantially more effective than Claude at debugging complex C++ bugs until around Fall, when I was writing a lot more code.
I find Gemini 3 useless. It has regressed on hallucinations from Gemini 2.5, to the point where its output is no better than a random token stream despite all its benchmark outperformance. I used Gemini 2.5 to help write papers and such, but I can't seem to use Gemini 3 for anything. Gemini CLI is also very non-compliant and crazy.
To me ChatGPT seems smarter and knows more. That’s why I use it. Even Claude rates GPT better for knowledge answers. Not sure if that itself is any indication. Claude seems superficial unless you hammer it to generate a good answer.
The vertical integration argument should apply to Grok. They have Tesla driving data (probably much more data than Waymo), Twitter data, plus Tesla/SpaceX manufacturing data. When/if Optimus starts on the production line, they'll have that data too. You could argue they haven't figured out how to take advantage of it, but the potential is definitely there.
Agreed. Should they achieve Google level integration, we will all make sure they are featured in our commentary. Their true potential is surely just around the corner...
"Tesla has more data than Waymo" is some of the lamest cope ever. Tesla does not have more video than Google! That's crazy! People who repeat this are crazy! If there was a massive flow of video from Tesla cars to Tesla HQ that would have observable side effects.
The key metric is unusual situations encountered, and that scales with miles driven, not gigabytes. With onboard inference the car simply flags anything 'unusual' (low confidence) and selectively uploads those needle-in-a-haystack rare events.
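The selective-upload idea sketched above amounts to a simple confidence filter. The threshold and event structure here are invented for illustration:

```python
# Onboard model scores each event; only low-confidence ("unusual") ones
# are queued for upload, so bandwidth scales with rare events, not miles.
CONFIDENCE_THRESHOLD = 0.6  # illustrative cutoff

def select_for_upload(events):
    """Keep only events the onboard model was unsure about."""
    return [e for e in events if e["confidence"] < CONFIDENCE_THRESHOLD]

fleet_events = [
    {"id": 1, "confidence": 0.98},  # routine highway driving: discard
    {"id": 2, "confidence": 0.35},  # occluded pedestrian: upload
    {"id": 3, "confidence": 0.91},  # routine: discard
    {"id": 4, "confidence": 0.12},  # debris on road: upload
]
print(select_for_upload(fleet_events))  # only the two low-confidence events
```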