Let me put it this way: DoD needs a new drone and they want some gimmicky AI bullshit. They contract the drone from Lockheed. Lockheed is not allowed to source the gimmicky AI bullshit from Anthropic because they have been declared a supply-chain risk on the basis that they have publicly stated their intention to produce products which will refuse certain orders from the military.
Let’s put it this way: the DoD is buying pencils from a company. Should that company be prohibited from using Claude?
You are confusing the need to avoid Anthropic as a component of something the DoD is buying, with prohibitions against any use.
The DoD can already sensibly require providers of systems not to incorporate certain companies' components, or restrict them to components from a list of vetted suppliers, without prohibiting entire companies from uses unrelated to what the DoD purchases, or from uses where the product isn't a component of anything they buy.
There seems to be a massive misunderstanding here - I'm not sure on whose side. In my understanding, if the DoD orders an autonomous drone, it would probably write in the ITT that the drone needs to be capable of doing autonomous surveillance. If Lockheed uses Anthropic under the hood, it does not meet those criteria and cannot reasonably join the bid.
What the declaration of supply-chain risk does, though, is ensure that nobody at Lockheed can use Anthropic in any way without risking exclusion from any DoD bids. This effectively loses Anthropic half or more of its business in the US.
And maybe to take a step back: Who in their right minds wants to have the military have the capabilities to do mass surveillance of their own citizens?
> Who in their right minds wants to have the military have the capabilities to do mass surveillance of their own citizens?
Who in their right minds wants to have the US military have the capability to carry out an unprovoked first strike on Moscow, thereby triggering WW3, bringing about nuclear armageddon?
And yet, do contracts for nuclear-armed missiles (Boeing for the current LGM-30 Minuteman ICBMs, Northrop Grumman for its replacement the LGM-35 Sentinel expected to enter service sometime next decade, and Lockheed Martin for the Trident SLBMs) contain clauses saying the Pentagon can't do that? I'm pretty sure they don't.
The standard for most military contracts is "the vendor trusts the Pentagon to use the technology in accordance with the law and in a way which is accountable to the people through elected officials, and doesn't seek to enforce that trust through contractual terms". There are some exceptions (e.g. contracts to provide personnel will generally contain explicit restrictions on their scope of work), but historically, classified computer systems/services contracts haven't contained field-of-use restrictions.
If that's the wrong standard for AI, why isn't it also the wrong standard for nuclear weapons delivery systems? A single ICBM can realistically kill millions directly, and billions indirectly (by being the trigger for a full nuclear exchange). Does Claude possess equivalent lethal potential?
Anthropic doesn't object to fully autonomous AI use by the military in principle. What they're saying is that their current models are not fit for that purpose.
That's not the same thing as delivering a weapon that has a certain capability and then putting policy restrictions on its use, which is what your comparison suggests.
The key question here is who gets to decide whether or not a particular version of a model is safe enough for use in fully autonomous weapons. Anthropic wants a veto on this and the government doesn't want to grant them that veto.
Let me put it this way: if Boeing is developing a new missile, and they say to the Pentagon, "this missile can't be used yet, it isn't safe", and the Pentagon replies "we don't care, we'll bear that risk, send us the prototype, we want to use it right now", how does Boeing respond?
I expect they'll ask the Pentagon to sign a liability disclaimer and then send it anyway.
Whereas Anthropic is saying they'll refuse to let the Pentagon use their technology in ways they consider unsafe, even if the Pentagon indemnifies Anthropic for the consequences. That's very different from how Boeing would behave.
> It prohibits DoD agencies and contractors from using Anthropic services. It'd be one thing if the DoD simply didn't use Anthropic.
But that's what the supply-chain risk is for? I'm legitimately struggling to understand this viewpoint of yours wherein they are entitled to refuse to directly purchase Anthropic products but they're not entitled to refuse to indirectly purchase Anthropic products via subcontractors.
Supply chain risk is not meant for this. The government isn't banning Anthropic because using it harms national security. They are banning it in retribution for Anthropic taking a stand.
It's the same as Trump claiming emergency powers to apply tariffs, when the "emergency" he claimed was basically "global trade exists."
Yes, the government can choose to purchase or not. No, supply chain risk is absolutely not correct here.
> The government isn't banning Anthropic because using it harms national security. They are banning it in retribution for Anthropic taking a stand.
You might be completely right about their real motivations, but try to steelman the other side.
What they might argue in court: Suppose DoD wants to buy an autonomous missile system from some contractor. That contractor writes a generic visual object tracking library, which they use in both military applications for the DoD and in their commercial offerings. Let’s say it’s Boeing in this case.
Anthropic engages in a process where they take a model that is perfectly capable of writing that object-tracking code, and they try to instill a sense of restraint in it through RLHF. Suppose Opus 6.7 comes out and has internalized some of these principles, to the point where it adds a backdoor to the library that prevents it from operating correctly in military applications.
Is this a bit far fetched? Sure. But the point is that Anthropic is intentionally changing their product to make it less effective for military use. And per the statute, it’s entirely reasonable for the DoD to mark them as a supply chain risk if they’re introducing defects intentionally that make it unfit for military use. It’s entirely consistent for them to say, Boeing, you categorically can’t use Claude. That’s exactly the kind of "subversion of design integrity" the statute contemplates. The fact that the subversion was introduced by the vendor intentionally rather than by a foreign adversary covertly doesn’t change the operational impact.
The rule in question is exactly meant for “this”, where “this” equals “a complete ban on use of the product in any part of the government supply chain”. That’s why it has the name that it has. The rule itself has not been misconstrued.
You’re really trying to complain that the use of the rule is inappropriate here, which may be true, but is far more a matter of opinion than anything else.
It doesn't harm national security, but only so long as it's not in the supply-chain. They can't have Lockheed putting Anthropic's products into a fighter jet when Anthropic has already said their products will be able to refuse to carry out certain orders by their own autonomous judgement.
The government can refuse to buy a fighter jet that runs software they don't want.
Is it really reasonable to refuse to buy a fighter jet because somebody at Lockheed who works on a completely unrelated project uses Claude to write emails?
I’m not sure if you deliberately choose to not understand the problem. It’s not just that Lockheed can’t put Anthropic AI in a fighter jet cockpit, it’s that a random software engineer working at Lockheed on their internal accounting system is no longer allowed to use Claude Code, for no reason at all.
A supply chain risk is using Huawei network equipment for military communications. This is just spiteful retaliation because a company refuses to throw its values overboard when the government says so.
Or they can just not sign contracts with the DoD. They landed themselves in this situation by making a deal with the devil. At any rate, unless Finland is about to announce a massive surge in funding for their military this doesn't solve Anthropic's desire to suckle sweet taxpayer money off the military industrial complex's teat while simultaneously pretending to have principles.
>We are the employees of Google and OpenAI, two of the top AI companies in the world.
Does this mean you dipshits are going to stop your own domestic surveillance programs? You sold your souls to the devil decades ago, don't pretend like you have principles now.
>It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
I'm concerned that the context of the OP implies they're making this declaration after they've already sold products; it specifically mentions already having products in classified networks. This is the sort of thing they should have made clear before that happened. It's admirable (no pun intended) to have moral compunctions about how the military uses their products, but unless it was already part of their agreement (which I very much doubt), they are not entitled to countermand the military's chain of command by designing a product not to function in certain arbitrarily designated circumstances.
The article is crystal clear that these uses are not permitted by the current or any past contract, and the DoW wants to remove those exceptions.
> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now
It also links to DoW's official memo from January 9th that confirms that DoW is changing their contract language going forwards to remove restrictions. A pretty clear indication that the current language has some.
I think it largely hinges on what they mean by "included": does that mean it was specifically excluded by the terms of the contract, or does it mean that it's not expressly permitted? I doubt the DoD is used to defense contractors thinking they have the right to dictate policy regarding the use of their products, and it's equally possible that Anthropic isn't used to customers demanding full control over products (as evidenced by how many chatbots will arbitrarily refuse to engage with certain requests, especially erotic or politically incorrect subject matter). Sometimes both parties have valid cases when there's a contract disagreement.
>A pretty clear indication that the current language has some.
Or alternatively that there is some disagreement between the DoD and Anthropic as to how the contract is to be interpreted and that the DoD is removing the ambiguity in future contracts.
This is all just completely wrong. Anthropic explicitly stated in their usage policy that use of their products is not permitted for mass surveillance of American citizens or fully autonomous weapons, and that policy was part of the contract that DoW signed. Anthropic then asked DoW whether these clauses were being adhered to after the US's unlawful kidnapping of Maduro. DoW is now attempting to break the contract that they signed and threatening them, because how dare a company tell the psycho dictators what to do.
Maduro is being prosecuted and there was a warrant out for his arrest. There is no magic soil exemption if you commit a crime against the United States and flee to another country.
What on earth does "Two such use cases have never been included in our contracts with the Department of War" mean? Did they specifically forbid it in the contract or was it literally just not included? Because I can tell you that if it's the latter that does not generally entitle them to add extra conditions to the sale ex post facto.
>threatening them because how dare a company tell the psycho dictators what to do.
Dude it's a private defense contractor leveraging its control over products it has already installed into classified systems to subvert chain of command and set military doctrine. That's not their prerogative. This isn't a "psycho dictator" thing.
They have always maintained an acceptable use policy forbidding these things. It was not controversial, because the Pentagon claims they have no interest in doing them in the first place, until a regime-aligned executive at Palantir decided to curry favor by provoking a conflict.
The OP specifically mentions this in the context of "systems" (a vague, poorly-defined term) and "classified networks" in which Anthropic products are already present. Without more details on what "systems" these are or the terms of the contracts under which these were produced it's difficult to make a definitive judgement, but broadly speaking it's not a good thing if the government is relying on a product which Anthropic has designed to arbitrarily refuse orders by its own judgement.
I really don't see how anybody could think a private defense contractor should be entitled to countermand the military by leveraging the control it has over products it has already sold. Maybe the terms of their contract entitled them to some discretion over what orders the product will carry out, but there's no such claim in the OP.
>I really don't see how anybody could think a private defense contractor should be entitled to countermand the military by leveraging the control it has over products it has already sold. Maybe the terms of their contract entitled them to some discretion over what orders the product will carry out, but there's no such claim in the OP.
I don't think that is what is happening. What is most likely happening is that they want Anthropic to produce new systems due to the success of the previous ones, but Anthropic is refusing to do so because the new systems are against their mission. What the DoD seems to be attempting, on one hand, is to call them a supply-chain risk to limit Anthropic's business opportunities with other companies, while on the other hand simultaneously invoking the DPA to compel them to make the new system. But why would the government, citing a need for national preparedness, compel a company to make a system for them when it has designated that company such a supply-chain risk that other companies providing government services are forbidden from doing business with it? It doesn't really make sense, other than from a pure coercion perspective.
>limit Anthropic's business opportunities with other companies
Does it necessarily prevent other companies from doing business with them or does it prevent other companies from subcontracting them on government projects? The term "supply chain" leads me to think it's the latter.
The question is, after witnessing Hegseth crash out against one of their fellow contractors over practically nothing, will contractors want to walk the tightrope of doing business with Anthropic but promising it never ends up feeding into a government contract?
How is that in any way a "tightrope"? You're on a government contract, so you fulfill the spec and don't use components the customer doesn't trust. This isn't an arbitrary jobs program to boost the economy; you're there to produce a product for a customer.
Most government software contracts I'm familiar with are closer to "The government too may use this general purpose product" than "we're building something from scratch just for the DoD". I know the second kind do exist, and I'd believe you if you told me I'm just completely wrong about the relative frequency.
I think a better question is how much people consume in general. There are plenty of people who replace their car every 2-3 years but that doesn't get nearly as much scorn and mockery.
Oh boy, something on the HN front page I have direct personal experience with (CIA polygraph exams in general, not this specific one).
>Then she asked if I'd read about polygraphs. I said I'd just finished A Tremor in the Blood. She claimed she'd never heard of it. I was surprised. It's an important book about her field, I would have thought all polygraphers knew of it.
They'll also ask you about antipolygraph.org, which is the site OP is hosted on. The CIA is well aware that it is one of the top search results for polygraph. My examiner actually had the whole expanded-universe backstory behind the site memorized and went on a rant about George Maschke, the site's owner, who lost his job at a major defense contractor then ran away to some place in Scandinavia from which they are unable to extradite him.
BTW by reading this comment you may have already failed your polygraph exam at the CIA.
>My hand turned purple, which hurt terribly.
OP should have included more context here; part of the polygraph test involves a blood pressure cuff which is put on EXTREMELY tight, far tighter than any doctor or nurse would ever put it on. It is left on for the entire duration of the test (approximately 8 hours). My entire arm turned purple and I remember feeling tremors.
>The examiner wired me up. He began with what he called a calibration test. He took a piece of paper and wrote the numbers one through five in a vertical column. He asked me to pick a number. I picked three. He drew a square around the number three, then taped the paper to the back of a chair where I could see it. I was supposed to lie about having selected the number three.
This is almost certainly theatrical. It is true that they need to establish a "baseline of truth" by comparing definite falsehoods with definite truth but the way they get that is by asking highly personal questions where they can reasonably expect at least one of them will be answered untruthfully. They'll ask about drugs, extramarital affairs, crimes you got away with, etc. Regarding the one about crimes, supposedly your answer will not be given to law enforcement but if you actually trust the CIA on this you're probably too retarded to work there anyways. I'm not confident that lying to somebody who has specifically directed you to lie to him would produce the same sort of physical response as genuine lies.
>On the bus back to the hotel, a woman was sobbing, "Do they count something less than $50 as theft?" I felt bad for her because she was crying, but I wondered why a petty thief thought she could get into the Agency.
If she failed, this isn't why. You're supposed to lie at least once, or else they have no baseline for truth (see above). In addition, the point of the polygraph isn't just to evaluate your loyalty to the United States but also to make the agency aware of anything that could be used by an adversary to compromise you in the future. Somebody who shoplifted $50 worth of merchandise isn't a liability, but somebody who shoplifted $50 worth of merchandise and believes it would damage their career if their employer found out is a huge liability, even if they are wrong and their employer does not actually care. Putting employees under interrogation until they break down and confess to things like this, so that they know it has not endangered their employment, is one of the primary objectives of the polygraph.
>A pattern emerged. In a normal polygraph, there was often a gross mismatch between a person and the accusations made against them. I don't think the officials at Polygraph had any idea how unintentionally humorous this was. Not to the person it happened to, of course, but the rest of us found it hysterically funny.
As said above, the whole point is to make you break down and confess to something embarrassing. If you don't confess to anything it is assumed that you are still hiding something from them and you could fail.
>"Admit it, you're deeply in debt. Creditors are pounding on your door!" I said. "You've just revealed to me that you haven't bothered to pull my credit report. Are you lazy, or are you cheap?"
This is another thing they look for that doesn't necessarily indicate you are compromised but could be used to compromise you in the future. Unlike the above example of petty theft, this is actually something that can disqualify you, since obviously the agency isn't going to pay off your credit card.
>I was so frustrated, I started to cry.
Working for the government is extremely unhealthy, because these people only surround themselves with other government employees, and somehow they get this idea in their heads that they have to work for the federal government, or work indirectly for it via a defense contractor (they call this "private sector", even though no sane person would think that adding a middleman between you and the people who tell you what to do changes anything). In some cases this is justified, because there are many career paths which are impossible or illegal to profit from, and the only people who will pay you to do them are the government. There are literally people whose entire adult lives are spent looking at high-altitude aerial photography and circling things with a Sharpie, so I can kind of understand how they might be devastated if they lose their clearance. But at least 75% of all glowies have some skill that would be in demand by actual private industry if they didn't suffer from this weird "battered housewife syndrome" that compels them to keep working for the government even though it subjects them to annual mandatory bullying sessions.
>I'd just refused a polygraph. I felt like Neville Longbottom when he drew the sword of Gryffindor and advanced on Lord Voldemort. I was filled with righteous indignation, and it gave me courage.
Again, glowies are so fucking lame. This person just unironically compared failing a polygraph exam to the climactic scene from a seven-volume series of children's books about an 11-year-old boy in England who goes to a special high school for wizards.
> part of the polygraph test involves a blood pressure cuff which is put on EXTREMELY tight, far more so than any doctor or nurse would ever put it on. It is left on for the entire duration of the test (approximately 8 hours). My entire arm turned purple and i remember feeling tremors.
It's the CIA, manipulation is their speciality. MK-ULTRA didn't just study drugs and wacky pagan magic, they also studied more mundane methods of mind control which are undoubtedly real.
The CIA understands why beautiful young women with a multitude of better options will stay slavishly dedicated to the one boyfriend who beats them, why people stay in cults with outrageous belief systems, and how fascist and communist dictatorships could motivate entire nations to commit genocide against their neighbors and fellow countrymen.
BTW the bit I described above about compelling you to tell them your embarrassing personal secrets so that they won't be used to blackmail you bears a striking resemblance to anonymously confessing your sins to a priest so that you will be forgiven in Christ's name.
> the site's owner who lost his job at a major defense contractor then ran away to some place in scandanavia from which they are unable to extradite him.
Eh, all the Scandinavian countries (Denmark, Norway and Sweden) definitely have extradition treaties with the U.S.
I got yelled at for inadvertently "closing my sphincter" (the examiner's exact words) the one time I tried to take a polygraph at the CIA, they do actually care about that.