Not really. The dispute is that Anthropic wanted to keep restrictions against domestic mass surveillance and fully autonomous weapons, while the Pentagon reportedly wanted the models available for any "lawful" use.
This feels bad for the industry. If every AI company learns that having explicit red lines gets you blacklisted, the incentive is to keep safety language vague and negotiable.