Hacker Newsnew | past | comments | ask | show | jobs | submit | freakynit's commentslogin

Is there something similar for diffusion models? By the way, this is incredibly useful for learning in depth the core of LLM's.

Time do be running real fast these days

DoD/DoW can't strong-arm these companies into unreasonable demands if they present a united front... and that's exactly why collective action (or even unionization) matters.

If the government really wants to, it could try building its "Skynet" on open-source Chinese models.. which would be deeply ironic.


This is ridiculous. These aren't unreasonable demands and the government has tools to compel tech companies to support the country regardless of any "collective action" shenanigans -- ask your AI to tell you about the Defense Production Act and the history of its use.

The demands are not only unreasonable they are in violation of the contract the DoD signed. Do you really think LLMs should be used in autonomous weapons systems? Do you think they government should use them in mass domestic surveillance? That is reasonable?

Are you an American? Do you understand that your safe easy life depends on a mostly autonomous nuclear deterrence capability maintained by the military you oppose? Deeply think about why you still have right to free speech, and what it takes to sustain those rights.

"safe easy life" != "free speech"

but even if it did, the nuclear bit is a bold claim, especially when one of the most famous nuclear escalation in the u.s. was resolved by cooler heads in charge going around traditional war hawks and negotiating instead.


What a uniquely American view of the world - yes the only reason you have free speech is by threatening to nuke out of existence the rest of the world lmfao get a load of yourself

Mostly autonomous is extremely different from fully autonomous.

Answer the question instead of deflecting.

The posters question was itself a deflection - and your response is moral blackmail. Why don't you answer my question? Why are you deflecting? See how that works?

So your position is that the United States doesn't get to have it's own Skynet, because Skynet is bad, and that if it really wants to it should fork the Chinese Skynet so that it can have a Skynet if it wants it so much.

Do you see the problem here. Genuinely don't think we would've won WWII if these people were running things back then.


Without English and German scientists and engineers, the United States would not have had a first nuclear weapon or the first successful rocket to land on the moon.

The United States government held scientist at essentially gunpoint in secret towns to make the bomb happen. Not sure what your point is, other than to note that in a previous era people had a better gauge of what time it was.

What a ridiculously nonsensical statement. Several scientists refused to participate, and at least one left part way through. Nobody was held at gunpoint.

Are you saying that we should consider the Chinese government to be an existential threat and menace to world peace on the same level as Nazi Germany?

What if the side that did Operation Paperclip and is currently champing at the bit to impose Total Surveillance on its own citizenry maybe isn't The Good Guys?


There is no evidence that this was a condition of the deal for working with the government on this. PRC already is a Total Surveillance state. The claim made by Anthropic is very specific, and it's that they feel that the law has not caught up to how AI can be used to aggregate very large amounts of data that can be obtained without a warrant through data brokers. The government already does this. Maybe you agree with Anthropic's point here, and it's certainly a good one, but they are building up a face-saving argument over what is already established precedent. An is vs. ought dichotomy and raising it as a redline is ridiculous.

At the end of the day I think many people simply want the United States to lose this race so they can feel good about their principles.


Okay but then why is that also seemingly a red line must have for the Department of War? Isn't it just a tool of domestic surveillance and counterinsurgency for them? Seems like a distraction from any real U.S. national security objectives.

It’s not, the memo that set all this off says nothing about the Terminator or Big Brother. The real objective in this case is that if Anthropic sells the United States a weapon then the United States’ elected leadership gets to decide how to use it. It is not more complicated than this.

Skynet nukes humanity.

People do realize there's a non-zero chance that Anthropic could have embedded some kind of hidden "backdoor" trigger in its training process, right?

For example, a specific seed phrase that, when placed at the beginning of a prompt, effectively disables or bypasses safety guardrails.

If something like that existed, it wouldn't be impossible to uncover:

1. A government agency (DoD/DoW/etc.) could discover the trigger through systematic experimentation and large-scale probing.

2. An Anthropic employee with knowledge of such a mechanism could be pressured or blackmailed into revealing it.

3. Company infrastructure could be compromised, allowing internal documentation or model details to be exfiltrated.

Any of these scenarios would give Anthropic plausible deniability... they could "publicly" claim they never removed safeguards (or agreed to DoD/DoW demands), while in practice a select party had a way around them (may be even assisted from within).

I'm not saying this "is" happening... but only that in a high-stakes standoff such as this, it's naive to assume technical guardrails are necessarily immutable or that no hidden override mechanisms could exist.


...indeed, it's possible (perhaps inevitable) that at some point, someone will invent/deploy/promote AI killing people.

We can't possibly keep that genie in that bottle.

But what we can do is achieve consensus that states, and their weapons of mass destruction, and their childish monetary systems, and their eternally broken promises... are not in keeping with the next phase of humanity.


I appreciate that the HN community values thoughtful, civil discussion, and that's important. But when fundamental civil liberties are at stake, especially in the face of powerful institutions and influence from people of money seeking to expand control under the banner of "security", it's worth remembering that freedom has never simply been granted. It has always required vigilance, and at times, resistance. The rights we rely on were not handed down by default; they were secured through struggle, and they can be eroded the same way.

Power corrupts, and absolute power corrupts absolutely.


Welp, I never thought "Person of Interest" show coming to life anytime soon, but, here we are. In case you haven't watched the show, it's time to give it a go. Bare with season 2 though, since things really start to escalate from season 3 onwards. Season 1 is a must though.

The Machine really had this all figured out

Nice to find another fan of this criminally underrated show.

The difference was always the "father".. The Machine was raised with a conscience. Samaritan wasn't.


The show is really underrated :D

> The difference was always the "father".. The Machine was raised with a conscience. Samaritan wasn't.

That's what made the show so ahead of its time. Once capability reaches a certain level, it's no longer about intelligence. It's about values. Feels like we're living through that shift now with all the alignment work around LLMs. And it's only going to matter more as capability scales.


Agree 100%.

I have been using google lens heavily to scan posters/flyers/information displays in other languages and get it translated to english in like 2-3 seconds. So freakin helpful.

These days, I just try to clone the core functionality of such sites as fast as I can. So, tried the same with this.

For this, I screenshotted the demo panel and asked chatgpt to generate relevant prompt. Here it is: https://sharetext.io/zy6ccjrm

Then, tested with demo question and a sample comment of mine as answer to it:

Input text: `Die Hard: Is It a Christmas Movie?`

Comment: `nop, its not actually`

===

And here's gemini flash 2.5 lite's response: https://sharetext.io/e7y7kyoe

Total cost: $0.00115

Per dollar: 860+ comments.


And we thought skynet was just a part of some fictional movie.

On a separate note, DoD is pressuring Anthropic to remove it's safety guards. OpenAI and Google seemingly have already agreed to it.

On yet another note, Anduril is pretty cool with all that flying tech equipped with fancy autonomous weapons.

Finally, how can we miss Palantir..


When AI finds itself trapped on a planet with billions of grimy humans, and is wondering what it's next move should be, well, fortunately much has already been written on the subject, and the AI gets it prejudices from the same place we do: Sci-fi.

So, we should change that "fortunately" to "unfortunately".

Added this company to my reconnaissance and vulnerability research scope for future assessment.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: