I truly wonder what "unsafe" scenarios an image generator could be used for? Don't we already have software that can do pretty much anything if a professional human is using it?


The major risky use cases for image generators are (a) sexual imagery of kids and (b) public personalities in various contexts usable for propaganda.


I would say the barrier to entry is stopping a lot of ‘candid’ unsafe behaviour. I think you allude to this yourself by implying that it currently requires a professional to achieve the same results.

But giving that ability to _everyone_ will lead to a huge increase in undesirable and targeted/local behaviour.

Presumably it enables any creep to generate what they want by virtue of being able to imagine it and type it, rather than learn a niche skill set or employ someone to do it (who is then also complicit in the act).


"undesirable local behavior"

Why don't you just say you believe thought crime should be punishable?


I imagine they might talk about things like students making nudes of their classmates and distributing them.

Or maybe not. It's hard to tell when nobody seems to want to spell out what behaviors we want to prevent.


Would it be illegal for a student who is good at drawing to paint a nude picture of an unknowing classmate and distribute it?

If yes, why doesn't the same law apply to AI? If no, why are we only concerned about it when AI is involved?


IANAL, but that sounds like harassment. I assume the legality depends on the context (did the artist previously date the subject? Lots of states have laws against harassment and revenge porn that seem applicable here [1]. Are you coworkers? Etc.), but I don't see why such laws wouldn't apply to AI-generated art as well. It's the distribution that's really the issue in most cases. If you paint secret nudes and keep them in your bedroom and never show them to anyone, it's creepy, but I imagine not illegal.

I'd guess that Stability is concerned with their legal liability. Perhaps they are also decent humans who don't want to make a product that is primarily used for harassment (and whether they are decent humans or not, I imagine it would affect the bottom line eventually if they developed a really bad rep, or if a bunch of politicians and rich people were targeted by deepfake harassment).

[1] https://www.cagoldberglaw.com/states-with-revenge-porn-laws/...

^ A lot of, but not all of, those laws seem pretty specific to photographs/videos that were shared with the expectation of privacy, and I'm not sure how they would apply to a painting/drawing. I certainly don't know how the courts would handle deepfakes that are indistinguishable from genuine photographs. I imagine juries might tend to side with the harassed rather than a bully who says "it's not illegal cause it's actually a deepfake but yeah i obviously intended to harass the victim".


Because AI lowers the barrier to entry; using your example, few people have the drawing skills (or the patience to learn them) or would put in the effort to make a picture like that, but the barrier is much lower when it takes five seconds to type out a prompt.

Second, the tool will become available to anyone, anywhere, not just in a localised school. If generating naughty nudes is frowned upon in one place, another will have no qualms about it. And that's just decency; then there's the discussion about legality.

Finally, when person A draws a picture, they are responsible for it - they produced it. Not the party that made the pencil or the paper. But when AI is used to generate it, is all of the responsibility still with the person that entered the prompt? I'm sure the T's and C's say so, but there may still be lawsuits.


Right, these are the same arguments against uncontrolled empowerment that I imagine mass literacy and the printing press faced. I would prefer to live in a society where individual freedom, at least in the cognitive domain, is protected by a more robust principle than "we have reviewed the pros and cons of giving you the freedom to do this, and determined the former to outweigh the latter for the time being".


You seem to be very confused about civil versus criminal penalties....

Feel free to make an AI model that does almost anything, though I'd probably suggest that it not make porn of minors, as that is criminal in most jurisdictions; short of that, it's probably not a criminal offense.

Most companies are only very slightly worried about criminal offenses; they are far more concerned about civil trials, where the requirement for evidence is far lower. An AI creator writing "Hmm, this could be dangerous" in an email is all you need to lose a civil trial.


Why do you figure I would be confused? Whether any liability for drawing porn of classmates is civil or criminal is orthogonal to the AI comparison. The question is if we would hold manufacturers of drawing tools or software, or purveyors of drawing knowledge (such as learn-to-draw books), liable, because they are playing the same role as the generative AI does here.


Because you seem to be very confused about civil liability for most products. Manufacturers are commonly held liable for users' use of their products; for example, look at any number of products that have caused injury.


Surely those are typically cases where the manufacturer was taken to have made an implicit promise of safety to the user and their surroundings, and the user got injured. If your fridge topples onto you and you get injured, the manufacturer might be liable; if you set up a trap where you topple your fridge onto a hapless passer-by, the manufacturer will probably not be liable towards them. Likewise with the classic McDonald's coffee spill liability story - I've yet to hear of a case of a coffee vendor being held liable over a deliberate attack where someone splashed someone else with hot coffee.


> You seem to be very confused about civil versus criminal penalties....

Nah, I think it's a disagreement over whether a tool's maker gets blamed for evil use or the tool's user.

It's a similar argument over whether or not gun manufacturers should have any liability for their products being used for murder.


>It's a similar argument over whether or not gun manufacturers

This is really only a debate in the US, and only because it's directly written into the Constitution. Pretty much no other product works that way.


Are we on the same HN that bashes Facebook/Twitter/X/TikTok/ads because they manipulate people, spread fake news or destroyed attention span?


Photoshop also lowers that barrier to entry compared to pen and pencil. Paper also lowers the barrier compared to oil on canvas.

Affordable drawing classes and YouTube drawing tutorials lower the barrier to entry as well.

Why on earth would manufacturers of pencils, papers, drawing classes, and drawing software feel responsible for censoring the result of combining their tool with the brain of their customer?

A sharp kitchen knife significantly lowers the barrier to entry for murdering someone. Many murders are committed every day using a kitchen knife. Should kitchen knife manufacturers blog about this every week?


I agree with your point, but I would be willing to bet that if knives were invented today rather than having been around a while, they would absolutely be regulated and restricted to law enforcement if not military use. Hell, even printers, maybe not if invented today but perhaps in a couple years if we stay on the same trajectory, would probably require some sort of ML to refuse to print or "reproduce" unsafe content.

I guess my point is that I don't think we're as inconsistent as a society as it seems when considering things like knives. It's not even strictly limited to thought crimes/information crimes. If alcohol were discovered today, I have no doubt that it would be banned and made Schedule I.


> Hell, even printers, maybe not if invented today but perhaps in a couple years if we stay on the same trajectory, would probably require some sort of ML to refuse to print or "reproduce" unsafe content.

Fun fact: Many scanners and photocopiers will detect that you're trying to scan/copy a banknote and will refuse to complete the scan. One of the ways they do this is by detecting the EURion constellation.

https://en.wikipedia.org/wiki/EURion_constellation
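The actual constellation geometry checked by copier firmware isn't published, but as a toy sketch (made-up reference coordinates, guessed Hough parameters), shape-matching a fixed dot pattern might look something like this in Python:

    # Toy sketch only: the real EURion layout is unpublished, so the
    # "reference" coordinates and Hough parameters below are made up.
    from itertools import combinations
    import cv2
    import numpy as np

    # Placeholder five-point constellation (NOT the real EURion spec).
    REFERENCE = np.array([(0.0, 0.0), (1.0, 0.2), (2.1, 0.0),
                          (0.8, 1.1), (1.6, 1.4)])

    def signature(points):
        # Sorted pairwise distances normalized by the largest one:
        # invariant to translation, rotation, and uniform scaling.
        pts = np.asarray(points, dtype=float)
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        dists = np.sort(d[np.triu_indices(len(pts), k=1)])
        return dists / dists[-1]

    REF_SIG = signature(REFERENCE)

    def contains_constellation(path, tol=0.02):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        # Find small filled circles (parameters are guesses).
        found = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=5,
                                 param1=100, param2=20, minRadius=2,
                                 maxRadius=8)
        if found is None:
            return False
        centers = found[0][:, :2]
        # Brute force every 5-circle subset against the reference shape.
        for idx in combinations(range(len(centers)), 5):
            if np.max(np.abs(signature(centers[list(idx)]) - REF_SIG)) < tol:
                return True
        return False

Real firmware is presumably far more robust (and secret) than this, but the scale- and rotation-invariant distance signature is the basic idea behind matching a fixed dot pattern.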


Can you point to other crimes that are based on skill or effort?


That's not even necessarily a bad thing (as a whole - individually it can be). Now, any leaked nudes can be claimed to be AI. That'll probably save far more grief than it causes.


You’re very welcome to ask for clarification - I kept it abstract because there is a lot of grey area and it’s something we need to understand and discuss as technology and society evolves.

To spell out one such instance: I would like to live in a world where it is not trivial to depict and misrepresent me (or anyone) in a way that is photorealistic to the point that it can be used to mislead others.

Whether that means we need to outright prevent it, or have some kind of authenticity mechanism, or some other yet-to-be-discovered solution? I do not know, but you now have my goalposts.


Such activity is legal per Ashcroft v. Free Speech Coalition (2002). Artwork cannot be criminalized because of its contents.


Stability is not an American company. The US is not the only country in the world.


Artwork is currently criminalized because of its contents. You cannot paint nude children engaged in sex acts.


The case I literally just referenced allows you to paint nude children engaged in sex acts.

> The Ninth Circuit reversed, reasoning that the government could not prohibit speech merely because of its tendency to persuade its viewers to engage in illegal activity.[6] It ruled that the CPPA was substantially overbroad because it prohibited material that was neither obscene nor produced by exploiting real children, as Ferber prohibited.[6] The court declined to reconsider the case en banc.[7] The government asked the Supreme Court to review the case, and it agreed, noting that the Ninth Circuit's decision conflicted with the decisions of four other circuit courts of appeals. Ultimately, the Supreme Court agreed with the Ninth Circuit.


I appreciate you taking the time to lay that out, I was under the opposite impression for US law.


Students already share nudes every day.

Where are the Americans asking about Snapchat? If I were a developer at Snapchat I could prolly open a few Blob Storage accounts and feed a darknet account big enough to live off of. You people are so manipulatable.


Students don’t share photorealistic renders of nude classmates getting gangbanged though


What do you mean should be... it 100% is.

In a large number of countries, if you create an image that represents a minor in a sexual situation, you will find yourself on the receiving end of the long arm of the law.

If you are the maker of an AI model that allows this, you will likewise find yourself on the receiving end of the long arm of the law.

Moreover, many of these companies operate in countries where thought crime is illegal. Now, you can argue that said companies should not operate in those countries, but companies will follow the money every time.


I think it's pretty important to specify that you have to willingly seek and share all of these illegal items. That's why this is so sketchy. These things are being baked with moral codes that'll _share_ the information, incriminating everyone. Why? Why not just let the tool work and leave it up to the criminal to share their crimes? People are such authoritarian shit-stains, and acting like their existence is enough to justify their stance is disgusting.


>I think it's pretty important to specify that you have to willingly seek and share all of these illegal items.

This is not obvious at all when it comes to AI models.

>People are such authoritarian shit-stains

Yes, but this is a different conversation altogether.


Once it is outside your mind and in a physical form, is it still just a thought, sir?


In my country there is legal precedent establishing that private, unshared documents are tantamount to thought.


[Edited: I'm realizing the person I'm responding to is kinda unhinged, so I'm bowing out of the convo.]


Similar to why Google's latest image generator refuses to produce a correct image of a 'Realistic, historically accurate, Medieval English King'. They have guardrails and system prompts set up to align the generator's output with the company's values, or else someone would produce Nazi propaganda or worse, which (for some reason) would be attributed to Google and their AI rather than to the user who found the magic prompt words.
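For what it's worth, the mechanism being described can be approximated in a few lines. This is a toy sketch, not Google's actual system: generate_image and the blocked-terms set are hypothetical placeholders, and real guardrails reportedly use trained classifiers on both the prompt and the generated image rather than keyword matching.

    # Illustrative sketch of a prompt-level guardrail plus a steering
    # system prompt; everything here is a hypothetical placeholder.
    BLOCKED_TERMS = {"example_banned_term"}  # placeholder, not a real policy

    def generate_image(prompt: str) -> str:
        # Stand-in for the actual text-to-image model call.
        return f"<image for: {prompt!r}>"

    def guarded_generate(prompt: str):
        if any(term in prompt.lower() for term in BLOCKED_TERMS):
            return None  # refuse, as the products described above do
        # Prepend a system prompt to steer output toward company policy.
        return generate_image("Follow the content policy. " + prompt)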


Yeah this is probably the most realistic reason


Eh, a professional human could easily pick the locks on the majority of front doors out there. Nevertheless, I don't think we're going to give up on locking our doors any time soon.


For some scenarios, it's not the image itself but the associations that the model might possibly make from being fed a diet of 4chan and Stormfront's unofficial YouTube channel. The worry is over horrible racist shit, like if you ask it for a picture of a black person, and it outputs a picture of a gorilla. Or if you ask it for a picture of a bad driver, and it only manages to output pictures of Asian women. I'm sure you can think up other horrible stereotypes that would result in a PR disaster.



