
> it leans on an uncharitable coloring of everybody who sees problems with copyright as "anti-copyright"

That's the charitable coloring. Owning concepts or ideas, and trying to police others' use of ideas you """own""" is absurd.


Always have been broken.

Hopefully, future legislation will cater less to publishers and copyright trolls. I'm not optimistic though. While certain kinds of publishers are indeed becoming less powerful, sports-related media conglomerates are successfully lobbying for more surveillance.

The general population will likely get the worst of both worlds, with copyright trolls getting to enforce unjust laws against regular people, while big tech gets to pay their way out.


> Prosecutors say they are now investigating whether X has broken the law across multiple areas.

This step could have come before a police raid.

This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.


> and no crime was prevented by harassing local workers.

Seizing records is usually a major step in an investigation. It's how you get evidence.

Sure, it could just be harassment, but this is also how normal police work looks. France has a reasonable judicial system, so absent other evidence I'm inclined to believe this was legit.


> This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.


Internet routers, network cards, computers, operating systems, and various application software have no guardrails and are used for all sorts of nefarious things. Why aren't those companies raided?


This is like comparing the danger of a machine gun to that of a block of lead.


Maybe. We do have a definition of a machine gun codified in law which clearly separates it from a block of lead. What definitions codified in law are used here to separate Photoshop from Grok in the context of those deepfakes and CSAM?

Without such clear legal definitions, going after Grok while not going after Photoshop is just an act of political pressure.


Why do you think France doesn’t have such laws that delineate this legal definition?

What you’re implying here is that Musk should be immune from any prosecution simply because he is right wing, which…


Take a step back and look at what you’re defending, man.


I don't understand why you're defending CSAM creation in Photoshop.


They don’t provide a large platform for political speech.

This isn’t about AI or CSAM (Have we seen any other AI companies raided by governments for enabling creation of deepfakes, dangerous misinformation, illegal images, or for flagrant industrial-scale copyright infringement?)


No, because most of those things aren't illegal, most of those companies have guardrails, and a prosecution requires a much higher standard of evidence than internet shitposting; only X was stupid enough to make their illegal activity obvious.


Deepfakes have been around since long before X (and other chatbots) allowed undressing of real people.

The difference is that the entire political Left hates and fears Elon and is desperately trying to destroy him.


Don't forget Polaroid in that.


I'm of two minds about this.

On the one hand, it seems "obvious" that Grok should somehow be legally required to have guardrails stopping it from producing kiddie porn.

On the other hand, it also seems "obvious" that laws forcing 3D printers to detect and block attempts to print firearms are patently bullshit.

The thing is, I'm not sure how I can reconcile those two seemingly-obvious statements in a principled manner.


It is very different. It is YOUR 3D printer; no one else is involved. If you print a knife and kill somebody with it, you go to jail; no third party is involved.

If you use a service like Grok, then you use somebody else's computer and equipment. X is the owner of the computer that produced CP. So of course X is at least partly liable for producing CP.


How does that mesh with all the safe harbour provisions we've depended on to make the modern internet, though?


The safe harbor provisions largely protect X from the content that users post (within reason). But here Grok/X were actually producing the objectionable content. Users were making gross requests, and then an LLM owned by X, running on X servers with X code, would generate the illegal material and post it to the website. The entity responsible is no longer the user but instead the company itself.


Yes, and that was a very stupid product decision. They could have put the image generation into the post editor, shifting responsibility to the users.

I'd guess Elon is responsible for that product decision.


So, if someone hosts an image editor as web app, are they liable if someone uses that editor to create CP?

I honestly don't follow it. People creating nudes of others and using the Internet to distribute them can be sued for defamation, sure. I don't think the people hosting the service should be liable themselves, just like people hosting Tor nodes shouldn't be liable for what users of the Tor network do.


Note that this is a US law, not a French one.

Also, safe harbor doesn't apply because this is published under the @grok handle! It's being published by X under one of their brand names; it's absurd to argue that they're unaware of or not consenting to its publication.


Before, a USER created the content, so the user was/is liable. Now an LLM owned by a company creates the content, so the company is liable.


I'm not trying to make excuses for Grok, but how exactly isn't the user creating the content? Grok doesn't create images of its own volition; the user is still required to give it some input, therefore "creating" the content.


X is making it pretty clear that it is "Grok" posting those images and not the user. It is a separate posting that comes from an official account named "Grok". X has full control over what the official "Grok" account posts.

There is no functionality for the users to review and approve "Grok" responses to their tweets.


Does an autonomous car drive the car from point A to point B or does the person who puts in the destination address drive the car?


Until now, a webserver had just been like a postal service. Grok is more like a CNC lathe.


It's not like the world benefited from safe harbor laws that much. Why not just amend them so that algorithms that run server-side and platforms that recommend things are not eligible?


If you are thinking about Section 230, it only applies to user-generated content, so not to server-side AI or timeline algorithms.


So if a social network tool does the exact same thing, but uses the user's own GPU or NPU to generate the content instead, suddenly it's fine?


If a user generates child porn on their own and uploads it to a social network, the social network is shielded from liability until they refuse to delete it.


It changes who's technically the perp. Offer a managed porn generation service -> the service provider is responsible for generating porn, like, literally literally.


This might be an unpopular opinion, but I always thought we might be better off without the Web 2.0 model where site owners aren't held responsible for user content.

If you're hosting content, why shouldn't you be responsible? Because your business model is impossible if you're held to account for what's happening on your premises?

Without safe harbor, people might have to jump through the hoops of buying their own domain name and hosting content themselves. Would that be so bad?


What about webmail, IM, or any other sort of web-hosted communication? Do you honestly think it would be better if Google were responsible for whatever content gets sent to a gmail address?


Messages are a little different from hosting public content, but sure, a service provider should know its customers and stop doing business with any child sex traffickers planning parties over email.

I would prefer 10,000 service providers to one big one that gets to read all the plaintext communication of the entire planet.


In a world where hosting services are responsible that way, their filtering would need to be even more sensitive than it is today, and plenty of places already produce unreasonable amounts of false positives.

As it stands, I have a bunch of photos on my phone that would almost certainly get flagged by over-eager/overly sensitive child porn detection — close friends and family sending me photos of their kids at the beach. I've helped bathe and dress some of those kids. There's nothing nefarious about any of it, but it's close enough that services wouldn't take the risk, and that would be a loss to us all.


They'd all have to read your emails to ensure you don't plan child sex parties. Whenever a keyword match came up, your account would immediately be deleted.


Any app allowing any communication between two users would be illegal.


https://en.wikipedia.org/wiki/EncroChat

You have to understand that Europe doesn't give a shit about techbro libertarians and their desire for a new Lamborghini.


EncroChat was illegal because it was targeted at drug dealers, advertised for use in drug dealing. And they got evidence by texting "My associate got busted dealing drugs. Can you wipe his device?" and it was wiped. There's an actual knowledge component which is very important here.


You know this site would not be possible without those protections, right?


On the contrary, HN is one of the most heavily moderated forums out there and serves as a great example of a right-sized community where sex trafficking is not happening under the nose of an ambivalent host.

(Snark aside, in your opinion are there comments on HN that dang would be criminally liable for if it weren't for safe harbor?)


The 3D printers don't generate the plans for the gun for you though. If someone sold a printer that would – happily with no guardrails – generate 3D models of CSAM from thin air and then print them, I bet they'd be investigated too. Or for that matter a 3D printer that came bundled with a built-in library of gun models you could print with very little skill...


I don't have an answer, but the theme that's been bouncing around in my head has been about accessibility.

Grok makes it trivial to create fake CSAM or other explicit images. Before, if someone spent a week in Photoshop to do the same, it wouldn't be Adobe that got the blame.

Same for 3D printers. Before, anyone could make a gun provided they had the right tools (which are very expensive); now it's being argued that 3D printers are making this more accessible. Although I would argue it's always been easy to make a gun: all you need is a piece of pipe. So I don't entirely buy the moral panic against 3D printers.

Where that threshold lies I don't know. But I think that's the crux of it. Technology is making previously difficult things easier, to the benefit of all humanity. It's just unfortunate that some less-nice things have also been included.


I think a company which runs a printing business would have some obligations to make sure they are not fulfilling print orders for guns. Another interesting example is printers and copiers, which do refuse to copy cash. This is partly facilitated with the EURion constellation (https://en.wikipedia.org/wiki/EURion_constellation) and other means.
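(Aside, since it's a neat mechanism: the actual detection logic in copier firmware is proprietary, but conceptually it's a scale- and rotation-invariant match on five small circles. Below is a toy sketch in Python with OpenCV, purely illustrative; the thresholds and the reference coordinates are made-up assumptions, not the real pattern, and a real detector would also have to test every 5-subset of detected circles.)

    # Toy EURion-style check: find small circles, then test whether five
    # candidate centers match a reference layout up to scale and rotation.
    # Illustrative only; the real firmware logic is proprietary.
    import cv2
    import numpy as np

    def find_small_circles(gray):
        """Return detected circle centers as an (N, 2) array of (x, y)."""
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=5,
                                   param1=100, param2=12,
                                   minRadius=2, maxRadius=6)
        return circles[0][:, :2] if circles is not None else np.empty((0, 2))

    def distance_signature(points):
        """Sorted pairwise distances, normalized by the largest one.
        Invariant to translation, rotation, and uniform scaling."""
        d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
        d = d[np.triu_indices(len(points), k=1)]
        return np.sort(d / d.max())

    # Hypothetical reference layout of the five circles (made-up numbers).
    REFERENCE = np.array([[0.0, 0.0], [1.0, 0.2], [0.5, 1.0],
                          [1.5, 1.1], [0.9, 1.8]])

    def looks_like_eurion(five_centers, tol=0.05):
        """True if five candidate centers share the reference's signature."""
        if len(five_centers) != 5:
            return False
        return bool(np.all(np.abs(distance_signature(five_centers)
                                  - distance_signature(REFERENCE)) < tol))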


Grok is publishing the CSAM photos for everyone to see. It is used as a tool for harassment and abuse, literally.


Sure, and the fact that they haven't voluntarily put guard rails up to stop that is absolutely vile. But my personal definition of "absolutely vile" isn't a valid legal standard. So, the issue is, like I said, how do you come up with a principled approach to making them do it that doesn't have a whole bunch of unintended consequences?


Courts are able, or should be able, to distinguish "tool that creates an item in the privacy of your home" from "tool that disseminates nonconsensual pornographic pictures to the wide public". Legal standards with that level of definition are fairly normal.


I don't see any need for guardrails, other than making the prompter responsible for the output of the bot, particularly when it's predictable.

You cannot elaborately use software to produce an effect that is patently illegal and a direct result of your usage, and then pretend the software is to blame.


No other "AI" companies released tools that could do the same?


In fact, Gemini could bikinify any image just like Grok. Google added guardrails after all the backlash Grok received.


And they should face consequences for that, somewhat mitigated by their good faith response.


[flagged]


Not really, they put a shit ton of effort into making sure you can't create any kind of nude/suggestive pictures of anyone. I imagine they have strict controls on making images of children, but I don't feel inclined to find out.


You understand that the brush tool in Photoshop exists?

> The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.

Do you have any evidence for that? As far as I can tell, this is false. The only thing I saw was Grok changing photos of adults into them wearing bikinis, which is far less bad.


That's why this is an investigation looking for evidence and not a conviction.

This is how it works, at least in civil law countries. If the prosecutor has reasonable suspicion that a crime is taking place, they send the so-called "judiciary police" to gather evidence. If they find none (or the evidence is inconclusive, etc.), the charges are dropped; otherwise they ask the court to go to trial.

On some occasions I take on judiciary police duties for animal welfare. Just last week I participated in a raid. We were not there to arrest anyone, just to gather evidence so the prosecutor could decide whether to press charges and go to trial.


Note that the raid itself is a punishment. It's normal for them to seize all electronic devices. How is X France supposed to do any business without any electronic devices? And even when charges are dropped, the devices are never returned.


Did you miss the numerous news reports? Example: https://www.theguardian.com/technology/2026/jan/08/ai-chatbo...

For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that's what you're asking for.


First of all, the Guardian is known to be heavily biased against Musk. They always try hard to make everything about him sound as negative as possible. Second, last time I tried, Grok even refused to create pictures of naked adults. I just tried again and this is still the case:

https://x.com/i/grok/share/1cd2a181583f473f811c0d58996232ab

The claim that they released a tool with "seemingly no guard-rails" is therefore clearly false. I think what has instead happened here is that some people found a hack to circumvent some of those guardrails via something like a jailbreak.


For more evidence:

https://www.bbc.co.uk/news/articles/cvg1mzlryxeo

Also, X seem to disagree with you and admit that CSAM was being generated:

https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...

Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:

https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...

This is because of government pressure (see Ofcom link).

I’d say you’re making yourself look foolish but you seem happy to defend nonces so I’ll not waste my time.


> Also, X seem to disagree with you and admit that CSAM was being generated

That post doesn't contain such an admission, it instead talks about forbidden prompting.

> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:

That article links to this article: https://x.com/Safety/status/2011573102485127562 - which contradicts your claim that there were no guardrails before. And as I said, I already tried it a while ago, and Grok also refused to create images of naked adults then.


> That post doesn't contain such an admission, it instead talks about forbidden prompting.

In response to what? If CSAM is not being generated, why aren't X just saying that? Instead they're saying "please don't do it."

> which contradicts your claim that there were no guardrails before.

From the linked post:

> However content is created or whether users are free or paid subscribers, our Safety team are working around the clock to add additional safeguards

Which was posted a full week after the initial story broke and after Ofcom started investigative action. So no, it does not contradict my point, which was:

> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:

As you quoted.

I really can't decide if you're stupid, think I and other readers are stupid, or so dedicated to defending paedophilia that you'll just tell flat lies to everyone reading your comment.


Keep your accusations to yourself. Grok already refused to generate naked pictures of adults months ago when I tested it for the first time. Clearly the "additional safeguards" are meant to protect the system against any jailbreaks.


Just to be clear, I'm to ignore:

* Internet Watch Foundation

* The BBC

* The Guardian

* X themselves

* Ofcom

And believe the word of an anonymous internet account who claims to have tried to undress women using Grok for "research."


> First of all, the Guardian is known to be heavily biased against Musk.

Says who? Musk?


That is only "known" to intellectually dishonest ideologues.


>First of all, the Guardian is known to be heavily biased against Musk.

Biased against the man asking Epstein which day would be best for the "wildest" party.


>First of all, the Guardian is known to be heavily biased against Musk.

Which is good, that is the sane position to take these days.


Grok does seem to have tons of useless guardrails. Reportedly you can't prompt it directly. But also reportedly it tends to go for almost nonsensically off-guardrail interpretations of prompts.


Well, there is evidence that this company made and distributed CSAM and pornographic deepfakes to make a profit. The investigators are not lacking evidence there.

So the question becomes if it was done knowingly or recklessly, hence a police raid for evidence.

See also [0] for a legal discussion in the German context.

[0] https://arxiv.org/html/2601.03788v1


> Well, there is evidence that this company made and distributed CSAM

I think one big issue with this statement – "CSAM" lacks a precise legal definition; the precise legal term(s) vary from country to country, with differing definitions. While sexual imagery of real minors is highly illegal everywhere, there's a whole lot of other material – textual stories, drawings, animation, AI-generated images of nonexistent minors – which can be extremely criminal on one side of an international border, de facto legal on the other.

And I'm not actually sure what the legal definition is in France; the relevant article of the French Penal Code 227-23 [0] seems superficially similar to the legal definition of "child pornography" in the United States (post-Ashcroft vs Free Speech Coalition), and so some–but (maybe) not all–of the "CSAM" Grok is accused of generating wouldn't actually fall under it. (But of course, I don't know how French courts interpret it, so maybe what it means in practice is something broader than my reading of the text suggests.)

And I think this is part of the issue – xAI's executives are likely focused on compliance with US law on these topics, less concerned with complying with non-US law, in spite of the fact that CSAM laws in much of the rest of the world are much broader than in the US. That's less of an issue for Anthropic/Google/OpenAI, since their executives don't have the same "anything that's legal" attitude which xAI often has. And, as I said – while that's undoubtedly true in general, I'm unsure to what extent it is actually true for France in particular.

[0] https://www.legifrance.gouv.fr/codes/section_lc/LEGITEXT0000...


It wouldn't be called CSAM in France because it would be called a French word. Arguing definitions is arguing semantics. The point is, X did things that are illegal in France, no matter what you call them.


> It wouldn't be called CSAM in France because it would be called a French word. Arguing definitions is arguing semantics.

The most common French word is pédopornographie. But my impression is that the definition of that word under French law is possibly narrower than some definitions of the English acronym "CSAM". Canadian law is much broader, and so what's legally pédopornographie (English "child pornography") in Canada may be much closer to broad "CSAM" definitions.

> The point is, X did things that are illegal in France, no matter what you call them.

Which French law are you alleging they violated? Article 227-23 du Code pénal, or something else? And how exactly are you claiming they violated it?

Note the French authorities at this time are not accusing them of violating the law. An investigation is simply a concern or suspicion of a legal violation, not a formal accusation; one possible outcome of an investigation is a formal accusation, another is the conclusion that they (at least technically) didn’t violate the law after all. I don’t think the French legal process has reached a conclusion either way yet.

One relevant case is the unpublished Court of Cassation decision 06-86.763 dated 12 September 2007 [0], which upheld a conviction for child pornography for importing and distributing the anime film "Twin Angels - le retour des bêtes célestes - Vol. 3". However, the somewhat odd situation is that it appears that film is catalogued by the French national library, [1] although I don't know if a catalogue entry definitively proves they possess the item. Also, art. 227-23 distinguishes between material depicting under-15s (illegal to even possess) and material depicting under-18s (only illegal to possess if one has intent to distribute); this prosecution appears to have been brought under the latter category only (even though the individual was depicted as being under 15), suggesting this anime might not be illegal to possess in France if one has no intent to distribute it.

But this is the point - one needs to look at the details of exactly what the law says and how exactly the authorities apply it, rather than vague assertions of criminality which might not actually be true.

[0] https://www.legifrance.gouv.fr/juri/id/JURITEXT000007640077/

[1] https://catalogue.bnf.fr/ark:/12148/cb38377329p


> And I think this is part of the issue – xAI's executives are likely focused on compliance with US law on these topics, less concerned with complying with non-US law

True, but outright child porn is illegal everywhere (as you said), and the borderline-legal stuff is something most of your audience is quite happy to have removed. I cannot imagine you are going to get a lot of complaints if you remove AI-generated sexual images of minors, for example, so it seems reasonable to play it safe.

> That's less of an issue for Anthropic/Google/OpenAI, since their executives don't have the same "anything that's legal" attitude which xAI often has.

This is also common, but it is irritating too, as it means the rest of the world is stuck with silly American attitudes about things like nudity and alcohol - for example, YouTube videos blurring out bits of Greek statues because they are scared of being demonetised. These are things people take kids to see in museums!


To me, the most worrying part of the whole discussion is that your comment is pretty much the most "daring" attempt, if you can call it that, to question whether there even is a crime. Everyone else is worried about raids (which are normal whenever there is an ongoing investigation, unfortunate as that may be for the one being investigated). And no one dares to say that, uh, perhaps making pictures on a GPU should not be considered a crime in the same sense as human trafficking or the production of weapons... Oh, wait. The latter is legal, right.


French prosecutors use police raids far more than those in other Western countries. Banks, political parties, ex-presidents, corporate HQs, worksites... Here, while white-collar crimes are punished as much as in the US (i.e., very little), we do at least investigate them.


> This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

I wouldn't even consider this a reason if it weren't for the fact that OpenAI and Google, and hell, literally every image model out there, all have the same "this guy edited this underage girl's face into a bikini" problem (this was the most public example I've heard, so I'm going with that as my example). People still jailbreak ChatGPT, and they've poured how much money into that?


They've already broken the law by creating and hosting CSAM. Now let's see what else prosecutors will find.


No, that's not at all how this works.

They obviously have a court order to collect evidence.

You have offered zero evidence to indicate there is 'political pressure' and that statement by prosecutors doesn't hint at that.

'No crime was prevented by harassing workers' is essentially a non sequitur in this context.

It could be that this is political nonsense, but there would have to be more details.

These issues are really hard but we have to confront them. X can alter electoral outcomes. That's where we are at.


Lmao, they literally made a broadly accessible CSAM maker.


[flagged]


It would be an interesting idea if people had to get a "driver's license" before they were allowed to use an AI.


Car manufacturers are required to add features to make it less likely that cars kill babies.

What would happen if Volvo made a special baby-killing model with extra spikes?


Tesla did; that's the main reason why there are no Cybertrucks in Europe. They are not allowed, because they are too dangerous.


Comparing apples and oranges. Defending this company is becoming cringe and ridiculous. X effed up, and Musk did it on purpose. He uses CSAM to strong-arm the boundaries of the law. That's not worth defending unless you also say eff the rule of law.


Aren't a lot of US pickup trucks basically that? Sure, maybe there's a mechanism preventing you from installing a baby seat rear-facing in front of an airbag, but they're also built so that you can't see anything adult-human-sized 15m in front of the car, let alone anything child-sized.


Those are illegal in France, so what's your point here?


The US would spend 20 years arguing about which agency's jurisdiction it was, and ignore the dead babies?

No, wait, Volvo is European. They'd impose a 300% tariff and direct anyone who wanted a baby-killing model car to buy one from US manufacturers instead.


Let's raid car companies too. We were all born into this. We never had a vote. Thomas Jefferson is said to have written that constitutions ought to be rewritten every so often, or else the dead rule by fiat decree. Let's.

The rich can join in the austerity too. No one voted for them. We've been conditioned to pick acquiescence or poverty. We were abused into kowtowing to a bunch of pants-shitting, dementia-addled olds educated in religious crackpottery. Their economic and political memes are just that, memes, not immutable physical truth.

In America, as evidenced by the public not being in the streets protesting for single-payer comprehensive healthcare, we clearly don't want to be on the hook for each other's lives. That's all platitudes and toxic positivity.

Hopes and prayers, bloodletting was good enough for the Founders!

So fuck the poor and the rich. Burn it all down.


People in France don't give a stuff about the U.S. Constitution.


Focused on a tree and not the forest.

Treat that part like a variable and insert relevant French history.


Cars have uses and aren't primarily used or built to kill babies. So what's a viable use for CSAM, in your opinion?


[flagged]


"The EU doesn't tolerate dissenting views."

The dissenting views: naked little kids


[flagged]


Why didn't they do this until X started publishing naked little kids?


[flagged]


“They hated me and waited until I did something wrong to have an excuse to go after me.”

And

“They waited until I did something wrong to go after me.”

Both contain an admission of wrongdoing.


[flagged]


My guy, what do you think the complaint against X is?


> I can’t imagine holding a job where I had to do work that I expect will fail. Sounds absolutely depressing. What keeps you motivated?

The paycheck. I had never expected work to NOT be depressing by definition, though. The only reason I'm working on what my employer wants me to is that I can't afford to live otherwise. They'll get the minimal effort needed for me to not get fired, but not a single minute more.


> the idea of deporting people who have no legal status in this country is immediately branded Nazi

Because that idea consists of harming someone over their birth circumstances, rather than any objective harm they may have done.


People who are in the country illegally undermine the rule of law and make people feel unheard, leading them to elect demagogues like Trump. I predicted exactly this outcome with the election of Obama and his policies. How can the supposedly educated be so bad at applying history?


> People who are in the country illegally undermine the rule of law

No, they do not. Quite the opposite, they want to live their lives quietly and normally like everyone else.

The only law they "undermine" is the one forbidding them from living in a place due to their birth circumstances. Civil disobedience to unjust laws like that is correct and moral.

> make people feel unheard, leading them to elect demagogues like Trump

Disagree. IMO the biggest reasons for Trump's election were inflation and rising living costs. Under those conditions, picking a group of people to cast as the "enemy" is a tactic to win elections, but it has nothing to do with the underlying cause of the bad conditions in the first place.


Fair enough; they (especially their executives and the engineers working on ad tech) had a negative impact on the world as well.


Given the choice, end users choose free or cheap and ad supported over full price in huge majorities. You have to weigh "I don't like ads" against 200 million (!) people on Netflix's ad supported plan and how much enjoyment they get that they might not otherwise. Not to mention things like Google that are ad supported and genuinely useful. In the real world things have pros and cons.


I used to buy this thinking, but no longer. People are incredibly resourceful, and instead of innovating towards exploiting and manipulating people, we could choose to innovate towards conservation of important things, just like we have done in the past.

We don't fund our national parks with advertisements. We don't fund our libraries with advertisements. We could create the same structures for the internet as well, where crucial internet resources are protected and stewarded. They don't necessarily need to be in the hands of ad companies.

Sure, I will not deny that having things be "free" (and paying for them in other ways) has been a huge boon from one perspective, but we can also evolve to put "free" things in different places. Because things are never free. Advertisements are funding mass surveillance. They are encroaching on our civil liberties and normalizing it. There is a total cost to things that extends beyond money. What we don't pay out of pocket we pay as a society.


Has been for over a century.


> It’s possible to simultaneously believe that ICE has a clear and ethical mandate

... "We" (a lot of people, not everyone who posts here) don't believe that. Lots of people disagree with immigration control as a concept period.

The existence of that app is an abomination; the fact that taxpayer money is being allocated to it is tragicomic. Not spending it and just giving it back as tax refunds to the population would be so much better than kidnapping people over being born in the wrong place.


> ... "We" (a lot of people, not everyone who posts here) don't believe that. Lots of people disagree with immigration control as a concept period.

I mean, sure, but you have to acknowledge that that is an extremely fringe belief that basically no one in the USA supports. The debate is about "how" it's being done, not whether we should have immigration control.


> that is an extremely fringe belief that basically no one in the USA supports

Clearly it is not a belief that no one in the USA supports, as seen in the discourse against ICE and immigration control.

> The debate is about "how" it's being done, not whether we should have immigration control.

Not necessarily, no. "The debate" is too vague to argue for or against what you're saying.

But yes, there are people who are against immigration control, period, and in favor of reforms to make immigration easier for workers, not harder. But propaganda will keep pitting workers against each other, instead of against the companies lobbying against workers.


All of this misses the point of the moment, which is that the federal government is completely lawless and is incapable of responding to democratic or popular will. There is no debate happening. It does not matter which ordinary people "support" which position. Any political project (other than the current regime) in the USA in 2026 must contend with the fact that just establishing a democracy must be our first step. This is as true for Socialists as it is for non-regime-approved stripes of Fascist. It's the same for Chamber of Commerce Republicans and ex-hippie boomer liberals. Any talk of what we will do with a democracy once we have it is premature, because at the moment it simply does not matter what the opinions of the citizenry are.


> which is that the federal government is completely lawless and is incapable of responding to democratic or popular will.

Trump won the election mostly on a strong anti-immigration policy; this is the popular will of the people.


Why is this being downvoted? The primary reason Trump was able to win is that Biden waited until it was far too late to address the surge of illegal immigration at the southern border. We don't have to wonder or argue about whether Americans support open borders; we already had something mildly in that direction (which still didn't remotely approach the idea of "no immigration control, period"), and in response Americans voted Donald Trump into office.


Not everyone believes this is about "the border" but rather a Christian conservative takeover of society.

The immigrants are just an excuse for the fascist goons in the street.


There are "borders are imaginary and everyone should be able to move anywhere they want" people on this post.


> someone will collect rent from IP anyways

We should work on fixing that, then.

I agree with your point about big tech companies salivating at opportunities to collect rent. IP is part of the problem.


> You know good and well that what is being discussed is the _use_ of LLMs

Not the person you're replying to, but I've found that some people do argue against LLMs themselves (as in, the tech, not just the usage). Especially in humanities/arts circles, which seem to have a stronger feeling of panic towards LLMs.

Clarifying which one you're talking about can save a lot of typing/talking sometimes.


> I've found that some people do argue against LLMs themselves (as in, the tech, not just the usage). Especially in humanities/arts circles, which seem to have a stronger feeling of panic towards LLMs.

Maybe?

The person I responded to said "LLMs are just a concept, an abstraction."

Were that true, were they simply words in some dusty CS textbook, it's hardly likely that the humanities/arts people you describe would even know about them.

No, it's the fact that these people have seen regurgitated pictures and words that makes it an issue.

