Hacker News | manuelmoreale's comments

You are not alone and fuck all the people that say that everything is doomed and that there's no way to still have a good internet full of wonderful content made by people.

> I've been using it for decades now but these days, I almost always pause for a second.

Wrote about this before [0] but my 2c: you shouldn't pause and you should keep using them because fuck these companies and their AI tools. We should not give them the power to dictate how we write.

[0]: https://manuelmoreale.com/thoughts/on-em-dashes


That's not really how it works.

Gemini tells me that for thousands of years, the swastika was used as "a symbol of positivity, luck and cosmic order". Try drawing it on something now and showing it to people. Is this an effective way to fight Nazism?

I think it's brave to keep using em dashes, but I don't think it's smart, because we human writers who like using them (myself very much included) will never have the mindshare to displace the culturally dominant meaning. At least, not until the dominant forces in AI decide of their own accord that they don't want their LLMs emitting so many of them.


When you say "show it to people" I guess you don't mean the people in India, Japan, etc who still use the symbol for its original purpose?

I think it's safe to assume they meant it within their specific cultural context. That the symbol has different connotations in other cultures doesn't really change the point being made.

I'm not confident that the average person is aware of an em dash nor that it is widely associated with AI; I think the current culturally dominant meaning is just a fat hyphen (which most people just call a dash anyway).

My wife was working from home recently and I overheard a meeting she was having. It's a very non technical field. She and her team were working on a presentation and her boss said "let's use one of those little AI dashes here."

I find that amusing but I know somewhere an English major is crying.

> Gemini tells me that for thousands of years, the swastika was used as "a symbol of positivity, luck and cosmic order". Try drawing it on something now and showing it to people. Is this an effective way to fight Nazism?

I'm happy to change my position when some 13 million people are killed by lunatics that used the em dash as the symbol of their ideology. Until then, I'll keep using it everywhere it's appropriate.

Also, if we don't have the guts to resist even when the stakes are this low and the consequences for our resistance are basically non-existent, then society is doomed. We might as well roll over and die.

> At least, not until the dominant forces in AI decide of their own accord that they don't want their LLMs emitting so many of them.

It's not a power I'm willing to give them. What if tomorrow they tweak something and those tools start to use a specific word more often? Or a different punctuation mark? What do we do then? Do we constantly adapt, playing whack-a-mole? What if AI starts asking a lot more questions in its writing? Do we stop asking them as a result?

You feel free to adapt and bend. I'm personally not going to, and if someone starts thinking that I'm using AI to write my thoughts as a result, that's on them.


The hooked cross is the Nazi symbol; historians appropriated it to a different culture to save the 'cross'.

Curious: are you ok with the other laws that are in place in the world to prevent underage people from engaging in all sorts of activities? Like, for example, having to show an ID to be able to purchase alcohol?

The difference is the internet is forever. A one-time unrecorded transaction like showing your ID at the bar is not. It is a false equivalence.

Not only is the internet forever, but what is on it grows like a cancer and gets aggregated, sold, bundled, cross-linked with red yarn, multiplied, and multiplexed. Why would you ever want cancer?


> It is a false equivalence

It's a false equivalence only if you decide to equate the two. My question wasn't worded that way. I'm curious to know whether someone who opposes this type of law is also for or against other laws that deal with similar issues in other contexts.

Also, as I said in another post, there are plenty of places, online, where you have to identify yourself. So this is already happening. But again, I'm personally interested in people's intuitions when it comes to this because I find it fascinating as a subject.


Personally, I am pro-both. Even if it helps a single child not fall into a bad situation, it's worth the many other cons that come with it. <tinfoilhat>I believe that the original concept had good intent, then flowed through a monetization process before delivery.</tinfoilhat> If our weird reality eventually balances out, at least we'll have this on our side. People > Money.

They aren't comparable. Showing an ID to a staff member isn't stripping my anonymity. I know the retailer won't have that on file forever, tied to me on subsequent visits. Also they stop ID'ing you after a certain age ;)

There isn't any way to achieve the same digitally.


Actually, there is. Various age verification systems exist where the party asking for it does not need to process your ID, like the Dutch iDIN (https://www.idin.nl/en/), which works not unlike a digital payment: the bank knows your identity and age, just like they know your account balance, and can sign off on that kind of thing just like a payment.

I hope this becomes more widespread / standardized; the precursor for iDIN is iDEAL which is for payments, that's being expanded and rebranded as Wero across Europe at the moment (https://en.wikipedia.org/wiki/Wero_(payment)), in part to reduce dependency on American payment processors.


The privacy issue has two facets. When I show ID to get into a club or buy alcohol, the entire interaction is transient: the merchant isn't keeping that information, and the issuer of the credential (i.e. the government) doesn't know that it happened.

Just allowing a service provider to receive a third-party attestation that you're "allowed" still lets the third party track what you are doing even if the provider can't. That's still unacceptable from a privacy standpoint; I don't want the government, or agents thereof, knowing all the places I've had to show ID.


> Just allowing a service provider to receive a third-party attestation that you're "allowed" still lets the third party track what you are doing even if the provider can't. That's still unacceptable from a privacy standpoint; I don't want the government, or agents thereof, knowing all the places I've had to show ID.

Isn't this solvable by allowing you to be the middle man? A service asks you to prove your age, you ask the government for a digital token that proves your age (and the only thing the government knows is that you have asked for a token) and you then deliver that to the service and they only know the government has certified that you are above a certain age.

The service gets a binary answer to their question. The government only knows you have asked for a token. Wouldn't a setup like that solve the issue you're talking about?


We have a similar system in Italy, so the age verification process itself doesn't personally concern me that much, since the verification is done by the government itself and they obviously already have my information.

I'm personally more interested in the intuition people have when it comes to squaring rejecting age verification online while also accepting it in a multitude of other situations (both online and offline)


My main issue is trust.

In real-world scenarios, I can observe them while they handle my ID, and systematic abuse (e.g. some video that gets stored and shows it clearly) would be a violation taken seriously.

With online providers it's barely newsworthy if they abuse the data they get.

I'm not against age verification (at least not strongly), but I'd want it in a two-party, zero-trust way. I.e. one party signs a JWT-like thing containing only one bit, and the other validates it without ever contacting the issuer about the specific token.

So one knows the identity, one knows the usage, but they are never related.
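To make that concrete, here's a toy sketch of the "one bit plus a signature" idea. This is not iDIN or any real scheme: it uses HMAC as a stand-in for a signature (a real deployment would use asymmetric signatures so the verifier can't mint tokens, plus blinding so the issuer can't link a token to where it's spent); all names here are made up for illustration.

```python
import hmac, hashlib, json, secrets

# Held by the issuer (e.g. the government). In a real system this would
# be an asymmetric signing key, with only the public half distributed.
ISSUER_KEY = secrets.token_bytes(32)

def issue_token(over_18: bool) -> dict:
    """Issuer signs a payload containing only a single bit and a nonce.
    No name, no birth date, no site identifier is ever included."""
    payload = {"over_18": over_18, "nonce": secrets.token_hex(16)}
    msg = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_token(token: dict) -> bool:
    """Verifier checks the signature offline; the issuer is never
    contacted about this token, so it can't learn where it was spent."""
    msg = json.dumps(token["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["payload"]["over_18"]

token = issue_token(True)
assert verify_token(token)
```

The point of the shape, not the crypto: the verifier learns one bit, the issuer learns nothing about usage.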


> So one knows the identity, one knows the usage, but they are never related.

I could be wrong but I think this is how the system we have in place in Italy works. And I agree that it's how it should work.


I know they're not comparable. I'm asking if you're also ok with those. There are also plenty of situations where you are asked to provide an ID, digitally, when above a certain age. For example, booking hotels and other accommodations.

Personally I'm still trying to figure out where my position is when it comes to this whole debate because both camps have obvious pros and cons.


Which hotel asks for ID online? I've only ever had to provide it once, on-site while checking in.

And even then, only when I'm in foreign countries.


Happens quite often with Airbnb for example. You often don't meet the host in person so there's no way to show them a physical ID.

I'm a lot more okay with that because alcohol purchasing doesn't have free speech implications.

It's weird how radicalized people get about banning books compared to banning the internet.


> It's weird how radicalized people get about banning books compared to banning the internet.

I don't think asking for age verification is the same as banning something. What connection do you see between requiring age verification and free speech?


First, children also have a right to free speech. It is perhaps even more important than for adults, as children are not empowered to do anything but speak.

Second, it's turn-key authoritarianism. E.g. "show me the IDs of everyone who has talked about being gay" or "show me a list of the 10,000 people who are part of <community> that's embarrassing me politically" or "which of my enemies like to watch embarrassing pornography?".

Even if you honestly do delete the data you collect today, it's trivial to flip a switch tomorrow and start keeping everything forever. Training people to accept "papers, please" with this excuse is just boiling the frog. Further, even if you never actually do keep these records long term, the simple fact that you are collecting them has a chilling effect because people understand that the risk is there and they know they are being watched.


> First, children also have a right to free speech.

Maybe I'm wrong (I'm not reading all the regulations that are coming up), but the scope of these regulations is not to ban speech but rather to prevent people under a certain age from accessing a narrow subset of the websites that exist on the web. That to me looks like a significant difference.

As for your other two points, I can't really argue against those because they are obviously valid but also very hypothetical and so in that context sure, everything is possible I suppose.

That said, something has to be done at some point, because it's obvious that these platforms are having a profound impact on society as a whole. And I don't just mean the kids; I'm talking in general.


> narrow subset of the websites on the web

Under most of these laws, most websites with user-generated content qualify.

I'd be a lot more fine with it if it was just algorithms designed for addiction (defining that in law is tricky), but AFAIK a simple forum where kids can talk to each other about familial abuse or whatever would also qualify.


> but AFAIK a simple forum where kids can talk to each other about familial abuse or whatever would also qualify.

I'm currently scrolling through this list https://en.wikipedia.org/wiki/Social_media_age_verification_... and it seems to me these are primarily focused on "social media", but missing from these short summaries is how social media is defined, which is obviously an important detail.

Seems to me that an "easy" solution would be to implement some sort of size cap; that way you could easily leave old-school forums out.

It would not be a perfect solution, but it's probably better than including every site with user-generated content.


> I'd be a lot more fine with it if it was just algorithms designed for addiction (defining that in law is tricky)

An alternative to playing whac-a-mole with all the innovative bad behavior companies cook up is to address the incentives directly: ads are the primary driving force behind the suck. If we are already on board with restricting speech for the greater good, that's where we should start. Options include (from most to least heavy-handed/effective):

1) Outlaw endorsing a product or service in exchange for compensation. I.e. ban ads altogether.

2) Outlaw unsolicited advertisements, including "bundling" of ads with something the recipient values. I.e. only allow ads in the form of catalogues, trade shows, industry newsletters, yellow pages. Extreme care has to be taken here to ensure only actual opt-in advertisements are allowed and to avoid a GDPR situation where marketers with a rapist mentality can endlessly nag you to opt in or make consent forms confusing/coercive.

3) Outlaw personalized advertising and the collection/use of personal information[1] for any purpose other than what is strictly necessary[2] to deliver the product or service your customer has requested. I.e. GDPR, but without a "consent" loophole.

These options are far from exhaustive and out of the three presented, only the first two are likely to have the effect of killing predatory services that aren't worth paying for.

[1] Any information about an individual or small group of individuals, regardless of whether or not that information is tied to a unique identifier (e.g. an IP address, a user ID, or a session token), and regardless of whether or not you can tie such an identifier to a flesh-and-blood person ("We don't know that 'adf0386jsdl7vcs' is Steve at so-and-so address" is not a valid excuse). Aggregate population-level statistics are usually, but not necessarily, in the clear.

[2] "Our business model is only viable if we do this" does not rise to the level of strictly necessary. "We physically can not deliver your package unless you tell us where to" does, barely.


The chilling effect of tying identity to speech means it directly affects free speech. The Founding Fathers of the US wrote under many pseudonyms. If you think you may be punished for your words, you might not speak out.

We know we cannot trust service providers on the internet to take care of our identifying data. We cannot ensure they won't turn that data over to a corrupt government entity.

Therefore, we cannot guarantee free speech on these platforms if we have a looming threat of being punished for the speech. Yes, these are private entities, but they have also taken advantage of the boom in tech to effectively replace certain infrastructure. If we need smart phones and apps to interact with public services, we should apply the same constitutional rights to those platforms.

https://en.wikipedia.org/wiki/List_of_pseudonyms_used_in_the...


> If we need smart phones and apps to interact with public services, we should apply the same constitutional rights to those platforms.

Are private social media platforms "public services"? Also, you mentioned constitutional rights. Which constitution are we talking about here? These are global-scale issues; I don't think we should default to the US constitution.

> We know we cannot trust service providers on the internet to take care of our identifying data.

Nobody needs to trust those. I can, right now, use my government-issued ID to identify myself online using a platform that's run by the government itself. And if your rebuttal is that we can't trust the government either, then yeah, I don't know what to say.

Because at some point, at a certain level, society is built on at least some level of implicit trust. Without it you can't have a functioning society.


> Because at some point, at a certain level, society is built on at least some level of implicit trust. Without it you can't have a functioning society.

This is somewhat central to remaining anonymous.

Protesters and observers are having their passports cancelled or their TSA precheck revoked due to speech. You cannot trust the government to abide by the first amendment.

Private services sell your data to build a panopticon, then sell that data indirectly to the government.

Therefore, tying your anonymous speech to a legal identity puts one at risk of being punished by the government for protected speech.


> You cannot trust the government to abide by the first amendment.

Again, this is a global issue. There is no first amendment here where I live. But the issue of the power these platforms have at a global level is a real one and something has to be done in general to deal with that. The problem is what should we do.


I happily watch adless YouTube on an iPhone. There are definitely choices available. I agree on the awareness though.

> While this is definitely a crime

"Definitely a crime" based on what? "I strongly believe that these things" who gets to decide what "these things" are?


They deemed it one right in the article, so it is a crime; there are no questions about it.

The problem is that there's a bunch of this, what you could call "entry" CSAM, that people with mental issues are drawn to, and having it all around the internet is definitely not doing anyone a favor, especially the ones that are not right in the head. But you also have to take into account that a bunch of media also puts "illegal content" in films and books, so what I was suggesting is to make this a properly recognized crime, so there can't be any questions about it, rather than "oh look, there's people talking about murder in films and books!!!"


> The problem is that there's a bunch of this, what you could call "entry" CSAM, that people with mental issues are drawn to, and having it all around the internet is definitely not doing anyone a favor

I can make that same argument for people with other mental health issues and religious texts. Are we ok in making those also illegal?


> "The reader is left with a description that creates the visual image in one's mind of an adult male engaging in sexual activity with a young child."

So, why are we stopping at CSAM then? If a book leaves the reader with a description that creates the image of a dog being tortured is that animal abuse? This is a completely insane line of reasoning.


There's a word for that mindset: https://en.wikipedia.org/wiki/Karoshi

That doesn't scale in the world of capitalism, because you need to increase revenues year after year and there are only so many people willing to pay. So you either keep increasing the price (and that has a limit) or you find other ways to monetize, and the current meta seems to be pay + ads.

> Several of the biggest companies today are fueled by ads, and OpenAI has the perfect ad vehicle. What else were you expecting?

I'm old enough to remember when these people were claiming AI was as important and as revolutionary as fire and electricity. I don't know about you, but I pay for my electricity and the power companies don't have to run ads on my power lines in order to run their business.


They probably would if they could. That gives me some bad ideas: you could vary the line frequency to play the McDonald's "I'm Lovin' It" jingle, etc. Good thing I'm not involved with either ads or power delivery.

Don't forget morse code

You could make this work! I saw a project once that used some open UK power grid information to triangulate videos by listening to the background hum. Genius, but also a real "oh god what have we done" moment for me.
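For the curious, a toy pure-Python sketch of the idea behind that hum trick (ENF, electrical network frequency, analysis): mains hum drifts slightly around 50 Hz, and matching the drift against published grid-frequency logs can place a recording in time. Everything below (sample rate, signal, frequencies) is synthetic and made up for illustration; real ENF work needs long recordings and actual grid records.

```python
import math

RATE = 1000    # samples per second (toy value)
DURATION = 2.0 # seconds of "recording"
HUM_HZ = 50.2  # the mains hum buried in our synthetic signal

# Synthesize a recording that is nothing but hum.
samples = [math.sin(2 * math.pi * HUM_HZ * n / RATE)
           for n in range(int(RATE * DURATION))]

def magnitude_at(freq, signal):
    """Magnitude of the signal's correlation with a sinusoid at `freq`
    (a single naive DFT bin)."""
    re = sum(s * math.cos(2 * math.pi * freq * n / RATE)
             for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * n / RATE)
             for n, s in enumerate(signal))
    return math.hypot(re, im)

# Scan candidate frequencies around 50 Hz in 0.05 Hz steps and pick
# the strongest; in a real pipeline you'd do this per time window and
# match the resulting drift curve against grid logs.
candidates = [49.5 + 0.05 * k for k in range(21)]  # 49.5 .. 50.5 Hz
best = max(candidates, key=lambda f: magnitude_at(f, samples))
print(round(best, 2))  # should land on (or very near) 50.2
```

Windowing this over time yields a frequency-vs-time curve, which is what gets matched against the grid's records.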

Good luck with your new ad business lol


Annnd I’m old enough to remember last week’s Super Bowl ad controversy :)

It’s not like “OpenAI == AI”

One could possibly make the case that currently that is the public perception (outside the SV bubble). But is it really true?

Last time I checked, Dario staked Anthropic’s future and reputation on paid subscriptions.


> Last time I checked, Dario staked Anthropic’s future and reputation on paid subscriptions.

Tech CEOs might be wealthy and powerful but there are two things they definitely don't have anymore: trust and the benefit of the doubt. Who knows, maybe Dario is gonna be the exception to the rule but I doubt it.


Fair, and I share the same concerns.

The track record of the really big CEOs has been less than super good *nervous sweating*

Maybe I’m being naive, but there are a few good ones (smaller and mid-tier size) out there imo


> Maybe I’m being naive, but there are a few good ones (smaller and mid-tier size) out there imo

Don't disagree. I think scale and ambitions play a huge role.


And that's all you really need to know about this AI grift.

That’s honestly just sad. Not the fact that you’re doing it, but rather the fact that you have nobody to talk to about those things.

I feel like this kind of response is a good example of why someone wouldn't talk to others about things.

When I say nerdy and/or obscure, I mean things like "are quantum fluctuations ergodic, and how does this affect the probability of a quantum fluctuation triggering a new big bang?"

Call me weird: I know absolutely nothing about what you just wrote, but I'm super intrigued now and would love to know more, ha. But that's just because I'm generally curious about pretty much everything.

But I can see why it can be hard to find people to talk about that. Heck it might be hard to find people who even know about that topic in general.


Not many people without at least a master's degree in physics know what "ergodic" means.

I can imagine. Well, in that case I hope 2026 will bring you some human connection that does have a master's in physics.
