GDPR still holds, so I don’t see why not if that’s what your request is under.
However, it’s out there, and you have no idea where, so there’s not really a moral or feasible way to get rid of it everywhere. (Please don’t nuke the world just to clean your rep.)
The law (at least, in the EU) grants a legal right to privacy, and the motivation behind it is really none of anyone’s business.
Maybe commenters face threats to their safety. Maybe commenters never imagined AI companies would exist to profit off of their non-commercial conversations, and wouldn’t have put data out there if that had been disclosed ahead of time.
Corporations have an unlimited right to bully and threaten to take down embarrassing content and hide their mistakes, and they have greatly enhanced leverage over copyright enforcement compared to individuals. But when individuals do something far less egregious and try to take down their own content, content they don’t even get paid for, suddenly it’s immoral.
This community financially benefits YCombinator and its portfolio companies. Without our contributions, readership, and comments, their ability to hire and recruit founders is diminished. They don’t provide a delete button for profit-motivated reasons, and privacy laws like the GDPR exist to guard against exactly that.
(As you might guess, I am personally quite against HN’s policy forbidding most forms of content deletion. Their solution of handling deletions through manual edits by the moderation team makes no sense; every other social media platform lets you delete your own content.)
Finally someone mentioned it. I'm surprised all the "tech enthusiasts" here turn a blind eye when it's their own community doing it, but call it atrocious when it's someone else's.
Because the first couple of major iterations looked like exponential improvements, and, because VC/private money is stupid, they assumed the trend must continue on the same curve.
And because there's something in the human mind that has a very strong reaction to being talked to, and because LLMs are specifically good at mimicking plausible human speech patterns, ChatGPT really, really hooked a lot of people (including said VC/private money people).
If you recalibrate from any lofty idea of their motives to "get investor money now", this and other moves/announcements make more sense: anything that could look good to an investor.
User count going up? Sure.
New browser that will deeply integrate ChatGPT into users' lives and give OAI access to their browsing/shopping data? Sure.
Several new hardware products that are totally coming in the next several months? Sure.
We're totally going to start delivering ads? Sure.
We're making commitments to all these compute providers because our growth is totally going to warrant it? Sure.
Oh, and since we're investing in all of that compute, we're also going to become a compute vendor! Sure.
None of it is particularly intentional, strategic, or sound. OAI is a money pit; they can always see the end of the runway and must secure funding now. That is their perpetual state.
Did you, the company who built and sold this SaaS product, offer and agree to provide the service your customers paid you for?
Did your product fail to render those services? Or do damage to the customer by operating outside of the boundaries of your agreement?
There is no difference between "Company A did not fulfill the services they agreed to fulfill" and "Company A's product did not fulfill the services they agreed to fulfill", and therefore no difference when that product happens to be an AI agent: "Company A's AI agent did not fulfill the services they agreed to fulfill" is the same failure.
Well, that depends on what is being sold. Are you selling the service, as a black box, to accomplish the outcome? Or are you selling a tool? If you sell a hammer, you aren't liable as the manufacturer if the purchaser murders someone with it. You might be liable if it falls apart mid-swing and maims someone, due to the unexpected defect, but only for a reasonable timeframe and under reasonable usage conditions.
I don't see how your analogy is relevant, even though I agree with it. Whether you sell hammers or rent them out as a hammer-providing service, there's no difference except, likely, the duration of liability.
The difference isn't renting vs. selling a hammer. The difference is providing a hammer (rent/sell) vs. providing a handyman who will use the hammer.
In the first case the manufacturer is only liable for defects under normal use of the tool, so the manufacturer is NOT liable for misuse.
In the second case, the service provider IS liable for misuse of the tool. If they, say, break down a whole wall for some odd reason while making a repair, they would be liable.
In both cases there is a separation between user and manufacturer liability, but the question relevant to AI and SaaS is exactly that: are you providing the tool, or delivering the service in question? In many cases, the fact that the product is SaaS doesn't change anything; what you are getting is "tool as a service."
Exactly. I have said several times that the largest and most lucrative market for AI and agents in general is liability-laundering.
It's just that you can't advertise that, or you ruin the service.
And it already does work. See the sweet, sweet deal Anthropic got recently (and if you think $1.5B isn't a good deal, look at the range of compensation they could have been subject to had they gone to court and lost).
Remember the story about Replit's LLM deleting a production database? All the stories were "AI goes rogue", "AI deletes database", etc.
If Amazon RDS had just wiped a production DB out of nowhere, with no reason, the story wouldn't be "Rogue hosted database service deletes DB"; it would be "AWS randomly deletes production DB" (and AWS would take a serious reputational hit because of it).
It's not reasonable to claim inference is profitable when they've never released those numbers either. Also, the price they charge for inference is not indicative of the price they're paying to provide inference. And at least in OpenAI's case, they are getting a fantastic deal on compute from Microsoft, so even if the price they charge reflects the price they pay, it still doesn't reflect a market rate.
OpenAI hasn't released their training cost numbers, but DeepSeek has, and there are dozens of companies offering inference hosting of the very large open-weight models that keep up with OpenAI and Anthropic, so we can see what market rates are shaking out to be for companies with even fewer economies of scale. You can also make some extrapolations from AWS Bedrock pricing and investigate inference costs yourself on local hardware. Then look at the quality measures of the quantizations that hosting providers use and you get a feel for what they are doing to manage costs.
We can't pinpoint the exact dollar amount OpenAI spends in each category, but we can make a lot of reasonable and safe guesses, and all signs point to inference hosting being a profitable venture by itself, with training profitability being less certain or being a pursuit of a winner-takes-all strategy.
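To make that concrete, here's a rough back-of-envelope sketch of the kind of self-hosted estimate described above. Every number in it (GPU rental price, throughput, utilization, API price) is a hypothetical placeholder chosen for illustration, not a measured figure for any real provider:

    # Back-of-envelope inference economics; all numbers below are
    # hypothetical placeholders, not measurements of any real provider.
    gpu_hourly_cost = 2.50        # assumed $/hour to rent one GPU
    tokens_per_second = 1500.0    # assumed aggregate throughput across batched requests
    utilization = 0.5             # assumed fraction of each hour spent serving real traffic

    tokens_per_hour = tokens_per_second * 3600 * utilization
    cost_per_million_tokens = gpu_hourly_cost / (tokens_per_hour / 1_000_000)

    api_price_per_million = 10.00  # assumed retail API price per 1M output tokens
    margin = api_price_per_million - cost_per_million_tokens

    print(f"cost to serve:  ${cost_per_million_tokens:.2f} / 1M tokens")
    print(f"retail price:   ${api_price_per_million:.2f} / 1M tokens")
    print(f"implied margin: ${margin:.2f} / 1M tokens")

Plug in actual rental prices and measured throughput for whatever open-weight model and hardware you're testing, and the margin estimate falls out directly.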
Don't even need to get too fancy with it. OpenAI has publicly committed to ~$500B in spending over the next several years (never mind that even they don't expect to actually bring that much revenue in).
$500B divided by a $100,000 annual salary is 5 million person-years of work, or about 167k 30-year careers.
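Spelled out (reading the $100,000 as an annual salary, which is how the 30-year-career framing works):

    # The arithmetic above, spelled out.
    committed_spend = 500e9        # ~$500B in publicly committed spending
    annual_salary = 100_000        # the $100,000 figure, read as a yearly salary
    career_years = 30

    person_years = committed_spend / annual_salary   # 5,000,000 person-years
    careers = person_years / career_years            # ~166,667 thirty-year careers

    print(f"{person_years:,.0f} person-years, or about {careers:,.0f} {career_years}-year careers")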
The math is ludicrous, and the people saying it's fine are incomprehensible to me.
Another comment on a similar post just said, no hyperbole, irony, or joke intended: "Just you switching away from Google is already justifying 1T infrastructure spend."