Maybe I’m reading this wrong, but commercial use of comments is prohibited by the HN privacy and data policy. So is creating derivative works (which, technically, a vector representation is).
> Commercial Use: Unless otherwise expressly authorized herein or in the Site, you agree not to display, distribute, license, perform, publish, reproduce, duplicate, copy, create derivative works from, modify, sell, resell, exploit, transfer or upload for any commercial purposes, any portion of the Site, use of the Site, or access to the Site.
> The buying, exchanging, selling and/or promotion (commercial or otherwise) of upvotes, comments, submissions, accounts (or any aspect of your account or any other account), karma, and/or content is strictly prohibited, constitutes a material breach of these Terms of Use, and could result in legal liability.
From [1] Terms of Use | Intellectual Property Rights:
> Except as expressly authorized by Y Combinator, you agree not to modify, copy, frame, scrape, rent, lease, loan, sell, distribute or create derivative works based on the Site or the Site Content, in whole or in part, except that the foregoing does not apply to your own User Content (as defined below) that you legally upload to the Site.
> In connection with your use of the Site you will not engage in or use any data mining, robots, scraping or similar data gathering or extraction methods.
Certainly it is literally derivative. But so are my memories of my time on the site. And in fact I do intend to make commercial use of some of those derivations. I believe it should be a right to make an external prosthesis for those memories in the form of a vector database.
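For concreteness, a minimal sketch of such a memory prosthesis, assuming the sentence-transformers package and its all-MiniLM-L6-v2 model; the stored comments and the recall() helper are hypothetical placeholders:

```python
# A sketch of a personal "memory prosthesis": embed my own comments,
# then recall them by meaning. Assumes the sentence-transformers package;
# the stored comments and the recall() helper are hypothetical.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# My own HN comments, saved locally (placeholder data).
my_comments = [
    "Rust's borrow checker pays off once the codebase grows.",
    "We moved our ETL jobs from cron to a DAG scheduler.",
    "SQLite is underrated for single-node production workloads.",
]
index = model.encode(my_comments, normalize_embeddings=True)  # the "vector database"

def recall(query: str, k: int = 2) -> list[str]:
    """Return the k stored comments closest in meaning to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    order = np.argsort(index @ q)[::-1]  # rank by cosine similarity
    return [my_comments[i] for i in order[:k]]

print(recall("what did I think about databases?"))
```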
That’s not the same as using it to build models. You as an individual have the right to access this content; that is the purpose of this website. The content becoming the core of some model is not.
You mean like free speech for concepts and ideas? It's OK to think them but not to tell other people about them? LLMs are another medium of thought exchange, in some ways worse and in others better. Of course it's out of bounds for them to produce literal copies of copyrighted work. But, as with a human brain, it should be OK for artificial neural nets to learn from them and generate new work.
I hired a company called OpenAI to do it for me. They're done, and brand-new comments show up in its search within a few minutes at most; try it. Is now good?
But they are not doing it for free. It's not as if being on a paid account means they remove the HN portion of the training data.
For a forum of users that's supposed to be smarter than Reddit's, we sure make ourselves out to be just as unsmart as those Reddit users are purported to be. To be unable to understand the intent/meaning of "for commercial use" is mind-boggling to the point that it has to be intentional. The purpose, though, is what I'm still unclear on.
> I hired a company called OpenAI to do it for me.
>>> If it's OK to encode it in your natural neural net, why is it not OK to put it in your artificial one?
Well I guess that lines up. With that line of reasoning I have zero issue believing you outsourced your reading to them. You clearly aren't getting your money's worth.
Embeddings are encodings of shared abstract concepts, statistically inferred from many works, i.e. from expressions of thoughts possessed by all humans.
With text embeddings, we get a many-to-one, lossy map: many possible texts ↝ one vector that preserves some structure about meaning and some structure about style, but not enough to reconstruct the original in general, and there is no principled way to say «this vector is derived specifically from that paragraph authored by XYZ».
Does the encoded representation of the abstract concepts constitute a derivative work? If yes, then every statement ever made by a human being is a work derivative of someone else's, by virtue of having learned to speak in childhood: every speaker creates a derivative work of all prior speakers.
Technically, there is a strong argument against treating ordinary embedding vectors as derivative works (a sketch follows the list below), because:
- Embeddings are not uniquely reversible and, in general, it is not possible to reconstruct the original text from the embedding;
- The embedding is one of an uncountable number of vectors in a space where nearby points correspond to many different possible sentences;
- Any individual vector is not meaningfully «the same» as the original work in the way that a translation or an adaptation is.
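To illustrate the first point, a minimal sketch of the many-to-one, lossy map, assuming the sentence-transformers package and its all-MiniLM-L6-v2 model; the example sentences are made up:

```python
# A sketch of the many-to-one, lossy map. Assumes the sentence-transformers
# package and its all-MiniLM-L6-v2 model; the sentences are made up.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

texts = [
    "The cat sat on the mat.",
    "A feline was resting on the rug.",           # paraphrase: different expression, close meaning
    "Quarterly revenue grew by twelve percent.",  # unrelated content
]
vecs = model.encode(texts, normalize_embeddings=True)  # unit-length 384-d vectors

def cos(a, b):
    return float(np.dot(a, b))  # cosine similarity for unit vectors

print(cos(vecs[0], vecs[1]))  # high: two distinct texts land near the same point
print(cos(vecs[0], vecs[2]))  # low: unrelated meaning lands far away
# Note there is no decode(): the map text -> vector has no inverse, so the
# original wording cannot be recovered from the vector alone.
```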
Please do note that this is a philosophical take and it glosses over the legally relevant differences between human and machine learning, as the legal question ultimately depends on statutes, case law and policy choices that are still evolving.
Where it gets more complicated:
If the embedding model has been trained on a large number of languages, cross-lingual search becomes easily possible: an abstract search concept can be expressed in any language the model has been trained on. The quality of such search results across languages X, Y and Z will be directly proportional to the scale and quality of the training corpus in those languages.
Therefore, I can search for «the meaning of life»[0] in English and arrive at a highly relevant cluster of search results written in different languages, by different people, at different times, and the question becomes «what exactly has it been statistically[1] derived from?».
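A minimal sketch of that cross-lingual behaviour, assuming the sentence-transformers package and its paraphrase-multilingual-MiniLM-L12-v2 model; the documents are made up:

```python
# A sketch of cross-lingual search. Assumes the sentence-transformers package
# and its paraphrase-multilingual-MiniLM-L12-v2 model; the documents are made up.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

docs = [
    "Der Sinn des Lebens ist eine alte philosophische Frage.",  # German: on the meaning of life
    "Le sens de la vie reste une question ouverte.",            # French: on the meaning of life
    "El presupuesto trimestral fue aprobado ayer.",             # Spanish: unrelated budget news
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

# English query against non-English documents.
query = model.encode(["the meaning of life"], normalize_embeddings=True)[0]

scores = doc_vecs @ query  # cosine similarities (vectors are unit length)
for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.3f}  {doc}")
# The German and French sentences should outrank the unrelated Spanish one,
# even though the query shares no surface vocabulary with them.
```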
[0] Cross-lingual search is what I did with my engineers last year, to our surprise and delight at how well it actually worked.
[1] If one can't trace a given vector uniquely back to a specific underlying copyrighted expression, and demonstrate substantial similarity of expression rather than idea, the «derivative work» argument in the legal sense becomes strained.
To think that any company anywhere actually removes all data upon request seems a bit naive to me. Sure, maybe I'm too pessimistic, but there's just not enough evidence that these deletes are not soft deletes. The data is just too valuable to them.
But see, that requires two totally different workflows. It would just be easier to soft delete everything and tell everyone it's a hard delete.
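To make the two workflows concrete, a minimal sketch using SQLite; the comments table and deleted_at column are hypothetical, following a common soft-delete convention:

```python
# A sketch of soft delete vs. hard delete. Uses SQLite; the comments table
# and the deleted_at column are hypothetical, following a common convention.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, body TEXT, deleted_at TEXT)")
db.execute("INSERT INTO comments (body) VALUES ('hello HN')")

# Soft delete: the row stays on disk; a flag merely hides it.
db.execute("UPDATE comments SET deleted_at = datetime('now') WHERE id = 1")
print(db.execute("SELECT * FROM comments WHERE deleted_at IS NULL").fetchall())
# [] -- looks deleted to the user and to any "data about me" report

# Hard delete: the row itself is gone (backups are a separate problem).
db.execute("DELETE FROM comments WHERE id = 1")
print(db.execute("SELECT * FROM comments").fetchall())  # [] -- actually gone
```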
I've never been convinced that my data will be deleted from any long-term backups. There's nothing preventing them from periodically restoring data from a previous backup and doing no due diligence to ensure that hard-deleted data is deleted again.
Who in the EU is actually going in and auditing hard deletes? If you log in and can no longer see the data because the soft-delete flag prevents it from being displayed, and/or if any "give me a report of the data you have on me" request comes back empty because of the soft-delete flag, how does anyone prove their data was only soft deleted?
What would a company that does that, hypothetically, reply to a user who requests the data the company holds on them? Would it hand over the soft-deleted data, or would it say it has no data?
They would obviously say they don't have the data. And to keep that person from "lying", the people whose role allows them to make this request would have their software obey the soft-delete flag and show them "no data available", or something like "at the user's request, data deleted on YYYY-MM-DD HH:MM:SS". Who would know any different?
That’s fake news from a hacker. Just look at the data we have. The data they say we have, we don’t. They clearly made it up. It works in politics, so why not in tech?
No, they own it because you said so. You "provide them with a global, non-revocable license to do things with the content you submit" as per the agreement you accepted. You're not required to enter into this agreement, this was a totally optional thing you opted to do.
I just don't understand the public outrage. Why is everyone so worried about this? I write stuff knowing it's publicly available, and I don't give a crap about HN's or Reddit's or whoever's claims to my writings.
As far as I'm concerned it's all public domain, so what if OpenAI trains on it? Why should that bother me? I just don't understand, it really just feels like a witch hunt, like everyone just wants to hate AI companies and they'll jump on any bandwagon that's against them no matter how nonsensical it is.
If you got replaced at a job you needed by ‘AI’, isn’t it salt in the wound that they used (in part) comments you wrote without that in mind to do it?