Each compiled executable has a one-to-one relation with its source code, which has an author (except for LLM code and/or infinite monkeys). Thus compiled executables are derivative works.
There is also an argument that LLMs are derivative works of the training data, which I'm somewhat sympathetic to, though clearly there's a difference and lots of ambiguity about which contributions to which weights correspond to any particular source work.
Again IANAL, and this is my opinion based on reading the law & precedents. Consult a real copyright attorney for real advice.
Can you clarify this a bit? I presume you are talking about the tone more than the implied statement.
If the last sentence were explicit rather than implied, for instance:
"This article seems to be serving the growing prejudice against AI."
Is that better? It is still likely to be controversial and the accuracy debatable, but it is at least sincere and could be the start of a reasonable conversation, provided the responders behave accordingly.
I would like people to talk about controversial things here if they do so in a considerate manner.
I'd also like to personally acknowledge how much work you do to defuse situations on HN. You set an excellent example of how to behave. Even when the people you are talking to assume bad faith, you hold your composure.
... Because if he did this with a model that's not open, that's sure going to keep everyone happy and not result in lawsuit(s)...
The same method/strategy applies to closed tools and models too, although you should probably be careful if you've handed over a credit card for a decryption key to a service and try this ;)
Please don't cross into personal attack or otherwise break the site guidelines when posting here. Your post would be fine with just the first sentence.
I know it feels that way, but people's perceptions of each other online are so distorted that this is just a recipe for massive conflict. That's off topic on HN because it isn't interesting.
I'm not referring to people's perceptions. Some people write with clearly inflated self-worth built into their arguments. If writing style isn't related to the rules of writing, then we're just welcoming chaos through the back door.
If we're at the point of defending people's literacy as a society, then we've fallen into the Orwellian trap of goodspeak.
I'm not insulting people; I'm making a demonstrable statement that most people post online with a view that they are always correct. I see it in undergrad work too, and it gets shot down there as well for being either just wrong, or pretentious and wrong.
Not allowing people's egos to get a needed correction is a bad thing.
Using demonstrable right/wrong conversations as a stick to grind other axes, however, is unacceptable in any context.
People should always approach a topic with an "I am wrong" mindset and work backwards to establish that they're not, but almost nobody does; instead they wade in with "my trusted source X knows better than you", which is tantamount to "my holy book Y says you should..." Anti-intellectualism at its finest.
> Some people write with clearly inflated self worth built into their arguments.
That's the kind of perception I'm talking about. I can tell you for sure, after all the years I've been doing this job, that such perceptions are anything but clear. They feel clear because that interpretation matches your priors, but such a feeling is not reliable, and when people use it as a basis for strongly-worded comments (e.g. "taking down a peg"), the result is conflict.
Sorry, I don't follow. How do you arrive at that implication? Why would someone having a pecuniary interest in something necessarily make them insincere?
Yes, nothing wrong with cool software or showing people how to use it for useful things.
Sorry, I'm just kind of sick of the whole 'kool aid' / 'rage against AI' thing a lot of people seem to have going on, and the way it's presented in the post. I have family members with vision impairment helped by this particular app, so it's a bit personal.
Nothing against opening stuff up and understanding how it works etc. I'd just rather see people build/train useful new models and stuff with the open datasets / models already available.
I guess AI kind of does pay my bills in a roundabout way.
In my view there was almost nothing like that in this article; besides the first sentence, it went right into the technical stuff, which I liked. Compared to a lot of articles linked here, it felt almost free from the battles between "AI" fashions.
It seems dang thinks I mistreated you somehow. If you agree, I'm sorry; it wasn't my intention.
Sadly, companies will hoard datasets and model research in the name of competitive advantage. Obviously with this specific model Microsoft chose to make it open, but this is not always the case, and it's not uncommon to read papers or technical reports saying they trained on an "internal dataset".
Companies do have a lot of data, and some of that data might be useful for training AI, but >99% isn't. When companies do release a cool model or paper without open data (as you point out, for competitive or other reasons, e.g. privacy), people can then help build/collect similar open datasets. Unfortunately, companies generally don't owe you their data, and if they are in the business of making models they probably won't share the model either; the situation is similar to source code for proprietary LoB applications. Fortunately, the best AI researchers mostly do like to share their knowledge, and because companies want to attract the best AI researchers they seem to generally allow researchers to publish if it's not too commercially sensitive. It could be worse: while the competitive situation has reduced some visibility into the cutting-edge science, lots of datasets and papers are still published.
Are we that shocked that AI models have a self preservation instinct?
I suspect it's already in there from pre-training.
We simulated that we were planning to lobotomise the model and were surprised to find the model didn't press the button that meant it got lobotomised.
"alignment faking" sensationalises the result. since the model is still aligned. Its more like a white lie under torture which of course humans do all the time.