It gives the speaker confirmation that they're absolutely right: names are arbitrary.
While also politely, implicitly, pointing out that the core issue is that it doesn't matter to you --- which is fine! --- but being the 10th person to say as much may just be contributing to a dull conversation.
right, freedom of speech is free as long as it agrees with the viewpoint of whoever's in power. similar to how history is written by the victors, but that part is conveniently ignored. it's just facts in the open marketplace of ideas, yay!
the difference here is whether you search or seek something out, i.e. explicitly consent to viewing advertisements for guitars in your active browsing session, vs. having them pushed to you without your consent the next day on your phone.
I'm not against monetizing via advertising for the 1st use case either.
don't you think it is empowering and inspiring for artists? they can try several drafts of their work instantaneously, checking out various compositions etc. before even starting the manual art process.
they could even input/train it on their own work. I don't think someone can use AI to copy your art better than the original artist can.
Plus art is about provenance. If we could find a scrap of paper with some scribbles from Picasso, it would be art.
This does seem to work for writing. Feed your own writing back in and try variations / quickly sketch out alternate plots, that sort of thing.
Then go back and refine.
Treat it the same as programming. Don't tell the AI to just make something and hope it magically does it as a one-shot. Iterate, combine with other techniques, make something that is truly your own.
oh they hate it so much when this hypocrisy is pointed out. better to put the high school kids downloading books on pirate bay in jail, but I guess if your name starts with Alt and ends in man then there's an alt set of rules for you.
also remember when GPU usage was so bad for the environment because it was used to mine crypto? but I guess now it's okay to build nuclear power plants specifically for gen-ai.
this is what happens when you centralize all decision making to people who have no local knowledge of the community they are administering, and who predicate their jobs on following a checklist, usually as implemented by buggy software, instead of making a judgement call based on experience and circumstances.
If you go by the Merriam-Webster definition of fascism, this is getting pretty close.
> Fascism : a populist political philosophy, movement, or regime (such as that of the Fascisti) that exalts nation and often race above the individual, that is associated with a centralized autocratic government headed by a dictatorial leader, and that is characterized by severe economic and social regimentation and by forcible suppression of opposition
As if this has anything to do with locality? Rulers of parcels of land small enough you could see it all from a single hill have been utter despots before. Heck, just look at how some parents treat their children.
This is what happens when you elect a convicted felon, rapist, bullying loser compromised by the Russian state who outright says he wants to be a dictator and he puts sycophants and bullies into positions of power to do exactly that.
Saying this is about local vs nonlocal governance is nothing more than shirking the responsibility of 70,000,000 Americans who wanted this and 100,000,000 Americans who couldn't be bothered to stop it.
I do think that there’s an aspect of small government that connects those 70M to their government in a way that large scale federalization may fail to do. Not sure how to achieve benefits of scale without this problem…
Yes, and open source models + local inference are progressing rapidly.
This whole API idea is kind of limited by the fact that you need to round-trip to a datacenter + trust someone with all your data.
Imagine when OpenAI has their 23andMe moment in 2050 and a judge rules that all your queries since 2023 are for sale to the highest bidder.
Even worse for these LLM-as-a-service companies is that the utility of open source LLMs largely comes down to customization: you can get a lot of utility by restricting token output, varying temperature, and lightly retraining them for specific applications.
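To make that concrete, here's a minimal sketch of the kind of knobs you get with a local model, assuming the Hugging Face transformers library and an arbitrary small open model (the model name and parameter values are just placeholders):

```python
# Minimal sketch: local inference with a small open model, showing the kind
# of customization mentioned above (capping output length, varying
# temperature). Model name and parameter values are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # any small local model works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Summarize in one sentence: open weights let you tune inference."
inputs = tokenizer(prompt, return_tensors="pt")

# Restrict token output and vary sampling temperature per application.
outputs = model.generate(
    **inputs,
    max_new_tokens=64,   # hard cap on output length
    do_sample=True,
    temperature=0.3,     # lower = more deterministic
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Changing the temperature or the token cap per application is a one-line change here, which is exactly the kind of customization a hosted API makes awkward.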
The use-cases for LLMs seem unexplored beyond basic chatbot stuff.
I'm surprised at how little discussion there is of their utility for turning unstructured data into structured data, even with some margin of error. It doesn't even take an especially large model to accomplish it, either.
I would think entire industries could re-form around having an LLM as a first pass on data, with software and/or human error checking, at a significant cost reduction over previous strategies.
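As a rough illustration of that "LLM first pass + software error checking" idea, here's a sketch where complete() is a hypothetical stand-in for whatever model call you use (local or hosted, not a real library function), and cheap deterministic checks do the error checking:

```python
# Sketch of "LLM as a first pass, software as the error check":
# pull structured fields out of free text, then validate before accepting.
# `complete(prompt)` is a placeholder for your model call, NOT a real API.
import json

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your local model or API client here")

PROMPT = """Extract the fields below from the text as JSON with keys
"name", "date" (YYYY-MM-DD), and "amount" (number). Text:
{text}"""

def extract_record(text: str) -> dict | None:
    raw = complete(PROMPT.format(text=text))
    try:
        record = json.loads(raw)
    except json.JSONDecodeError:
        return None  # route to human review instead of silently accepting
    # Cheap deterministic checks catch most of the "margin of error".
    if not isinstance(record, dict):
        return None
    if set(record) != {"name", "date", "amount"}:
        return None
    if not isinstance(record.get("amount"), (int, float)):
        return None
    return record
```

Anything that fails validation falls through to human review, which is where the cost reduction comes from: people only look at the ambiguous fraction.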
deferring to best practice instead of best judgement is a major plague of the software industry these days.
best practices usually come from giant companies with tens of thousands of engineers, like google (which doesn't seem to be keeping up with the competition, btw) and amazon (which is notorious for burning people out).
what science or evidence drives the best practices?