LLMs are language models. We interact with them using language: all of that, but also only that. That doesn't mean they have "common sense", context, motivations like ours, agency, or even reasoning like ours.
But since we interact with other people mostly through language, and since the start of the internet many of those interactions have happened in a way similar to how we interact with AI, the difference is not so obvious. We are falling into the Turing test here, mostly because that test is more about language than about intelligence.
What matters is what happens on the outside. We don't know what happens on our inside (or the inside of others, at least); we only know the language and how it is used, and even the meanings don't have to be the same as long as they are used consistently. And you get that by construction. Does that mean intelligence, self-consciousness, a soul, or whatever? We only know that it walks like a duck and quacks like a duck.
But have you considered that humans really really want to feel like they're unique and special and exceptional?
If it walks like a duck and quacks like a duck, then it's not anything remotely resembling a duck, and is just a bag of statistics doing surface level duck imitation. According to... ducks, mostly.
If your takeaway from the LLM breakthrough is "abstract thinking is actually quite mundane", then at least you're heading in the right direction. Some people are straight up in denial.
You have no idea what abstract thinking actually is but you are convinced the illusion presented by an LLM is it. Your ontology is confused but I doubt you are going to figure out why b/c that would require some abstract thinking which you're convinced is no more special than matrix arithmetic.
If I wanted worthless pseudo-philosophical drivel, I'd ask GPT-2 for some. Otherwise? Functional similarity at this degree is more than good enough for me.
By now, I am fully convinced that this denial is "AI effect" in action. Which, in turn, is nothing but cope and seethe driven by human desire to remain Very Special.
I feel like the interface in this case has caused us to fool ourselves into thinking there's more there than there is.
Before 2022 (most of history), if you had a long seemingly sensible conversation with something, you could safely assume this other party was a real thinking human mind.
It's like a duck call.
Edit: I want to add that because this is a neural net trained to output sensible text, language isn't just the interface.
Unlike a website, there's no separation between anything; with LLMs the back end and front end are all one blob.
Edit 2: It seems I have upset the ducks that think the duck call is a real duck.
From any point in the past, this is the future. It is not what is "best" by some metric, nor what is right or should be; it is what actually happens. It is like saying that evolution's goal is intelligence or whatever: it is whatever sticks to the wall.
Can we change the direction in which things are going? Yes, but you must understand what the "we" means there, at least in the context of a global change of direction.
The PC revolution started with the idea that you could own a computer. Now, depending on your choice of operating system, you own neither it nor what it runs, you don't own the computer, and you no longer own the data you put there that used to belong to you. And, eventually, you go along with your data.
And if you decide that you must install that operating system because it runs a particular game or app, think of everything you are sacrificing for that as an implicit extra cost.
The headline post here is about a user having to exercise an incredible amount of effort to remove features they don't want but which the vendor will deliberately reinstall despite user preference.
The punishment for crimes in Altered Carbon was being sent far enough into the future that you know nothing and no one. With age you get alienated in a similar way, maybe adding (lack of) understanding to the mix. Your brain has limits, your adaptability has limits, your physiology has limits; pushing them forward doesn't remove them. Eventually you get tired, bored, or want out. At least speaking of most cases and not special ones (I hope).
And having a simulation of ourselves in a different medium is a different game.
Framing this as something cyclic, as if the rest of the world were static, may be a mistake if there are deep changes outside (related or not to what is being done here), or if the nature of the field radically changes.
Then the cycle is broken, and there might be no survivors, or the regrowth may be so far into the future that it will make no difference for most of the survivors.
The article covers a lot of ground, but where he was discussing artists, rights, and copyright, I couldn't stop thinking that, in good part, what artists (and programmers, and a lot of other professionals threatened by AIs) do is more or less the kind of work that AI does.
We are meme machines: we mix, combine, reshuffle, or just assemble different memes and patterns we saw somewhere, mostly System 1 work, and then put an "it's mine alone" label on it and proceed to sell it with exclusive rights. We can go further than AIs, as we can mix things from very different fields, extrapolate trends that are just not there, and bring System 2 into play, but not much further than that, in the sense that other humans can still follow and understand it.
And the lion's share of what is around, those "owned" mixes that everyone else is forbidden to think or express on their own, doesn't go much further than what a current AI can do. A lot of music, books, and other pieces of "art" are just the result of an algorithm (performed by humans or not).
There are things about the use of AI that may be wrong, or at the very least a bad trend, but don't lose sight of what we essentially are. We can take a step forward, but most of the time we are not so different.
A stack of blank pages and an inkwell contains all the elements needed for every book ever written, but still takes much more than the elements themselves to create a book.
I believe that alien intelligence exists, somewhere. What remains uncertain is whether human intelligence does as well. Think of the asymmetries in how we pay attention to new information.
It may not be related to this in particular, but how many names that are not tied to any shooting or similar event appear in low-activity Google Trends? It could happen with most names in the region where you live, e.g. from spam campaigns, AI searches, people curious about their neighbours, or whatever else covers most people. If such spikes are common within a reasonable time frame before any given moment, what looks like a signal may be a coincidence that is not so rare. Checking only backwards from a particular event, without checking forward to see whether the pattern is actually unusual, is the missing link there.
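To make that missing forward check concrete, here is a minimal simulation sketch in plain Python. Every number in it is a made-up assumption for illustration (the spike probability, the lookback window, the population size), and it uses synthetic data, not the real Trends API. It shows how often purely random names would also show a "suspicious" pre-event spike:

    import random

    # Illustrative simulation only: every number here is a made-up
    # assumption, not real Google Trends data.
    random.seed(42)

    WEEKS = 52          # one year of weekly search-interest samples
    SPIKE_PROB = 0.05   # assumed chance a low-volume name spikes in any week
    NAMES = 10_000      # population of ordinary, event-free names

    def had_spike_before(event_week: int, lookback: int = 4) -> bool:
        """Did this simulated name spike in the weeks before the event?"""
        history = [random.random() < SPIKE_PROB for _ in range(WEEKS)]
        return any(history[max(0, event_week - lookback):event_week])

    event_week = 30
    hits = sum(had_spike_before(event_week) for _ in range(NAMES))

    # Backwards-only check: many *random* names also show a pre-event
    # spike, so finding one for a particular name proves very little.
    print(f"{hits / NAMES:.1%} of random names 'predicted' the event")

With these assumed numbers, roughly 1 - (1 - 0.05)^4 ≈ 18.5% of ordinary names "predict" the event. The forward check amounts to asking whether names actually tied to events spike more often than that base rate.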