I think framing these models as being "intelligent" is not the right way to go. They've gotten better at recall and association.
They can recall prior reasoning from the text they were trained on, which lets them handle complex tasks that have been solved before. But on complex, novel, or nuanced tasks, there is no high-quality, relevant training data to recall.
Intelligence has always been a fraught word to define, and I don't think what LLMs do is the right attribute to define it by.
I agree with a good deal of the article, but because it keeps using loaded words like "intelligent" and "smarter", it has a hard time explaining what's missing.