Far too many "maybe in a few years" LLM predictions rely on the unspoken assumption that the state of the art in existing, non-LLM tools will not advance at all in the meantime.
"In a few years" you'd have the benefit of the current, bespoke tools, plus all the work you've put into improving them in the meantime.
And the LLM would still be behind, unless you believe that at some point in the future, a radically better solution will simply emerge from the model.
That is, the bet is that at some point, magic emerges from the machine that renders all domain-specialist tooling irrelevant, and one or two general AI companies can hoover up all sorts of areas of specialism. And in the meantime, they get all the investment money.
Why is it that in any other walk of life we wouldn't trust a generalist over a specialist, yet with AI we expect that one day we will?
> That is, the bet is that at some point, magic emerges from the machine that renders all domain-specialist tooling irrelevant, and one or two general AI companies
I have a slightly more cynical take: Those LLMs are not actually general models, but niche specialists on correlated text-fragments.
This means human exuberance is riding on the (questionable) idea that a really good text-correlation specialist can effectively impersonate a general AI.
Even worse: Some people assume an exceptional text-specialist model will effectively meta-impersonate a generalist model impersonating a different kind of specialist!
> Even worse: Some people assume an exceptional text-specialist model will effectively meta-impersonate a generalist model impersonating a different kind of specialist!
Specialists exist because a human generalist can no longer possibly learn and perfect all there is to learn in the world, not because the specialist has magic powers the generalist lacks.

If there were some super-generalist who could, the specialist would have no such advantage.
"In a few years" you'd have the benefit of the current, bespoke tools, plus all the work you've put into improving them in the meantime.
And the LLM would still be behind, unless you believe that at some point in the future, a radically better solution will simply emerge from the model.
That is, the bet is that at some point, magic emerges from the machine that renders all domain-specialist tooling irrelevant, and one or two general AI companies can hoover up all sorts of areas of specialism. And in the meantime, they get all the investment money.
Why is it that we wouldn't trust a generalist over a specialist in any walk of life, but in AI we expect one day to be able to?