> It’s all very clever, yes, but at bottom it’s just a brute force approach.
Eh, this doesn’t really matter. If one "brute forces" a construct that acts like an AGI out of many simple models, then (in terms of the danger) it might as well be an AGI.
And that, btw, is a major point of concern. LLMs and other models are "dumb," frozen, and monolithic now, but stringing them together is a relatively simple engineering problem. So is adding some kind of learning mechanism that keeps training the model as it goes.
> But if Geoffrey Hinton is worried surely I’ve gone wrong somewhere. What am I not seeing?
He was very specifically worried about the pace. Hinton has been at the center of this field forever, and even he didn't expect things to jump so quickly. If he couldn't see it coming, how is anyone supposed to spot the danger right before it arrives?