It may just be the circles of the internet I’ve been hanging around in, but it seems very commonly assumed that the pace of progress in the field of AI is so rapid that — unless we take steps to avoid it — someone (or more likely, some large organization) will soon develop a super-intelligent AI.
And it’s not just the fringe. We can see such apparently sober voices as Geoffrey Hinton sounding the alarm about super-smart AI.
But … why? All the recent exciting progress has been due to large language models, which are basically hacks that allow us to use large volumes of normally-intelligent text to train a statistical model for next-token prediction. It’s not even particularly complex to do this in principle (though wrangling all the compute you need can be quite difficult). It’s all very clever, yes, but at bottom it’s just a brute force approach.
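To be concrete about what I mean by “brute force”: the whole training objective is just “predict the next token,” scaled up enormously. Here’s a deliberately tiny sketch of that objective (a hypothetical toy in PyTorch, nothing like a production setup, just to illustrate the shape of the thing):

```python
# Toy next-token prediction sketch (hypothetical, illustrative only).
# A tiny embedding + linear model is trained to guess the next character,
# using the same cross-entropy objective that LLM pretraining scales up.
import torch
import torch.nn as nn

text = "the quick brown fox jumps over the lazy dog "
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])

# Inputs are tokens 0..n-2; targets are tokens 1..n-1 (shifted by one).
x, y = data[:-1], data[1:]

model = nn.Sequential(
    nn.Embedding(len(vocab), 32),  # token -> vector
    nn.Linear(32, len(vocab)),     # vector -> logits over the next token
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(x)          # predicted scores for each possible next token
    loss = loss_fn(logits, y)  # penalize bad guesses about the actual next token
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Real LLMs replace the toy model with a transformer and the toy corpus with trillions of tokens, but the objective is the same.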
These solutions get us neat tools, but I don’t see how they bring us even one step closer to super-intelligence. You can’t just train an LLM with more and more parameters and more and more tokens and expect it to be smarter than the data it was trained on. And such models don’t bring us any real understanding of what it would take to make super-intelligent machines.
But if Geoffrey Hinton is worried, surely I’ve gone wrong somewhere. What am I not seeing?
1. Progress in LLMs has come much more rapidly than expected. This means that when [arbitrary threshold] is crossed, we probably won't have much, if any, advance warning.
2. Nobody on earth knows what the path to AGI (or even narrow-but-still-superhuman intelligence with enough agency to be dangerous) looks like. So, it's not currently possible to know if LLMs are a path to existential threat. (No, it is not correct to say that LLMs are too simple to ever be a threat, as far as I can tell.) (Recall also that we don't know at all where we get our own consciousness or how it works.)
So it seems like #2 is more where you're hung up, and frankly, it's just unknown. If we knew what the path to AGI looked like, we would be in a very different world. When you combine that with #1, it becomes very scary, because we might be on the cusp of an irreversible change, so it's useful to assume we're there if doing so allows you to potentially alter or avert disaster.