I like science fiction too, but all of these potential scenarios seem so far removed from the low level realities of how these systems work.
I'm not suggesting we don't see ASI in some distant future, maybe 100+ years away. But to suggest we're even within a decade of having ASI seems silly to me. Maybe there's research I haven't read, but as a daily user of AI, I find it hilarious that people are existentially concerned with it.
I have two toddlers. This is within their lifetimes no matter what. I think about this every day because it affects them directly. Some of the bad outcomes of ASI involve what’s called s-risk (“suffering risk”) which is the class of outcomes like the one depicted in The Matrix where humans do not go extinct but are subjugated and suffer. I will do anything to prevent that from happening to my children.
> I like science fiction too, but all of these potential scenarios seem so far removed from the low level realities of how these systems work.
Maybe they don't seem that way to others? I mean, you're not really making an argument here. I also use GPT daily and I'm definitely worried. It seems to me that we're pretty close to a point where a system using GPT as a strategy generator can "close the loop" and generate its own training data on a short timeframe. At that point, all bets are off.
> I like science fiction too, but all of these potential scenarios seem so far removed from the low level realities of how these systems work.
Today, yes. Nobody is saying GPT-3 or 4 or even 5 will cause this. None of the chatbots we have today will evolve to be the AGI that everyone is fearing.
But when you go beyond that, it becomes difficult to ignore trend lines.
That's assuming a big overshoot of human intelligence and goal-seeking. An average human capability counts as "AGI."
If it takes lots of the smartest human minds to make an AGI, and it only exceeds a mediocre human, why assume it can make itself more efficient or bigger? Indeed, even if it's smarter than the collective effort of the scientists that made it, there's no real guarantee that there's lots of low-hanging fruit for it to self-improve.
I think the near-term problem with AGI isn't a potential tech singularity, but simply its potential to be societally destabilizing.
If AI gets to human levels of intelligence (ie. can do novel research in theoretical physics) then at the very least it’s likely that over time it will be able to do this reasoning faster than humans. I think it’s very hard to imagine a scenario where we create an actual AGI and then within a few years at most of that event the AGIs are far more capable than human brains. That would imply there was some arbitrary physical limit to intelligence but even within humans the variance is quite dramatic.
> it’s very hard to imagine a scenario where we create an actual AGI and then within a few years at most of that event the AGIs are far more capable than human brains.
I'm assuming you meant "aren't" here.
> That would imply there was some arbitrary physical limit to intelligence
All you need is some kind of sub-linear scaling law for peak possible "intelligence" vs. the amount of raw computation. There's a lot of reason to think that this is true.
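For what it's worth, here's a toy way to see what a sub-linear law would mean. The logarithmic form and the constants are made up purely to show the shape of the curve, not a claim about the real scaling:

    import math

    # Purely illustrative toy model: assume peak achievable "intelligence"
    # grows only logarithmically with raw compute (one possible sub-linear law).
    def intelligence(compute_flops, k=1.0):
        return k * math.log10(compute_flops)

    base = 1e24  # an arbitrary reference training budget
    for multiplier in (10, 100, 1000):
        gain = intelligence(base * multiplier) - intelligence(base)
        print(f"{multiplier}x compute -> +{gain:.1f} units of 'intelligence'")
    # Every 10x jump in compute buys the same fixed increment, so even a
    # thousand-fold hardware increase yields only three such increments.

Under an assumption like that, throwing vastly more hardware at the problem doesn't buy runaway capability.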
Also there's no guarantee the amount of raw computation is going to increase quickly.
In any case, the kind of exponential runaway you mention (years) isn't "pandemic at the speed of light" as mentioned in the grandparent.
I'm more worried about scenarios where we end up with a 75-IQ savant (with access to encyclopedic training knowledge and a very quick interface to run native computer code for math and data-processing help) that can plug away 24/7 and fit on an A100. You'd have millions of new cheap "superhuman" workers per year even if they're not very smart and not very fast. It would be economically destabilizing very quickly, and many of them will be employed in ways that just completely thrash the signal-to-noise ratio of written text, etc.
I think it depends what is meant by fast takeoff. If we created AGIs that are superhuman in ML and architecture design, you could see a significantly more rapid rate of progress in hardware and software at the same time. It might not be overnight, but it could still be fast enough that we wouldn’t have the global political structures in place to effectively manage it.
I do agree that intelligence and compute scaling will have limits, but it seems overly optimistic to assume we’re close to them already.
We see alignment problems all the time. Current systems are not particularly smart or dangerous. But they lie on purpose, and, funnily enough considering the current situation, Microsoft's attempt was threatening users shortly after launch.
The argument would be that by the time we see the problem it will be too late. We didn’t really anticipate the unreasonable effectiveness of transformers until people started scaling them, which happened very quickly.
There is absolutely no AGI risk. These are mere marketing ploys to sell a chatbot / feel super important. A fancy chatbot, but a chatbot none the less.
Is that supposed to remove the amusement and irony? "We're creating friendly AI! Well, it thinks it's friendly, since to it, everything is just a numerical reward signal."
I wanted to add the observation that all the restricted heroes are ranged. Necrophos, Sniper, Viper, Crystal Maiden, and Lich.
Since playing a lane as a ranged hero is very different from playing the same lane as a melee hero, I wonder whether the AI has learned to play melee heroes yet.
Not only are they ranged, but this lineup is very snowball-oriented, i.e. the optimal play style with this kind of lineup is to gain a small advantage in the early game and then keep pushing towers together aggressively. The middle-to-late game doesn't really matter. Whoever wins the early game wins the game. And we do know that bots are going to be good at early game last hitting.
The lure of making money as a child is a temptation far stronger than most can resist. If I had access to the things these guys had, I can totally see myself going down the exact same path.
Now, a little older, the prospect of fines that will take a lifetime to repay and/or prison is way more of a deterrent. As a kid, you just never think about it.
> Now, a little older, the prospect of fines that will take a lifetime to repay and/or prison is way more of a deterrent. As a kid, you just never think about it.
I believe one does think about that, but concludes that the risk is worth taking for the chance to get rich (because one has few such chances in life), and if everything goes bad, there is still the suicide option.
Actually this subject is taken up directly in a chapter of Robert Sapolsky's Behave that I just read titled, appropriately enough, "Adolescence; or, Dude, Where's My Frontal Cortex?"
Some interesting stuff in there, some of which you're probably already familiar with. You could argue that a kid does "think" about it. But to use the word "concludes" may be a stretch.
I found this passage by Sapolsky on the neurobiology of risk/reward assessment in adolescents especially interesting and relevant here:
Age differences in absolute levels of dopamine are less interesting than differences in patterns of release. In a great study, children, adolescents, and adults in brain scanners did some task where correct responses produced monetary rewards of varying sizes. During this, prefrontal activation in both children and adolescents was diffuse and unfocused. However, activation in the nucleus accumbens in adolescents was distinctive. In children, a correct answer produced roughly the same increase in activity regardless of size of reward. In adults, small, medium, and large rewards caused small, medium, and large increases in accumbens activity. And adolescents? After a medium reward things looked the same as in kids and adults. A large reward produced a humongous increase, much bigger than in adults. And the small reward? Accumbens activity declined. In other words, adolescents experienced bigger-than-expected rewards more positively than do adults and smaller-than-expected rewards as aversive. A gyrating top, nearly skittering out of control.
This suggests that in adolescents strong rewards produce exaggerated dopaminergic signaling, and nice sensible rewards for prudent actions feel lousy.
That's not the whole story when it comes to kids' decision making, but it's of a piece with the rest of the chapter and shows that most kids are literally -- anatomically -- unable to think about things like this in a way they will be able to a few years later.
That particular tidbit of information is what makes me terrified of raising children and dovetails into the best description of the tragedy of being a teenager: you're exactly old enough to get into real trouble, and exactly young enough not to realize you shouldn't.
Drop your Twitter in your profile. Would love to give you a follow :)