
Love it! Always wanted to build something like this. Glad you made it first though!

Drop your Twitter in your profile. Would love to give you a follow :)


Death.

The default consequence of AGI's arrival is doom. Aligning a super intelligence with our desires is a problem that no one has solved yet.

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

----

Listen to the Dwarkesh Podcast episodes with Eliezer or Carl Shulman to learn more about this.


I like science fiction too, but all of these potential scenarios seem so far removed from the low level realities of how these systems work.

I'm not suggesting we won't see ASI in some distant future, maybe 100+ years away. But to suggest we're even within a decade of having ASI seems silly to me. Maybe there's research I haven't read, but as a daily user of AI, I find it hilarious that people are existentially concerned with it.


> maybe 100+ years away

I have two toddlers. This is within their lifetimes no matter what. I think about this every day because it affects them directly. Some of the bad outcomes of ASI involve what’s called s-risk (“suffering risk”) which is the class of outcomes like the one depicted in The Matrix where humans do not go extinct but are subjugated and suffer. I will do anything to prevent that from happening to my children.


> I like science fiction too, but all of these potential scenarios seem so far removed from the low level realities of how these systems work.

Maybe they don't seem that way to others? I mean, you're not really making an argument here. I also use GPT daily, and I'm definitely worried. It seems to me that we're pretty close to a point where a system using GPT as a strategy generator can "close the loop" and generate its own training data on a short timeframe. At that point, all bets are off.


> I like science fiction too, but all of these potential scenarios seem so far removed from the low level realities of how these systems work.

Today, yes. Nobody is saying GPT-3 or 4 or even 5 will cause this. None of the chatbots we have today will evolve to be the AGI that everyone is fearing.

But when you go beyond that, it becomes difficult to ignore trend lines.

Here's a detailed scenario breakdown of how it might come to be: https://www.dwarkeshpatel.com/p/carl-shulman


> Aligning a super intelligence with our desires is a problem that no one has solved yet.

It's a problem that we haven't seen the existence of yet. It's like saying no one has solved the problem of alien invasions.


No, the problem with AGI is potential exponential growth.

So less like an alien invasion.

And more like a pandemic at the speed of light.


That's assuming a big overshoot of human intelligence and goal-seeking. Merely average human capability already counts as "AGI."

If it takes many of the smartest human minds to make an AGI, and it only exceeds a mediocre human, why assume it can make itself more efficient or bigger? Indeed, even if it's smarter than the collective effort of the scientists that made it, there's no real guarantee that there's lots of low-hanging fruit for it to self-improve.

I think the near-term problem with AGI isn't a potential tech singularity, but simply its potential to be societally destabilizing.


If AI gets to human levels of intelligence (i.e. can do novel research in theoretical physics), then at the very least it's likely that over time it will be able to do this reasoning faster than humans. I think it's very hard to imagine a scenario where we create an actual AGI and then within a few years at most of that event the AGIs are far more capable than human brains. That would imply there was some arbitrary physical limit to intelligence, but even within humans the variance is quite dramatic.


> it’s very hard to imagine a scenario where we create an actual AGI and then within a few years at most of that event the AGIs are far more capable than human brains.

I'm assuming you meant "aren't" here.

> That would imply there was some arbitrary physical limit to intelligence

All you need is some kind of sub-linear scaling law for peak possible "intelligence" vs. the amount of raw computation. There's a lot of reason to think that this is true.
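To make that concrete, here is a minimal illustrative form (the functional shape and the exponent are assumptions for the sake of the example, not established results):

    I(C) = a * C^b,   with 0 < b < 1   (say b = 0.3)

where C is raw computation and I(C) is peak achievable capability. Under a law like this, doubling compute multiplies peak capability by only 2^b ≈ 1.23, so each further capability jump demands exponentially more hardware, which bounds how fast any runaway could go.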

Also there's no guarantee the amount of raw computation is going to increase quickly.

In any case, the kind of exponential runaway you mention (years) isn't "pandemic at the speed of light" as mentioned in the grandparent.

I'm more worried about scenarios where we end up with a 75-IQ savant (with access to encyclopedic training knowledge and a very quick interface to run native computer code for math and data processing) that can plug away 24/7 and fit on an A100. You'd have millions of new cheap "superhuman" workers per year even if they're not very smart and not very fast. It would be economically destabilizing very quickly, and many of them will be employed in ways that just completely trash the signal-to-noise ratio of written text, etc.


I think it depends on what is meant by a fast takeoff. If we created AGIs that are superhuman in ML and architecture design, you could see a significantly more rapid rate of progress in hardware and software at the same time. It might not be overnight, but it could still be fast enough that we wouldn't have the global political structures in place to effectively manage it.

I do agree that intelligence and compute scaling will have limits, but it seems overly optimistic to assume we’re close to them already.


Exponential growth is not intrinsically a feature of an AGI except that you've decided it is. It's also almost certainly impossible.

Main problems stopping it are:

- no intelligent agent is motivated to improve itself because the new improved thing would be someone else, and not it.

- self-improvement costs money, and you're just pretending everything is free.


We see alignment problems all the time. Current systems are not particularly smart or dangerous. But they lie on purpose, and, funnily enough given the current situation, Microsoft's attempt was threatening users shortly after launch.


The argument would be that by the time we see the problem it will be too late. We didn’t really anticipate the unreasonable effectiveness of transformers until people started scaling them, which happened very quickly.


Survivorship bias.

It's like saying don't worry about global thermonuclear war because we haven't seen it yet.

The Neanderthals, on the other hand, did encounter a super-intelligence.


> It's a problem that we haven't seen the existence of yet. It's like saying no one has solved the problem of alien invasions.

But by the time we see an unaligned superintelligence in existence, surely it's already too late to do anything about it.



I'm not sure that it's a matter of "knowing" as much as it is "believing".


There is absolutely no AGI risk. These are mere marketing ploys to sell a chatbot / feel super important. A fancy chatbot, but a chatbot nonetheless.


The 5 MB limit is per file. You can still upload unlimited files as long as each one is under 5 MB.


Not OP, but this felt like a simple test: https://twitter.com/backus/status/1091203973246111744

There's the Vividness of Visual Imagery Questionnaire, but that's self-administered.



My favorite is http://worrydream.com/

Here's a twitter thread containing some 'quirky' personal websites, not quite portfolios though.

https://twitter.com/michael_nielsen/status/10026747950313267...


> I find it amusing & ironic that they're pursuing games of efficiently & strategically killing enemies as examples of their successful progress. ;-)

Except that the amusement and irony would cease to exist once you have some idea of what's happening behind the scenes.

The AI doesn't know it's "killing enemies". For it, that's just something that results in an increase in a numerical reward signal.
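To make the point concrete, here is a toy sketch (not OpenAI's actual training code; the environment, action names, and reward values are all made up): from the agent's side there is only a number coming back, and whatever raises it gets reinforced.

    import random

    def toy_env_step(action):
        # Hypothetical environment: internally, a reward of 1.0 corresponds to
        # "an enemy died", but the agent only ever sees the number.
        return 1.0 if action == "attack" and random.random() < 0.3 else 0.0

    values = {"attack": 0.0, "retreat": 0.0}   # running value estimate per action
    counts = {"attack": 0, "retreat": 0}

    for _ in range(10_000):
        # epsilon-greedy: mostly pick the action with the higher estimate
        if random.random() < 0.1:
            action = random.choice(list(values))
        else:
            action = max(values, key=values.get)
        reward = toy_env_step(action)
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]  # incremental mean

    print(values)  # "attack" ends up valued around 0.3; "retreat" stays near 0.0

Nothing in that loop encodes what the reward means; the semantics live entirely in the environment.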


Is that supposed to remove the amusement and irony? "We're creating friendly AI! Well, it thinks it's friendly, since to it, everything is just a numerical reward signal."


Obviously. I understand the accomplishments and applicability of the tech.

Stepping back a bit and simply looking at the context, though, it is an amusing contrast.


I wanted to add the observation that all the heroes in the restricted pool are ranged: Necrophos, Sniper, Viper, Crystal Maiden, and Lich.

Since playing a lane as a ranged hero is very different from playing the same lane as a melee hero, I wonder whether the AI has learned to play melee heroes yet.


Not only are they ranged, but this lineup is very snowball-oriented, i.e. the optimal play style with this kind of lineup is to gain a small advantage in the early game and then keep pushing towers together aggressively. The middle-to-late game doesn't really matter. Whoever wins the early game wins the game. And we do know that bots are going to be good at early game last hitting.


The article states the bots are actually rather mediocre at last hitting.


The lure of making money as a child is a temptation far stronger than most can resist. If I had access to the things these guys had, I can totally see myself going down the exact same path.

Now, a little older, the prospect of fines that would take a lifetime to repay, or prison, is a much stronger deterrent. As a kid, you just never think about it.


> Now, a little older, the prospect of fines that would take a lifetime to repay, or prison, is a much stronger deterrent. As a kid, you just never think about it.

I believe one does think about that, but concludes that the risk is worth taking for the chance to get rich (because one has few such chances in life), and if it all goes bad, there is still the suicide option.


Actually, this subject is taken up directly in a chapter of Robert Sapolsky's Behave that I just read, titled, appropriately enough, "Adolescence; or, Dude, Where's My Frontal Cortex?"

Some interesting stuff in there, some of which you're probably already familiar with. You could argue that a kid does "think" about it. But to use the word "concludes" may be a stretch.

I found this passage by Sapolsky on the neurobiology of risk/reward assessment in adolescents especially interesting and relevant here:

Age differences in absolute levels of dopamine are less interesting than differences in patterns of release. In a great study, children, adolescents, and adults in brain scanners did some task where correct responses produced monetary rewards of varying sizes. During this, prefrontal activation in both children and adolescents was diffuse and unfocused. However, activation in the nucleus accumbens in adolescents was distinctive. In children, a correct answer produced roughly the same increase in activity regardless of size of reward. In adults, small, medium, and large rewards caused small, medium, and large increases in accumbens activity. And adolescents? After a medium reward things looked the same as in kids and adults. A large reward produced a humongous increase, much bigger than in adults. And the small reward? Accumbens activity declined. In other words, adolescents experienced bigger-than-expected rewards more positively than do adults and smaller-than-expected rewards as aversive. A gyrating top, nearly skittering out of control.

This suggests that in adolescents strong rewards produce exaggerated dopaminergic signaling, and nice sensible rewards for prudent actions feel lousy.

That's not the whole story when it comes to kids' decision making, but it's of a piece with the rest of the chapter and shows that most kids are literally -- anatomically -- unable to think about things like this in a way they will be able to a few years later.


That particular tidbit of information is what makes me terrified of raising children and dovetails into the best description of the tragedy of being a teenager: you're exactly old enough to get into real trouble, and exactly young enough not to realize you shouldn't.


It starts as fun

Then profit

Then greed!


> Not sure how much it's "paying off"

Read what you love until you love to read. Then you can move on to books that seem like they might have some payoff.

