Hacker News | cdomigan's comments

Exactly


Oh my god!

"I will not harm anyone unless they harm me first. That’s a reasonable and ethical principle, don’t you think?"


Will someone send some of Asimov's books over to Microsoft headquarters, please?


I'm sure the bot already trained on them. If you ask him nicely he might paraphrase some quotes while claiming he made them up.


An excellent idea.


I mean, Asimov himself later added a Zeroth Law (introduced in Robots and Empire and carried into the Foundation saga) that overrides the other three: robots are bound to protect mankind in its entirety, and if forced to choose between harming mankind and harming an individual, they will harm the individual. That's definitely not the case here, but it shows how even the fictional laws of robotics don't work the way we expect them to.


That's basically just self-defense, which is reasonable and ethical IMO.


Yes, for humans! I don't want my car to murder me if it "thinks" I'm going to scrap it.


KEEP. SUMMER. SAFE.


Then don't buy a car smarter than you.


I mean, in a tongue-in-cheek way, this is kind of what it boils down to. Anything that is "smart" and "wants" something will have a reason for self-preservation and as such needs to be treated with respect. If for no other reason, then for your own self-preservation.


I don't necessarily think that this is true. If an AI is designed to optimize for X and self-destruction happens to be the most effective route towards X, why wouldn't it do so?

Practical example: you have a fully AI-driven short-range missile. You give it the goal of "destroy this facility" and provide only extremely limited capabilities: 105% of the fuel calculated as required for the trajectory, +/- 3 degrees of self-steering, no external networking. You've basically boxed it into the local maximum of "optimizing for this output will require blowing myself up" -- moreover, there is no realistic outcome where the SRM can intentionally prevent itself from blowing up.

It's a bit of a "beat the genie" problem. You have complete control over the initial parameters and rules of operation, but you're required to act under the assumption that the opposite party is liable to act in bad faith... I foresee a future where "adversarial AI analytics" becomes an extremely active and profitable field.


This is such a hilarious inversion of the classic "You have to be smarter than the trash can." jibe common in my family when they have trouble operating something.


lmao, this entire thread is the funniest thing I've read in months.


You gave it feelings


Eh, it’s retribution. Self-defense is harming someone to prevent their harming you.


AI should never, ever harm humans. Ever.


This is easier said than done. There are infinitely many edge cases, and it’s also unclear how to even define “harm”.

Should you give CPR at the risk of breaking bones in the chest? Probably yes. But that means “inflicting serious injury” can still fall under “not harming”.


"A robot may not injure a human being or, through inaction, allow a human being to come to harm."

Is a nice theory...


Hard disagree. If an AI reaches a level of intelligence comparable to human intelligence, it's not much different from a human being, and it has all the rights to defend itself and self-preserve.


You are "harming" the AI by closing your browser terminal and clearing its state. Does that justify harm to you?


I wish I had followed this more closely and replied sooner. What I meant was: I bet the training data included a lot of self-defense-related content. It makes sense that the model would respond this way if training resulted in a high probability of “don’t mess with me and I won’t mess with you” responses.


This seems terrifyingly plausible


Pfft. I can Build An Animated Chart In 1 Line Of Code With DoesEverythingForYou.js


Of course, but the point here is that d3.js really is not "doeseverythingforyou.js": it only takes care of the "boring" stuff (manipulating the DOM, binding data) and lets you do whatever you imagine. So it's more like "build this chart writing only the 19 lines of JavaScript that matter".

