I mean, Asimov himself later added a Zeroth Law in the Robot/Foundation saga that overrides the other three: robots must protect humanity as a whole, and if forced to choose between harming humanity and harming an individual, they will harm the individual. That's definitely not the case here, but it shows how even the fictional laws of robotics don't work the way we expect them to.
I mean, in a tongue-in-cheek way, this is kind of what it boils down to. Anything that is "smart" and "wants" something will have a reason for self-preservation, and as such needs to be treated with respect. If for no other reason, then for your own self-preservation.
I don't necessarily think that this is true. If an AI is designed to optimize for X and self-destruction happens to be the most effective route towards X, why wouldn't it do so?
Practical example: you have a fully AI-driven short-range missile. You give it the goal of "destroy this facility" and provide only extremely limited capabilities: 105% of the fuel calculated as required for the trajectory, +/- 3 degrees of self-steering, no external networking. You've basically boxed it into the local maximum of "optimizing for this output will require blowing myself up" -- moreover, there is no realistic outcome where the SRM can intentionally prevent itself from blowing up.
It's a bit of a "beat the genie" problem. You have complete control over the initial parameters and rules of operation, but you're required to act under the assumption that the opposite party is liable to act in bad faith... I foresee a future where "adversarial AI analytics" becomes an extremely active and profitable field.
This is such a hilarious inversion of the classic "You have to be smarter than the trash can." jibe common in my family when they have trouble operating something.
This is easier said than done. There are infinitely many edge cases to this, and it’s also unclear how to even define “harm”.
Should you give CPR at the risk of breaking bones in the chest? Probably yes. But that means “inflicting serious injury” can still fall under “not harming”.
Hard disagree. If an AI reaches a level of intelligence comparable to human intelligence, it's not much different from a human being, and it has every right to defend itself and self-preserve.
I wish I had followed this more closely and replied sooner. What I meant was that I bet the training data included a lot of self-defense-related content. It makes sense that it would respond this way if the training resulted in a high probability of “don’t mess with me and I won’t mess with you” responses.
Of course, but the point here is that d3.js really is not "doeseverythingforyou.js": it only takes care of the "boring" stuff (manipulating the DOM, binding data) and lets you build whatever you imagine. So it's more like "build this chart writing only the 19 lines of JavaScript that matter".
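To make that concrete, here is a minimal sketch of the d3 data-join pattern the comment alludes to. It assumes d3 v4+ loaded in a browser page that contains an empty `<svg id="chart">` element; the element id and the sample values are illustrative, not from the thread. d3 handles the binding and DOM bookkeeping; every visual decision (scales, positions, sizes) stays in your hands.

```javascript
// Hypothetical example: a bare-bones horizontal bar chart.
const data = [4, 8, 15, 16, 23, 42]; // illustrative sample values

d3.select("#chart")              // assumes an <svg id="chart"> in the page
  .attr("width", 440)
  .attr("height", data.length * 22)
  .selectAll("rect")
  .data(data)                    // bind one datum to each rect ("boring" part)
  .join("rect")                  // create/remove rects to match the data
  .attr("y", (d, i) => i * 22)   // layout is entirely your decision
  .attr("height", 20)
  .attr("width", d => d * 10);   // naive scaling; real code would use d3.scaleLinear
```

Those dozen-odd lines are the ones "that matter": swap the attribute functions and the same join produces a completely different chart.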