And to add to that, offline backups that go back 90+ days. Ransomware gangs frequently use time bombs to deploy their encryption after sitting within a system for a month or two. If you get hit by one of those gangs and only keep backups for 45 days, you're screwed, because your backup is still infected.
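To make the retention arithmetic concrete, here's a minimal sketch; the dates, dwell time, and helper function are illustrative assumptions, not any particular backup tool's API:

```python
from datetime import date, timedelta

def have_clean_backup(backup_dates, compromise_date):
    """True if at least one retained backup predates the initial compromise."""
    return any(d < compromise_date for d in backup_dates)

today = date(2024, 6, 1)                  # illustrative "detonation" date
compromise = today - timedelta(days=60)   # malware sat dormant ~2 months

# Weekly backups with only 45 days of retention: all post-date the breach.
backups_45d = [today - timedelta(days=n) for n in range(0, 46, 7)]
print(have_clean_backup(backups_45d, compromise))   # False

# Weekly backups with 90 days of retention: reaches back before the breach.
backups_90d = [today - timedelta(days=n) for n in range(0, 91, 7)]
print(have_clean_backup(backups_90d, compromise))   # True
```

The point of the sketch: the retention window has to exceed the attacker's dwell time, or every backup you can restore from was taken on an already-compromised system.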
Lab manipulation doesn't necessarily mean biological warfare. It much more likely means study of a type of potential future naturally occurring virus that is dangerous to humans.
Gaits, like fingerprints, would do a pretty good job of matching a suspect to evidence, but if they have a gait (or fingerprint) and there is no match in the system, then what?
This should be calibrated to the risk posed by the top X% of cautious/safe drivers, and exclude reckless, inexperienced, or intoxicated drivers. As a safe driver, you shouldn't have to accept risk calibrated to the "average" (i.e. drunk, reckless) driver.
Why should it be? In the end the dead bodies count, and it doesn't matter whether a cautious or an inexperienced driver killed them. Inexperienced drivers are a prerequisite for experienced drivers; there's no way to get rid of them. Excluding them from the statistics is just discounting those deaths as... somehow less important?
If a self-driving vehicle is only 1.5x (instead of 5x) as safe as the average human driver, then you're not primarily trading death by human for death by machine; you're primarily trading death by human for being spared by machine, and only secondarily making the former trade.
This person is saying that on an individual level, they are not willing to cede control to an “average” AI when they know (or believe) themselves to be above average.
You’re talking about it at a societal level, as if everyone switched over to robot cars at the same time.
We don't need to switch everyone over at the same time. For example we could start with young (more likely to be drunk and inexperienced?) or known-bad (traffic offenses) drivers where perhaps even sub-average autonomous vehicles could make a difference.
You’re right, I shouldn’t have said “At the same time” but the point still stands: your other comment was talking past the OP, not addressing their point.
You’re talking about it as a macro optimization problem while the OP was explaining a rational decision at the level of the individual.
> In the end the dead bodies count and it doesn't matter whether a cautious or inexperienced driver killed them
It matters to the safe drivers. Bad drivers are mostly a danger to themselves. At only "1.5x as safe as average", it's a good deal for the bad drivers, but there are probably a lot of "2x as safe as average" drivers that are getting a bad deal. They are in more danger than before.
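A toy expected-risk calculation makes this concrete. All numbers are arbitrary illustrative units, not real crash statistics:

```python
# Risk in fatalities per arbitrary unit of driving (made-up units).
avg_risk = 1.0                      # average human driver
safe_driver_risk = avg_risk / 2.0   # a "2x as safe as average" driver
av_risk = avg_risk / 1.5            # an AV "1.5x as safe as average"

print(f"fleet risk:       {avg_risk:.3f} -> {av_risk:.3f}")
print(f"safe driver risk: {safe_driver_risk:.3f} -> {av_risk:.3f}")

assert av_risk < avg_risk           # good deal for the pool on average
assert av_risk > safe_driver_risk   # bad deal for the 2x-safe driver
```

So the fleet-wide death count falls, but a 2x-safer-than-average driver who switches sees their personal risk rise by roughly a third.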
I am a safe driver. (My measure: two moving violations in nearly 40 years of driving, the last one 16 years ago. No accidents in 19 years, no injury accidents ever. And I've driven daily for the whole time.)
In the past couple of weeks, I've narrowly avoided hitting pedestrians three different times. Each time, the pedestrian was somewhere other than a valid crosswalk (once was on a highway exit). In each case, I think an autonomous vehicle could have handled it better than me.
Maybe a "perfect" autonomous vehicle would've reacted better... But I'm pretty sure the lady walking her bicycle across the street in Arizona would feel differently about Uber's program. I mean, it was a perfect test case for such a system. The guy whose Tesla drove into the side of a semi-truck might feel differently too. Oh, and so might the one whose Tesla drove into a barrier in Mountain View and caught fire... He died too.
A perfect autonomous car sure sounds nice - but will it ever arrive? Would it have just hit 1 of those pedestrians instead? Would it just go, ding, and suddenly you're in control? Would it kill 1 and then the company would have the data to know to not kill pedestrians in that one specific example? How many people would have to die as test subjects before the system would be better than people? And what if it never got there but you still killed all those folks anyway?
Source? It seems only logical that the number of accidental deaths goes up with the number of bad drivers on the road - not just because they kill themselves.
I just mean that a disproportionate amount of the danger created by bad drivers is to themselves. I don't have a source, but I think this is obvious.
My point is that, even if we lower the total death count, the safest drivers could still end up at greater risk, because a disproportionate amount of the reduction in deaths will go to bad drivers.
IIHS says "Nationwide, 53 percent of motor vehicle crash deaths in 2018 occurred in single-vehicle crashes." The other categories being multi-vehicle and property only.
Let’s say you hire a chauffeur to drive your kids around. You find out they’ve been drinking on the job and speeding recklessly. When you confront them, they pull out stats that they’ve been actually less drunk than average. Do you fire them and find a new chauffeur?
When it’s a robot chauffeur, you have to evaluate it like you would a human one.
In this quite hypothetical scenario, if the statistics he cites are correct and also apply to chauffeurs (i.e. chauffeurs are not statistically different from the general population) then firing him and hiring a new one may not improve your situation. It would be better to invest in a breathalyzer or something.
So what you're suggesting is an appeal to emotion: fire your driver to ameliorate your dissatisfaction, even if it might land you an even worse driver.
So to turn the question around: do you prefer a false sense of safety for your children, or actual safety?
That’s only the case if it’s entirely statistical, while the whole point is that there are factors under your control. Hiring someone/something to drive your family around isn’t a reversion to the mean. You can make certain efforts (interviewing, not tolerating bad behavior, etc). It’s a third person version of the usual debate of ‘I’m a safe driver’ versus ‘I only had like three beers and that was two hours ago’ versus ‘robo car.’ If you bucket the first two together and throw your hands up in the air saying humans are humans oh well, you’re pretending you don’t have the agency you actually have.
In the third person version, I suppose there’s an implicit unstated option that while your particular chauffeur has evidence they are better than average, you have an option to hire someone more responsible. That aspect of agency is central here.
> if the statistics he cites are correct and also apply to chauffeurs
I meant compared to the general population. As in self driving versus general population stats.
Ok, I see what you're going for. But then the question is how much safety is that agency buying you? And how many people even have an option to exercise such agency? You do not have it when it comes to other drivers that may cause accidents or run you over (or your children if you wish) as pedestrian. You have far less of it for taxi, rideshare or public transport services. And how many parents will drive their children even when they're stressed or haven't slept because the children simply have to go somewhere and they can't afford other options?
In aggregate we can probably buy more safety by having policies that encourage replacement of bad drivers with merely average autonomous vehicles rather than attempting to rely on individual behavior to improve safety.
If you want to still exercise personal options you could choose an autonomous car plus safety driver.
Nobody will be forcing you to buy a self-driving car for quite a while. But as a safe driver, you should care about eliminating the most unsafe drivers from the roads.
As a self-identified safe driver, no action I am capable of taking will put an unsafe driver in a self-driving car.
I'll even go so far to say that many unsafe drivers can't afford a self-driving car. They're often unsafe because their car is on balding tires, the brakes don't work, and the tail lights are busted.
The rest, well, they simply enjoy driving unsafely and thus have no reason to get into a self-driving car.
I've questioned lack of driving experience as a risk factor, since the pool of experienced drivers excludes those who died becoming experienced.
Assuming each driver has a certain (constant) probability of excluding themselves from the driving pool every year, the pool's average risk will drop over time, as the drivers most susceptible to excluding themselves will already have done so.
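A minimal simulation of that selection effect, using made-up hazard rates: half the pool "self-excludes" at 5% per year, half at 0.5%, and after a couple of decades the survivors' average hazard sits well below the starting average of 2.75%:

```python
import random

random.seed(0)

# Hypothetical per-driver annual self-exclusion probabilities:
# half the pool risky (5%/year), half cautious (0.5%/year).
pool = [0.05] * 5000 + [0.005] * 5000
start_avg = sum(pool) / len(pool)   # 0.0275

# Each year, every driver independently drops out with their own probability.
for year in range(20):
    pool = [p for p in pool if random.random() > p]

avg_hazard = sum(pool) / len(pool)
print(f"survivors: {len(pool)}, avg hazard: {avg_hazard:.4f} (started at {start_avg:.4f})")
```

The risky drivers remove themselves disproportionately, so measuring "experienced drivers" measures a pool that has already been filtered, which is exactly the survivorship concern above.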
Exactly. Because now we can punish those individual drivers, lock them up, take away their car and license, but are we going to pull the plug on all cars with auto-pilot X because X is causing accidents? Is a small change in the software enough to establish it as a new driver? It's "smoking is good for you" all over again.
No, on the contrary: deaths through those drivers can be eliminated. It only makes sense to look at total number of deaths, including by alcohol, drugs, inexperienced drivers, elderly drivers, distracted drivers (smartphone, etc.).