I think you misunderstand. I agree that expecting a safety driver to pay attention and be ready to take over at any moment isn't viable.
But consider a point where the cars are good enough that the remaining risks can be contained: the operator can show that hazard detection is tuned so it doesn't miss any potential hazard situation, even if it is occasionally overly cautious, and that the car can reliably and safely slow down or stop short of potential hazards it doesn't know how to handle.
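To make that tuning concrete, here's a minimal sketch of the idea (everything here is hypothetical and invented for illustration): calibrate the detector to the most permissive threshold that still catches every labelled hazard in a validation set, and accept whatever false-alarm rate that implies as the "overly cautious" cost.

```python
import numpy as np

# Hypothetical calibration: given detector scores on labelled hazard and
# benign validation scenes, pick the most permissive threshold that still
# flags every real hazard (zero misses), and measure the false-alarm rate
# the fleet pays for it as unnecessary slowdowns/stops.
def pick_threshold(hazard_scores, benign_scores):
    threshold = min(hazard_scores)  # lowest score any real hazard received
    false_alarm_rate = float(np.mean(np.asarray(benign_scores) >= threshold))
    return threshold, false_alarm_rate

hazard = [0.92, 0.71, 0.88, 0.64]  # made-up scores on real hazards
benign = [0.10, 0.55, 0.30, 0.70]  # made-up scores on benign scenes
t, fa = pick_threshold(hazard, benign)
print(f"flag anything scoring >= {t}: false-alarm rate {fa:.0%}")
# -> flag anything scoring >= 0.64: false-alarm rate 25%
```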
In that case you might get to a point where it's ok (safety-wise, if not in terms of customer satisfaction) if the car stops for 30 seconds until a human safety driver reviews the data and confirms that what the car "sees" is not a dangerous situation.
In that case you might have e.g. 10 cars per safety driver, or more, and most of the time the car might not even need to stop: if a driver is available to respond immediately, it may be sufficient for it to slow down until it gets a response. You can then gradually reduce the number of safety drivers as the cars get better. For a fleet service you might well never stop having some people monitoring to handle unexpected conditions.
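As a sketch of what that escalation might look like (purely illustrative: `remote_desk`, `request_review`, and the 30-second budget are assumptions, not any real vendor's API):

```python
import time
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()
    SLOWED = auto()   # hazard flagged, creeping while awaiting remote review
    STOPPED = auto()  # no timely verdict; holding short of the hazard

# Hypothetical escalation policy: on an unresolved hazard, slow down and
# ask a remote safety driver to review the sensor data; if no "clear"
# verdict arrives within the budget, stop short of the hazard. The
# conservative action is the default, so a missing or slow human
# response can never make the car proceed.
def handle_hazard(car, remote_desk, review_timeout_s=30.0):
    car.set_mode(Mode.SLOWED)
    ticket = remote_desk.request_review(car.sensor_snapshot())
    deadline = time.monotonic() + review_timeout_s
    while time.monotonic() < deadline:
        verdict = remote_desk.poll(ticket)
        if verdict == "clear":  # human confirms the scene is safe
            car.set_mode(Mode.NORMAL)
            return
        if verdict == "hold":   # human explicitly says wait
            break
        time.sleep(0.1)
    car.stop_short_of_hazard()
    car.set_mode(Mode.STOPPED)
```

The point is the failure direction: every branch that isn't an explicit human "clear" ends in slowing or stopping.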
Of course, for this to be viable, it needs to be possible to make the car safe without human intervention. That safety may be achieved by opting to stop or slow down in situations where continuing might be perfectly safe but where the car can't yet tell by itself (with the caveat that this may restrict where it can be allowed to drive, etc.).
This of course presupposes specific types of failure scenarios where the car can safely find a way to come to a stop but can't safely determine whether it can continue forwards. It's not a given that this is achievable with low enough effort (relative to solving the issues that might cause it to fail to spot a hazard) to be worth it.
Exactly. For example, imagine a situation where a fallen tree is blocking one direction of travel. Humans would very cautiously share the remaining road space, but a robot taxi would just stop and wait for the tree to move. At that point it summons a human who tells it what to do in broad terms (e.g., "the new lane is here" or "do a U-turn and follow this other route").
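To make "broad terms" concrete, the operator's response might be a small vocabulary of high-level instructions rather than remote steering. A hypothetical sketch (these message types are invented for illustration):

```python
from dataclasses import dataclass

# Hypothetical guidance messages: the remote operator never drives the
# car directly, they only constrain or redirect the planner, which keeps
# low-level safety (obstacle avoidance, speed limits) on the vehicle.
@dataclass
class FollowCorridor:
    waypoints: list[tuple[float, float]]  # "the new lane is here"

@dataclass
class Reroute:
    via: list[str]                        # "do a U-turn and follow this other route"

@dataclass
class ProceedSlowly:
    max_speed_mps: float                  # "creep past the obstruction"
```

One appeal of this split is latency tolerance: high-level instructions survive a laggy link in a way that direct teleoperation doesn't.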