I think the engineer's answer to the child entering the roadway would be: the car should never drive at a speed where, if a child were to enter the visible zone, it couldn't swerve and brake enough to avoid a collision, forget the toy. After that we can move the goalposts and say it's a FAST child on a bike, but the reasonable response is that a human driver may well have hit the biking child too. Then, of course, we get into the ethics of fault for the accident.
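Just to make that speed rule concrete, here's a minimal sketch assuming constant braking deceleration and a fixed reaction latency (the 0.5 s and 6 m/s² figures are illustrative assumptions, not numbers from any actual AV stack):

    import math

    def max_safe_speed(sight_distance_m: float,
                       reaction_time_s: float = 0.5,  # assumed sensing/actuation latency
                       decel_mps2: float = 6.0) -> float:  # assumed braking capability
        """Highest speed (m/s) from which the car can fully stop
        within the currently visible distance.

        Stopping distance = v * t_react + v^2 / (2 * a).
        Setting that equal to sight_distance_m and solving for v
        gives the positive root of the quadratic.
        """
        t, a, d = reaction_time_s, decel_mps2, sight_distance_m
        return a * (math.sqrt(t * t + 2.0 * d / a) - t)

    # E.g. a parked van hides everything beyond 15 m of curb:
    v = max_safe_speed(15.0)
    print(f"{v:.1f} m/s = {v * 3.6:.0f} km/h")  # ~10.7 m/s = 39 km/h

The catch, of course, is that sight_distance_m is doing all the work: estimating how far the "could contain a child" zone actually extends is the hard perception problem, which is kind of the point of this whole thread.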
My agreement with you falls largely under my last paragraph. I'm trying to illustrate a couple of examples where driving as a human, on roads built for human drivers, requires perceptive powers that go beyond 'merely' driving safely and demand a sort of holistic understanding of the world. If your goal is to make a better-than-human substitute driver, then I don't think it's a completely unreasonable position to believe you'll need some level of AGI. Of course, as we figure out how to do concrete tasks and incorporate them into a system, they'll stop being considered traits that require general intelligence, but I suppose that's a different discussion.
And your example isn't moving goalposts, it's just another legitimate example of a situation that has to get figured out. If you think that things like understanding that a kid learning to skateboard nearby could fall a surprisingly long distance and thus warrants extra caution, or being aware of factors that imply fast-moving biking children (say, an adult riding with one child implies the potential for another fast child on the same trajectory), i.e. that this sort of situational and contextual awareness is critical for proper driving... then yeah, that's a reasonable-sounding argument for "I think self-driving cars will require some level of progress toward AGI".