Agreed. All of us would like to think we would not hesitate to make the correct decision in such circumstances, but we're looking at these scenarios having already played them out in our minds.
Most of us have seen the scene of a ball bouncing out into a street, then after a few seconds of nothing, a child darts out after it, into the path of an oncoming car.
We'd all like to think we'd make the right decision and pause if we saw a ball bounce out in front of us...of course we'd at least pause for a few moments...wouldn't we?
Why would we expect anything less from our "self-driving cars"?
We (the developers) need to slow down and gather as much data as possible, not leaving out any possible "what-ifs," no matter how unexpected or silly. If human life could be involved, that scenario needs to be fully weighed and measured before it's stamped "Passed!" just to get the car approved and off the assembly line...to make a sale.
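To make that concrete: in my head it looks like a regression suite where every edge case, however silly, becomes a named scenario test that must pass before anything ships. Everything below is hypothetical and just for illustration (the `plan_response` planner and the scene format are made up, not any real vendor's stack):

```python
# Hypothetical illustration: each "what-if" becomes an explicit,
# named scenario the software must pass before release.

def plan_response(scene):
    """Toy planner: brake and hold whenever a ball entering the
    road suggests a child may follow it."""
    if "ball_enters_road" in scene["events"]:
        return {"action": "brake", "hold_seconds": 3.0}
    return {"action": "proceed"}

def test_ball_then_child():
    # The classic scene: a ball bounces out, and a child may dart after it.
    scene = {"events": ["ball_enters_road"]}
    decision = plan_response(scene)
    # The car must stop and keep waiting, not just slow briefly.
    assert decision["action"] == "brake"
    assert decision["hold_seconds"] >= 3.0

test_ball_then_child()
print("scenario passed")
```

The point isn't this toy logic; it's that no scenario gets waved through without an explicit check attached to it.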
They need to design the AI in these cars and robots as if their own children's lives depended on it. They just might!