Google’s self-driving car appears to have caused its first crash on February 14, when it changed lanes and put itself in the path of an oncoming bus.
Sourced through Scoop.it from: www.wired.com
I wonder about the Traffic Management Plan, as the obstacles were sandbags placed around a storm drain. Perhaps if there had been road cones or signage, the Google car might have operated differently. Given the story, though, I doubt it, because it did recognise an obstacle and stopped to avoid it.
It then drove sedately around the sandbags onto the wrong side of the road and collided with a bus coming the other way. I suspect that a human driver would not have done this, despite the fact that the human in the car assumed the bus would slow down and stop. In fact, if I read the story correctly, a human driver would not only have been facing the insurance claim (certainly not a problem for a giant corporation), but might also have faced multiple charges for failing to give way to the bus and for travelling on the wrong side of the road.
The crash occurred at a combined speed of around 17 mph and no one was hurt. Could this have happened on the open road? There is frequently roadkill, remnants of truck tyres and other obstacles on roads, and through various skills and intelligence, human drivers tend to work out very quickly how to deal with the situation, including unpleasant decisions such as, in New Zealand, "It's you or the possum." What would the Google car do if it was driving at 60 mph and straight ahead were the shining eyes of a possum, or Bambi, frozen with fear?
This is a teething problem, but it is a really important one to consider for those who blindly trust the technology and intelligence of an autonomous car.
I’m interested (having been involved with a mapping car using very similar technology to the Google car) in what happens when the LiDAR, the ultrasound, the cameras, or perhaps one of the computers or communications links stops working. Does the autonomous car stop working? How do they define a failure serious enough to stop the car? What happens with external problems, like a failure in telecommunications (used, for example, to advise the car of temporary changes to traffic controls)? Do all autonomous cars (once we are beyond concept cars) have a backup system with the usual driver controls?
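One way to think about those questions is as a fail-safe matrix: each subsystem is classed by how critical it is, and the set of failed subsystems maps to an action. Here is a minimal sketch in Python — the subsystem names, criticality classes, and thresholds are all hypothetical assumptions of mine, not anything from Google's actual design:

```python
from enum import Enum


class Action(Enum):
    CONTINUE = "continue driving"
    PULL_OVER = "pull over and stop safely"
    EMERGENCY_STOP = "stop immediately"


# Hypothetical criticality classes for illustration only.
CRITICAL = {"lidar", "primary_computer"}          # car cannot drive safely without these
DEGRADABLE = {"camera", "ultrasound", "telecom_link"}  # car can tolerate one of these failing


def decide(failed_subsystems):
    """Map a set of failed subsystems to a fail-safe action."""
    failed = set(failed_subsystems)
    if failed & CRITICAL:
        # Any critical failure: the car should not keep moving.
        return Action.EMERGENCY_STOP
    if len(failed & DEGRADABLE) >= 2:
        # Multiple degraded senses: find a safe place to stop.
        return Action.PULL_OVER
    # Zero or one degradable failure: carry on, presumably with warnings.
    return Action.CONTINUE
```

For example, `decide({"lidar"})` returns `Action.EMERGENCY_STOP`, while `decide({"camera"})` returns `Action.CONTINUE`. The real engineering question, of course, is exactly the one above: who decides which subsystems go in which class, and what counts as "serious enough".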
If a few sandbags can cause the computer to make an unsafe decision, what happens if a third party, like the boy racers in this story http://bit.ly/1LSTUqV, decides to try to interfere with and confuse the autonomous car? Tell me you are confident that would never happen.