Google’s Self-Driving Car Caused Its First Crash

Google’s self-driving car appears to have caused its first crash on February 14, when it changed lanes and put itself in the path of an oncoming bus.

Sourced through Scoop.it from: www.wired.com

While previous crashes involving autonomous cars were not caused by the car itself, in this case the Google car appears to have made a judgement call to drive on the wrong side of the road to avoid a hazard, just as other cars were doing. There is fundamentally nothing wrong with that, except that a bus was legally travelling in the lane the Google car decided to move into.

I wonder about the traffic management plan, as the obstacles were sandbags placed around a storm drain. Perhaps if there had been road cones or signage, the Google car might have behaved differently. Given the story, though, I doubt it, because it did recognise the obstacle and stopped to avoid it.

It then drove sedately around the sandbags onto the wrong side of the road and collided with a bus coming the other way. I suspect a human driver would not have done this, despite the fact that the human in the car assumed the bus would slow down and stop. In fact, if I read the story correctly, a human driver would not only have faced the insurance claim (certainly not a problem for a giant corporation), but might also have faced multiple charges for failing to give way to the bus and for travelling on the wrong side of the road.

The crash happened at a combined speed of around 17 mph and no one was hurt. Could this have happened on the open road? There is frequently roadkill, remnants of truck tyres and other obstacles on roads, and through skill and experience human drivers tend to understand very quickly how to deal with the situation, including unpleasant decisions such as, in New Zealand, "It's you or the possum." What would the Google car do if it were driving at 60 mph and straight ahead were the shining eyes of a possum frozen with fear, or Bambi?

This is a teething problem, but it is really important to consider for those who blindly trust the technology and intelligence of an autonomous car.

I’m interested (having been involved with a mapping car using very similar technology to the Google car) in what happens when the LiDAR, the ultrasound, the cameras, or perhaps one of the computers or communications links stops working. Does the autonomous car stop working? How do they define a failure serious enough to stop the car? What happens with external problems, like a failure in telecommunications (used, for example, to advise the car of temporary changes to traffic controls)? Will all autonomous cars (once we are beyond concept cars) have a backup system with the usual driver controls?
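To make the question concrete, here is a minimal sketch of how a fail-safe policy might rank sensor failures. Everything in it is a hypothetical assumption for illustration: the sensor names, the "critical" flag, and the thresholds are mine, not Google's design.

```python
# Hypothetical sketch of a sensor fail-safe policy -- all names and
# classifications are illustrative assumptions, not any vendor's design.
from dataclasses import dataclass

@dataclass
class SensorStatus:
    name: str
    healthy: bool
    critical: bool  # can the vehicle safely keep driving without it?

def decide_action(sensors: list) -> str:
    """Return a driving policy given the current sensor health reports."""
    failed = [s for s in sensors if not s.healthy]
    if any(s.critical for s in failed):
        # A critical failure (e.g. LiDAR or the main computer): pull over.
        return "safe_stop"
    if failed:
        # A redundant sensor or the telecom link down: continue cautiously.
        return "degraded_mode"
    return "normal"

# Example: the LiDAR fails, so the only defensible choice is to stop.
status = [
    SensorStatus("lidar", healthy=False, critical=True),
    SensorStatus("front_camera", healthy=True, critical=False),
    SensorStatus("telecom_link", healthy=True, critical=False),
]
print(decide_action(status))  # -> safe_stop
```

The hard engineering question the paragraph above raises is exactly where the `critical` line is drawn, and whether "degraded mode" is ever acceptable without a human backup at the controls.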

If a few sandbags can cause the computer to make an unsafe decision, what happens if a third party, like the boy racers in this story http://bit.ly/1LSTUqV, decides to try to interfere with and confuse an autonomous car? Tell me you are confident that would never happen.



About Luigi Cappel

Writer for hire and marketing consultant specialising in Location Based Services. Futurist and Public Speaker, Auckland, New Zealand.
This entry was posted in Autonomous cars, Driverless car, driverless vehicles, drivers, driving, driving app, Google Car, Google Cars. Bookmark the permalink.
