Driving a car is easy. Engine on, eyes open, foot on the gas and you’re all set. So why is it taking autonomous cars so long to learn?
So we can deal with inaccurate maps as long as we invest in $200,000 LiDAR and other technologies to confirm to the car's computers that what surrounds them is not exactly what the map shows. Fine, that's cool.
Computers are logical, when they work correctly. I used to sell expensive supermarket scanning systems, and I remember one such processor that intermittently miscalculated transaction totals. It turned out to have a faulty chip; we decided it was possibly the result of an electrostatic discharge that had damaged the chip without frying it. It didn't happen every time, and no one complained when they were undercharged; we found out when it overcharged by a few thousand dollars! We couldn't fix it; we had to replace the board, and the original had worked fine for the first month or so. What if that were a faulty circuit board in an autonomous car, one that sometimes made the car go slow and other times made it take off at maximum speed, but worked fine for the first month and only then started to behave erratically?
What if people behaved erratically around a car that interprets the behavior of other vehicles as logical? The fourth car that runs the red light, the driver who races around the corner on the wrong side of the road? What if people learned about these cars' weaknesses and exploited them on purpose, as in this story?
Maybe they need to understand the chaos theory behind the human driver. I'd like to see an infographic showing the logic differences between an autonomous car and a human driver who is late for work, didn't get much sleep, had an argument with his wife, and drops his shaver as the light changes while he's reaching for his cell phone. I'm talking about the average human driver, not a boy racer or someone who is overtired, intoxicated, or on drugs.
What will happen when the first driverless car is involved in an accident?