It seems the world is rushing headlong into driverless car technology. Nevada and Florida have already legalised the cars, and yesterday at Google’s headquarters the governor of California, Jerry Brown, signed into law a bill to legalise driverless cars. The bill had overwhelmingly passed the State Legislature.
“It’s significant because California is a big state, a first mover and really a big player”, said Ryan Calo, a law professor at the University of Washington who studies autonomous vehicle law. “It’s a good signal for the other states.” Hawaii and Oklahoma are apparently considering legalising them.
Rather bizarrely, California’s new law places few restrictions on the cars’ use for now, though the door has apparently been left open for later legislation, which seems a bit backwards. Surely it would be more sensible to start with tight controls and ease them as the system proves itself, rather than the other way round. There seems to be an unseemly rush to embrace Google’s technology. Let’s just hope it’s not an exercise in bolting the stable door after the horse has bolted.
Automated systems aren’t new. An autopilot for planes was invented as long ago as 1912; autopilots have been in common use since the Second World War, and the first fully automated transatlantic flight dates from 1947.
Various forms of autonomous cars have been around since 1939, when they were controlled by radio and circuits embedded in the road, and exploration vehicles built for lunar and Mars missions can navigate their own way round obstacles.
According to one report, “Google says driverless cars are safer because they nearly eliminate human error”. That’s a bit worrying actually, as I would hope that they would eliminate ALL human error, unless Google are talking about times when the cars are under manual control.
It’s relatively easy to see how a fully autonomous system can work, and clearly the trials show that a very limited number of autonomous cars can cope with traffic that’s 99.99% human-driven.
But there are still issues I haven’t yet seen discussed. For example, one idea often talked about is that autonomous cars could space themselves closely together to maximise the use of space on congested roads, which of course is what humans do already – we just call it tailgating.
But what happens if one of the autonomous cars suffers a mechanical failure (a simple puncture, for example) and stops or swerves suddenly? A computer-controlled car cannot brake to a halt any faster than a human-controlled one (leaving human reaction times aside); there are simple mechanical limits which determine following distances unless you start physically linking vehicles together. Even then there’s no guarantee – even trains crash when carriages derail.
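To put rough numbers on that mechanical limit, here’s a small sketch of the standard stopping-distance formula (reaction-time travel plus v²/2a braking distance). The figures are illustrative assumptions, not measured values: roughly 1.5 s for a human reaction time, 0.1 s for a computer, and about 7 m/s² of braking deceleration on dry tarmac.

```python
# Rough stopping-distance sketch. All figures below are
# illustrative assumptions, not measured values.

def stopping_distance(speed_ms, reaction_s, decel_ms2):
    """Distance covered during the reaction time, plus braking
    distance v^2 / (2a). Inputs in m/s, s, and m/s^2."""
    return speed_ms * reaction_s + speed_ms ** 2 / (2 * decel_ms2)

speed = 70 * 0.44704          # 70 mph in metres per second (~31.3 m/s)
decel = 7.0                   # assumed braking deceleration, m/s^2

human = stopping_distance(speed, reaction_s=1.5, decel_ms2=decel)
robot = stopping_distance(speed, reaction_s=0.1, decel_ms2=decel)

print(f"human: {human:.0f} m, computer: {robot:.0f} m")
# Removing the reaction delay saves a useful chunk of distance,
# but the ~70 m of pure braking distance is a physical limit
# that both drivers share.
```

In other words, quicker electronic reactions shorten the reaction-time portion of the stop, but the braking portion is fixed by tyres and physics, which is exactly why closer spacing can’t be conjured away by software alone.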
And what happens when you have a mix of autonomous and human-controlled cars? It’s hard to see how that mix will deliver the kind of improvements Google have been touting. If an autonomous vehicle stops suddenly in an emergency because the electronics have removed human reaction delays, what about the driver behind, whose own following distance at least partly relies on “the guy ahead not doing anything too sudden”?
As far as I can see, the autonomous vehicles will have to cater for the worst-case scenario of being followed by a dozy human for some time yet.