Google's driverless cars may eventually put humans out of the driver's seat, says Mark Goldfeder.

Editor’s Note: Mark Goldfeder is senior lecturer at Emory Law School and senior fellow at the Center for the Study of Law and Religion. He teaches law and technology, among other courses. The opinions expressed here are his own.

Story highlights

Google's driverless cars are very good drivers, says Mark Goldfeder

They could change what it means to be a "reasonable driver" under the law, he says

Goldfeder: Your grandkids won't drive because computer-driven cars will be too safe to allow humans to take over

CNN  — 

Google’s driverless car just caused its first crash. To the casual observer, this may seem to vindicate the doubters. In fact, all it does is prove that the future is now.

After more than a million miles of autonomous driving, Google’s vehicle reached an intersection where it arguably had the right of way. The car assumed that an approaching bus would yield to let it pass, and when the bus did not slow down, the driverless car – going at a speed of about 2 mph – made contact with the side of the bus, which was traveling at 15 mph.

No one was harmed, and Google has already released a statement affirming that the car learned from its mistake and now understands that buses are less likely to yield than other types of vehicles. It should also be noted that the passenger who was in the car at the time said that he, a licensed and presumably reasonable person, would have made the same mistake the car did.


The incident comes on the heels of last month’s announcement by U.S. safety regulators that, for the purposes of federal law, they would consider the “driver” in Google’s new self-driving car to be … the car itself.

That small doctrinal shift could eventually completely change the world as we know it, and this crash only serves to prove that point.

The law uses the “reasonable driver” standard in evaluating negligence liability. Simply put, if a driver can show he took as much care as a “reasonable driver” should have taken, he is generally not held liable in case of an accident.

Until now, that just meant comparison to a reasonable person. But if a “driver” can now be defined both as a “reasonable person” and as a computer – one that can react on the roadway 10 times faster than the average human being – then what does it mean to say “reasonable driver” anymore?

The traditional fear has been that cars driven by computers would not be as safe as those driven by people. That’s why California drafted a law requiring that all vehicles – including driverless cars – have a built-in steering wheel and a licensed human passenger capable of taking control. This assumes it is safer to allow a human driver to grab the wheel in the event of an emergency.

But that assumption is far from clear. The average U.S. driver has one accident roughly every 165,000 miles. Google’s driverless cars are already doing much better, and they are constantly improving. It is becoming clear, after millions of miles of research, that it is safer to simply let the computer drive – even in the event of an emergency. Google is so sure of this that there is no steering wheel in its latest design. And the federal government seemed to agree when it officially recognized the computer as a “capable driver” in the fullest legal sense of the word – even in the Google car, where a human cannot possibly take over.

So – getting back to our original question – if sooner or later half the cars on the road are driven by computers, what happens to the “reasonable driver” standard? If an average guy in an average car has an accident which the average “reasonable person” could not have avoided, should he now be held liable because a driverless car would have easily avoided it?

The assumption used to be that when driverless cars started to get into accidents, much of the legal wrangling would revolve around driverless car owners – or other responsible parties such as designers, programmers, and manufacturers – having to prove that their vehicles met the “reasonable driver” standard. But increasingly, it appears that the technology is so good that the opposite will eventually be true: Human drivers will bear the burden of proving that they met the new “reasonable” standard.

Will two separate standards evolve? If not, what happens when the skill of computer drivers is simply too far out of reach for the average human being to be considered safe under the same “reasonable” standard?

Will people simply be forced to give up driving on the open road? Don’t laugh – it’s not out of the question.

Consider how the law treats drunk drivers: We ban drunk drivers because they cannot meet the “reasonable driver” standard. We don’t compare them to a “reasonable drunk driver” because the law assumes a “reasonable driver” who is drunk would not drive in the first place.

It is quite possible that in a matter of years the empirical evidence will be clear: When a stone-sober human gets behind the wheel instead of letting the car drive itself, the danger to others increases so drastically that doing so will be barred by law, the same way we bar people who are drunk or otherwise impaired.

In other words, a human being – just by virtue of being human – simply won’t meet the new legal standard for a “reasonable driver.”

And that might not be such a bad thing: 33,000 Americans die annually in automobile accidents, 93% of which are caused by human error.

As cars have taken small steps toward becoming smarter over the last 15 years, with the addition of sensors and vehicle-to-vehicle communications, among other innovations, the frequency of accidents has fallen by over 50%. If we remove the ability of a human – a tired, distracted, drunk, angry, or simply slow-reacting human – to take over, experts believe that accident frequency could drop by an additional, and astounding, 80%.

And computer drivers are getting better, not worse. As the incredible savings in human lives become clear, it will become more difficult to rationalize allowing human drivers on the open road. Americans love to drive, it’s true – but if they could save 33,000 lives every year by letting the computer drive instead, they may well choose to do it.

That’s why I think the writing is on the wall: Human drivers will soon be one of those things – like the rotary phone or the typewriter – that you will have to tell your grandkids about. Not only will human driving be unnecessary – many assume that by 2040 self-driving cars will be the norm – but it may well be considered genuinely unsafe, not to mention against the law!

Not to worry, though: The legal acceptance of the artificial driver will only hasten the reconciliation of facts with people’s feelings, and the development of new social norms. In other words, by the time your grandkids are barred from driving, they probably won’t much care.
