As your battered old car tears down the road on a Monday morning, a shiny Toyota Prius in front of you screeches to a sudden halt at the intersection. The dull thump and clang of clashing metal makes your heart sink—it’s definitely a fender bender. Storming over to give the careless driver a piece of your mind, you rap sharply on the driver’s window. The glass rolls down slowly to reveal…no one.
If you’ve heard of Google’s latest invention, the “driverless” car is probably not a shock to you. But in a world where cars have no drivers, who will you blame for your dented fender? Or worse still, your broken neck?
As the world marvels at Google’s hands-free car, a select group, including engineers, remains skeptical. They wonder how courts will untangle the legal issues the car will face in accidents, and whether drivers will be ready to turn themselves over completely to a machine.
Driverless cars have advantages that could improve the efficiency and safety of our travel. They work out how to coast in the most fuel-efficient way, and their computer operators don’t get distracted, drunk or sleepy. According to Seth Teller, a vehicle engineer at the Massachusetts Institute of Technology, robot cars are eventually likely to reduce the number of accidents by more than half. Not only will the car’s sensors be able to see at night, sense distant obstacles and remain unaffected by fog, but its computers can also react to emergency situations within milliseconds—much quicker than human reaction times. “There is no question that robot vehicles will eventually match and even exceed human performance. It’s just a question of when,” Teller says.
Google’s car has already clocked 140,000 miles on California roads, with minimal human intervention. The car makes decisions using strategically placed cameras, sophisticated radars and range finders.
But what happens when the Google car gets into an accident?
Apparently, it’s already happened. A human driver rear-ended Google’s car at a red light in California, according to the New York Times. This particular accident may not have been Google’s fault, but if there are others, someone may get sued. Normally, if a car was at fault, its driver would be liable. “All of the vehicle code sections in California are specifically related to the driver,” says attorney Scott Lovernick, founder of Bicycle Defender, a California personal injury law firm. For example, according to Section 22350 of the California Vehicle Code, “No person shall drive a vehicle upon a highway at a speed greater than is reasonable or prudent…”
“The way the laws are written now certainly doesn’t account for the fact that there is no human driver,” Lovernick says.
Another unique problem is that Google may outsource its technology to automobile makers like Toyota. In that case, who would be responsible for automobile failure: Toyota, Google or neither? Lovernick explains that in a typical accident, the manufacturer is not considered responsible. However, when accidents happen in a driverless car, manufacturers are likely to be slammed with product liability suits. “It will be important to determine whether the car was manufactured improperly or whether it had software issues,” he says. Without a human element to assume ultimate responsibility, insurance companies and regulators may refuse to vouch for these vehicles, says Lovernick.
Google declined to comment on the legalities associated with its car, instead offering links to the company’s official blog for information. But the blog does not address any anticipated legal issues; nor does it make any mention of the widely publicized fender-bender. The car’s chief engineer, Sebastian Thrun, writes only that the project is still very experimental and that the local police were briefed on all test drives.
MIT’s Teller says he would be surprised if Google plans to put these cars on our roads before 2020, calling Google’s recent press blitz the announcement of a work-in-progress. But he’s confident that the Google car will eventually be ready for mass production. “I am a firm believer in this technology,” he says. “There’s no question that eventually, it should be deployed.”
Sociologists who study technology aren’t so sure. “In my previous life as an engineer, I would have liked this technology,” laughs Wiebe Bijker of the University of Maastricht in the Netherlands. But as a sociologist, he is suspicious. “It’s what we in the social studies call the ‘illusion of a technical fix’,” he explains. “This car is part of a trend where humans think that they can weed out risks by delegating to a machine.” According to Bijker, this attitude makes people less resilient when technologies fail.
Even the most ingenious technology is likely to need plenty of human input, according to Bijker. Drawing a parallel to air traffic automation trends two decades ago, he says cockpits became so automated that planes could fly themselves from takeoff to landing. “This trend made pilots inattentive and sleepy, and people were afraid,” says Bijker. Thus, the hierarchy was reversed: now, pilots steer the plane and are responsible for it, while the automatic systems serve as a backup.
Ultimately, the best solution may be to mimic autopilot technology in automated cars, thus combining the efficiency of cutting-edge robotics with focused human drivers who fully realize that driving a car safely is a risky business.