Will your driverless car kill you so others may live?
Computers are taking over automobiles
It's 2025. You and your daughter are riding in a driverless car along Pacific Coast Highway. The autonomous vehicle rounds a corner and detects a crosswalk full of children. It brakes, but your lane is unexpectedly full of sand from a recent rock slide. It can't get traction. Your car does some calculations: If it continues braking, there's a 90% chance that it will kill at least three children. Should it save them by steering you and your daughter off the cliff?
This isn't an idle thought experiment. Driverless cars will be programmed to avoid collisions with pedestrians and other vehicles. They will also be programmed to protect the safety of their passengers. What happens in an emergency when these two aims come into conflict?
The California Department of Motor Vehicles is now trying to draw up safety regulations for autonomous vehicles. These regulations might or might not specify when it is acceptable for collision-avoidance programs to expose passengers to risk to avoid harming others — for example, by crossing the double-yellow line or attempting an uncertain maneuver on ice.
Google, which operates most of the driverless cars being street-tested in California, prefers that the DMV not insist on specific functional safety standards. Instead, Google proposes that manufacturers “self-certify” the safety of their vehicles, with substantial freedom to develop collision-avoidance algorithms as they see fit.
That's far too much responsibility for private companies. Because determining how a car will steer in a risky situation is a moral decision, programming the collision-avoiding software of an autonomous vehicle is an act of applied ethics. We should bring the programming choices into the open, for passengers and the public to see and assess.
Will the autonomous car save three pedestrians but put the two occupants at risk?
Regulatory agencies will need to set some boundaries. For example, some rules should presumably be excluded as too selfish. Consider the over-simple rule of protecting the car's occupants at all costs. This would imply that if the car calculates that the only way to avoid killing a pedestrian would involve sideswiping a parked truck, with a 5% chance of injury to the car's passengers, then the car should instead kill the pedestrian.
Other possible rules might be too sacrificial of the passengers. The equally over-simple rule of maximizing lives saved without any special regard for the car's occupants would unfairly disregard personal accountability. What if other drivers — human drivers — have knowingly put themselves in danger? Should your autonomous vehicle risk your safety, perhaps even your life, because a reckless motorcyclist chose to speed around a sharp curve?
A Mountain View lab must not be allowed to resolve these difficult questions on our behalf.
That said, a good regulatory framework ought to allow some manufacturer variation and consumer choice, within ethical limits. Manufacturers or fleet operators could offer passengers a range of options. “When your child is in the car, our onboard systems will detect it and prioritize the protection of rear-seat passengers!” Cars might have aggressive modes (maximum allowable speed and aggressiveness), safety modes, ethical utilitarian modes (perhaps visibly advertised so that others can admire your benevolence) and so forth.
Google's self-driving car tours a housing development in Austin, Texas on Sept. 23.
Some consumer freedom seems ethically desirable. To require that all vehicles at all times employ the same set of collision-avoidance procedures would needlessly deprive people of the opportunity to choose algorithms that reflect their values. Some people might wish to prioritize the safety of their children over themselves. Others might want to prioritize all passengers equally. Some people might wish to choose algorithms more self-sacrificial on behalf of strangers than the government could legitimately require of its citizens.
There will also always be trade-offs between speed and safety, and different passengers might legitimately weigh them differently, as we now do in our manual driving choices.
Furthermore, although we might expect computers to have faster reaction times than people, our best computer programs still lag far behind normal human vision at detecting objects in novel, cluttered environments. Suppose your car happens upon a woman pushing a rack of coats in a windy swirl of leaves. Vehicle owners may insist on some sort of preemptive override, some way of telling their car not to employ its usual algorithm, lest it sacrifice them for a mirage.
There is something romantic about the hand upon the wheel — about the responsibility it implies. But future generations might be amazed that we allowed music-blasting 16-year-olds to pilot vehicles unsupervised at 65 mph, with a flick of the steering wheel the difference between life and death. A well-designed machine will probably do better in the long run.
That machine will never drive drunk, never look away from the road to change the radio station or yell at the kids in the back seat. It will, however, have power over life and death. We need to decide — publicly — how it will exert that power.
Eric Schwitzgebel is a professor of philosophy at UC Riverside and the author of "Perplexities of Consciousness." He blogs at The Splintered Mind.