Must self-driving cars be programmed to kill?
Back in the 1960s, ads in comic books offered a vision of the future in which we would all be driving around Louisiana and the rest of the country in personal gyrocopters. Traffic congestion would be a thing of the past, unless, of course, you happened to be in the air.
As we all know, gyrocoptering never became as prevalent as predicted; cars and trucks still ply the roads. And behind the wheel of each one is a human being responsible for getting the vehicle from one place to another without causing damage to property or injury to people. But to be human is to err, and when another person’s negligence is to blame, he or she should expect to be held accountable.
Another vision of the future is one in which the vehicles on the road handle all the driving themselves. Proponents of autonomous vehicles say they will be safer because safety will be programmed into them. Few dispute that claim, but some in autonomous vehicle development say that if these cars are ever to become the norm, they will have to be programmed to kill.
The argument is that some situations on the road will make an accident resulting in injury or death unavoidable: the collision will hurt either someone in the vehicle or someone outside it.
The question is, with what ethical algorithm should self-driving cars be equipped? Should the primary objective always be to keep the occupants of the vehicle safe, or should the occupants be sacrificed when protecting them would cause greater death and destruction to others?
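The trade-off can be made concrete as two competing decision rules. The sketch below is purely a hypothetical illustration; the scenario, harm scores, and function names are invented for this article and are not drawn from any real autonomous-vehicle software:

```python
# Hypothetical sketch: two candidate "ethical algorithms" for an
# unavoidable-collision scenario. Harm scores are invented numbers,
# not a real safety model.

def occupant_first(options):
    """Pick the maneuver that minimizes harm to the vehicle's occupants,
    breaking ties by harm to people outside the vehicle."""
    return min(options, key=lambda o: (o["occupant_harm"], o["outside_harm"]))

def minimize_total_harm(options):
    """Pick the maneuver with the lowest total harm, even if the
    occupants are the ones who bear it."""
    return min(options, key=lambda o: o["occupant_harm"] + o["outside_harm"])

# Scenario: swerve into a barrier (hurting the occupant) or continue
# ahead (hurting several pedestrians).
options = [
    {"name": "swerve", "occupant_harm": 1, "outside_harm": 0},
    {"name": "continue", "occupant_harm": 0, "outside_harm": 3},
]

print(occupant_first(options)["name"])       # continue: occupant protected
print(minimize_total_harm(options)["name"])  # swerve: fewer people harmed
```

The point of the sketch is that both rules are trivially easy to write down; the hard part is deciding which one society, regulators, and buyers will accept.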
This issue might not seem pressing right now, but experts say it must be resolved before the age of autonomous vehicles can truly arrive. In the meantime, we have to seek justice within the framework of the current legal system and its focus on personal and product liability.