Will Your Self-Driving Car Be Programmed to Save You, or Others?

27 June 2015, Autoblog
An ethical thought experiment called the trolley problem is emerging as a basis for how autonomous cars might one day be programmed to handle unavoidable accidents.

One of the variations of the question puts you in front of a switch with an out-of-control trolley that's barreling forward. Doing nothing means certain death for a group of people up ahead; switching the track kills just one person – your child. What do you do?

While an autonomous vehicle is unlikely to face a scenario quite this dire, driverless cars will inevitably be involved in unavoidable collisions, and the software will have to decide how to respond. People are already trying to tackle this thorny issue. "Ultimately, this problem devolves into a choice between utilitarianism and deontology," University of Alabama at Birmingham alumnus and Rhodes Scholar Ameen Barghi told UAB News. Utilitarianism would save the greatest number of lives; deontology suggests there is no right answer, because actively killing someone is wrong no matter how many lives it spares.
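To make the distinction concrete, here is a minimal sketch in Python of how the two rules can diverge on a trolley-style choice. Everything in it, from the `Option` structure to the two chooser functions, is a hypothetical illustration, not code from any real vehicle:

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One possible maneuver and its predicted outcome (hypothetical)."""
    name: str
    expected_deaths: int          # predicted fatalities if chosen
    requires_active_harm: bool    # does the car actively redirect harm?

def choose_utilitarian(options: list[Option]) -> Option:
    # Utilitarian rule: minimize total expected loss of life,
    # regardless of whether harm is caused actively or passively.
    return min(options, key=lambda o: o.expected_deaths)

def choose_deontological(options: list[Option]) -> Option | None:
    # Deontological rule: never actively redirect harm onto someone,
    # even to save more lives. If every option requires active harm,
    # no choice is permissible -- there is "no right answer."
    permissible = [o for o in options if not o.requires_active_harm]
    return permissible[0] if permissible else None

options = [
    Option("stay course", expected_deaths=5, requires_active_harm=False),
    Option("swerve", expected_deaths=1, requires_active_harm=True),
]
print(choose_utilitarian(options).name)    # "swerve": fewest deaths
print(choose_deontological(options).name)  # "stay course": no active harm
```

Note that the deontological rule can return no answer at all when every option requires actively redirecting harm, which is exactly the dilemma Barghi describes.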

A further problem is that we can't simply load every scenario into the car's artificial intelligence, because there are too many unique situations. "A computer cannot be programmed to handle them all," Dr. Gregory Pence, chair of the Department of Philosophy in the UAB College of Arts and Sciences, told UAB News.
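Pence's point is easy to see with simple arithmetic. In the purely hypothetical sketch below, discretizing just six features of a road scene at coarse resolution already yields millions of distinct scenarios for a lookup-table approach to enumerate, before time, finer granularity, or combinations of behavior enter the picture:

```python
# Back-of-the-envelope sketch of why enumerating scenarios fails.
# All factors and counts are invented for illustration only.
factors = {
    "vehicles_nearby": 10,       # 0-9 other vehicles
    "pedestrian_positions": 50,  # coarse grid cells
    "weather": 5,
    "road_surface": 4,
    "speed_bins": 20,
    "obstacle_types": 30,
}

total = 1
for count in factors.values():
    total *= count

print(f"{total:,} distinct scenarios")  # 6,000,000 from six crude factors
```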

One interesting solution to the trolley problem, proposed last year, is to let the owner pre-define the car's ethics. Owners could tell the software whether to prioritize themselves or others in a crash, and settle a range of other decisions in advance. Other reports have considered more benign ways autonomous vehicles might break the law, such as parking illegally. Insurance companies are already worried about the new liability scenarios the technology could create.
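What might owner-defined ethics look like in practice? As a rough sketch (every key and value below is an illustrative assumption; no manufacturer exposes settings like these), the pre-set choices could reduce to a preference profile the software consults when a decision must be made:

```python
# Hypothetical "ethics settings" profile of the kind the proposal
# describes -- keys and values are illustrative, not a real API.
owner_ethics_profile = {
    "crash_priority": "occupants",   # or "minimize_total_harm"
    "may_break_traffic_law": {
        "illegal_parking": True,     # the benign case mentioned above
        "speeding_in_emergency": False,
    },
    "swerve_onto_private_property": True,
}

def crash_priority(profile: dict) -> str:
    """Return whose safety the car should weight first in a crash."""
    return profile.get("crash_priority", "minimize_total_harm")

print(crash_priority(owner_ethics_profile))  # "occupants"
```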