On the night of March 18, 2018, a woman was struck and killed in Tempe, Arizona, by a self-driving vehicle owned and operated by Uber. For a brief period, the safety and future of autonomous vehicles were brought into the spotlight; but with a settlement in the ensuing lawsuit, the story effectively ended. Although Uber has pulled its self-driving cars from the streets for now, the incident raises an important question about the safety of autonomous vehicles. One of the most pressing concerns about self-driving cars is how their algorithms should be programmed to react to a “trolley problem.”
First posed by philosopher Philippa Foot and later developed by Judith Jarvis Thomson, the trolley problem is a hypothetical situation in which a bystander witnesses a runaway trolley headed toward a group of five workers on the track. The bystander can pull a lever to divert the trolley onto a side track, killing the one worker standing there, or do nothing and allow the trolley to kill the five workers ahead.
Applied to self-driving cars, the dilemma becomes a question of whether a vehicle should be programmed to protect its passengers or to protect pedestrians. In a scenario analogous to the trolley problem, if a self-driving car's brakes fail, should the car swerve into a wall to avoid a group of pedestrians, saving their lives but killing its passengers? This moral dilemma poses a serious problem for autonomous vehicle programmers.
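To make the programming question concrete, here is a minimal, purely hypothetical sketch of the kind of “minimize expected casualties” rule the trolley problem asks about. Nothing here reflects any manufacturer's actual software; the function name, inputs, and numbers are all illustrative assumptions.

# Toy illustration only: a crude rule that picks whichever maneuver is
# predicted to cause fewer deaths. All names and values are hypothetical.

def choose_maneuver(stay_course_deaths: int, swerve_deaths: int) -> str:
    """Return the option predicted to kill fewer people; ties favor staying the course."""
    return "swerve" if swerve_deaths < stay_course_deaths else "stay_course"

# The brakes-fail scenario described above: five pedestrians ahead,
# two passengers in the car if it swerves into the wall.
print(choose_maneuver(stay_course_deaths=5, swerve_deaths=2))  # -> "swerve"

Even this trivial rule embeds a contested value judgment: it counts all lives equally, regardless of whether they are inside the car, a trade-off that, as discussed below, many consumers say they would not accept.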
Wired reports that Iyad Rahwan of the MIT Media Lab is conducting studies on how consumers understand self-driving cars and their potential to be programmed to sacrifice passengers rather than injure pedestrians. So far, he has reported, “most people would not buy a self-driving car that could make the decision to kill them as the passenger.”
This conundrum raises issues that extend beyond the ethics of robotics. Will humans ever be able to accept that technology is not perfect, and that scenarios like the trolley problem, though rare, do occur in real life?
There has been little to no corporate news coverage of the ethics of self-driving cars, their relation to the trolley problem, or the ethics of artificial intelligence (AI) as a whole. Establishment media reported on Uber's settlement of the lawsuit, but the details of the incident itself were marginalized.
Throughout the emergence and growth of AI, many concerns have been voiced about the danger of playing with the unknown, most notably by the late, world-renowned physicist Stephen Hawking and by Tesla and SpaceX CEO Elon Musk. Hawking told the BBC, “The development of full artificial intelligence could spell the end for the human race.” Nevertheless, most corporate news outlets have focused on the benefits of new AI technologies.
Sources:
Timothy B. Lee, “Uber Self-Driving Car Hits and Kills Pedestrian,” Ars Technica, March 19, 2018, https://arstechnica.com/cars/2018/03/uber-self-driving-car-hits-and-kills-pedestrian/.
Ian Bogost, “Enough With the Trolley Problem,” The Atlantic, March 30, 2018, https://www.theatlantic.com/technology/archive/2018/03/got-99-problems-but-a-trolley-aint-one/556805/.
Henry Grabar, “Arizona’s Lax Approach to Regulating Self-Driving Cars Is Dangerous—and Paying Off,” Slate, March 27, 2018, https://slate.com/technology/2018/03/arizonas-lax-approach-to-regulating-self-driving-cars.html.
Matt Simon, “To Make Us All Safer, Robocars Will Sometimes Have to Kill,” Wired, May 25, 2017, https://www.wired.com/2017/03/make-us-safer-robocars-will-sometimes-kill/.
Student Researcher: Carly Tashman (University of Vermont)
Faculty Researcher: Robert Williams Jr. (University of Vermont)