Have you ever heard of the Trolley Problem? According to Wikipedia, the Trolley Problem is this:

You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved. However, there is a single person lying on the side track. You have two options: do nothing and allow the trolley to kill the five people on the main track, or pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the more ethical option?

Figure 1: Intervene, or no?

Pretty gruesome scenario, perfect for Halloween!

The question changes when you know more about the people on the tracks. Would it make a difference if the person on the side track were a doctor, and the five people on the main track were laborers? What if the person on the side track were your spouse? Your child? Your dog?

Problems analogous to the trolley problem arise in the design of autonomous cars, in situations where the car’s software is forced, during a potential crash, to choose between multiple courses of action, all of which may cause harm (sometimes including the death of the car’s occupants).

Stemming from these dilemmas, the MIT Media Lab developed the Moral Machine, a platform for “gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars.” From the website:

The greater autonomy given machine intelligence in these roles can result in situations where they have to make autonomous choices involving human life and limb. This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices. Recent scientific studies on machine ethics have raised awareness about the topic in the media and public discourse. This website aims to take the discussion further, by providing a platform for 1) building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas, and 2) crowd-sourcing assembly and discussion of potential scenarios of moral consequence.

If you want to test your inherent sense of morals, give it a shot and click “Start Judging” at the above link. It’s a short presentation of 13 scenarios asking how you would want an autonomous car to react in certain settings. You indicate a preference for saving passengers or pedestrians, law-breakers or law-abiders, males or females, the young or the elderly, animals or humans, the fit or the heavy, or people of different social status (“homeless” vs. “executive”).

Figure 2: Pick one scenario. There are no do-overs.

When I took the test, I tended to have a bias against passengers, because they actively made the decision to be transported in the autonomous cars, whereas the pedestrians did not. I definitely preferred to keep humans alive over animals, and I had a bias toward law-abiders vs. those who flout the law by crossing the street against the light. When it came to male/female, fit or heavy, or social status, I tried not to have a bias at all, though the nature of the test made it impossible to be completely unbiased. Truth be told, I tended to favor women over men when I had no choice.

Here’s the problem, though: an autonomous driving system won’t be able to distinguish the social status of an individual on the street, nor how old that person might be. I would hope that a decision made by an autonomous vehicle wouldn’t make a distinction between male and female, even if it could identify the difference (which I am skeptical about in the first place). The critical factors to me, then, are (in order of descending importance):

1. The number of human lives saved (more is better)
2. Passengers vs. pedestrians (favor the pedestrians)
3. Whether the pedestrians were breaking the law (obey the stoplights, people!)
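Just to make that ordering concrete, here is a toy sketch of such a ranked preference in Python. Everything in it, from the Outcome fields to the choose function, is hypothetical and invented purely for illustration; it is nothing like how a real autonomous-driving stack works, and it assumes, implausibly, that these facts about an imminent crash are known with certainty.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A hypothetical crash outcome; every field is invented for illustration."""
    label: str
    human_deaths: int         # total human lives lost
    passenger_deaths: int     # deaths among the car's own occupants
    victims_jaywalking: bool  # were the endangered pedestrians crossing illegally?

def preference_key(o: Outcome):
    # Lexicographic ranking, in descending importance:
    #   1. fewer human deaths is better
    #   2. prefer harm to fall on passengers rather than pedestrians
    #   3. prefer harm to fall on law-breakers rather than law-abiders
    return (o.human_deaths,
            -o.passenger_deaths,
            0 if o.victims_jaywalking else 1)

def choose(options: list[Outcome]) -> Outcome:
    # min() with a tuple key compares criteria left to right,
    # so a later criterion only breaks ties in the earlier ones.
    return min(options, key=preference_key)

if __name__ == "__main__":
    swerve = Outcome("swerve", human_deaths=1, passenger_deaths=1,
                     victims_jaywalking=False)
    straight = Outcome("straight", human_deaths=2, passenger_deaths=0,
                       victims_jaywalking=True)
    print(choose([swerve, straight]).label)  # -> "swerve": criterion 1 decides
```

The tuple key encodes the ranking directly: a lower-priority criterion only matters when every higher-priority one ties, which is exactly what “descending importance” means here.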
But these dilemmas are hard. Would I opt to ride in an autonomous vehicle if I knew that it had a bias against my family and me as passengers? Then again, what makes me think that I could make the same kind of split-second decision when driving my own Level 3 car, and be happy with the outcome? Chances are, an autonomous car would make a decision faster, and with less loss of life overall, than any person’s foot on the brake could.

I’d love to hear your take on the problem. What were your results when you took the test? What criteria were important to you, and why?

In other news, Reuters reported yesterday that Alphabet Inc.’s Waymo unit became the first company to receive a permit from the state of California to test driverless vehicles without a backup driver in the front seat. What I wanna know is: have they solved the Trolley Problem?

—Meera

Sources:

https://www.bbc.com/news/technology-45991093
http://moralmachine.mit.edu/
https://en.wikipedia.org/wiki/Trolley_problem
https://www.reuters.com/article/us-autos-selfdriving-waymo/waymo-gets-first-california-ok-for-driverless-testing-without-backup-driver-idUSKCN1N42S1