The trolley problem problem

*This post was inspired by a conference workshop on education and responsible data science, run by the Digital Society initiative, a collective of Dutch universities working together to shape the relationship between data science and society.*

A rash of studies has come out recently about the trolley problem. This is the famous philosophical conundrum in which someone is positioned to stop an out-of-control tram speeding towards five people tied to the tracks. The catch: the only way to save them is to divert the tram down another track, on which one person is tied. What should they do?

Some time ago, scientists at the MIT Media Lab set up a crowdsourcing site called the Moral Machine, which asks people to vote on their favourite solution to variants of the trolley problem. It explicitly links the trolley problem to self-driving cars, and the researchers are comparing the results of people’s deliberations to see whether culture, location or income level makes a difference to people’s assessment of the right thing to do.

This is good, right? People are engaging in moral deliberation, and designers and computer scientists will be able to make decisions based on majority preferences and feelings. Democracy wins. And we like democracy.

But is this how we want technology to get built? There is a problem with the trolley problem, and it is this: it is based on an ethical perspective called consequentialism. Consequentialism asks people to consider what the effects of an action will be, and then to judge from those effects whether the action is good or not. In the technical world, this usually gets translated into cost-benefit analysis. What are the potential costs of my invention? What good may it do? Engineers, mathematicians, physicists, business studies experts and others in mainly quantitative disciplines are comfortable with the idea that you can tell what is good by weighing A against B and deciding based on the comparative weight of good on each side.
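To see what that translation looks like, here is a minimal sketch in Python of the kind of formalisation cost-benefit thinking encourages; the utilities and probabilities are invented for illustration, not taken from any real system.

```python
# A minimal sketch of cost-benefit (expected-utility) reasoning.
# All probabilities and utilities below are invented for illustration;
# the point is the shape of the calculation, not the numbers.

def expected_utility(outcomes):
    """Sum of probability * value over the possible outcomes of an action."""
    return sum(p * value for p, value in outcomes)

# Each action maps to a list of (probability, utility) pairs.
actions = {
    "divert the tram": [(1.0, -1)],  # one person dies
    "do nothing":      [(1.0, -5)],  # five people die
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # -> divert the tram
```

Once good and bad are numbers, the ‘right’ answer is just an argmax, and that is exactly the framing the rest of this post takes issue with.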

Of course the trolley problem is more complicated than this – that’s why it’s popular. You have to kill someone no matter what you choose. This is why it’s such a common tool for philosophers teaching ethics to scientists: it forces people to admit that there is usually a tradeoff, and that they should be aware of it.

What happens, though, once one is aware of it? The key to the trolley problem is its urgency. You have to choose: one person or five? A fat person or a child? One child or sixteen? Make a decision! Quick! You are in charge of life and death decisions! The perceptive reader may have noticed that there is a moral perspective already embedded in this. ‘You’ are in charge of deciding who lives and who dies. ‘You’ are going to flip a switch. ‘You’ have to make a tradeoff, and your decision will be final. It positions the scientist as god, and only asks that they can justify their decision.

This kind of thinking is part of the scientific landscape, and is becoming more visible with the rise of data science as a rich, high-status discipline. You may have heard of the now-famous ‘fairness-accuracy tradeoff’, which comes up when you are, for example, analysing who should get access to a scarce good such as a mortgage or a scholarship. The people with the best credit or the best exam results are almost always whiter, from more privileged backgrounds, more predominantly male, etcetera, than those without. So if you award the benefit to the group with the best qualifications, you are entrenching existing privilege at the expense of the marginalised. Another is the famous ‘privacy-security tradeoff’: if you want to be safe from terrorism, you have to give up your naked photos to the NSA. That’s the tradeoff.
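A hedged sketch of how the first tradeoff shows up in practice, with synthetic data and hypothetical groups: a selection rule that looks only at a ‘qualification’ score, where one group starts with a historical head start, quietly reproduces that head start in its selection rates.

```python
# Synthetic illustration of the fairness-accuracy tradeoff in selection.
# Groups, scores and the threshold are all invented for this sketch.
import random

random.seed(0)

# Hypothetical applicants: group A's scores start with a head start.
applicants = (
    [("A", random.gauss(0.6, 0.1)) for _ in range(1000)]
    + [("B", random.gauss(0.5, 0.1)) for _ in range(1000)]
)

threshold = 0.6  # award the scarce good to the 'best qualified'
selected = [group for group, score in applicants if score >= threshold]

for group in ("A", "B"):
    rate = selected.count(group) / 1000
    print(f"group {group}: selection rate {rate:.0%}")
# Group A is selected at roughly triple the rate of group B, even though
# the rule never looks at group membership directly.
```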

It’s the framing that gets people. The idea that you can formalise being accurate or being secure, and trade it off explicitly against being fair, or being private. It’s attractive – and it’s nonsense. Try formalising fairness (there is a whole growing discipline of people out there trying to do this) and you find very quickly that you have chosen a version that does not fit with someone’s completely valid idea of what is fair. For instance, it turns out that if you pass laws saying that women should not suffer discrimination, and that people of colour should not suffer discrimination, then a Black woman suffering discrimination who wants legal redress is going to have to pick one basis for her complaint rather than the other. And as Kimberlé Crenshaw has been pointing out since 1989, that is not fair.
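Crenshaw’s point survives translation into code, as this toy audit shows. All numbers are invented, and the US ‘four-fifths rule’ is used here only as a stand-in fairness test: checking one protected attribute at a time passes on sex and passes on race, while the intersection plainly loses.

```python
# Toy single-axis fairness audit; sizes and hiring rates are invented.
# (group size, hiring rate) for four hypothetical applicant groups:
groups = {
    ("white", "man"):   (40, 0.50),
    ("white", "woman"): (40, 0.55),
    ("Black", "man"):   (10, 0.80),
    ("Black", "woman"): (10, 0.20),
}

def rate(attr):
    """Overall hiring rate for everyone sharing a single attribute."""
    hired = sum(n * r for key, (n, r) in groups.items() if attr in key)
    total = sum(n for key, (n, _) in groups.items() if attr in key)
    return hired / total

# Single-axis audits: a ratio of at least 0.8 'passes' the four-fifths rule.
print(round(rate("woman") / rate("man"), 3))    # 0.857 -> passes on sex
print(round(rate("Black") / rate("white"), 3))  # 0.952 -> passes on race

# The comparison the single-axis audits never make:
black_woman = groups[("Black", "woman")][1]
white_man = groups[("white", "man")][1]
print(black_woman / white_man)                  # 0.4 -> the intersection loses
```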

So this is a problem for developers of technology. If someone asks you to complete the phrase ‘Jews are…’ or ‘Muslims are…’, what do you say? Is the ethical answer ‘nice’ rather than ‘evil’? No: the ethical answer is ‘this question is not OK, and I will not be answering it’.

Let’s try that with the trolley problem. In 2013, 32,719 people died in car crashes in the US. If cars were all automated, the story goes, 90% of those crashes (the ones put down to human error) would not happen, and only 3,272 people would die per year. So automated vehicles could save nearly 300,000 people in a decade. Do you flip the switch and make everyone drive an automated vehicle? MIT’s crowdsourcing project is predicated on the notion that we need to figure out what most people think about this (clue: they will want to stop 90% of potential car crashes, unless they are pathologically murderous) and that this will tell us where to put our collective efforts.
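For the record, the arithmetic behind those figures, with the assumed 90% reduction doing all the work:

```python
# 2013 US road deaths, and the savings a 90% crash reduction implies.
deaths_per_year = 32_719
deaths_if_automated = round(deaths_per_year * 0.10)  # assumed 90% reduction
saved_per_decade = (deaths_per_year - deaths_if_automated) * 10
print(deaths_if_automated, saved_per_decade)  # 3272 294470
```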

So the framing matters. If we offer up everything as a cost-benefit analysis, tied to the assumption that things will get built anyway, we end up with a very different answer than if we are prepared to dispute the premise of the question. But disputing the premise of the question turns out to be an important tool where formalising important and complex ideas is the aim. If we can say ‘this is not a question that allows a formal definition of [concept x]’ (e.g. fairness, diversity, etc.), we have to go to a different level of abstraction. Rather than approximate a solution to a problem using the tools available, we can instead reject the toolbox and build a new one – or alter our definition of the problem.

Some of the really important problems facing science are not just science problems. They are social problems with science components. Think of climate change, or promoting economic development. So one key skill necessary for solving them is the ability to think critically about which components are answerable by increasing empirical understanding and testing hypotheses, and which components can only be dealt with using higher-level concepts such as justice or representation. These concepts are slippery and contested, and you can’t formalise them.

There are two approaches to this. One is to throw the toolbox at the problem and see what sticks. This may result, however, in people being surprised and offended that you have thrown sharp iron things at them unexpectedly. The other is to work in coordination with people who can address different bits of the problem. Doing this is difficult, costly, and may result in people not being able to say ‘blockchain’ as often as they do currently. It also has implications for the way both academic and commercial research are structured. Cooperating around big questions in academia is extraordinarily politically and financially sensitive: most of the time it currently makes far more organisational sense for scientists (including social scientists) to tackle a problem while ignoring huge chunks of it than to work out whom to collaborate with across disciplines, how to pay for people’s time if it is available, and what kind of audience work like this might have, given that academic publishing culture is still organised almost entirely by discipline.

So the trolley problem is a problem. But if we treat it as a proxy for things that are too politically sensitive to talk about directly, yet on which our ability to solve complex problems depends, it may be a useful one after all.

 
