I think ethicists and scientists owe an apology to the trolley problem now that driverless cars are here

I’m going to do a little of the thing that I don’t like other people doing, which is vague attribution, because I don’t have a ton of time this morning, and I don’t think it’s important.

For years, some people who thought about ethics decided to make themselves somewhat irrelevant by insisting on casuistry. Now, casuistry has its place in any system of thought–after all, the point of learning about history is to learn from it and see if there is anything you might mine from it to apply to future situations. But to hear the casuists tell it, toy examples are mere bourgeois navel gazing. Oh, so nuanced! Oh, so much more practical in the real world than all you navel gazers with your simplified, artificial, non-real-case things.

Except.

That approach also carries its own intellectual trap. By insisting you cannot abstract in ethics, and that “toy” examples are useless because you can never, ever, ever understand a choice until you are deep in the thick of it, with all its complications and exigencies, you render just about any conclusion outside of that case invalid.

That prescription basically means you can never really learn anything, because every situation is different and contains different actors with different obligations and different preferences and values. Just because I read about what you did in your situation, and drew a moral judgment about it, does not mean that I should do what you did, or the opposite, in a similar situation. After all, how similar is similar enough to overcome all those particular exigencies and details unique to each context and each particular decision?

Taken to its extreme, if you really think you can’t evaluate choice without all the information, then every case is, simply, a highly complicated toy example because you are reading it or writing it rather than living it. That doesn’t mean casuistry is worthless; it just means people try to learn from cases and do the best they can. Learning from “real” cases involves storytelling and representation…which is basically what simple toy examples are, too.

Toy examples are a type of storytelling that helps illustrate abstract concepts and gives people practice runs for moral thought in less fraught, lower-stakes contexts than real choices in the real world. Toy problems are like starting with the tricycle before you get the big-kid bike. That’s the reason I like them. You don’t stop with toy problems, but they are a decent place to start.

Probably the most famous, and thus most derided, toy example is the trolley problem. You’ve probably heard it. Here is a lovely explanation.

The trolley problem is an oldie and a goodie, and I am fond of it because it first got me excited about moral choice. I spent a lot of time in my intro to philosophy class (where there was WAYYYYYYY TOO DAMN MUCH FREUD because that was my proffie’s pet interest) thinking about it, arguing about it, pestering other people to think about it. To this day, I still actively consume any attempt at a new take on the problem because it combines my two lifelong loves: public transit and ethics!

The trolley problem is great because it gets students thinking about the idea of weighing life against life and active versus passive moral choice. It also illustrates the nonvoluntary nature of such choices. It’s possible that, should you be driving the trolley instead of just being the person who can switch it, your poor driving made the trolley go out of control and the situation is of your making. But it’s also just as likely that some trolley mechanic made a mistake, or simply that luck is against us and something necessary broke, putting us into a position where we have to make a choice that is, by all accounts, shitty no matter what we do. There are so many riffs on the answer of what you should do that I won’t go into them, save to note that the BBC illustration, while simple, does a good job of outlining the issues you must weigh in making the decision.

The trolley problem is important to questions of public policy. The decision to use the atomic bomb was a real-world, horrible example of the trolley problem.

And after years of having both ethicists and scientists sniff that the little trolley problem is irrelevant because it is neither solvable in a “Science” way (with a capital S) nor, from the ethicists, nuanced (my new least-favorite word) in the manner of real-world ethical problems, driverless vehicle technology brings us back here again.

Here is a really nice discussion of the question from MIT: why driverless cars must be programmed to kill. The problem that the trolley example tries to help us work through never becomes irrelevant. Once all the solvable problems get solved–the technology, the engineering–we are always left with ourselves, and the problems we create in solving all the solvable problems.
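Just to make the “programmed to kill” framing concrete: here is a toy sketch, and I do mean toy, of what a crude utilitarian rule might look like in code. Everything in it is made up for illustration; it is not how any real vehicle is programmed, and it comes from me, not the MIT piece.

```python
# A deliberately crude, hypothetical decision rule: pick the maneuver with the
# lowest expected harm. The maneuvers, probabilities, and counts are invented.

def choose_maneuver(options):
    """Return the maneuver name with the lowest expected harm.

    `options` maps a maneuver name to (probability of harm, people at risk).
    """
    def expected_harm(option):
        p_harm, people_at_risk = option
        return p_harm * people_at_risk

    return min(options, key=lambda name: expected_harm(options[name]))

# Toy trolley-style scenario: staying the course endangers five pedestrians,
# swerving endangers the car's single occupant.
scenario = {
    "stay_course": (0.9, 5),  # likely harm to five people in the roadway
    "swerve": (0.9, 1),       # likely harm to the one person in the car
}

print(choose_maneuver(scenario))  # prints "swerve" under this crude rule
```

The sketch makes the same point the trolley problem makes: somebody has to choose the rule and the weights, and that choice is an ethical one long before it is an engineering one.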

In reality, we’ll probably program cars to do what results in the least liability for the manufacturer. But it’s worth thinking about.