This is #2 in my Best of Neuron Culture Moving Party — a run of 10 of my favorite posts from the blog’s tenure at WIRED, as I move the blog here. In this one, “Kill Whitey,” I look at a playful but ingeniously fresh take on a popular social science approach to studying decision-making and ethics, the so-called trolley problem. This was among my first posts at Neuron Culture’s WIRED venue and remains the most popular post I ever ran.
Kill Whitey. It’s the Right Thing to Do.
Originally posted 15 September, 2010
A couple years ago, David Pizarro, a young research psychologist at Cornell, brewed up a devious variation on the classic trolley problem. The trolley problem is that staple of moral psychology studies and dinner-party debates in which you ask someone to decide under what conditions it’s morally permissible to kill one person to save others. Here, via Wikipedia, is its most basic template:
A trolley is running out of control down a track. In its path are 5 people who have been tied to the track by a mad philosopher. Fortunately, you can flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch?
This has generated scores of studies that pose all kinds of variations. (You can take a version of the test yourself at Should You Kill the Fat Man?) Perhaps the richest has been the footbridge problem. The footbridge scenario puts the subject in a more active hypothetical role: You’re on a footbridge over the trolley track, and next to you, leaning perilously over the rail to see what happens, stands a very large man — a man large enough, in fact, to stop the train. Is it moral to push the guy over the rail to stop the train?
Researchers generally use these scenarios to see whether people hold a) an absolutist or so-called “deontological” moral code or b) a utilitarian or “consequentialist” moral code. In an absolutist code, an act’s morality virtually never depends on context or secondary consequences. A utilitarian code allows that an act’s morality can depend on context and secondary consequences, such as whether taking one life can save two or three or a thousand.
In most studies, people start out insisting they have absolute codes. But when researchers tweak the settings, many people decide morality is relative after all: Propose, for instance, that the fat man is known to be dying, or was contemplating jumping off the bridge anyway — and the passengers are all children — and for some people, that makes it different. Or the guy is a murderer and the passengers nuns. In other scenarios the man might be slipping, and will fall and die if you don’t grab him: Do you save him … even if it means all those kids will die? By tweaking these settings, researchers can squeeze an absolutist pretty hard, but they usually find a mix of absolutists and consequentialists.
As a grad student, Pizarro liked trolleyology. Yet it struck him that these studies, in their targeting of an absolutist versus consequentialist spectrum, seemed to assume that most people would hold firm to their particular spots on that spectrum — that individuals generally held a roughly consistent moral compass. The compass needle might wobble, but it would generally point in the same direction.
Pizarro wasn’t so sure. He suspected we might be more fickle. That perhaps we act first and scramble for morality afterward, or something along those lines, and that we choose our rule set according to how well it fits our desires.
To test this, he and some colleagues devised some mischievous variations on the footbridge problem. They detail these in a recent paper (pdf download; web), and Pizarro described them more accessibly at the recent Edge conference on morality. (The talk is on video, or you can download the audio.)
As Pizarro describes, the variations are all of a piece: All explore how the political and racial prejudices — and guilt — of both liberals and conservatives might affect where they stand on the absolutist-consequentialist spectrum.
Perhaps most revealing is what Pizarro calls the “Kill Whitey” study. This was a footbridge problem — two variations on a footbridge problem in one, actually — that the team presented to 238 California undergrads. The undergrads were of mixed race, ethnicity and political leanings. Before they faced the problem, 87 percent of them said they did not consider race or nationality a relevant factor in moral decisions. Here is the paper‘s (.pdf) description of the problem they faced:
Participants received one of two scenarios involving an individual who has to decide whether or not to throw a large man in the path of a trolley (described as large enough that he would stop the progress of the trolley) in order to prevent the trolley from killing 100 innocent individuals trapped in a bus.
Half of the participants received a version of the scenario where the agent could choose to sacrifice an individual named “Tyrone Payton” to save 100 members of the New York Philharmonic, and the other half received a version where the agent could choose to sacrifice “Chip Ellsworth III” to save 100 members of the Harlem Jazz Orchestra. In both scenarios the individual decides to throw the person onto the trolley tracks.
Tyrone and Chip. Just in case you’re missing what Pizarro is up to:
While we did not provide specific information about the race of the individuals in the scenario, we reasoned that Chip and Tyrone were stereotypically associated with White American and Black American individuals respectively, and that the New York Philharmonic would be assumed to be majority White, and the Harlem Jazz Orchestra would be assumed to be majority Black.
So the guy on the bridge kills either Tyrone to save the New York Philharmonic or Chip to save the Harlem Jazz Orchestra. How, Pizarro asked the students, did they feel about that? Was sacrificing Chip/Tyrone to save the Jazz Orchestra/Philharmonic justified? Was it moral? Was it sometimes necessary to allow the death of one innocent to save others? Should we ever violate core principles, regardless of outcome? Is it sometimes “necessary” to allow the death of a few to promote a greater good?
Turned out the racial identities did indeed color people’s judgments — but they colored them differently depending on their political bent. Pizarro, who describes himself as a person who “would probably be graded a liberal on tests,” roughly expected that liberals would be more consistent. Yet liberals proved just as prejudiced here as conservatives were, but in reverse: While self-described conservatives more readily accepted the sacrifice of Tyrone than they did killing Chip, the liberals were easier about seeing Chip sacrificed than Tyrone.
But this was just college students. Perhaps they were morally mushier than most people. So the team went further afield. As Pizarro describes in the talk:
We wanted to find a sample of more sort of, you know, real people. So we went in Orange County out to a mall and we got people who are actually Republicans and actually Democrats, not wishy-washy college students. The effect just got stronger. (This time it was using a “lifeboat” dilemma where one person has to be thrown off the edge of a lifeboat in order to save everybody, again using the names “Tyrone Payton” or “Chip Ellsworth III”.) We replicated the finding, but this time it was even stronger.
If you’re wondering whether this is just because conservatives are racist—well, it may well be that conservatives are more racist. But it appears in these studies that the effect is driven [primarily] by liberals saying that they’re more likely to agree with pushing the white man and [more likely to] disagree with pushing the black man.
So we used to refer to this as the “kill whitey” study.
They offered some other scenarios too, about collateral damage in military situations, for instance, and found similar differences: Conservatives accepted collateral damage more easily if the dead were Iraqis than if they were Americans, while liberals accepted civilian deaths more readily if the dead were Americans rather than Iraqis.
What did this say about people’s morals? Not that they don’t have any. It suggests that they have more than one set of morals, one more consequentialist than another, and choose whichever fits the situation. Again, from the talk:
It’s not that people have a natural bias toward deontology or a natural bias toward consequentialism. What appears to be happening here is that there’s a motivated endorsement of one or the other whenever it’s convenient.
Or as Pizarro told me on the phone, “The idea is not that people are or are not utilitarian; it’s that they will cite being utilitarian when it behooves them. People aren’t using these principles and then applying them. They arrive at a judgment and seek a principle.”
So we’ll tell a child on one day, as Pizarro’s parents told him, that ends should never justify means, then explain the next day that while it was horrible to bomb Hiroshima, it was morally acceptable because it shortened the war. We act — and then cite whichever moral system fits best, the relative or the absolute.
Pizarro says this isn’t necessarily bad. It’s just different. It means we draw not so much on consistent moral principles as on a moral toolbox. And if these studies show we’re not entirely consistent, they also show we’re at least determined — really determined, perhaps, given the gyrations we go through to try to justify our actions — to behave morally. We may choose from a toolbox — but the tools are clean. As Pizarro puts it at the end of his talk,
I am still an optimist about rationality, and I cling to the one finding that I talked about, which is that when you point out people’s inconsistencies, they really are embarrassed.
Image: Flickr/Heath Brandon
Over the next week I’ll be leaving WIRED’s Science Blogs, moving Neuron Culture on June 7 to a self-hosted location at http://neuronculture.com — a domain name that on June 7, 2013, will switch from pointing to WIRED to pointing to the blog’s new, self-hosted home elsewhere. Please join me there. And you can always follow me at The Twitter as well.
To celebrate and mark the end of Neuron Culture’s 2.75-year run at WIRED, I’m posting a “Best of Neuron Culture” over its final 10 days, spotlighting each day a post from the past that I feel embodies the best of Neuron Culture’s WIRED tenure. (Neuron Culture was previously at Seed’s ScienceBlogs as well as at my own site on TypePad.) These posts, among the stronger and more popular ones I’ve done here, also characterize the sorts of possibilities that a hosted blog has offered in this period’s strange transitional time of writing, publishing, and journalism.