A couple of years ago, David Pizarro, a young research psychologist at Cornell, brewed up a devious variation on the classic trolley problem. The trolley problem is that staple of moral psychology studies and dinner-party debates in which you ask someone to decide under what conditions it’s morally permissible to kill one person to save others. Here, via Wikipedia, is its most basic template:
A trolley is running out of control down a track. In its path are 5 people who have been tied to the track by a mad philosopher. Fortunately, you can flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch?
This has generated scores of studies that pose all kinds of variations. (You can take a version of the test yourself at Should You Kill the Fat Man?) Perhaps the richest has been the footbridge problem. The footbridge scenario puts the subject in a more active hypothetical role: You’re on a footbridge over the trolley track, and next to you, leaning perilously over the rail to see what happens, stands a very large man — a man large enough, in fact, to stop the trolley. Is it moral to push the guy over the rail to stop the trolley?
Researchers generally use these scenarios to see whether people hold a) an absolutist or so-called “deontological” moral code or b) a utilitarian or “consequentialist” moral code. In an absolutist code, an act’s morality virtually never depends on context or secondary consequences. A utilitarian code allows that an act’s morality can depend on context and secondary consequences, such as whether taking one life can save two or three or a thousand.
In most studies, people start out insisting they have absolute codes. But when researchers tweak the settings, many people decide morality is relative after all: Propose, for instance, that the fat man is known to be dying, or was contemplating jumping off the bridge anyway — and the passengers are all children — and for some people, that makes it different. Or the guy is a murderer and the passengers nuns. In other scenarios the man might be slipping, and will fall and die if you don’t grab him: Do you save him … even if it means all those kids will die? By tweaking these settings, researchers can squeeze an absolutist pretty hard, but they usually find a mix of absolutists and consequentialists.
As a grad student, Pizarro liked trolleyology. Yet it struck him that these studies, in their targeting of an absolutist versus consequentialist spectrum, seemed to assume that most people would hold firm to their particular spots on that spectrum — that individuals generally held a roughly consistent moral compass. The compass needle might wobble, but it would generally point in the same direction.
Pizarro wasn’t so sure. He suspected we might be more fickle. That perhaps we act first and scramble for morality afterward, or something along those lines, and that we choose our rule set according to how well it fits our desires.
To test this, he and some colleagues devised some mischievous variations on the footbridge problem. They detail these in a recent paper (pdf download; web), and Pizarro described them more accessibly at the recent Edge conference on morality. (The talk is on video, or you can download the audio.)
As Pizarro describes, the variations are all of a piece: All explore how the political and racial prejudices — and guilt — of both liberals and conservatives might affect where they stand on the absolutist-consequentialist spectrum.
Perhaps most revealing is what Pizarro calls the “Kill Whitey” study. This was a footbridge problem — two variations on a footbridge problem in one, actually — that the team presented to 238 California undergrads. The undergrads were of mixed race, ethnicity and political leanings. Before they faced the problem, 87 percent of them said they did not consider race or nationality a relevant factor in moral decisions. Here is the paper’s (pdf) description of the problem they faced:
Participants received one of two scenarios involving an individual who has to decide whether or not to throw a large man in the path of a trolley (described as large enough that he would stop the progress of the trolley) in order to prevent the trolley from killing 100 innocent individuals trapped in a bus.
Half of the participants received a version of the scenario where the agent could choose to sacrifice an individual named “Tyrone Payton” to save 100 members of the New York Philharmonic, and the other half received a version where the agent could choose to sacrifice “Chip Ellsworth III” to save 100 members of the Harlem Jazz Orchestra. In both scenarios the individual decides to throw the person onto the trolley tracks.
Tyrone and Chip. Just in case you’re missing what Pizarro is up to:
While we did not provide specific information about the race of the individuals in the scenario, we reasoned that Chip and Tyrone were stereotypically associated with White American and Black American individuals respectively, and that the New York Philharmonic would be assumed to be majority White, and the Harlem Jazz Orchestra would be assumed to be majority Black.
So the guy on the bridge kills either Tyrone to save the New York Philharmonic or Chip to save the Harlem Jazz Orchestra. How, Pizarro asked the students, did they feel about that? Was sacrificing Chip/Tyrone to save the Jazz Orchestra/Philharmonic justified? Was it moral? Was it sometimes necessary to allow the death of one innocent to save others? Should we ever violate core principles, regardless of outcome? Is it sometimes “necessary” to allow the death of a few to promote a greater good?
Turned out the racial identities did indeed color people’s judgments — but they colored them differently depending on political bent. Pizarro, who describes himself as a person who “would probably be graded a liberal on tests,” roughly expected that liberals would be more consistent. Yet liberals proved just as prejudiced here as conservatives, only in reverse: while self-described conservatives more readily accepted the sacrifice of Tyrone than they did the killing of Chip, liberals more readily accepted seeing Chip sacrificed than Tyrone.
But this was just college students. Perhaps they were morally mushier than most people. So the team went further afield. As Pizarro describes in the talk:
We wanted to find a sample of more sort of, you know, real people. So we went in Orange County out to a mall and we got people who are actually Republicans and actually Democrats, not wishy-washy college students. The effect just got stronger. (This time it was using a “lifeboat” dilemma where one person has to be thrown off the edge of a lifeboat in order to save everybody, again using the names “Tyrone Payton” or “Chip Ellsworth III”.) We replicated the finding, but this time it was even stronger.
If you’re wondering whether this is just because conservatives are racist—well, it may well be that conservatives are more racist. But it appears in these studies that the effect is driven [primarily] by liberals saying that they’re more likely to agree with pushing the white man and [more likely to] disagree with pushing the black man.
So we used to refer to this as the “kill whitey” study.
They offered some other scenarios too, about collateral damage in military situations, for instance, and found similar differences: Conservatives accepted collateral damage more easily if the dead were Iraqis than if they were Americans, while liberals accepted civilian deaths more readily if the dead were Americans rather than Iraqis.
What did this say about people’s morals? Not that they don’t have any. It suggests that they have more than one set of morals, one more consequentialist than the other, and choose whichever fits the situation. Again, from the talk:
It’s not that people have a natural bias toward deontology or a natural bias toward consequentialism. What appears to be happening here is that there’s a motivated endorsement of one or the other whenever it’s convenient.
Or as Pizarro told me on the phone, “The idea is not that people are or are not utilitarian; it’s that they will cite being utilitarian when it behooves them. People aren’t using these principles and then applying them. They arrive at a judgment and seek a principle.”
So we’ll tell a child on one day, as Pizarro’s parents told him, that ends should never justify means, then explain the next day that while it was horrible to bomb Hiroshima, it was morally acceptable because it shortened the war. We act — and then cite whichever moral system fits best, the relative or the absolute.
Pizarro says this isn’t necessarily bad. It’s just different. It means we draw not so much on consistent moral principles as on a moral toolbox. And if these studies show we’re not entirely consistent, they also show we’re at least determined — really determined, perhaps, given the gyrations we go through to try to justify our actions — to behave morally. We may choose from a toolbox — but the tools are clean. As Pizarro puts it at the end of his talk,
I am still an optimist about rationality, and I cling to the one finding that I talked about, which is that when you point out people’s inconsistencies, they really are embarrassed.
___
Note: This piece originally ran on 9/15/10.
Image: Flickr/Heath Brandon
I can see the logic in killing the one person to save 100 others, but two questions come to mind. Is there really a guarantee that the 100 will get killed, and who exactly wants to be the person to push Tyrone or Chip off the bridge? It’s not too hard to decide it should be done; it’s a lot harder to envision actually doing it.
The problem with these sorts of “experimental morality” questions is that there are too many variables that are not held constant. It’s hard to take them too seriously as a result.
We have the scientific method for a reason, and control our experiments as much as humanly possible, in order to deductively arrive at useful conclusions (and, more importantly, useful questions).
All other things held constant (i.e., the killing of X = 1 person will definitely save X + n others), varying the value of n would make for an interesting experiment. Put in logical terms: IF you kill X, THEN X + n will be saved. The problem with this is that morality is not number-dependent but VALUE-dependent. The only valid experiment you could carry out is one where the value of X and of each increase in n are equal (i.e., IF X = 1 and n = 0 in the preceding formula, THEN the statement does not hold, because 1 is neither greater nor less than 1; and any increase in n is of the same value as X).
A different kind of experiment, where X is a moral valuation (i.e., X is no longer a number of persons but a valuation… a murderer would have a lower X than a baby), would yield a formula of: the killing of X will definitely save Y (where Y is the value of the second person(s)). Put in logical terms: IF you kill person(s) with value X, THEN the person(s) with value Y will be saved.
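In symbols, a rough sketch of that second setup (the value function $v$ is my own shorthand, nothing from the study itself):

$$\text{kill}(X) \Rightarrow \text{save}(Y), \qquad \text{permissible on consequentialist grounds} \iff v(Y) > v(X)$$

The equal-value case $v(X) = v(Y)$ (the $n = 0$ case from my first formula) is precisely where the comparison yields no verdict.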
My 2 cents.
Remember:
1) Lord loves a working man
2) Don’t trust Whitey
3) See a doctor and get rid of it
You have made the assumption that since life is valued, all life is valued the same (inductive logic). While on the surface the majority of people will agree that human life is sacred, probing deeper you may find, again and again, that some human life is valued more highly than other human life (degrees of value).
Our experiences and prejudices color our judgements, even (and perhaps especially) in “think-fast” situations where there is little time for careful consideration. A common example is racism, which can be defined here as the devaluation of a human life or person based solely on racial background. Another example more pertinent to this problem involves obese people. Many “normal/average” people will consider an obese person less valuable than another “normal/average” person, not just because of the “other-group” status that obesity implies, but because health in oneself and others is highly valued and respected. A lack of health is avoided as a matter of instinct (think of the instinct to avoid a person who coughs a lot).
The problem you are posing is not quite fair in terms of raw morality. Your results are therefore invalid, as you have not arrived at them deductively (see: scientific method).
A more interesting question is this: assuming the 5 workers on the track were “normal/average” people, would it be acceptable to a) throw a “normal/average” person into the path of the streetcar, or b) divert the streetcar toward that “normal/average” person, if doing so would stop the streetcar from hitting the other 5 “normal/average” people, thus saving their lives?
The language you use throughout this experiment must be kept consistent, otherwise your results are highly skewed and you are forced into top-down logic (inductive reasoning), which is highly error-prone.
Also (from my reply to another poster’s comments above): The problem with these sorts of “experimental morality” questions is that there are too many variables that are not held constant. It’s hard to take them too seriously as a result.
I question the terms conservative vs. liberal. The political definitions morph depending on who’s popular, and the popular definition changes from country to country. Sometimes they’re reversed. To throw somebody off a bridge based on that criterion is pretty meaningless. It’s more like the terms are being defined by the situation.
White liberals are self-hating. I didn’t need any research to tell me that.
And conservatives are more than willing to murder black people. Didnt need any research to tell me that.
FTFY: “more willing” not “more than willing.”
I love how they call it the “Kill Whitey” study to completely distract from the fact that it definitively proves that conservatives hate and have no problem killing black people.
I think “Kill Whitey” stuck because it was a) less expected and b) well, it started as an in-house joke.
The study does not show a lot about the morality of anyone.
It does, however, show something about how people pass judgement on the actions of others based on perceptions and prejudice.
To test the actual morality of people, you would have to place them personally in the physical situation where they must make such a life-or-death decision.
Even then, it may be impossible to distinguish between people acting (or not acting) out of ingrained instinctive responses and acting out of moral judgement. The degree to which people understand a given situation, and the possibilities/consequences of any given set of actions, is likely to influence their actions much more than any moral judgement.
P.S.
Who comes up with these dilemmas? I can understand the dual railway track setup, but who really expects anybody to make the split-second judgement that a severely obese person is big enough to stop a trolley in its tracks over a given distance?
The stretch of the imagination that is needed to accept this setup may influence people’s moral judgement, since it makes the situation abstract and removed from reality.
I believe Pizarro’s point is that these studies show less about morality (that is, how we actually act or will act) than they show about our ideas about morality — the moral rules we claim to live by.