The Batman Movie Killings, Madness, and Culture

Let’s try this door again.

Below find #2 in my Best of Neuron Culture Moving Party — a run of 10 of my favorite posts from the blog’s tenure at WIRED, as I prepared to move them to this site.

In this post, on the Batman-movie shootings last summer, I took advantage of a blog’s reiterative freedoms to clarify an argument about madness and culture (in this case, film) that I’d made less than successfully a few days earlier. This is one of the beauties of blogging — it lets you revisit, revise, regroup, and continue a conversation that may not yield much light the first time around. I suspect we’ll be having this one for a while.   

 

Batman Returns: How Culture Shapes Muddle Into Madness

Originally posted 27 July, 2012

What does it mean to say a culture shapes the expression of mental dysfunction? I bungled that question a few days ago in “Batman Movies Don’t Kill, But They’re Friendly to the Concept,” my post about Batman movies and James Holmes. Even friends who got what I was getting at told me I hadn’t really made the case well. Heeding that top item on my daily to-do list — “Do better” — I’ll try to improve on it here. I’ll draw on two brilliant pieces of writing that I hope will make this gin-clear.

In the original piece I deliberately referred to “certain unhinged or deeply a-moral people.” I left this vague for good reason: Mental health diagnoses are to a great extent social constructs. Their framing and use not only identify traits or behaviors that most observers in a given culture would agree on, but categorize a person in a way that can push that person further out of society and culture. Indeed, such diagnoses explicitly seek to identify what is different about the person — what sets them apart from, and to some extent outside, the rest of society. Good diagnosticians do this because, at least in theory, it can help caretakers help the person. But the resulting sense of alienation can exacerbate the person’s problems.

In the case of schizophrenia, for example (and I mean example, since as of this writing we have no reliable diagnosis or description of Holmes’s mental state), the very diagnosis can push a person almost instantly into alienation. But it’s not that way in every society. In his splendid Crazy Like Us: The Globalization of the American Psyche, Ethan Watters describes research demonstrating that the course of schizophrenia, as well as the actions of those who have it, depends enormously on culture.

Janis Hunter Jenkins and Robert John Barrett, two of the premier researchers in the field, describe the general state of affairs.
“In sum, what we know about culture and schizophrenia is… [that] culture is critical in nearly every aspect of schizophrenic illness experience: the identification, definition and meaning of the illness during the primordial, acute, and residual phases; the timing and type of onset; symptom formation in terms of content, form, and constellation; clinical diagnosis; gender and ethnic differences; the personal experience of schizophrenic illness; social response, support, and stigma; and perhaps most important, the course and outcome with respect to symptomatology, work, and social functioning.”
By “course and outcome,” Jenkins and Barrett are referring to that most perplexing finding in the epidemiology of the disease: people with schizophrenia in developing countries appear to do better over time than those living in industrialized nations.

A large World Health Organization study [huge PDF download], for instance, found that “[w]hereas 40 percent of schizophrenics in industrialized nations were judged over time to be ‘severely impaired,’ only 24 percent of patients in the poorer countries ended up similarly disabled.” Their symptoms also differed, in the texture, intensity, and subject matter of their hallucinations or paranoia, for instance. And most crucially, in many cases their mental states did not disrupt their connections to family and society.

Watters, curious about all this, went to Zanzibar to see how it worked. He learned that there, schizophrenia was seen partly as an especially intense inhabitation of spirits — bad mojo of the sort everyone had, as it were. This led people to see psychotic episodes less as complete breaks from reality than as passing phenomena, somewhat as we might view, say, a friend or coworker’s intermittent memory lapses.

For instance, in one household Watters came to know well, a woman with schizophrenia, Kimwana,

was allowed to drift back and forth from illness to relative health without much monitoring or comment by the rest of the family. Periods of troubled behavior were not greeted with expressions of concern or alarm, and neither were times of wellness celebrated. As such, Kimwana felt little pressure to self-identify as someone with a permanent mental illness.


Kill Whitey, It’s the Right Thing to Do (NC Moving Party Track 2)

This is #2 in my Best of Neuron Culture Moving Party — a run of 10 of my favorite posts from the blog’s tenure at WIRED, as I move the blog here. In this one, “Kill Whitey,” I look at a playful but ingeniously fresh take on a popular social-science approach to studying decision-making and ethics, the so-called trolley problem. This was among my first posts at Neuron Culture’s WIRED venue and remains the most popular post I ever ran.

 

Kill Whitey. It’s the Right Thing to Do.

Originally posted 15 September, 2010

 

 

[Sept 10, 2010] A couple years ago, David Pizarro, a young research psychologist at Cornell, brewed up a devious variation on the classic trolley problem. The trolley problem is that staple of moral psychology studies and dinner parties in which you ask someone to decide under what conditions it’s morally permissible to kill one person to save others. Here, via Wikipedia, is its most basic template:

A trolley is running out of control down a track. In its path are 5 people who have been tied to the track by a mad philosopher. Fortunately, you can flip a switch, which will lead the trolley down a different track to safety. Unfortunately, there is a single person tied to that track. Should you flip the switch?

This has generated scores of studies that pose all kinds of variations. (You can take a version of the test yourself at Should You Kill the Fat Man?) Perhaps the richest has been the footbridge problem. The footbridge scenario puts the subject in a more active hypothetical role: You’re on a footbridge over the trolley track, and next to you, leaning perilously over the rail to see what happens, stands a very large man — a man large enough, in fact, to stop the train. Is it moral to push the guy over the rail to stop the train?

Researchers generally use these scenarios to see whether people hold a) an absolutist or so-called “deontological” moral code or b) a utilitarian or “consequentialist” moral code. In an absolutist code, an act’s morality virtually never depends on context or secondary consequences. A utilitarian code allows that an act’s morality can depend on context and secondary consequences, such as whether taking one life can save two or three or a thousand.

In most studies, people start out insisting they have absolute codes. But when researchers tweak the settings, many people decide morality is relative after all: Propose, for instance, that the fat man is known to be dying, or was contemplating jumping off the bridge anyway — and the passengers are all children — and for some people, that makes it different. Or the guy is a murderer and the passengers nuns. In other scenarios the man might be slipping, and will fall and die if you don’t grab him: Do you save him … even if it means all those kids will die? By tweaking these settings, researchers can squeeze an absolutist pretty hard, but they usually find a mix of absolutists and consequentialists.

As a grad student, Pizarro liked trolleyology. Yet it struck him that these studies, in their targeting of an absolutist versus consequentialist spectrum, seemed to assume that most people would hold firm to their particular spots on that spectrum — that individuals generally held a roughly consistent moral compass. The compass needle might wobble, but it would generally point in the same direction.

Pizarro wasn’t so sure. He suspected we might be more fickle. That perhaps we act first and scramble for morality afterward, or something along those lines, and that we choose our rule set according to how well it fits our desires.

To test this, he and some colleagues devised some mischievous variations on the footbridge problem. They detail these in a recent paper (pdf download; web), and Pizarro described them more accessibly at the Edge conference on morality. (The talk is on video, or you can download the audio.)

As Pizarro describes, the variations are all of a piece: All explore how the political and racial prejudices — and guilt — of both liberals and conservatives might affect where they stand on the absolutist-consequentialist spectrum.

Perhaps most revealing is what Pizarro calls the “Kill Whitey” study. This was a footbridge problem — two variations on a footbridge problem in one, actually — that the team presented to 238 California undergrads. The undergrads were of mixed race, ethnicity and political leanings. Before they faced the problem, 87 percent of them said they did not consider race or nationality a relevant factor in moral decisions. Here is the paper’s (pdf) description of the problem they faced:

Participants received one of two scenarios involving an individual who has to decide whether or not to throw a large man in the path of a trolley (described as large enough that he would stop the progress of the trolley) in order to prevent the trolley from killing 100 innocent individuals trapped in a bus.

Half of the participants received a version of the scenario where the agent could choose to sacrifice an individual named “Tyrone Payton” to save 100 members of the New York Philharmonic, and the other half received a version where the agent could choose to sacrifice “Chip Ellsworth III” to save 100 members of the Harlem Jazz Orchestra. In both scenarios the individual decides to throw the person onto the trolley tracks.

Tyrone and Chip. Just in case you’re missing what Pizarro is up to:

While we did not provide specific information about the race of the individuals in the scenario, we reasoned that Chip and Tyrone were stereotypically associated with White American and Black American individuals respectively, and that the New York Philharmonic would be assumed to be majority White, and the Harlem Jazz Orchestra would be assumed to be majority Black.

So the guy on the bridge kills either Tyrone to save the New York Philharmonic or Chip to save the Harlem Jazz Orchestra. How, Pizarro asked the students, did they feel about that? Was sacrificing Chip/Tyrone to save the Jazz Orchestra/Philharmonic justified? Was it moral? Was it sometimes necessary to allow the death of one innocent to save others? Should we ever violate core principles, regardless of outcome? Is it sometimes “necessary” to allow the death of a few to promote a greater good?

Turned out the racial identities did indeed color people’s judgments — but they colored them differently depending on the subjects’ political bent. Pizarro, who describes himself as a person who “would probably be graded a liberal on tests,” roughly expected that liberals would be more consistent. Yet liberals proved just as prejudiced here as conservatives were, but in reverse: While self-described conservatives more readily accepted the sacrifice of Tyrone than they did killing Chip, the liberals were easier about seeing Chip sacrificed than Tyrone.

But this was just college students. Perhaps they were morally mushier than most people. So the team went further afield. As Pizarro describes in the talk:

We wanted to find a sample of more sort of, you know, real people. So we went in Orange County out to a mall and we got people who are actually Republicans and actually Democrats, not wishy-washy college students. The effect just got stronger. (This time it was using a “lifeboat” dilemma where one person has to be thrown off the edge of a lifeboat in order to save everybody, again using the names “Tyrone Payton” or “Chip Ellsworth III”.) We replicated the finding, but this time it was even stronger.

If you’re wondering whether this is just because conservatives are racist—well, it may well be that conservatives are more racist. But it appears in these studies that the effect is driven [primarily] by liberals saying that they’re more likely to agree with pushing the white man and [more likely to] disagree with pushing the black man.

So we used to refer to this as the “kill whitey” study.

They offered some other scenarios too, about collateral damage in military situations, for instance, and found similar differences: Conservatives accepted collateral damage more easily if the dead were Iraqis than if they were Americans, while liberals accepted civilian deaths more readily if the dead were Americans rather than Iraqis.

What did this say about people’s morals? Not that they don’t have any. It suggests that they had more than one set of morals, one more consequentialist than another, and chose between them to fit the situation. Again, from the talk:

It’s not that people have a natural bias toward deontology or a natural bias toward consequentialism. What appears to be happening here is that there’s a motivated endorsement of one or the other whenever it’s convenient.

Or as Pizarro told me on the phone, “The idea is not that people are or are not utilitarian; it’s that they will cite being utilitarian when it behooves them. People aren’t using these principles and then applying them. They arrive at a judgment and seek a principle.”

So we’ll tell a child on one day, as Pizarro’s parents told him, that ends should never justify means, then explain the next day that while it was horrible to bomb Hiroshima, it was morally acceptable because it shortened the war. We act — and then cite whichever moral system fits best, the relative or the absolute.

Pizarro says this isn’t necessarily bad. It’s just different. It means we draw not so much on consistent moral principles as on a moral toolbox. And if these studies show we’re not entirely consistent, they also show we’re at least determined — really determined, perhaps, given the gyrations we go through to try to justify our actions — to behave morally. We may choose from a toolbox — but the tools are clean. As Pizarro puts it at the end of his talk,

I am still an optimist about rationality, and I cling to the one finding that I talked about, which is that when you point out people’s inconsistencies, they really are embarrassed.

___

Image: Flickr/Heath Brandon

Over the next week I’ll be leaving WIRED’s Science Blogs, moving Neuron Culture to a self-hosted location at http://neuronculture.com — a domain name that on June 7, 2013, will switch from pointing to WIRED to pointing to the blog’s new, self-hosted home. Please join me there. And you can always follow me at The Twitter as well.

To celebrate and mark the end of Neuron Culture’s 2.75-year run at WIRED, I’m posting a “Best of Neuron Culture” over its final 10 days, spotlighting each day a post from the past that I feel embodies the best of Neuron Culture’s WIRED tenure. (Neuron Culture was previously at Seed’s ScienceBlogs as well as at my own site on TypePad.) These posts, among the stronger and more popular ones I’ve done here, also characterize the sorts of possibilities that a hosted blog has offered in this period’s strange transitional time of writing, publishing, and journalism.

The Depression Map: Genes, Culture, Environment, and a Side of Pathogens

This post, originally published 14 September, 2010, examines how genes and culture can apparently shape one another’s development and expression — a topic much in my mind as I write my book The Orchid and the Dandelion, about how genes, environment, and culture shape temperament, behavior, and destiny.

 

The Depression Map: Genes, Culture, Environment, and a Side of Pathogens

Maps can tell surprising stories. About a year ago, Northwestern University psychologist Joan Chiao pondered a set of global maps that confounded conventional notions of what depression is, why we get it, and how genes — the so-called “depression gene” in particular — interact with environment and culture.

Chiao had run across data suggesting that many East Asians seemed to carry the “depression gene” — shorter variants, that is, of a mood-regulating gene known as the serotonin transporter gene, or SERT — at unusually high rates. Yet though dozens of studies over the prior 15 years had shown these short SERT genes made people more prone to react to trouble by becoming depressed or anxious,* it was not Chiao’s impression that this association held for most Asians. Then again, no one had gathered the data.

So she gathered it. Chiao and one of her grad students, Katherine Blizinsky, found all the papers they could that studied serotonin or depression in East Asian populations. These papers, along with similar studies in other countries and some World Health Organization data on mental health, painted a pretty good picture of short-SERT variant and depression rates not just in North America and Europe, but in East Asia. A pretty good picture — but seemingly twisted in the middle. The eastern half was upside down. For while East Asians carried the short-SERT “depression gene” variants at almost twice the rate (70-80%) that white westerners did (40-45%), they suffered less than half the rates of anxiety and depression.
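For readers who want the arithmetic behind those carrier rates: because each of us inherits one SERT allele from each parent, the fraction of a population carrying at least one short allele follows from the short-allele frequency under standard Hardy-Weinberg assumptions. Here is a minimal sketch; the allele frequencies are hypothetical values chosen only to land in the quoted ranges, not figures from Chiao and Blizinsky.

```python
def carrier_fraction(p):
    """Fraction of people carrying at least one short (S) allele,
    given short-allele frequency p, under Hardy-Weinberg assumptions:
    carriers = S/S + S/L = 1 - L/L = 1 - (1 - p)**2."""
    return 1 - (1 - p) ** 2

# Hypothetical allele frequencies chosen to reproduce the quoted
# carrier ranges (roughly 40-45% in the white West, 70-80% in East Asia):
print(f"West (p = 0.25): {carrier_fraction(0.25):.1%}")       # ~44%
print(f"East Asia (p = 0.55): {carrier_fraction(0.55):.1%}")  # ~80%
```

Note that a modest difference in allele frequency (0.25 versus 0.55) is enough to produce the near-doubling of carrier rates the maps show.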

You can see it in the maps. Below, the first map shows prevalence of the short-SERT “depression gene,” and the second shows prevalence of depression. Their colors should line up, but instead they conflict.

Fig 1. Known prevalence of S-S and S-L serotonin transporter gene variants worldwide.
Yellow denotes low rates, orange middling rates (around 40-50%), and red high rates (around 80%). From Chiao and Blizinsky 2009.


Fig 2. Percentages of populace diagnosed with mood disorders at some time in lifetime. Again, yellow is low, in the single digits, while red is high, around 20%, and gray areas lack sufficient data. If the prevalence of the “depression gene” predicted the prevalence of depression, then this map should look much like the one above it. But — especially if you look at North America and Asia, which are the areas of interest here — it doesn’t. It looks almost ass-backwards. From Chiao and Blizinsky 2009.

You can chart the data in other ways too, and it still looks weird. A well-established gene variant that is supposed to predict depression seems to predict just the opposite in East Asia.

Squaring two maps with a third

Why did fewer East Asians get depressed even though more of them carried the depression risk gene? It wasn’t as if life in East Asia was stress free. The gene seemed to generate vulnerability in one culture and resilience in another.

As Chiao recognized, several possibilities offered themselves. Might depression be underdiagnosed in East Asians and overdiagnosed in westerners? It might — but probably not enough to account for a complete reversal of the risk dynamic. Perhaps most East Asians carried some other gene that canceled the SERT gene’s depression risk? Again, could be, but it seemed an awfully strong effect.

To Chiao, these sorts of explanations couldn’t reconcile the two maps. The maps did start to make sense, however, when Chiao considered them in light of gene-culture evolutionary theory (aka dual inheritance theory). This is the notion that genes and culture influence each other, and that culture can shape the way genes express themselves and even how they evolve. To Chiao, the mismatch between the SERT map and the depression map smelled of gene-culture effects. The gene in question was obviously SERT. So what was the cultural suspect? What cultural difference between western whites and East Asians might affect both the prevalence and apparent effect of the so-called “depression gene”?

And what jumped out of that question, both to Chiao and Blizinsky and to Baldwin Way and Matthew Lieberman, a pair of UCLA researchers who happened to be asking the same questions in California, was the difference between individualism and collectivism.

This individualism-collectivism distinction comes not from Mao, but from a Dutch organizational sociologist named Geert Hofstede. Back in the 1970s, Hofstede did a massive study for IBM of several hundred thousand of the company’s workers in 70 countries. Hofstede found several cultural factors that shaped business practices differently in IBM offices around the globe, the most famous of which became the spectrum between individualistic cultures, which emphasize a person’s independence, and collectivist cultures, which emphasize a person’s interpersonal, social, and civic connections. The study wielded enormous influence and made the collectivism-individualism spectrum a staple of certain strains of sociological studies. (For other echoes, see here.) And as another map from Chiao’s paper shows, the white West generally leans toward individualism while the East leans toward collectivism.

Fig 3. Collectivism in world cultures. Yellow is low in collectivism, red is high. From Chiao and Blizinsky 2009.

So how does individualism-v-collectivism relate to depression and depression genes? Here Chiao and Blizinsky, as well as Way and Lieberman (these connections were apparently ripe), turned to another emerging idea: That the short SERT gene seems to sensitize people not just to bad experience, but to all experience, good or bad. (I explored this “sensitivity gene” or “differential susceptibility” hypothesis at length in an Atlantic article last December and am now working on a book about it.) Both Chiao & Blizinsky and Way & Lieberman published papers within the last year laying all this out: Chiao and Blizinsky last December (abstract; pdf), Way and Lieberman this June (abstract; pdf download; Replicated Typo has a good write-up here). And both pairs assert that these short SERT variants make people sensitive to social experience in particular.

Way and Lieberman, for instance, note several studies in which the short, or S/S variant, seems to magnify both the negative and positive effect of social support.

In a study of depressive symptomatology, when short/short individuals had experienced more positive than negative events over the last 6 months, they had the lowest levels of depressive symptomatology in the sample (Taylor et al., 2006), indicating that short/short individuals are more sensitive to positive life events as well as negative ones. Subsequent research has shown that this relationship between life events and affect for individuals with the short/short genotype was primarily driven by the social events, as the nonsocial events were not significantly related to affect (Way and Taylor, 2010). Other groups have found heightened sensitivity to positive social influences amongst short allele carriers as well, which has even been documented using neurochemical measures (Manuck et al., 2004). Thus, these results suggest that the 5-HTTLPR moderates sensitivity to social influence regardless of its valence [that is, whether the experience is positive or negative].

Because short/short individuals are more sensitive to the social realm, social support appears to be more important for maintaining their well-being. In support of this claim, short/short individuals exposed to a natural disaster (a hurricane) were at no higher risk for depression than long/long individuals provided they perceived that they had good social support (Kilpatrick et al., 2007). However, if short/short individuals exposed to this disaster perceived that they did not have good social support they had a 4.5 times greater risk for depression. Similarly, a randomized control trial designed to improve nurturant and involved parenting reduced adolescent risky behavior, but only amongst those with the short allele (Brody et al., 2009b). A similar differential sensitivity was seen among adolescents in foster care. If the short/short individuals had a reliable mentor present in their life they were at no higher risk for depression than adolescents with the other genotypes. However, if they did not have such support they were at a high risk for depression (Kaufman et al., 2004). Thus being embedded in a richly interconnected social network, as is present in collectivistic cultures, might be particularly important for maintaining the well-being of short/short individuals.

This starts to explain the purported interplay of the S/S allele and a collectivist culture: If short-SERT people get more out of social support, a more supportive culture could buffer them against depression, easing any selective pressure against the gene. Meanwhile the gene’s growing prevalence would make the culture increasingly supportive, since those who carry it might be more empathetic. Studies have shown, for instance, that short-SERT people more readily recognize and react to others’ emotional states. In one still-unpublished study — a favorite of mine — marriage partners with S/S SERT alleles more accurately read and predicted their spouses’ emotional states than did people (sometimes those same partners) with L/L variants. This could make for some interesting dynamics at the breakfast table over the years.

A conversation between genes and culture

One major piece of the puzzle remains: How did the short SERT variant, which has generally been painted as bad news, become so prevalent in East Asia in the first place? Good question. The short-SERT variant appeared in humans only in the last 100,000 years. It was during this same period that humans moved out of Africa and spread around the globe. And it was during this time that this S/S variant thrived in particular in people who moved east and took up residence in East Asia. Why did it blossom so spectacularly? And what came first, the high S/S rates or the collectivist culture?

Here the gene-culture dynamic must walk on tiptoes, as the sketchy evidence forces caution. Yet it can offer some speculative hypotheses. Drawing on work by Corey Fincher and Randy Thornhill, for instance, Chiao speculates that both a collectivist culture and the socially sensitive S/S allele gained ground when high pathogen loads along human migration routes from Africa to East Asia rewarded socially sensitive, collectivist behaviors that defended against pathogens. (The high pathogen loads in turn rose from the warm, moist climates and abundant bird and mammal life in those regions.) The heightened danger of infection, that is, may have selected for a more group-oriented mindset, such as more attention to group rules regarding sanitation, food preparation, and whatever elemental medical care (such as stopping to rest) might have helped people avoid or survive infection. The adjustment would have been partially cultural: Those who followed these practices would suffer less infection. But (the argument goes) the adjustment would also have been genetic, as selection favored an S/S SERT variant that made carriers more likely to observe the rules.

I’m not quite sure what to think of this idea. A paper exploring the link between high pathogen loads and lower IQ recently came under fire, and this may too; yet Chiao cites a strong correlation. Meantime, Replicated Typo offers an alternative but compatible mechanism for this gene-culture evolution, based more directly on migration routes. In any case, as Chiao notes, pathogen loads offer just one among several possible environmental or cultural factors, not mutually exclusive, that might have selected for collectivist behavior and socially sensitive genotypes, creating a feedback loop increasingly friendly to a particular behavior, gene, and culture.

The new math

This is a lot to wrap your mind around. If you consider yourself of hard-nosed empirical bent, you might, after you take a deep breath or a long walk, cast about for a “western-type” study that runs along more classic gene-environment lines. If you did, you would soon reel in a 2004 study of badly abused children. This study, by Joan Kaufman and others at the Yale genetic psychiatry lab of Joel Gelernter, looked at 57 school-age children who were so badly abused they were moved to foster homes.

First the researchers crossed the kids’ depression histories with their SERT genotypes. They found the expected: maltreated kids with the short SERT gene — the double whammy — suffered mood disorders at almost twice the rate of maltreated kids who had the L-S or L-L variants or, for that matter, of short-SERT kids with no maltreatment.

So far, so predictable. Then Kaufman laid both the depression scores and the SERT types across the kids’ level of “social support.” She defined social support quite narrowly: contact at least monthly with a trusted adult/mentor figure outside the home. This modest, closely defined social support, however, eliminated about 80% of the combined risk of the risk gene and the maltreatment. It virtually inoculated kids against extreme maltreatment and a proven genetic vulnerability.
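To see what “eliminating about 80% of the combined risk” means arithmetically, here is a back-of-envelope sketch with made-up rates (the study’s actual figures differ): social support is modeled as removing a fixed fraction of the excess risk above baseline.

```python
def supported_risk(baseline, combined, support_efficacy=0.8):
    """Risk after social support removes a fraction (support_efficacy)
    of the excess risk above baseline. All numbers are illustrative,
    not the Kaufman study's actual rates."""
    excess = combined - baseline
    return baseline + (1 - support_efficacy) * excess

# Hypothetical rates: 10% baseline risk, 40% with S/S genotype plus
# maltreatment. Support that removes 80% of the excess brings the
# combined-risk group most of the way back toward baseline.
print(round(supported_risk(0.10, 0.40), 3))
```

The striking part of the finding is how modest the intervention was: monthly contact with one trusted adult was enough to act like a high support_efficacy value in this toy arithmetic.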

It makes you wonder: What’s the real toxin in situations like this? We tend to view bad experience — abuse, violence, extreme stress, family strife — as toxic, and risk genes as semi-immunological weaknesses that let the toxin take hold. And maltreatment is clearly toxic. Yet if social support can almost completely block the effects of a severe toxin in a vulnerable individual, isn’t a lack of social support almost as toxic as the severe maltreatment? Even this clever study’s design and language frame “social support” as a protective add-on. But this framing implies that humanity’s default state is isolation. It’s not. Our default state is connection. To be unconnected — to feel alone — is to endure a trial almost as noxious as regular beatings and sharp neglect.

The University of Chicago psychologist John Cacioppo and William Patrick explore this beautifully in their book Loneliness. And Michael Lewis’s hysterically funny article about the Greek credit crisis, published just a few days ago, suggests that a hyper-individualistic default state doesn’t serve the world economy too well, either. Lewis describes how the Greek credit crisis, which currently threatens to spread to the European and perhaps the global economy, arose partly because a break in the social contract created an every-man-for-himself ethic in Greece, since everyone assumes everyone else cheats and that no one pays taxes. He signs off with this:

Will Greece default? There’s a school of thought that says they have no choice…. On the face of it, defaulting on their debts and walking away would seem a mad act: all Greek banks would instantly go bankrupt, the country would have no ability to pay for the many necessities it imports (oil, for instance), and the country would be punished for many years in the form of much higher interest rates, if and when it was allowed to borrow again. But the place does not behave as a collective … It behaves as a collection of atomized particles, each of which has grown accustomed to pursuing its own interest at the expense of the common good. There’s no question that the government is resolved to at least try to re-create Greek civic life. The only question is: Can such a thing, once lost, ever be re-created?

If Greece doesn’t do some fast gene-culture evolution toward collectivism, the whole world may get depressed.

As gene-culture theory gets hold of the kind of data that allows for papers like Chiao’s, I suspect we’ll see a growing stream of studies showing that genes have different effects in different cultures. A few weeks back, for instance, Ed Yong wrote up a fascinating paper by Heejung Kim and colleagues demonstrating that a particular variant of an oxytocin receptor made Americans, but not Koreans, more likely to seek emotional social support in times of distress. As Yong noted, these studies all but insist that we may need to expand our definition of environment when we consider gene-environment interactions.

Many studies have looked at how nature and nurture work together, but in most cases, the "nurture" bit involves something social that's either harsh or kind, such as loving or abusive parenting. Kim's study stands out because it looks at cultural conventions instead, and Ebstein says that it "provides an interesting new avenue for researching gene-environment interactions."

In a sense, these studies are looking not at gene-x-environment interactions, or GxE, but at genes x (immediate) environment x culture — GxExC. The third variable can make all the difference. Gene-by-environment studies over the last 20 years have contributed enormously to our understanding of mood and behavior. Without them we would not have studies like these, led by Chiao, Way, and Kim, that suggest broader and deeper dimensions to what makes us struggle, thrive, or just act differently in different situations. GxE is clearly important. But when we leave out variations in culture, we risk profoundly misunderstanding how these genes — and the people who carry them — actually operate in the big wide world.
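To see why the third variable matters, here's a purely illustrative sketch — every number and name in it is invented, not drawn from any of the studies above — of how the same "sensitive" allele can push distress in opposite directions depending on cultural context, a sign flip that a GxE model pooled across cultures would simply average away:

```python
# Toy GxExC illustration. All effect sizes are invented for demonstration;
# they stand in loosely for findings like Kim's (same oxytocin-receptor
# variant, different effect in U.S. vs. Korean samples).

def distress(genotype, stress, culture):
    # genotype: 1 = carries the hypothetical "sensitive" allele, 0 = not
    # stress:   immediate environment, from 0 (calm) to 1 (high stress)
    # culture:  "A" or "B", two cultural contexts
    base = stress
    if genotype:
        # In culture A the allele amplifies stress; in culture B,
        # cultural conventions buffer it. This is the GxExC interaction:
        # the gene's effect reverses sign across cultures.
        base += 0.5 * stress if culture == "A" else -0.3 * stress
    return base

# Same genotype, same immediate stressor, opposite outcomes:
carrier_in_A = distress(1, 1.0, "A")   # allele worsens distress here
carrier_in_B = distress(1, 1.0, "B")   # same allele dampens it here
noncarrier   = distress(0, 1.0, "A")   # baseline, either culture
```

A study that pooled cultures A and B would see the carrier effects (+0.5 and −0.3) partly cancel and might conclude the allele does little — which is exactly the misunderstanding the GxExC framing guards against.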

_________

*This so-called "depression gene" is, to most researchers, either of two "short"-carrying versions of the serotonin transporter gene, SLC6A4, known to some as SERT. SERT appears to regulate levels of the neurotransmitter serotonin and to be crucial to mood, among other things. Because we effectively get one half of this gene from each parent — either a "long" or a "short" — each of us carries a version that is either long-long (L/L), long-short (L/S), or short-short (S/S). As many have pointed out, depression is far more complicated than one gene, which is probably why even the "depression gene" is so clearly probabilistic rather than predictive. But as we'll see shortly, it's more complicated still.

Others may recall that this depression-risk view of the gene was aggressively challenged by Risch et al. That challenge got a lot of attention. Less attention went to the several strong rebuttals and critiques noting that Risch's meta-analysis a) was rather selective in its choice of studies; b) weighted the studies in odd ways, so that the small ones least likely to detect a genetic association counted as heavily as the large ones more likely to detect such associations; and c) ignored altogether a huge body of physiological work detailing mechanisms through which SERT variants could affect one's sensitivity to environment. As I noted in an earlier post, that leaves the SERT-depression link's main framework intact.

___

**Why leave Wired? I'm folding the blog tent here so I can focus more steadily for a time on finishing my book, tentatively titled The Orchid and the Dandelion, which I've often mentioned here. I know some people manage it, but I've found it hard to reconcile the demands of blogging at a venue like Wired with those of writing a serious book that requires deep immersion: a matter not just of the time needed for each venture, but of the mindset and what you might call the focal length of one's mental lens. A venue like this requires, methinks, either an unrelenting focus on a particular beat or a fairly steady tour through many fields; I can't seem to mesh either with the sort of time and focus a book needs. The move also frees me up to experiment a bit more. I hope to see what sort of Tumblr-like approach I can take at Neuron Culture once it's in a self-hosted venue.

But it has been a fun run here at WIRED. I want to thank WIRED.com, and especially Betsy Mason, Evan Hansen, Brandon Keim, Dave Mosher, Adam Rogers, and the rest of the WIRED team, present and past, for giving me a productive blogging platform here since September 2010; my fellow bloggers for their support, good cheer, and many fabulous posts; and most of all, my readers, who I hope will come along and follow me at my new home, which starting June 7, 2013, you can find at http://neuronculture.com — a domain name that currently points here to WIRED, but which will then point to the blog's new, self-hosted home elsewhere. And you can always follow me at The Twitter, as well.

A Case That Tells the Weird Tale of DSM – and Other Recommended Reading

For a single post that shows how weirdly and unevenly psychiatric diagnosis actually works (and fails to work) in this country, and what that means for the new DSM, get over to Maia Szalavitz's clear-eyed account of her own five diagnoses (and the one she never got):

Over the course of my life, I have been given no fewer than five different diagnoses for mental illnesses, under the diagnostic system laid out in psychiatry's "bible," the DSM. But it was a sixth diagnosis — one that ironically will no longer appear in the edition being rolled out this week, DSM-5 — that probably most accurately describes what is genuinely different about me. I'm sharing this because my experience is a case study for explaining why the latest revision to the manual is raising such ire.

That’s at  Viewpoint: My Case Shows What’s Right — and Wrong — With Psychiatric Diagnoses | TIME.com. A brave piece of writing, a splendid read.

Other essential reading on the DSM comes from Gary Greenberg, who's been blogging about it at Elements, the New Yorker's science channel. The D.S.M. and the Nature of Disease is his primer on the current flap over the new DSM. Does Psychiatry Need Science? looks at the DSM-5's failure to include what Greenberg argues is a better-documented disorder than many in the manual: melancholia, a less acute but perfectly serious form of depression passed over in favor of the official "Major Depressive Disorder." And he brings some needed historical perspective in The Creation of Disease and in The Rats of N.I.M.H., which looks at the National Institute of Mental Health's long-running dissatisfaction with the DSM's categories.

Greenberg’s other essential reading is his “The Book of Woe,” a romp of an account of the making — rather sausagelike — of the DSM-5, which is to come out next week amid doubtless even more noise. It might or might not be, as many are saying, that the DSM, however flawed, is still the best thing we have to diagnose mental illness. But this book will show you why it’s not only not getting better, but how the APA steadfastly failed to fix even its most obvious problems.

My review of Greenberg’s book at Nature (paywalled; I’ll re-pub the whole thing here in a couple weeks) mostly shares his grim view:

For more than 100 years, psychiatry has been getting by on pseudo-scientific explanations and confident nods while it waited for the day, always just around the corner, in which it could be a strictly biological undertaking. Part of the DSM-5's long delay occurred because, a decade ago, APA leaders actually thought that advances in neuroscience would allow them to write a brain-based DSM. Yet, as former APA front-liner Michael First, a psychiatrist at Columbia University in New York, confirms on Greenberg's last page, the discipline remains in its infancy.

Greenberg shows us vividly that psychiatry’s biggest problem may be a stubborn reluctance to admit its immaturity. And we all know how things go when you won’t admit your problems.

Also sharply critical, though slightly less so than Greenberg, is Allen Frances, who edited the previous edition of the DSM (DSM-IV). His book Saving Normal is also just out, and he’s posting a steady stream of sharp-tongued criticism of the new DSM at Huffington Post, among other places.

I’m dying to get Frances and Greenberg in the same room.

But start and end with Maia Szalavitz's account of her own history; it grounds the discussion in a way that's quite needed.

Photo: David Dobbs

How Churchill and Lincoln Can Help You Whup Depression

Winston Churchill in 1904, age 29.

Neuroskeptic, one of the most insightful neuro-psycho-bloggers out there today, has a nice post at Discover on a Mark Brown article about whether it helps, if you’re fighting depression, to hear of famous role models who did so too. In general, Neuroskeptic shares the skepticism Brown feels about this. Where Brown asserts that

where the inspirational figure is selected for us, and the gap between their life and ours is too great, the effect is not one of encouragement but of disillusionment – especially if their story is told in terms of personal qualities like bravery or persistence.

Neuroskeptic agrees:

 “He’s got it, and so do you, so you can be like him” is perilously close to “He’s got it, and so do you, so you should be like him – what’s your excuse?”

But then Neuroskeptic goes on to say that in Churchill’s case, it helps, for Churchill was so famously tough, which counters the notion that depression means you’re weak.

With all due respect, I think Neuroskeptic is trying here to throw away the cake and eat it too.  I disagree with Brown, and with Neuroskeptic, even while agreeing with him about Churchill. I think Churchill does help, but that other examples help too — including Stephen Fry, whose example Brown and Neuroskeptic essentially dismiss, and, most notably, Abe Lincoln.

In all three cases, the sensitivity that opens the person to depression becomes a strength that lets them overcome not just it, but other obstacles. Fry taps into that sensitivity to draw on the deep empathy that helps make a great performer and comic. Churchill draws on his experience facing down depression to face down Hitler. And Lincoln, as described beautifully in Josh Shenk's Lincoln's Melancholy, found in his deep, long struggle with depression a depth of moral insight, character, resolve, and empathy that let him confront both his own and his country's deepest moral and political crises.

These are all special people, whose strengths may seem out of reach. Yet what’s reachable and teachable is that they developed these strengths not in spite of depression, but by drawing on the same sensitivity that produced the depression. They offer models not of an inaccessible state, but of an accessible process — a particular, constructive intimacy with one’s psychic sensitivity and hard experience.

See Churchill and the Stigma of Depression : Neuroskeptic

Temple Grandin Rides the Runaway Brain Train

Temple Grandin has a new book out, and over at the Times, I’ve a review of it — rather mixed, I’m afraid:

For a quarter century, Dr. Grandin — the brainy, straight-speaking, cowboy-shirt-wearing animal scientist and slaughterhouse designer who at 62 is perhaps the world’s most famous autistic person — has been helping people break through the barriers separating autistic from nonautistic…. Dr. Grandin has helped us understand autism not just as a phenomenon, but as a different but coherent mode of existence that otherwise confounds us. In her own books and public appearances, she excels at finding concrete examples that reveal the perceptual and social limitations of autistic and “neurotypical” people alike.

As I note, when these strengths

burst upon the scene in her 1995 book “Thinking in Pictures,” they amazed people, as they continue to do in many of her YouTube and TED talks (not to mention the 2010 biopic “Temple Grandin,” in which she was played by Claire Danes). Alas, in “The Autistic Brain,” her fourth book, she largely abandons these strengths, setting out instead to examine autism via its roots in the brain. It does not lead to rich ground.

The problem? In this book, which I suspect was conceived at the height of the brain-book bubble, before various disasters slowed it down, Grandin moves to explain autism via the brain. The results, I fear, make a sparse repast.

Get the rest at  ‘The Autistic Brain’ Review — Temple Grandin Traces Roots of a Disorder – NYTimes.com.

Fantastic Film ‘Chasing Ice’ to Run on National Geographic TV

A few months ago I posted here about "Chasing Ice," a film following photographer James Balog's quest to document the shrinkage of ice fields and glaciers as climate change melts them away. I was quite moved, when I saw it in a theater in my current hometown, not just by Balog's efforts but by the electrifying beauty of much of the imagery, including the biggest ice calving ever caught on film — a truly monumental event, as a piece of glacier the size of lower Manhattan peels away over several minutes. Truly jaw-dropping.

The film got modest distribution in theaters, but you can catch it this Friday, April 26, on National Geographic TV, at either 4 or 5:30 pm EDT. You won’t regret the time.

Here’s the heart of my earlier post:

“Chasing Ice” documents both the earth’s current warming and one man’s obsessive efforts to show that warming in terms everyone can understand: visual, immediate, dramatic. National Geographic photographer James Balog says he was a bit of a climate skeptic himself until he took an assignment in 2005 and 2006 photographing the retreat of a single glacier in Iceland for a National Geographic story. Seeing the glacier’s retreat with his own eyes (and in his photographs) convinced him. He figured that if he could show the same thing on many glaciers around the globe, he could convince other skeptics that climate change was real and serious. So he organized the Extreme Ice Survey to document global warming with time-lapse photographs of retreating glaciers. The film shows this effort — and some of the truly stunning images they captured, both in stills and in live video. The film’s most renowned segment left me truly drop-jawed.

Some see this as an antidote to a sort of cognitive resistance that discourages us from acknowledging changes or risks that can’t be directly perceived or that seem distant in time. The role of such thinking in climate-change skepticism was called into question in May 2012 by an interesting paper out of Yale. That paper found that neither scientific literacy nor supposedly rational modes of thought made people more likely to acknowledge climate change. Rather, in a manner that brings to mind the Kill Whitey studies of morality, people tend to take the view most harmonious with whatever peer groups or political cultures they identify with. We subscribe to a view that we’re comfortable with socially, culturally, and politically, then backfill the reasoning.

So it’s possible this film may leave your climate-skeptic friends cool on the whole global-warming thing. Then again, it may “work,” for the film makes a particularly strong case with its combination of ingenious graphics; a story of a very nice guy pursuing an idealistic obsession, with lots of sexy choppers, crampons, and cameras; and some of the most stunning and beautiful earth footage I’ve ever seen. This is one of the few movies where I was moved to the verge of tears by the imagery’s sheer beauty.

The official trailer:

It runs this Friday, April 26, on National Geographic TV, at either 4 or 5:30 pm EDT.

Full disclosure: I sometimes write features for National Geographic Magazine, but have no connections with National Geographic TV. I just like this film.

How to Place Rose Petals on Your Lover’s Skin (& Write About Science)

Scratches on a steel rail in a skateboard park. Photo: David Dobbs

The Guardian and the Wellcome Trust have been holding a science-writing prize contest lately, and to accompany it The Guardian has been running a superb series of interviews with science writers about how to write about science. Geoff Brumfiel, Helen Pearson, Roger Highfield, Linda Geddes, Jo Marchant, and many more have all weighed in — a trainload of great advice, wisdom, and fun writing. This past week it was my turn, and with the Guardian's blessing I'm happy to reproduce the post below. Do get over to the Guardian and enjoy the rest as well.

What’s a good science story?

A good science story is like any other good story: it has tension and movement; it has conflicts the reader can relate to; it’s usually about someone who wants something badly and faces obstacles trying to get it. What does this palaeontologist want to figure out or prove, and why? What stands in the way of her doing so?

In terms of material, I look for three things in particular: an alluring scientific idea or discovery; a scientist who is a highly intriguing figure on his or her own or who can talk engagingly; and either a subject or an event in which we see the idea or process at work.

I want as many of those three things as I can get. If I’ve found a well-spoken, brilliant neurologist who wants to test her new theory about depression by doing experimental brain surgery on terribly depressed people, I’m two-thirds of the way there; if one of her patients is herself interesting and articulate and responded to the surgery in amazing fashion, I can’t miss.

What do you need to know to write well about science?

You need to know a lot. Then again, sometimes a few key skills – like interviewing and reading, persistence, and a good bullshit detector – will get you through nicely. Many excellent science writers, including Carl Zimmer and David Quammen, have no formal training in science.

Still, it’s useful to know certain things, and I think the most vital knowledge for a science writer is familiarity with at least one major scientific controversy that was fought and (mostly) resolved before the writer was born. Does the sun circle the Earth or vice versa? Are species mutable? How do coral reefs form?

The big fights over these questions (the last of which I wrote about in my third book, Reef Madness) show you two essentials it’s easy to miss when you’re reporting on science happening right now: that the science of any age is shaped by (a) the deep philosophical, cultural and social movements of its time and (b) the personalities, desires, ambitions and rivalries of the main players.

It’s hard to see those things in your own time. But once you’ve seen how profoundly they influenced virtually every scientific controversy of former times, including the way scientists thought and behaved, you’re more likely to see similar dynamics and motivations in the science you’re reporting on now.

How do you choose your opening line?

Occasionally one falls into your lap. When I was researching “Buried answers,” a feature about autopsies, I knew I had my opener when I was interviewing a pathologist and he started complaining, in a good-humored way, that it “took some convincing” to get his siblings to agree to let him cut up their mom to see how she died. Bingo.

More often, however, an opening sentence emerges slowly out of the rhythms and imperatives of the first paragraph, rather than the other way around. It’s nice to have a pithy opener along the lines of James M Cain’s “They threw me off the hay truck about noon.” But it’s a mistake to force it.

I find things tend to work if I write the best first paragraph I can and generate a first sentence that fits with the paragraph. That’s far better than a clever first line that doesn’t flow into what comes after.

It can get you something nice and simple, such as, “Few of us are as smart as we’d like to be”, to start an essay about the genetics of stupidity. Or it might get you something slightly more teasing, but hopefully not too cute, such as “Deanna Cole-Benjamin never figured to be a test case for a radical new brain surgery for depression.” Or a sentence about your son calling you from the police barracks. (That worked out okay.)

My fervid prayer is to someday match Tom McGuane’s opening sentence for a travel story about a week he spent at a luxurious fishing camp in the Rockies: “I tend to do a lot of fishing when I’m at home, so when I go away, I always try to do a lot of fishing.”

How do you get the best out of an interviewee?

The wonderful science-writer shop-talk site The Open Notebook recently ran a nice post on this. As I noted there, it helps to know well both the person you’re interviewing and the world they’re part of – whatever realm or discipline you’re exploring in your story. But whether I know all that or not, I like to approach the interview as if I don’t, and to get the interviewee to answer, in fresh language, two essential questions.

The first is whatever your story (and interview) is about: for instance, "Do animals have consciousness?"

The second question is some version of, “How did you get started on this puzzle?” That question might take a more specific form, such as “What led you to study how octopuses use coconut shells?” Either way, asking how a person got pulled into a quest can reveal not only new angles on the subject but much about the person as well.

It can help turn the story from one about an idea or a discovery or new theory to one about a person obsessed with a puzzle – always more fun. Make sure, when they start talking like a scientist, to ask them how they’d explain it to your brother the plumber.

How do you use metaphors and analogies in a story?

As a Romeo places petals on his sleeping lover’s skin: carefully, and with exquisite attention to the petals chosen, lest their weight disturb her. Though I try not to get fancy.

What do you leave out of your stories?

About 90%. I seek to gather a ludicrous amount of great material, along with an even more ludicrous amount of so-so material, and then throw out everything but the best. This includes tossing, as David Quammen advises, almost everything that’s important but dull. If you’re not leaving superb stuff on the cutting-room floor – material it deeply pains you to cut – then you haven’t gathered enough to make the story sing.

How do you stay objective and balanced as a writer? Should you?

I give it little thought. I don’t think of getting opposing views; I think, as Ivan Oransky advises, of getting outside views. I consider objectivity a fantasy. I consider balance an invitation to what Jay Rosen calls “The view from nowhere” – a stance of bogus neutrality that shirks your responsibility to imbue your writing, at least implicitly, with a point of view and informed opinion.

I can buy the idea of balance if, rather than giving equal time to opposing views, it means writing with enough fidelity to the facts that you don’t fall over trying to be fair to all views.

What’s the biggest potential pitfall when writing about science?

The biggest potential pitfall when writing about science is to communicate in a manner that is repetitive and unimaginative in one’s use of the vocabulary and rhetorical and syntactical strategies, for example by using passive construction, impersonal voice, excessive jargon, polysyllabic diction, and lengthy, complex, but monotonous sentence structures that too often dominate conventional science-communication presentations, thus generating ponderous prose that dissuades interest.

Don’t do that. Speak plainly. Play loose. Make things move. Quote people cursing. Hunt down jargon mercilessly, like a mercenary possessed, and kill it.

David Dobbs on science writing: hunt down jargon and kill it | Science | guardian.co.uk.

How A Four-Year-Old Kicked My Butt at Blickets

In the clip above, filmed in the lab of Alison Gopnik, a cognitive psychologist at the University of California, Berkeley, a particularly charming 4-year-old girl named Esther is playing a game that Gopnik made up. The game is called Blickets. The goal is to figure out which of the clay figures are blickets, as indicated by which clay figures light up the little box.

It’s much harder than you might think. I know, because I was sitting next to the camera that filmed Esther from behind one-way glass that day, and I too was trying to figure out which of the clay figures were blickets. And Esther just left me in the dust.

To find out how and why, read “Playing For All Kinds of Possibilities,” my story in today’s New York Times about how young children play — and what their play suggests about human evolution, cognition, and exploration. Here’s part of the opener:

When it comes to play, humans don’t  play around.

Other species play, but none play for as much of their lives as humans do, or as imaginatively, or with as much protection from the family circle. Human children are unique in using play to explore hypothetical situations rather than to rehearse actual challenges they’ll face later. Kittens may pretend to be cats fighting, but they will not pretend to be children; children, by contrast, will readily pretend to be cats or kittens — and then to be Hannah Montana, followed by Spiderman saving the day.

And in doing so, they develop some of humanity’s most consequential faculties. They learn the art, pleasure and power of hypothesis — of imagining new possibilities. And serious students of play believe that this helps make the species great.

More at the Times.

 

Aglitter in the Net – Readings from April 2013, week 3

Bookwork and some other issues have limited my blogging lately, and I thank my followers for patience on this front.

Yet if I can’t relay all I’m thinking and reading about this and that, I can try to relay more of what I’m reading. Here, in at last a temporary revival of my sporadic “Aglitter in the Net” gatherings, are some goodies, with an (attempted) emphasis on stuff you may not have come across:

When the Tsarnaevs led the Boston cops, the FBI, and generally half the SWAT teams of the eastern U.S. on a wild-goose chase last Thursday, the first carload of journalists chasing them happened to contain two journalists I know. One was the veteran and excellent science journalist Seth Mnookin, who wrote about the adventure in The New Yorker. The other was a journalism student just shy of graduation — young, hungry, a bit skeered, given the situation, but a kid who lives for this stuff, a kid who had already jumped at the chance to cover the bombing, who has transferred an earlier yen for fast driving into a yen for good journalism — and who happened to be my son, Taylor Dobbs. He wrote about the crazy night in Hunting the Manhunt, published by Medium, a new venue from former Wired.com EIC Evan Hansen. (Note to editors everywhere: Kid graduates in June. Currently up for grabs. He did this with a cell phone, no salary, and no press pass. Imagine what he could do with backing. Just an observation.)

A separate Storify recreates much of Seth & Taylor’s real-time effort to sift coherence from chaos.

Burkhard Bilger: A New Era in Mars Exploration : The New Yorker – Bilger has a knack for making almost any subject he writes about seem a perfect match for him. So it is here. Dig in.

You know how Boston was locked down as cops and journalists chased the bombers? One-night stands weren’t exempted. From Esquire.

How do you piss off Harrison Ford? Ask him about Star Wars.

The incredible Neuroskeptic wrestles with a study casting yet more caveats on the fMRI brain scans the press so loves.

It looks as if primates may shape forest structure in Africa.

Now and then, as people die, they see life with particular clarity and relay it with especial beauty. Lisa Adams does so here.

George Johnson writes about The Best Science Book Ever Written. It’s certainly one of the top ten.

Does real-time reporting on Twitter and such make for good news? Jury’s out. But Matthew Ingram is right in saying Twitter shows how the news is made, and it’s not pretty — but it’s better that we see it.

What’s it like having a stroke? Let Andy Revkin tell you. Don’t listen to it while you’re taking a run.

And to end a round-up from a sad week, Why do humans cry? A new reading of the old sob story.

Photo: The author