The Selfish Gene is a static meme, and that ain’t science

Richard Dawkins’s “The Selfish Gene,” book and meme, is now 40 years old. Has it served its purpose? And how do we talk about whether it has?

When I argued not long ago that his ‘selfish-gene’ model obscures richer emerging views of genetics and evolution, the responses ranged from enthusiastic agreement to objections both civil and savage. I naturally drew pleasure from the excited agreement, which came from both laypeople and scientists. I was truly heartened by the constructive criticism from scientists and others who took issue with the idea of retiring the selfish-gene meme. Their challenge expanded my thinking, helped me to improve the essay in a revised form, and, best of all, spurred a wide-ranging, open-minded discussion full of mutual inquiry, reconsideration, and great humour.

Alas, a more vitriolic line of objection also arose. I first ran into it in a tweet from the Harvard psychologist Steven Pinker, describing me as ‘another confused journalist who hates genetic evolution but doesn’t understand it’. I remain puzzled that Pinker concluded I hate genetic evolution, whose wonders and riddles I have written about for several years.

In another tweet Pinker asked:

Why do sci journalists think it’s profound that genes are switched on/off? Do they think that all cells produce all proteins all the time?

Which leads me to ask:

Why does Steven Pinker think it’s shallow when science writers tell readers about things that scientists know but others do not?

As a writer and teacher, surely Pinker is in the business of sharing knowledge and ideas. Why should I not do the same? Gene expression might be old hat to scientists. But the power of this most essential biological dynamic strikes many other curious and intelligent people as something new and, as the responses to my essay made clear, deeply exciting. In his blog, population geneticist Jerry Coyne also accused me of trying to sell old things as new. And Dawkins, after graciously acknowledging that I ‘made scarcely a single point’ that he would not have been glad to make himself, rather less graciously accused me of writing about well-established facts, ideas, and dynamics as a way of ‘manufacturing controversy’.

It soon became apparent that some people are willing to defend the selfish gene idea as if guarding a holy kingdom. The rhetoric was astounding. Coyne averred that ‘if [Dobbs] were an honest man’, I would apologise for my story, ‘but we know that won’t happen!’ His followers accused me of bringing ‘other agendas’; of tabloid-style sensationalism, intentional distortion, and intellectual dishonesty; of being a journalistic buffoon; of being cheap, shoddy, and crass; of writing in the pay of creationists. One commenter said that rather than question him, I should behold Richard Dawkins and cower.

I suppose I can see how people might write such stuff if they’ve spent too much time defending science from attacks from creationists or others hostile to empirical endeavour. But it’s an odd way to respond to ideas submitted in good faith.


My feelings here matter little. What does matter is the effect such attacks have on others looking on, and on open discussions about genetics and evolution at a time when genetics has plentiful reason to regroup and reconsider instead of defend and attack. Such hostility seems designed to quell rather than enrich discussion; to freeze rather than advance understanding; above all, to silence. It worked. While evolutionary researchers who objected to my article rightly felt free to speak up, few scholars who agreed with me felt similarly comfortable. Although many expressed agreement privately, almost no one did so in the open. I can’t blame them; who wants to leap into a bloody shark pool?

On the upside, some people did object to this noise. Many, including people I’d never heard from before, wrote to me privately to say they thought the Pinker-Coyne-Dawkins response was sclerotic and counterproductive. And a few protested publicly. One commenter at my blog, a reader named Agga, expressed his dismay this way:

As a complete layperson, my interpretation of the Aeon article was this. Wow, evolution makes sense now! Before, as someone who has only taken high-school biology and an undergrad short module in heritability, I was taught that evolution worked in this manner: genes randomly mutate and the most favourable carry on through survival and reproduction.… This extremely simplified view is what is being taught, and what is implied from the common narrative of evolution. To find out that evolution has these mechanisms such as epigenetics and the mechanism that changes the locust and bees without changing the gene first, that just blew my mind! The whole thing is so much more intuitive; and explains much better how such complexity and specialisation could arise, through interaction with the environment in this way.

Agga also took issue with the complaint about gene expression being old hat:

[P]erhaps all you PhDs should remember that you do not know what the layman’s view is, what the common narrative or the [selfish gene] metaphor actually does, how it is interpreted. You don’t know this because you already know about the complexity. I never knew, until now. Isn’t that a shame?

Dawkins, responding to my article, asked: ‘Does Dobbs really expect me to be surprised [by the power of gene expression]?’

I did not. I was not writing for Dawkins. I was writing, as Dawkins himself writes, for a general audience, and for the same reasons Dawkins does: to share the wonders of genes and evolution with people who might not know of them; to put those wonders into context in a way that might generate new understanding; to share and make memorable not a brand-new fact or finding but a fresh reframing of the story of how evolution works. Like the ideas Dawkins described in The Selfish Gene, the ideas I wrote about had been discussed by scientists for years or decades but had reached few outside academe. And as Dawkins had done originally, I argued that a different characterisation of the gene’s role in evolution – in my case, one emphasising the gene’s sociability rather than its selfishness – could tell a story about evolution that was still accurate but more layered, exciting, and consistent with recent research.

For Agga and others, including many scientists, this worked. The article stirred in them, if I might borrow the title of Dawkins’s newest book, an appetite for wonder.

Some might object that science is not about stories but facts. But science is always a story about facts. That’s why scientific papers have discussion sections. And there are always different stories to tell about any given set of facts. That’s why people offer various and overlapping hypotheses and theories. Science’s true job and modus operandi is to find and articulate the most compelling story consistent with the facts. Naturally, scientists must revise and replace these stories as research reveals new facts.

Dawkins knows this, and in The Selfish Gene he tells one hell of a compelling story. But in an age when research is showing the genome’s conversation with the outside world, and with itself, to be far more complex than we ever supposed, does the selfish-gene story remain the most compelling one we can offer about genetics and evolution?


That’s my question. Many of Dawkins’s defenders dismiss it by insisting that Dawkins’s selfish gene is not merely a meme or a metaphor, but a parsimonious statement of fact that deserves the status of a fact itself. But it’s not a fact. It’s a story about facts.

In truth, we can hardly even agree on what a gene is. George Williams himself, the biologist who was the selfish gene’s true father, clearly recognised this. In Adaptation and Natural Selection, his pivotal 1966 book that laid out the gene-centric theory which Dawkins would popularise a decade later, Williams noted that our DNA is passed on in repeatedly and continuously ‘dissociated fragments’, and that the ‘potentially immortal’ object of selection – ‘the gene’ that Dawkins would soon call selfish – was an abstraction that could be defined in any number of ways. Williams emphasised this by citing no fewer than four definitions of ‘the gene’ (as he himself framed it, in quotes) in the very paragraph in which he called it potentially immortal. He defined the gene as ‘“the gene” that is treated in the abstract discussions of population genetics’; as a rare ‘segment of chromosome’, protected from common forces of recombination, ‘[that] behaves in a way that approximates the population genetics of a single gene’; as ‘that which segregates and recombines with appreciable frequency’, and which is ‘potentially immortal’; and finally and most broadly, as ‘any hereditary information’ for which there is selection.

That was 48 years ago. As the Yale geneticist Mark Gerstein and others demonstrate in the article ‘What Is a Gene, Post-ENCODE?’ (2007), the ensuing half-century has added only more definitions to Williams’s conservative list.

In the century since it was named, ‘the gene’ has been a thing vague, variable, and often abstract. Is it wise to insist that something so slippery and mutable, so variously conceived, is not just ‘potentially immortal’, as Williams proposed, but literally immortal? Science does not advance by insisting that certain of its stories are immortal. It moves by allowing stories to evolve. And sometimes by letting them die.

_______

This post is adapted from my contribution to “Dead or Alive: Is it time to kill off the idea of the ‘selfish gene’?”, a roundtable discussion of “Die, Selfish Gene, Die,” at Aeon. I’m thankful to John Dupré, Laura Hercher, Karen James, and Robert Sapolsky, who also contributed to the roundtable; to Aeon, for publishing both pieces; and to all the geneticists, writers, and others who engaged in constructive discussion of the articles. Thanks also to Philip Ball, whose smart, calm post on this today spurred me to revisit the issue and post this.

Is the gene still selfish after all these years?


Philip Ball on the strange, often savage defense of a 40-year-old meme past its prime:

The fact is that genes can only propagate with the help of other genes. John Maynard Smith recognized this in the 1970s, and so did Dawkins. He chose the wrong title, and the wrong metaphor, and wrote a superb book about them.

I find it curious that there’s such strong opposition to that fact. For example, I’m struck by how, when the selfish-gene trope is questioned, defenders will often point to rare circumstances in which genes really do seem to be “selfish” – which is to say, where propagation of a gene might be deleterious to the success of an organism (and thus to its other genes). It is hard to overstate how bizarre this argument is. It justifies a metaphor designed to explain the genetic basis of evolutionary adaptation by pointing to a situation in which genetic selection is non-adaptive. You might equally then say that, when genes are truly selfish, natural selection doesn’t “work”.

What is meant to be implied in such arguments is that this selfishness is always there lurking in the character of genes, but that it is usually masked and only bursts free in exceptional circumstances. That, of course, underlines the peril of such an anthropomorphic metaphor in the first place. The notion that genes have any “true” character is absurd. Genetic evolution is a hugely complex process – far more complex than Dawkins could have known in 1976. And complex processes are rarely served well by simple, reductionistic metaphors….

There is an old guard of evolutionary theorists, battle-scarred from bouts with creationism and intelligent design, who are never going to accept this, and who will never see why the selfish gene has become a hindrance to understanding. They can be recognized from the emotive hysteria of their responses to any such suggestion – you will find them clearly identified in David Dobbs’ excellent response to criticisms of his Aeon article on the subject. It is a shame that they have fallen into such a polarized attitude. As the other responses to David’s piece attest, the argument has moved on.

Photo: Grasshopper (Acrididae), Barbilla National Park, Costa Rica. Photo by Piotr Naskrecki/Minden Pictures/Corbis

 

How A Billionaire Used a Wrestler to Get Revenge and Silence Gawker

Angel investor Caterina Fake gets real on Peter Thiel:

Generally, people avoid frivolous lawsuits because it often exposes them to as much scrutiny as those they sue, so what is significant about this case is that by funding Hogan behind the scenes, Thiel could get his revenge, escape exposure, and influence the outcome of the case. Hogan’s lawyers made decisions against Hogan’s best interests, withdrawing a claim that would have required Gawker’s insurance company to pay damages rather than the company itself–a move that made Nick Denton, Gawker Media’s founder and CEO, suspect that a Silicon Valley millionaire was behind the suit. Gawker Media may or may not survive the suit in which Hogan was handed down a judgement of $140 million, which the publisher has appealed.

My hope is that the high profile of this case will hasten legal reform. The ethical dodginess of this type of funding is well known–after all champerty was once illegal.

Further reading (via Fake’s post):

Gawker-Thiel-Hogan lawsuit article on Forbes

Why Denton thought Thiel was behind the lawsuit article on re|code

Arms Race: Law Firms and the Litigation Funding Boom article in American Lawyer

Two Sharp Takes on Mukherjee’s The Gene


Nathaniel Comfort, “Genes Are Overrated”:

Mukherjee gives us a Whig history of the gene, told with verve and color, if not scrupulous accuracy. The gene, he tells us, was first described by the Augustinian friar Gregor Mendel, in the mid-19th century. Tragically, no one noticed—not even the great Charles Darwin. “If Darwin had actually read” the reference to Mendel in a volume on Darwin’s own shelves, Mukherjee writes, it “might have provided the final critical insight to understand his own theory of evolution.” The “missing link” in Darwin’s day, he continues, was “information,” by which he means genetic or hereditary information.…

The antidote to [this] Whig history is a Darwinian approach. Darwin’s great insight was that while species do change, they do not progress toward a predetermined goal: Organisms adapt to local conditions, using the tools available at the time. So too with science. What counts as an interesting or soluble scientific problem varies with time and place; today’s truth is tomorrow’s null hypothesis—and next year’s error.…

The Whig interpretation of genetics is not merely ahistorical, it’s anti-scientific. If Copernicus displaced the Earth from the center of the universe and Darwin displaced humanity from the pinnacle of the organic world, a Whig history of the gene puts a kind of god back into our explanation of nature. It turns the gene into an eternal, essential thing awaiting elucidation by humans, instead of a living idea with ancestors, a development and maturation—and perhaps ultimately a death.

Michael Eisen, on Mukherjee’s (and others’) mucking up of epigenetics, particularly in The New Yorker excerpt:

Mukherjee is far from the only one to have fallen into this trap. Which brings me to what I think is the most interesting question here: why does this particular type of epigenetic inheritance involving an obscure biochemical process have such strong appeal? I think there are several things going on.

First, the idea of a “histone code” that supersedes the information in DNA exists (at least for now) in a kind of limbo: enough biochemical specificity to give it credibility and a ubiquity that makes it seem important, but sufficient mystery about what it actually is and how it might work that people can imbue it with whatever properties they want. And scientists and non-scientists alike have leapt into this molecular biological sweet spot, using this manifestation of the idea of epigenetics as a generic explanation for things they can’t understand, a reason to hope that things they want to be true might really be, and as a difficult to refute, almost quasi-religious, argument for the plausibility of almost any idea linked to heredity.

But there is also something more specifically appealing about this particular idea. I think it stems from the fact that epigenetics in general, and the idea of a “histone code” in particular, provide a strong counterforce to the rampant genetic determinism that has dominated the genomic age. People don’t like to think that everything about the way they are and will be is determined by their DNA, and the idea that there is some magic wrapper around DNA that can be shaped by experience to override what is written in the primary code is quite alluring.

Of course DNA is not destiny, and we don’t need to evoke etchings on DNA to get out of it. But I have a feeling it will take more than a few arch retorts from transcription factor extremists to erase epigenetics from the zeitgeist.

There are rants and there are rants. This here is a rant.

As Jezebel notes elsewhere, this bomb-throwing freelancer revenge rant burns bridges with admirable abandon. Been a while since I’ve read one quite so fun and strange. It’s a tough trick to write this, for instance:

We left Paris and went to the south of France to write the piece that I had promised would be 10,000 words. The Riviera is the perfect place to make you forget what a schmuck you are.

and retain any of the reader’s sympathy. Certainly it weakens the writer’s later “broke writer” lament. But who cares? Somehow, at least for this dear reader, Jacques pulls it off. I wonder what he’ll do for a living now.

Grab the popcorn. Elle on Earth, by Jacques Hyzagi, at The Observer.

Realizing I was dealing with a power angry maniac I called the meeting off and stood her up. Almost-famous people have a tendency to act even more obnoxiously than the famous ones. Graydon Carter, who knows a thing or two about fame, has this parable about a peasant like me arriving in New York from his hamlet and trying to make it in the big city like in a Balzac novel. The provincial enters a dark room and tries to find a door that will enable him to enter another room and so on until he finally reaches success but at each room the door to the next is more difficult to find. Usually in New York society very few arrivistes make it past the first room. I have no idea what he’s talking and it’s probably why his magazine is a giant bore.

I chose Edith Wharton when time came to learn about New York social cues and suffice to say there was no mirth in the house of ELLE. I thought the hell with it I’ll go somewhere else but by then CDG was set on ELLE and the Guardian, the same outlets I had to work (is it clear here that it was CDG I had to convince into accepting the outlets?) hard in convincing in the first place. I understood that once you set the process forward with the egomaniac genius and precise designer, the slightest change might send the whole apparatus crashing. Too often the fear instilled by mediocrity and incompetence, the two tits that nourish capitalistic societies, can only feed the beast if patterns and routines are kept as is. The slightest changes might unravel the whole company because they will unveil a paper-like deus ex machina.

How Failure Is Moving Science Forward


Psychology, biomedicine and numerous other fields of science have fallen into a crisis of confidence recently, after seminal findings could not be replicated in subsequent studies. These widespread problems with reproducibility underscore a problem that I discussed here last year — namely, that science is really, really hard. Even relatively straightforward questions cannot be definitively answered in a single study, and the scientific literature is riddled with results that won’t stand up. This is the way science works — it’s a process of becoming less wrong over time.

How Failure is Moving Science Forward, by Christie Aschwanden, FiveThirtyEight. Includes some good insight on why thin results get heavy attention.

 

Brooke Borel’s strange story about Kevin Folta interviewing himself, among other (mis)adventures

 

Art from Buzzfeed

The Kevin Folta/GMO/Monsanto/Right-to-Know/conflict-of-interest variety show and bazaar — a saga about a food scientist who took $25,000 from Monsanto without disclosing it, and who seems to have thought that was probably more or less okay — just got more bizarre, as Brooke Borel describes in a strange and deftly told story in Buzzfeed.

This is conflict-of-interest as tragicomedy. I love how Borel’s attention to the comedy in this situation a) underlines the weird cluelessness of Folta’s behavior, b) thereby reminds us that cluelessness and seemingly benign self-deception can lead to COIs as readily as greed can, c) establishes Borel as a highly informed and deeply sourced observer who keeps a crucial critical distance, and d) makes the piece a joy to read. Awesome, exemplary work.

I was confused, to say the least.… So I wrote back to Folta: Was he actually Blazek? Did he interview himself?

The email conversation that followed was decidedly odd. Yes, Folta was Blazek. He was using a pseudonym, he said, because it was fun (“I see why Colbert did the Colbert Report”), and so he could “play in this space” without drawing attention to his role in the project.

Yes, he had interviewed himself, but only because some of his listeners had caught on that Blazek might be him, and he wanted to throw them off his trail. And, well, no, he hadn’t considered how all this might look to an outsider.

It only gets weirder.

Seed Money, @Buzzfeed

Ernest Hemingway, Clutterbug

“Like his father, he saved every totem that touched his hand.”

“Hemingway was someone who felt the talismanic power of objects, of things, of the materiality of experience,” Declan Kiely, who is a young and genial Englishman with Irish roots, said when I visited “Between Two Wars.” “If something happened to him, he hung onto it.”

The Morgan Library has an exhibit of the better finds among the neat piles of stuff Hemingway hung onto. Barry Yourgrau takes a look.

“Was he a pack rat?” Susan Wrynn, the then curator of the Hemingway Collection, asked herself in the New York Times, after the materials were made available. “Absolutely, absolutely.” Indeed, Hemingway’s clutter was noted as far back as 1958, when George Plimpton visited the Finca for a Paris Review interview. According to Plimpton, the bedroom where Hemingway wrote “The Old Man and the Sea,” standing up at his work desk in “a square foot of cramped area,” was a hive of clutter, clean but enormously crowded. The room suggested “an owner who is basically neat but cannot bear to throw anything away—especially if sentimental value is attached.” Hemingway’s fourth wife, Mary, once declared that he couldn’t toss “anything but magazine wrappers and three-year-old newspapers.”

 

Paxil shown unsafe for teens, drugmaker congratulates self for sharing damning data it hid for years

This post got an upgrade: The revised, expanded version is now at The Atlantic. Many thanks to the folks at The Atlantic for picking it up. If you need a teaser:

One night in 2002, Sara Bostock woke thinking she’d heard a bump in her kitchen. When she went to investigate, she found her 25-year-old daughter on the kitchen floor in a pool of blood. Next to her was a large and bloody chef’s knife. In her chest were two knife wounds. One was shallow; the other was fatally deep.

Sara Bostock has always thought that her daughter was killed that night by an antidepressant called Paxil. Cecily, a bright, generally cheerful Stanford graduate, had been taking Paxil for two weeks. Five months before, she had become moderately depressed, and, as Sara would recall it, entered a psychiatric system newly enamored of chemical models of depression and chemical solutions. In search of the drug that would work well, doctors had put her on one after another that worked badly. Of these, Sara says, Paxil was the worst. It made Cecily more and more agitated, increasingly unlike herself. Finally she ended it in what Bostock has called a death “completely unexpected, out of character and violent.”

For more on the cost of hiding data, see “How Many Suicides Happened Because of Paxil’s Misleading Safety Study?“, at TheAtlantic.com.

Roberta Payne on the art of schizophrenia

By Roberta Payne, all rights reserved; do not copy without artist permission. Used here courtesy Roberta Payne.

Roberta Payne, author of the superb memoir Speaking to My Madness, did the cover art for the current issue of Schizophrenia Bulletin. The issue also runs an essay she wrote about “schizophrenic art.”

I once drew on poster-size paper a gracefully diagonal, writhing black eel. So far, conventional structure that any artist might have planned. But the eel’s mouth was a circle of jagged teeth, out of which poured, in lovely calligraphy, words, phrases, and sentences about the nature of evil. (This was in response to the Aurora theater shootings.) I can’t imagine that this drawing would be anything but abnormally chilling to a consensus-reality viewer, chilling like the hallucinations of huge, neon, electric spiders slithering down walls struck me decades ago, when I experienced delirium tremens from alcohol withdrawal. The (usually anonymous) schizophrenic art I have viewed in books and on the Internet often has unique, otherworldly traits, especially aesthetics that remind the viewer of isolation, of a world made of metal, and bone-rattling electricity. It could be that the avoidance of aesthetic revulsion (a kind of fear) is a tactic that consensus reality uses to maintain its balance.

The most extreme example of creating repulsiveness takes place for me when I’ve been drawing while paranoid. I’m appalled by what I’ve drawn, just as I’m appalled by the paranoia itself, which feels like fear and anger experienced simultaneously. The figures I draw are nasty, sneering, in a world I want to run from: weird bats, black crows with huge wingspans, eerie owls uttering phrases in Latin. I’ve often hid those drawings, afraid of their power over me, and afraid of the startled reactions of others.

How is this different from, say, Diane Arbus’ photographs of the grotesque, of people malformed and the stuff of sideshows, subjects whom she admitted to hating? It would be easy to say that she chose of her own free will to take those particular photographs, that they were not in any way commanded by internal demons. My guess is that she was just as artistically driven to take them as a psychotic artist is to draw or paint her vision. But Arbus in her prime stood out nearly alone in her vision. Producing psychotic art, on the other hand, is quite common among schizophrenics; and I have heard both clinicians, researchers, schizophrenics themselves, as well as members of the general public refer to the rich, creative, artistic bent that seems at times to accompany the illness.

As I am currently writing an article about, among other things, how our culture reacts to schizophrenia by isolating the person suffering it — and how destructive that isolation is — I’m pleased to see that Roberta’s portrait of a happier state of mind and being shows other people, themselves in various states of mood, nestled into the person’s hair like peas in pods.