As Tom Friedman Notes, We’re Missing Our Train

Thomas Friedman, whom I sometimes find grating*, is spot-on today in underlining America’s paralysis in responding to tomorrow’s challenges.

China is doing moon shots. Yes, that’s plural. When I say “moon shots” I mean big, multibillion-dollar, 25-year-horizon, game-changing investments. China has at least four going now: one is building a network of ultramodern airports; another is building a web of high-speed trains connecting major cities; a third is in bioscience, where the Beijing Genomics Institute this year ordered 128 DNA sequencers — from America — giving China the largest number in the world in one institute to launch its own stem cell/genetic engineering industry; and, finally, Beijing just announced that it was providing $15 billion in seed money for the country’s leading auto and battery companies to create an electric car industry, starting in 20 pilot cities. In essence, China Inc. just named its dream team of 16 state-owned enterprises to move China off oil and into the next industrial growth engine: electric cars.

and:

We need to be in a race with China, not just Al Qaeda. Let’s start with electric cars.

He follows with a painful assessment.

Meanwhile, the US can’t even get its absurdly modest ($7.5B) high-speed rail program on track.

But …

Not to worry. America today also has its own multibillion-dollar, 25-year-horizon, game-changing moon shot: fixing Afghanistan.

So there you go.

____

*Why do I find Friedman grating?

  1. He tends to tie everything back to his pet metaphors; I call it his flat earth policy.
  2. He’s too fond of the one-liner. (Though not as enslaved to it as Maureen Dowd. I recognize I just applauded one of his one-liners.)
  3. Possibly I still haven’t forgiven him for cheerleading the war and refusing to just say “I blew it.” Okay: Definitely I haven’t forgiven him.

The Consciousness Meter: Sure You Want That?

Where does consciousness come from? And when it ramps up or down, at what point does it move from consciousness to not-consciousness?

Carl Zimmer published both a blog post and a story in the New York Times yesterday looking at the work of Giulio Tononi, a University of Wisconsin neuroscientist who studies these questions. As Zimmer puts it, Tononi

has been obsessed since childhood with building a theory of consciousness–a theory that could let him measure the level of consciousness with a number, just as doctors measure temperature and blood pressure with numbers.

In short, Tononi is trying to develop a consciousness meter. This piqued my interest, as a few years ago, shortly after immersing myself in consciousness studies for a profile of Christof Koch, I wrote a piece for Slate pondering the implications of coming up with a consciousness meter — or, as I called it, a “consciometer.”

Sometime in the next decade or so, neuroscientists will likely identify the specific neural networks and activity that generate the vague but vital thing we call consciousness. Delineating the infrastructure of awareness is biology’s most difficult problem, but a leading researcher like Christof Koch, Gerald Edelman, or Stanislas Dehaene could soon solve it. Science will then possess what might be called a “consciometer”—a set of tests (probably an advanced version of a brain scan or EEG) that can measure consciousness the way kidney or lung function is now measured.

The gist of the piece was that figuring this out might make some ethical dilemmas easier and others harder, because consciousness has taken on distinct legal significance at both the end and the beginning of life.

The close association of consciousness with life dates only to the last half-century, when doctors learned to sustain heart and lung function long after awareness and will were gone. In the 1980s, legislators responded by establishing whole-brain death as the legal standard of death. At the same time, upper-brain death—the cessation of organized activity in the “thinking” cortex—became a common point at which to authorize the withdrawal of medical treatment. In theory, you can pick any state of health—upper-brain death or paralysis, for example—as your own signal to stop medical care. (Read an intensive-care doctor’s description of what happens when there’s no such signal.) But in practice most people choose the lack of demonstrable consciousness that doctors call a persistent vegetative state.

This practice spread through medicine and then law. And the basic equation — measurable brain death = no consciousness = legally dead — was firmed up by the Schiavo case. This carries quite an irony, as conservatives, by pushing so hard on the Schiavo case, created a precedent that may come back to bite them on beginning-of-life issues:

In the many appeals of the trial court’s decision to remove her feeding tube, however, state and federal courts repeatedly based their decisions on Schiavo’s cognitive status, making it the central issue in the case. Congress and the Bush administration similarly framed their efforts to restore Schiavo’s feeding tube. And here lies the affair’s great irony: Religious conservatives want the law to define life as the existence of a single living cell containing human DNA. Yet their Schiavo campaign bolstered both the acceptance of consciousness as the boundary between life and death and the authority of neuroscience to measure it.

The consciometer will strengthen this authority further.

The tricky part comes when these definitions of life get applied at the beginning of life. The landmark 1973 case Roe v. Wade replaced an old marker of life — the “quickening” or first movements of the fetus — with one based on fetal viability, which typically occurs at about the 23rd week. This was a tactical move meant to provide a firmer marker for legal purposes. Law seeks clarity. Which is where a consciousness meter could be quite tempting to the courts — and discouraging to anti-abortion conservatives:

As leading neuroscientist Michael Gazzaniga, a member of President Bush’s Council on Bioethics, describes in his book The Ethical Brain, current neurology suggests that a fetus doesn’t possess enough neural structure to harbor consciousness until about 26 weeks, when it first seems to react to pain. Before that, the fetal neural structure is about as sophisticated as that of a sea slug and its EEG as flat and unorganized as that of someone brain-dead.

The consciometer may not put the abortion issue to rest—given the deeply held religious and moral views on all sides, it’s hard to imagine that anything could. But by adding a definitive neurophysiological marker to the historical and secular precedents allowing abortion in the first two-thirds of pregnancy, it may greatly buttress the status quo or even slightly push back the 23-week boundary.

There is another possibility. The implications of the consciometer could create a backlash that displaces science as the legal arbiter of when life ends and begins. Such a shift—a rejection of science not because it is vague but because it is exact—would be a strange development, running counter to the American legal tradition. Should a fundamentalist view of life trump rationalist legal philosophy? Roe v. Wade considered this question explicitly and answered no. For nonfundamentalists, that probably still seems right.

How will the sort of consciousness meter contemplated by Tononi affect this? At first glance it seems like it won’t or can’t apply: Tononi is using EEG sensors, and how would you get those onto a fetus? You wouldn’t. Yet if Tononi can win acceptance of the idea that certain relative levels and types, or “shapes,”* of brain activity mark consciousness, then the only thing preventing the scoring of fetal consciousness is a way to measure brain activity without going inside the uterus. And I suspect that can’t be far off.

This is all very what-iffy, of course. One huge caveat: as we learn more about states of consciousness in people in comas and the like, we’re finding more and more gradations or classes of consciousness (or its lack), rather than a firmer line between consciousness and brain death. It used to be you were “brain dead” or not. Now we’re finding gradations in between. Work like Tononi’s might only push that further, turning a black-and-white, on-off scale into a spectrum of subtle gradations.

On the other hand, he and others — and common experience — suggest that we badly want to define something unique and vital and elemental about consciousness: to prove that there’s a certain level of awareness and meta-awareness that essentially defines what it is to be alive.

It’ll be interesting to watch this develop.


*I found Tononi’s notion of brain activity taking various shapes, explored in Zimmer’s blog post, the most intriguing part of the work Zimmer described. It brought immediately to mind (heh) György Buzsáki’s beautiful and ground-breaking work on the vital role that patterns of brain-wave synchronization play in the brain’s work. (The first ten pages or so of Buzsáki’s book are mind-blowing. Man’s on a roll.) So I was surprised when Zimmer’s story said, briefly and tantalizingly, that Tononi seemed to dismiss that work, or at least set it aside. I’d love to hear more about how his work differs or is incompatible. (Carl?)

See also:

John Hawks with a brief riff on Zimmer’s article; he ends up at Darwin, which is (Tononi > Zimmer > Hawks > Darwin) fitting enough.

My profile of Joseph LeDoux, whose work on the not-conscious workings of the brain suggests it’s rather vital to our essence as well.

Christof Koch’s fine series of columns in Scientific American Mind, where he keeps up with consciousness and other intriguing puzzles. He also has a nice book and an interesting web page.

The Tononi Lab.


Illustration by Robert Neubecker, courtesy Slate.com

 

Harvard opens the (exit) door a crack for Hauser

It would be easy to miss this. In an interview about several things (mainly the prospect of the ROTC returning to Harvard), Harvard president Drew Gilpin Faust today gave the first hint that Harvard may be considering squeezing Hauser out:

Faust also called into question the future of psychology professor Marc Hauser, whom the university has found responsible for eight instances of scientific misconduct.

The university’s official stance has been that Hauser would return to teaching in July 2011 after being placed on a year’s leave. But yesterday Faust said there are “too many uncertainties of what the future is going to bring’’ for her to know whether Hauser will resume teaching.

“He may decide he may not wish to come back,’’ Faust said. She also said findings from an ongoing federal investigation could have a bearing on his return.

As far as I know, this is the first time a Harvard official has spoken of Hauser leaving or hinted that the university may pressure him to leave. Since it comes from the president, some might take this language itself as an application of such pressure — though perhaps she was just speaking frankly. In any case, this is a bit different from the university’s previous response to all such questions: that he’d return to teaching next fall.

Update: The Harvard Crimson expands on this a bit.

____

H/t to @moximer, who blogs at Child’s Play over at Scientopia, for calling this hidden bit to my attention on Twitter.

Image: by futureshape, via Flickr

The Boston Globe on the Hauser Fallout

I’ve noted a few times that the Hauser misconduct case at Harvard would ripple through science for quite some time. Today the Boston Globe’s Carolyn Johnson, who has done a nice job on this case all along, has a story on how the department at Harvard is trying to deal with the fallout.

In the Harvard Psychology Department, faculty have been meeting to discuss how to remove the cloud created by the scientific misconduct case of one of their most prominent colleagues, Marc Hauser.

Elsewhere, a scientist is considering repeating a key experiment Hauser conducted on the behavior of monkeys.

A month after Harvard said it found Hauser guilty of eight infractions involving three published papers and other unpublished work, scholars in and out of the university are struggling with how to respond, and particularly with how to establish the reliability of the rest of Hauser’s large and influential body of research.

The uncertainty is not just an academic concern. In popular books, news stories, and television programs, Hauser drew people into deep scientific questions that spark the imagination. Can nonhuman animals tell what others’ intentions are? What cognitive abilities make us uniquely human?

Now, many scientists fear that because Hauser contributed so much to the public perception of not only his own work, but of a field that looks for the evolutionary underpinnings of human cognitive abilities, the questions about him will also cast a broader shadow.

Johnson describes how people in and out of the department are trying to double-check some of Hauser’s work. Some of what they’re finding doesn’t sweeten the picture. Among the studies people are looking at again is

an experiment that Hauser reported on in a 1995 Proceedings of the National Academy of Sciences paper, which looked at the ability of cottontop tamarin monkeys to recognize themselves in a mirror. That paper was criticized by some scientists when it was published, but is not part of the current misconduct findings.

In the experiment, multiple observers coded the data, and one observer did not know the experimental condition. In a videotape of the experiment, which was provided to the Globe by Gordon G. Gallup Jr., a psychology professor at State University of New York at Albany who requested the raw data from Hauser, monkeys sometimes look in the mirror. When this happens, Hauser can be heard saying “stare.’’

Lengthy bouts of a monkey staring into the mirror calmly, instead of acting aggressively toward its reflected image, were one piece of evidence Hauser used to show the monkeys passed the mirror test. But Gallup said he saw no evidence on the tapes of mirror-guided behavior.

Rather tough work, this. I don’t think anyone really likes doing all this due diligence. It must be like looking through a crime scene.

Hat-tip to Razib Khan for drawing my attention to this.

Why Publishing the Paper is Only Half the Scientist’s Job

The scientific paper is a wonderful thing. So how is it holding science back?

I’ve got a guest post over at the Guardian science blog network pondering just that. In particular I look at how making the scientific paper the effective currency of science — instead of just one of several ways to share the real goods, which are data and ideas — can discourage researchers from explaining science and its importance to the public.

Here’s the essential fact: science has no importance or value until it enters the outside world. That’s where it takes on meaning and value. And that’s where its meaning and value must be explained.

Scientists implicitly recognise this at a limited scale: They want their colleagues to understand their work, so they go to conferences and explain it. But that’s not enough. They need to go explain it at the Big Conference — the one outside of academe. They need to offer the larger world not just a paper meaningful only to peers, but a friendly account of the work’s relevance and connections to the rest of life. That means getting lucid with letters columns or op-ed pages or science writers or science cafes or schoolchildren or blog readers. Those who can’t hack that – stage fright, can’t write, or just doesn’t feel right – can support their peers who do engage the rabble. Write some code for them, maintain their web pages, give them rides, or grant them time off from inside the lab to take the lab’s work outside. But do something. Because if you “just do the work,” you’re not finishing the work. You haven’t got it out there.

Some are already swinging into action. Many of the scientists at the SOLO event argued their community must do more to engage the public and make the case for research funding – unless, of course, they want to see massive budget cuts and a world where social and political discussion are shaped less by evidence than authority. Some of them, crying “No more Doctor Nice Guy!” are now organising British scientists to take to the streets.

Getting your research out there and taking time out from the lab is a pain, no doubt. But if you’re a scientist, surely you don’t expect the rest of us to just assume your work is important. No. If you want the world to believe that your work is important and that modern life and a free society depend on a rigorous, evidence-based approach to things, you wouldn’t ask us to take it on faith. You’d want to show us the evidence.

Get the whole thing over at the Guardian. And while you’re there, check out the other guest posts at Guardian science blogs, yet another of the several science-blog networks that have popped up in the last couple of months.

_____

Image: Carl Sagan with the Viking lander, courtesy Nasa.gov

Is page reading different from screen reading?

I perked up a couple weeks ago when I read Jonah Lehrer’s post about e-books and the possible differences between reading a screen and reading a page. Like Jonah, I regard e-books with an excitement tinged with lament. But as he notes, the tide is in; they’re here to stay. Jonah describes in his post how, when packing to move back to the US from England a few years ago, he stuffed his bags with books. When I packed for England two months ago, I packed just two physical volumes, indispensable because I’d annotated them heavily for my current book project. The rest of my reading pile — about 30 books — came along in my iPad.

Yet even as I dive into these iPad books every night, I feel, like Jonah, that reading on a screen differs in some significant way from reading on paper. I’m not saying this is bad or that it will make me stoopid; just that it is.

Where’s the proof? Jonah offered some speculative brain-based hypotheses; I can offer two bits of evidence that are blatantly subjective.

The first echoes something Jonah offered in his post-scriptural “bonus point”:

Bonus point: I sometimes wonder why I’m only able to edit my own writing after it has been printed out, in 3-D form. Why?

I find the same thing. I revise effectively both onscreen and on paper, but I revise differently on paper. I work more at a macro scale. I’m more sensitive to proportion and rhythm and timbre. I see spaces and densities better: the clumps where the prose has grown too dense, the wandering of the path where I ramble, the seams that need to be closed, the misaligned joint that I suddenly realize — yeah; there it is! — is where that paragraph from three pages ahead belongs.

As Jonah asks, Why? Is the manuscript’s physicality giving me a greater sense of physical proportion? Does the act of pressing slickened grooves into the page with my fountain pen somehow invite a corresponding mental penetration? Is the curved, flexible rigidity of five sheets in my hand sharpening my awareness of texture? Or perhaps the slowness of my pen relative to the speed of my typing favors this more structural approach — big cross outs, sections circled and moved wholesale, massive reorganizations planned with quick scribbles in the margin — over the finer-grained tweaks and cutting-and-pasting the keyboard seems to encourage.

I don’t know. But I know it’s different. It’s like putting down your violin and climbing out of the string section to take the conductor’s podium. And it works reliably. I know that when my fifth or ninth or fifteenth onscreen edit isn’t getting me anywhere or is digging me deeper into some hole I can’t get the dimensions of, I can print the manuscript and get above ground and suddenly see things I was missing.

I feel there’s a second significant difference between screen and page reading as well, one I’ve been pondering for a couple of years. I think reading on the page is vertical and personal, whereas reading on the screen is horizontal and communal. This is subtle and took me a while to extract. But I’ll try to explain. I’ll put it slightly more starkly than it really is, to heighten the contrast.

When I read on the screen, I’m always aware of the links. I mean not just the literal hyperlinks but the implied hyperlinks that are now embedded into every word on virtually every screen, simply because it’s so easy and productive to search. Reading onscreen, I’m always half-aware that I can go horizontally, as it were, via links, to anything the reading brings to mind — which could be anything.

This makes the reading slightly more provisional, less engaged, less settled in. You’re reading, and you’re serious about it, but you’re also aware you might feel a need to leave, even if for a moment, to check a definition, Google Dehaene or dorsal stream, or (because you can) check your email or Twitter feed. You’re reading, but you haven’t really dug in. You haven’t put your feet up. And why would you? You might have to cross the room.

When you read on the page, by contrast, you can really settle in, because it’s much more just you and the book or the magazine. It’s a far more closed, vertical exchange that requires a more committed engagement. There’s no (or less, anyway) thought of links, no implied invitation to turn to another conversation, to consult others, to follow a trail out sideways. You can’t easily go elsewhere — not without leaving your chair, anyway. Whatever you’re going to get out of this book, whatever you’re going to make of it, you’re going to have to find either in the book in your hands or in the hallways of your head. You’re really finding it, of course — you’re generating it — in this deep conversation with the book. Side conversations break the spell.

This doesn’t place page reading on a pedestal or make screen reading a threat to civilization. But it’s different. I think it makes you dig harder. I think it draws more out of you, or at least draws on you in different ways.

Possibly the benefits are more emotional than intellectual, moral, cognitive, or cultural. Possibly it’s more a luxury than a need. But it’s something I want. It’s the sort of engagement portrayed in my favorite portrait of reading, Wayne Thiebaud’s “Man Reading.” I can’t post it here, partly because I might be sued but also because I can’t find it online anyway. So I’ll have to describe it.

The painting dates, I’d guess, from the mid-sixties. We’re looking at an utterly ordinary-looking man seated directly before us in a simple chair, wearing a dark suit and black oxfords, and though he faces us, we can’t see his face because he’s bent over, leaning with his elbows on his thighs, and looking down at the book in his hands. We see his balding pate and that he wears glasses. His face we must imagine, but of his state of mind we needn’t guess. Everything in the way he holds himself on that chair, his immense private stillness, shows that he has been profoundly, perhaps permanently changed by this book. The book is closed now; he’s presumably just finished reading it; and it has so moved him that he has bent over to hold and look at it so that the world remains for a few more precious minutes just him and that book. He’d do this forever if he could. He wants the world to stay this changed. He wants to stay inside this thing he and the book have created.

I might be wrong. It might be that as I read more books on my iPad, I’ll hit some that move me that profoundly. Of course, even the iPad in book mode offers its distractions. Text in iPad books isn’t linked the way the text on web pages is, but when I highlight some text to, um, highlight it, up pop at least three options I can pursue — highlight, note, define — and this reminder that I’m creating a digital page full of excerpts, rather than a highlighted page of paper, pulls me a bit into that same linked brainspace where you read on screens; suddenly I hear other people in the room.

Those distractions aside, though — who knows, maybe I’ll adjust and this distinction will fade. But so far the engagement just doesn’t feel the same. The links don’t feel as deep.

______

Image by vishwaant

Speaking Skeptically about Marc Hauser and morality research

Did Marc Hauser fabricate or falsify data in his monkey studies? Will his troubles stain the fields of morality studies and evolutionary bases of behavior? How exactly do you ask a monkey if it can hear patterns of speech?

These were among the questions Desiree Schell asked me the other day on Skeptically Speaking, an Edmonton-based radio show, for an episode on “Bad Science.” Our conversation takes up the first 15 minutes or so; it’s followed by an interview with cognitive psychologist Barbara Drescher about the common mistakes that scientists make and what that does to science — a sort of primer on how people can get into trouble, even with the best intentions.

If you’ve not read my coverage here or on Slate, this interview with Schell is a good overview of the issues I’ve explored and of the situation in general: what Hauser was accused of, what evidence is out so far, what methods he seems to have been using (or misusing), and what sort of harm the scandal might do to the rest of the field.

To listen, download the episode here, and a little audio player should open in a new browser window.

If you’re more the reading type, you can read my earlier work on this. In the order they appeared:

Marc Hauser, monkey business, and the sine waves of science The ugly beginning. (Aug 11)

Hauser wake cont’d: Could the hivemind prevent fraud & misconduct? Mmm. Maybe. (Aug 13)

Hauser update: Report done since JANUARY. That’s a long time. (Aug 14)

Updated: This Hauser thing is getting hard to watch. The Chronicle of Higher Education spills some beans. (Aug 20)

Journal editor’s conclusion: Hauser fabricated data. A major blow. (Aug 27)

Edge corrects — no, make that ERASES — the record on Hauser. Not so good. (Sep 5)

A Rush to Moral Judgment: What went wrong with Marc Hauser’s search for moral foundations My article at Slate. (Sep 7)

In Marc Hauser’s rush to judgment, what was he missing? Fun and beauty, among other things. (Sep 7)

How the curveball fools you: Illusion of the Year

Sandy Koufax, bringing the four-seamer. God save the guy at the plate.

Author’s note: In honor of Koufax Day (anniversary of the day he skipped a World Series start b/c it was Yom Kippur), I repost this piece, which originally ran in the old Neuron Culture in May 2009. See also my follow-up on Koufax as god, with Most Incredible Pitcher Stats Ever and a vid of Sandy bringing it. Meantime:

I always look forward to the Illusion of the Year contest, but this year brings a special treat: a new explanation of how the curveball baffles batters.

Just a few days ago, during BP, my friend Bill Perreault threw me one of those really nasty curves of his, and though I read it about halfway in, I was still ahead — and still unprepared for the sudden slanting dive it made at that last crucial moment. The good curves do that: even when you have that millisecond of curveball detection beforehand, they still seem to bend sharply and suddenly late in their path, as if some invisible hand gave them an extra tap.

This wonderful “illusion” — really an explanation via an illusion, put together by Arthur Shapiro, Zhong-Lin Lu, Emily Knight, and Robert Ennis — explains how that happens. The curveball kills you two ways: through actual movement and an extra perceived movement that further complicates the task of getting that tiny strip of sweet spot onto the ball. (The sweet spot on a bat is about a half-inch tall and maybe 6 inches long. You have to get that small strip, which is over 2 feet away from your accelerating hands, onto the ball … at just the right moment, and with the bat accelerating, or you’re probably out.)

I can’t paste the illusion in here, but suffice it to say that certain visual dynamics — a difference between the neural dynamics of central vision and peripheral vision — make a baseball that is spinning horizontally while falling vertically appear to fall straight down if you’re looking right at it, but to slide slantwise if it’s in your peripheral vision. Batters can’t completely keep up with a thrown pitch as it approaches, because the ball effectively accelerates across their field of vision (moving from coming at them to moving past them). So at the crucial moment — the last part of the ball’s path — they are essentially trying to track a ball that is usually in flight for a half-second or less with peripheral vision. It is in this fraction of a second — when the ball would be hardest to track anyway — that the curveball actually moves the most, and its real downward and sideways motion is suddenly exaggerated as it slips into peripheral vision (because your eyes can’t quite keep up).
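If you want a feel for how sharply that tracking problem ramps up, here’s a quick back-of-the-envelope sketch in Python. All the numbers in it (an 85 mph pitch, a release point about 55 feet from the plate, the batter’s eyes about half a meter off the ball’s flight line) are my own rough assumptions for illustration, not figures from Shapiro and colleagues:

    import math

    # Rough assumptions for illustration, not numbers from the illusion's authors:
    SPEED = 38.0    # m/s, roughly an 85 mph pitch
    RELEASE = 16.8  # m from release point to home plate (~55 ft)
    OFFSET = 0.5    # m from the batter's eyes to the ball's flight line

    def viewing_angle(t):
        """Degrees between straight-ahead gaze and the ball, t seconds after release."""
        d = max(RELEASE - SPEED * t, 0.01)  # ball's remaining distance to the plate
        return math.degrees(math.atan2(OFFSET, d))

    # Smooth-pursuit eye movements top out at very roughly 30-100 degrees per second.
    for t_ms in (0, 100, 200, 300, 400, 420, 435):
        t, eps = t_ms / 1000.0, 0.001
        rate = (viewing_angle(t + eps) - viewing_angle(t)) / eps  # deg/s the eyes would need
        print(f"t = {t_ms:3d} ms  angle = {viewing_angle(t):5.1f} deg  "
              f"needed eye speed = {rate:6.0f} deg/s")

Run it and the required tracking speed idles along for the first three-tenths of a second, then blows past anything smooth pursuit can manage in the final few hundredths, which is exactly when the curve is breaking hardest. The eyes fall behind, the ball slides into peripheral vision, and the illusion takes over.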

That’s why Bill’s curveball was so untouchable. And that’s why the unexpected curveball, which arrives later than you expect, sharpens its effect all the more and seems to just jump downward.

All of which reminds me of a great story Jane Leavy told in her splendid biography of Sandy Koufax. It’s the 1963 World Series, and Koufax is facing the terrifying Mickey Mantle. The book on Mantle is never, ever throw him the curve: even when you fool him badly, he’s so strong that he can still crush the ball with his body fooled and his hands not yet committed. So just don’t throw it to him. Koufax faces Mantle three times and throws all fastballs. (The best fastballs ever known.) Third time up, crucial situation, he gets two strikes on Mantle — and then shakes off the fastball sign, twice. The catcher catches on and puts down two fingers to call for the curve — which was a horrid thing, a nose-to-toes diver that just killed batters, the best curve in the game … but still, he’d been told NOT to throw this thing to Mantle. But he does. Ball comes in eye-high, then dives, crosses the plate at Mantle’s knees. Mantle flinches a bit but never moves. Called strike three. He stands there an extra second, then says to the catcher, “How the fuck is anybody supposed to hit that shit?” and walks back to the dugout.

Oh Sandy.


Depression’s wiring diagram

Overlap of brain areas affected by four historical psychosurgical depression treatments.

The ever-excellent Neurocritic has an interesting post looking at “lesion studies” of depression. As he notes, he was hoping for real lesions, from people who’d had actual psychosurgery, but had to settle for a simulation study that used MRIs. The study (Schoene-Bake et al., 2010) looked at the brain connections affected by the four different areas targeted in psychosurgeries meant to treat depression. According to the study, these four approaches overlapped in spots. And as the study put it, the

convergence of these shared connectivities may derive from the superolateral branch of the medial forebrain bundle (MFB), a structure that connects these frontal areas to the origin of the mesolimbic dopaminergic ‘reward’ system in the midbrain ventral tegmental area [VTA]. Thus, all four surgical anti-depressant approaches may be promoting positive affect by converging influences onto the MFB.

As Neurocritic notes, this finding overlaps heavily with experimental work in which Emory neurologist Helen Mayberg is targeting one particular connection along the MFB — an area called Area 25. Since I wrote on Mayberg’s study at length in the Times Magazine a few years ago, and then more briefly in a followup at Scientific American, I find this quite intriguing: Mayberg’s rigorous approach, in which two decades of work converged on Area 25, led her to an area that had been part of broader, less specific targets in earlier, cruder efforts, but which seems to be a more successful target. As I wrote in the Times Magazine article:

Mayberg … increasingly homed in on Area 25, which seemed crucial in both its behavior and its position in this network [implicated in depression]. [She] found that Area 25 was smaller in most depressed patients; that it lighted up in every form of depression and also in nondepressed people who intentionally pondered sad things; that it dimmed when depression was successfully treated; and that it was heavily wired to brain areas modulating fear, learning, memory, sleep, libido, motivation, reward and other functions that went fritzy in the depressed. It seemed to be a sort of junction box, in short, whose malfunction might be “necessary and sufficient,” as Mayberg put it, to turn the world dim. Maybe it could provide a switch that would brighten the dark.

So she got a neurosurgeon to slide some tiny brain-stimulating electrodes into Area 25, turned them on at a steady 4 volts, and found that this gentle stimulation of Area 25 actually calmed that hyperactive region — and relieved extreme, intractable depression in roughly 2 of every 3 patients. That is a high efficacy for depression, and the relief was sometimes immediate. (For the whole story, see the article.)

Neurologist Helen Mayberg's hand-drawn diagram of A25 connectivity involved in depression. Don't try the surgery using this.

As I note in my comment at Neurocritic’s post, Mayberg and others are now running larger, double-blind, placebo-controlled studies to test the treatment further.

In the comments at the Neurocritic post, Neuroskeptic (another ace neuroblogger, presumably slightly less critical than Neurocritic) wonders if Area 25 might be implicated in other mood disorders. If I remember correctly, Mayberg’s long analysis suggested not — and other efforts to treat OCD, depression, and other mood disorders by placing deep-brain stimulators elsewhere have not fared as well. (Though Mayberg is getting promising results in some early trials targeting Area 25 for bipolar disorder.) More historical lesion studies like the one done by Schoene-Bake and colleagues, and explored in Neurocritic’s cool post, might suggest additional targets. Then again, it might be that other disorders don’t have a single node as approachable and effective as Area 25 appears to be.

____

Top image: from Schoene-Bake, J., Parpaley, Y., Weber, B., Panksepp, J., Hurwitz, T., & Coenen, V. (2010). Tractographic Analysis of Historical Lesion Surgery for Depression. Neuropsychopharmacology DOI: 10.1038/npp.2010.132

Lower image: Courtesy Helen Mayberg