Talk Therapy or Pill? A Brain Scan May Tell What’s Best

Figure 2. Potential treatment-specific biomarker candidates. Mean regional activity values for remitters and nonresponders, segregated by treatment arm, are plotted for the 6 regions showing a significant treatment × outcome analysis-of-variance interaction effect. Regional metabolic activity values are displayed as region/whole-brain metabolism converted to z scores. Regions match those shown in Table 2. Escitalopram was given as escitalopram oxalate. CBT indicates cognitive behavior therapy. Figure courtesy of the authors and JAMA Psychiatry.

In a new paper out yesterday in JAMA Psychiatry, a team led by Emory University neurologist Helen Mayberg, whom I’ve written about several times, identifies a possible biomarker for predicting whether a depressed patient will respond better to an antidepressant or a type of talk therapy called cognitive behavioral therapy, or CBT. As the paper says, “if confirmed with prospective testing, this putative TSB [treatment-specific biomarker] has both clinical and pathophysiological implications” — that is, it might help improve and speed treatment while revealing physiological differences between two different strains of depression.

The 63 patients in the trial were all pretty sick, with depression scores averaging about 19 on the Hamilton depression scale. (0 to 7 is normal; 8 to 13 is mildly depressed; and if you’re at 20 you’re in a very dark place.) Each patient underwent a 40-minute session in a PET (positron emission tomography) scanner, during which they lay awake, eyes closed, and were asked not to ruminate on any one subject the whole time. The scanner tracked glucose metabolism in their different brain areas during that time — a measure taken as a proxy for brain activity. After that, roughly half the patients took a 12-week course of standard doses of the antidepressant Lexapro, while the other half got 16 sessions, roughly weekly, of cognitive behavioral therapy — a talk therapy aimed at learning to rework negative loops of thought, and the talk therapy with the best-documented record of effectiveness.

When all this was over, the researchers went back and analyzed the pre-therapy brain scans to see if they could find anything distinguishing the patients who responded to Lexapro from those who responded to CBT.

They did. A brain area called the anterior insula, which is involved in many brain functions, was busier than normal in the patients who later responded to Lexapro and quieter than normal in those who responded to CBT. To put it another way: patients with high insula activity tended to respond better than most depressed patients do to Lexapro but worse to CBT, while patients with low insula activity responded better than most to CBT but worse to Lexapro. (Alas, no particular pattern in the insula or elsewhere marked the patients who didn’t respond to whichever treatment they got.)
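To make that logic concrete, here’s a minimal sketch of the decision rule the finding implies, in Python. The zero cutoff on the whole-brain-normalized insula z score (the measure plotted in Figure 2 below) and the function name are my own illustration, not the authors’ validated method.

```python
def suggest_treatment(insula_z: float) -> str:
    """Toy decision rule implied by the finding; illustrative only.

    insula_z: anterior insula metabolism relative to whole-brain
    metabolism, expressed as a z score (as in Figure 2).
    The zero cutoff is an assumption for illustration, not a
    clinically validated threshold.
    """
    if insula_z > 0:   # hyperactive insula: better odds with the drug
        return "escitalopram (Lexapro)"
    return "CBT"       # hypoactive insula: better odds with talk therapy

print(suggest_treatment(1.2))   # -> escitalopram (Lexapro)
print(suggest_treatment(-0.8))  # -> CBT
```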

Figure 1. Study design and outcomes. Outcome groups defined by Hamilton Depression Rating Scale (HDRS) scores. Remission was defined as an HDRS score of 7 or less; partial response, as an HDRS score decrease of more than 30% but without achieving remission; and nonresponse, as an HDRS score decrease of 30% or less. Escitalopram was given as escitalopram oxalate. CBT indicates cognitive behavior therapy; PET, positron emission tomography. Courtesy of the authors and JAMA Psychiatry.
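Those outcome definitions translate directly into a classification rule. A small Python sketch, using only the thresholds stated in the caption (the function name and the example numbers are mine):

```python
def classify_outcome(hdrs_baseline: float, hdrs_final: float) -> str:
    """Classify a trial outcome using the Figure 1 definitions."""
    if hdrs_final <= 7:
        return "remission"
    pct_decrease = 100 * (hdrs_baseline - hdrs_final) / hdrs_baseline
    if pct_decrease > 30:
        return "partial response"  # improved by more than 30%, but not to remission
    return "nonresponse"           # improved by 30% or less

# A patient starting at 19 and ending at 12 improved ~37%:
print(classify_outcome(19, 12))    # -> partial response
```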

If this holds up — if PET scans can reliably predict which patients will respond to which therapy — it could spare patients much suffering and much time: time that is often critical, and a search that is frustrating, even dangerous, as clinicians try treatment after treatment to ease a depressed patient’s despair. Such trial and error is the norm; clinicians treating depression must often try several therapies, each for weeks or months, before finding one that works (if indeed any does). A PET scan that shortcut this process could save much grief, as well as substantial time and money.

This study, as Mayberg is quick to note, needs to be replicated by larger studies if it’s to be useful. Mayberg herself wants to run a study that treats half the patients at random with either CBT or Lexapro and half according to which type of insula activity they show — that is, within that second half of the study, low-insula-activity patients would get CBT while high-insula-activity patients would get Lexapro. That would directly test the predictive power of these scans: if they’re actually useful, the scan-guided patients in the study’s second half would get better results than the control group assigned a therapy at random. Say, 70% of them might recover, instead of the roughly 50% that most therapies struggle to reach.
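To see what that comparison would show, here’s a toy Monte Carlo sketch of the proposed two-arm design. The 70% and 50% remission rates are the hypothetical numbers from the paragraph above, not data, and the arm sizes are arbitrary:

```python
import random

def simulate_trial(n_per_arm: int = 100, p_guided: float = 0.70,
                   p_random: float = 0.50, seed: int = 1) -> tuple[float, float]:
    """One simulated run of the proposed two-arm design (toy numbers)."""
    rng = random.Random(seed)
    guided = sum(rng.random() < p_guided for _ in range(n_per_arm))   # scan-guided arm
    control = sum(rng.random() < p_random for _ in range(n_per_arm))  # random-assignment arm
    return guided / n_per_arm, control / n_per_arm

guided_rate, control_rate = simulate_trial()
print(f"scan-guided: {guided_rate:.0%}, random assignment: {control_rate:.0%}")
```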

That, she says, might open the door to more precise treatment. “We’d finally have a way to discriminate the biology. You’d know you should use treatment A instead of B.” She’s waiting to hear whether her grant application to do such a study will get approved.

I’ll probably write more on this later, for there are layers and layers to this story, and many implications; this finding arises not from a whim but from a couple of decades of work by Mayberg and others trying to characterize the brain dynamics of depression — a body of work that shows both the potential and the difficulties of creating a brain-based psychiatry.

PS 6/14/13: Wanted to add a +1 to this note from Neurocritic’s sharp, smart write-up of this study:

With the newly prominent nattering nabobs of neuroimaging negativity, it’s important to remember that it’s not all neuroprattle and bunk. Some of this research is trying to alleviate human suffering.

Seconded. In the years I’ve reported on and written about Mayberg’s work (see the links below), I’ve always been impressed by the fierceness of her focus on helping patients, and in particular on relieving the strange, torturous pain of depression.

Other coverage:

The Neurocritic: A New Biomarker for Treatment Response in Major Depression? Not Yet.

Brain scan predicts best therapy for depression : Nature News & Comment

No dishonour in depression : Nature News & Comment

Study Helps Predict Response to Depression Drug – WSJ.com

and some of my earlier work on Mayberg and/or the neurology of depression here:

A Depression Switch? – New York Times

Depression’s wiring diagram

Optogenetics Relieves Depression in a Mouse Trial 

Is Psychology Stuck In a Paradigm Shaft?

The Hole in My Brain: Amnesia’s Lessons About Memory, Depression, and Love

Rachel Maddow Gets Depressed

*Standard competing-interest info: Some of the authors report consulting relationships with pharmaceutical companies; other authors practice CBT. Mayberg declares a consulting relationship with a maker of neuromodulation instruments used in the deep-brain-stimulation treatments she has tested in other studies. See the COI disclosures in the study for details. I’ve written about Mayberg several times before, as noted above.

Photo appearing on homepage: Self-portrait by ndanger, licensed under a Creative Commons Attribution-ShareAlike license (some rights reserved).

6 responses

  1. Good post.

    Mayberg’s proposed follow-up study is interesting, but there’s a potential pitfall. You write that

    “This study, as Mayberg is quick to note, needs to be replicated by larger studies if it’s to be useful. Mayberg herself wants to run a study that treats half the patients at random with either CBT or Lexapro and half according to which type of insula activity they show — that is, within that second half of the study, low-insula-activity patients would get CBT while high-insula-activity patients would get Lexapro.”

    The trouble is that this might not be a well-controlled trial. It needs to be blinded such that neither the patients nor the clinicians know whether a given treatment was assigned randomly or by PET scan.

    The placebo effect of being told “this pill is recommended by a brain scan” is going to be a lot bigger than if you were told “we picked this pill by flipping a coin”.

    The solution is simple – give everyone a PET scan, then randomize half of them to get the “right” treatment based on the scan, and half to get the “wrong” treatment (i.e. CBT if the scan recommends SSRI).
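    A minimal sketch of that allocation scheme, in Python (the treatment labels and function name are illustrative, not from the comment or the paper). Everyone gets scanned, so both arms can truthfully be told a scan was part of the process, which matches the placebo boost across arms:

    ```python
    import random

    def allocate(scan_recommendation: str, rng: random.Random) -> str:
        """Randomize to the scan-'right' or scan-'wrong' treatment, 50/50.
        Every patient was scanned, so both arms can truthfully be told the
        scan informed the process; the placebo effect is matched."""
        other = "CBT" if scan_recommendation == "SSRI" else "SSRI"
        return scan_recommendation if rng.random() < 0.5 else other

    rng = random.Random(0)
    print([allocate("SSRI", rng) for _ in range(4)])  # mix of 'right' and 'wrong' arms
    ```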

    A 2010 trial of a different biomarker for depression treatment prediction suffered from this flaw, and I blogged about it at the time. A few months later someone wrote to the journal saying the same thing.

    • Good point, Neuroskeptic. My description of her plans was brief and based on a brief conversation, so it’s quite possible my account left out details or even essentials about such controls, and that she plans on blinding as much as possible.

      I did, in fact, omit that she described a study in which there would be a) a group of patients who were scanned and then given the treatment (CBT or Lexapro) that the scan purportedly recommended; and b) a large control group of patients with similar symptoms who were scanned and then given either CBT or Lexapro at random (rather than according to which treatment their scans seemed to recommend). The clinicians receiving the patients for treatment wouldn’t know whether they got a study-group patient or a control-group patient, and neither would the patients, for that matter. Easy enough.

      I would also guess that the NIMH would be interested in funding controlled, properly blinded studies by other teams, since that’s always best. Mayberg’s DBS surgery for depression, for instance, is being followed up and expanded not just in her lab, where she’s working on refining targets and response prediction, but in larger, double-blind trials elsewhere using scores of patients.

  2. I am bothered by at least two aspects of this report. First, the authors cherry-picked the data. They excluded analyses of nonremitters who nevertheless improved with treatment. Misleadingly, they labeled such cases partial responders, though for all we know many of these would meet the customary criterion of response (reduction of HDRS score by 50% and a final HDRS score of 10 or less). They also excluded early terminators (dropouts), so this is not an ecologically valid predictive study. Overall, 40 of 82 randomized cases were excluded from consideration. The authors rationalized this approach by stating that these 40 cases were not included in the analyses “to avoid potential dilution of either the remission or the nonresponse groups.” JAMA Psychiatry should have required the authors to present the data on these 40 cases. We would wish to inspect those data for an intermediate position between the remitters and the nonresponders. Has mere regression to the mean been considered? After all, the mean effect size for the putative biomarker was identical in all remitters versus all nonresponders.
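    For comparison with the study’s own definitions, that customary response criterion can be written out the same way; a sketch (the function name is illustrative, the thresholds are those stated above):

    ```python
    def customary_response(hdrs_baseline: float, hdrs_final: float) -> bool:
        """The customary 'response' criterion cited above: HDRS score
        reduced by 50% or more AND a final score of 10 or less."""
        halved = (hdrs_baseline - hdrs_final) >= 0.5 * hdrs_baseline
        return halved and hdrs_final <= 10
    ```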

    Second, these data are uncorrected for spontaneous improvement. We know all too well that spontaneous improvement rates (placebo response rates) can exceed 50% in depressed outpatients, and especially in symptomatic volunteer populations: an unstated number of these cases were recruited by advertising rather than being clinically referred. That also calls into question the ecological validity of the study. Before proceeding to speculate, as these authors did, about potential pathophysiology or association with genetics or transmitters, etc., there is a need to establish an association of the biomarker with specific or placebo-corrected treatment response. Until the authors address these matters their report will not be taken seriously.

    • Thanks for writing.

      As I remember Dr. Mayberg explaining to me (it’s possible I have this wrong; my notes aren’t before me), the study set aside partial responders because the team sought to identify patients who reached full remission (a Hamilton score of 7 or less), since partial responders in short trials like this — people who lower their Hamilton scores but don’t get under 8 — often drift back up. In other words, they were out to see how well the scans would predict reliable, full remission.

      Again, it’s worth noting that the authors stress that this needs prospective, double-blind testing at larger scale; thus their use of the phrase “putative” marker to describe their finding.

      On the spontaneous responders: Perhaps this lies beyond the reach of a small study like this? I confess I’m less familiar with how one guards against that, although obviously one must, especially in any larger study.

      To cover this and the other noted caveats, a larger study is much needed — and as I note above, Mayberg has applied for funding to do one that could be more thoroughly controlled, blinded, and so on. She is careful to stress this need herself: it was the first and last thing she noted in our conversation, and she stresses it in the paper as well.

      Science is incremental, and small is naturally how one starts with studies of this nature, no? In fact, you could argue that starting with a really big study of such a bold idea would be an irresponsible use of resources, time, and funding.

      This gets into the difficulty of how to respond to early studies in such an incremental venture. We must find a way, in both science and journalism, to demand rigor and relevance in large studies, especially studies on which treatment will be based, while recognizing the limitations and tentative nature of early efforts — without dismissing them. I’m sure there are indeed ways this (or almost any) study can be improved, and perhaps you’ve hit on some here. To me, a small study like this is fine as long as the researchers don’t overhype it. This one got a lot of attention, but I don’t feel either Mayberg or the NIMH hyped it as a done deal or anything close to conclusive proof; indeed, both Mayberg and the NIMH people I talked to were careful to do otherwise, and to stress several times that this is an “if proven in bigger studies” sort of finding.

      That said, criticisms and caveats such as yours are of value when aired, as they may help not just Mayberg’s team and anyone else testing this hypothesis, but also others starting in on similar ventures.

      • I don’t give much credence to their explanation for the cherry-picking, because the authors presented no data on the durability of remission. They just got the patients over the finish line at 12 weeks and that was that… no follow-up. It sounds like ex post facto talk.

        Rigorous design in pilot studies is the best way to optimize rigor and relevance in large studies.

  3. Pingback: Biomarker Brain Scans for Treatment Types
