Turns out one of the world’s ugliest creatures, the naked mole rat, does not get cancer, even if you try really hard to make it happen.
The coverage is fabulous:
Ed Yong, at National Geographic’s Phenomena:
Put aside their inability to feel pain in their skin, their tolerance for chokingly low oxygen levels, their bizarrely rubbish sperm or their poor temperature control. Don’t even think about how they live in ant-like colonies, complete with queens and workers. Ignore their ability to live for more than 30 years—an exceptional lifespan for a rodent of their size.
Instead, let’s talk about the cancer angle.
They don’t get it.
Carl Zimmer at the Times:
Lab mice are especially prone to cancer, for example; 47 percent of them develop tumors of one sort or another. Naked mole rats, on the other hand, have a profoundly different sort of life. They can live more than 30 years, and scientists have yet to find a single mole rat with cancer.
To understand this phenomenon, scientists have examined the naked mole rats’ cells. They’ve infected them with viruses that reliably trigger cancer in mouse cells, finding that their efforts fail utterly in naked mole rat cells.
Ewen Callaway at Nature:
Naked mole rats (Heterocephalus glaber), which are more closely related to porcupines than rats, are freaks of nature. The short-sighted creatures spend their lives in subterranean colonies in the service of a single breeding queen — H. glaber is one of only two ‘eusocial’ mammals ever discovered. The rodent doesn’t feel the sting of acids or the burn of chilli peppers, and seems to be the only mammal that is unable to regulate its body temperature.
However, the animal’s longevity and impunity to cancer are the reason why biologist Andrei Seluanov keeps around 80 naked mole rats in a special facility near his lab at the University of Rochester in New York state. The rodents have been known to live for up to 32 years, and scientists have never seen one with cancer. Mice, by comparison, rarely live past the age of four and do often die of cancer.
The only real downside was when someone told Yong that mole rats look like penises. Photo above. Judgment call.
Updated 06/17/2013 11:16am EDT (see tail end of story)
James Gilbert has the goods over at The Conversation:
Climate sceptics have won, Martin Wolf lamented in the Financial Times, despite near-universal scientific consensus against them. The sheer longevity of this “debate” indicates deniers attract disproportionate attention – partly due to one of their main lines of attack: scientific bias.
Attacks on scientists’ financial and political motivation are increasing. We hear them not only from committed deniers, but also from commentators in mainstream media. Even politicians and US presidential candidates are unafraid to label climate science a “hoax”.
Now, a new study in press at the Quarterly Journal of Experimental Psychology has shown that the public is particularly sensitive to financial bias when placing trust.
Brent Strickland of Yale University and Hugo Mercier of the National Centre for Scientific Research in France asked non-scientists to evaluate scientific studies. Participants decided whether they believed the results of experiments, given what experimenters expected to find, their financial motivation and whether the methods were sound. The results showed that mentioning financial bias reduced people’s belief in findings, even if the methods were flawless.
via I bet it’s biased: one easy step to squash expert opinions.
Update: A robust Twitter conversation erupted in response to Gilbert’s post. Katie Mack, aka @AstroKatie, assembled and annotated it in a nice Storify, a bit of which is here:
Rosemary gets us to the point…
Rosemary White (@RoseGWhite): @AstroKatie @james_gilbert OK, can see this. So what’s the solution or set of solutions?
Solution: Teach the scientific method?
Katie Mack (@AstroKatie): @RoseGWhite @james_gilbert I might suggest: WAY more popular-level communication of scientific method (instead of just scientific results).
James Gilbert (@james_gilbert): @AstroKatie @RoseGWhite Agree. There are “camps” of scientists, yes. But science itself is not a camp; it is a process – the best we have.
For more, go straight to Katie Mack’s Storify amalgamation of the Twitter exchanges amongst her, Gilbert, and others.
Video: Erick Erickson shoots down the data by saying scientists too can have ulterior motives … and then moves on. NB: Lou Dobbs (also in video) is no relation.
The fabulous writing how-to site The Open Notebook recently asked a bunch of writers what their single best piece of writing advice was. My 58-second answer had to do with how to end a story:
Single Best Dobbs from The Open Notebook on Vimeo.
As I note in the interview, I picked up this nugget from Atavist co-founder Evan Ratliff, who suggested it to me while I was writing (and he editing) My Mother’s Lover, my account of my mother’s secret WWII romance, which went on to become a #1-selling Kindle Single.
This and much more writerly goodness is at The Open Notebook.
Earlier, in the wicious pride of my youth, I sometimes threw myself into postures, imitating writers I admired and producing a certain amount of Proust and water (the recipe for the Avignon lark pâté comes to mind: one lark, one horse) to Joyce and very small beer; but none of this survived the war, and by the time I was writing Testimonies, for example, I was setting down what I had to say in the words and with the rhythm that seemed right to what might perhaps be called my inner ear, and doing so without any immediate debt to anyone.
via Paris Review – The Art of Fiction No. 142, Patrick O’Brian, which is rich pleasure from the very start. For more of my favorite bits from the interview, see my Tumblr.
In a new paper out yesterday in JAMA Psychiatry, a team led by Emory University neurologist Helen Mayberg, whom I’ve written about several times, identifies a possible biomarker for predicting whether a depressed patient will respond better to an antidepressant or a type of talk therapy called cognitive behavioral therapy, or CBT. As the paper says, “if confirmed with prospective testing, this putative TSB [treatment-specific biomarker] has both clinical and pathophysiological implications” — that is, it might help improve and speed treatment while revealing physiological differences between two different strains of depression.
The 63 patients in the trial were all pretty sick, with depression scores averaging about 19 on the 26-point Hamilton depression scale. (1 to 7 is normal; 8 to 13 is moderately depressed; and if you’re at 20 you’re in a very dark place.) Each patient underwent a 40-minute session in a PET (positron emission tomography) scanner, during which they lay awake, eyes closed, and were asked not to ruminate on any one subject the whole time. The scanner tracked glucose uptake in their different brain areas during that time — a measure assumed to be a proxy for significant brain activity. After that, roughly half the patients took a 12-week course of standard doses of the antidepressant Lexapro, while the other half got 16 sessions, roughly weekly, of cognitive behavioral therapy — a talk therapy aimed at learning to rework negative loops of thought, and the one with the best-documented and highest rates of effectiveness.
When all this was over, the researchers went back and analyzed the pre-therapy brain scans to see if they could find anything distinguishing the patients who responded to Lexapro from those who responded to CBT.
They did. A brain area called the anterior insula, which is involved in many brain functions, was busier than normal in the patients who later responded to Lexapro and less active than normal in the patients who responded to CBT. To put it another way: patients with high insula activity tended to respond better than most depression patients to Lexapro but worse to CBT, while low-insula-activity patients responded better than most depression patients to CBT but worse to Lexapro. (Alas, no particular pattern in the insula or elsewhere marked the patients who didn’t respond to whichever treatment they got.)
If this holds up — if PET scans can reliably predict which patients will respond to which therapy — then it will save patients much suffering and time, including time that is often critical and especially frustrating, even dangerous, as clinicians try different treatments to ease a depressed patient’s despair. Such trial and error is hardly unusual; clinicians treating depression must often try several different therapies, often for weeks or months, before finding one that works (if indeed any works at all). A PET scan that helped shortcut this could save much grief, as well as substantial time and money.
This study, as Mayberg is quick to note, needs to be replicated by larger studies if it’s to be useful. Mayberg herself wants to run a study that treats half the patients at random with either CBT or Lexapro and half according to which type of insula activity they show — that is, within that second half of the study, high-insula-activity patients would get Lexapro while low-insula-activity patients would get CBT. That would directly test the predictive power of these scans: if the scans are actually useful, the targeted-treatment patients in the study’s second half would get better results than the control patients in the first half, who were assigned a therapy at random. So 70 percent of them might recover, instead of the usual 50-ish percent that most therapies struggle to reach.
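To make the logic of that comparison concrete, here is a back-of-envelope sketch (mine, not the researchers’) of the proposed design. Everything in it is invented for illustration: the simulate_arm helper, the 200-patient arms, and the matched and mismatched response probabilities, which I simply chose to echo the rough 70-percent-versus-50-ish figures above.

```python
# A rough sketch, not the study's analysis: simulate a trial in which one arm
# is assigned treatment at random and the other is assigned according to
# insula activity. All numbers below are invented for illustration.
import random

random.seed(0)

# Hypothetical response probabilities: a patient matched to the treatment
# their scan predicts (high insula -> Lexapro, low insula -> CBT) responds
# more often than a mismatched patient.
P_MATCHED = 0.70
P_MISMATCHED = 0.35

def simulate_arm(n_patients: int, targeted: bool) -> float:
    """Return the fraction of simulated patients who respond to treatment."""
    responders = 0
    for _ in range(n_patients):
        # In the targeted arm every patient gets the treatment their scan
        # predicts; in the random arm a coin flip matches only about half.
        matched = targeted or random.random() < 0.5
        if random.random() < (P_MATCHED if matched else P_MISMATCHED):
            responders += 1
    return responders / n_patients

print(f"Random-assignment arm: {simulate_arm(200, targeted=False):.0%} respond")
print(f"Scan-guided arm:       {simulate_arm(200, targeted=True):.0%} respond")
```

If the scan really carries predictive information, the scan-guided arm should consistently beat the random-assignment arm, and that is the comparison Mayberg’s proposed study would make with real patients.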
That, she says, might open the door to more precise treatment. “We’d finally have a way to discriminate the biology. You’d know you should use treatment A instead of B.” She’s waiting to hear whether her grant application to do such a study will get approved.
I’ll probably write more on this later, for there are layers and layers to this story, and many implications; this finding arises not from a whim, but from a couple decades of work by Mayberg and others trying to characterize the brain dynamics of depression — a body of work that shows both the potential and the difficulties of creating a brain-based psychiatry.
PS 6/14/13: Wanted to add a +1 to this note from The Neurocritic’s sharp, smart write-up of this study:
With the newly prominent nattering nabobs of neuroimaging negativity, it’s important to remember that it’s not all neuroprattle and bunk. Some of this research is trying to alleviate human suffering.
Second that. In the years I’ve reported on and written about Mayberg, keeping up with her work frequently (see below), I’ve always been impressed with the fierceness of her focus on helping patients, and in particular on relieving the strange torturous pain of depression.
Other coverage:
The Neurocritic: A New Biomarker for Treatment Response in Major Depression? Not Yet.
Brain scan predicts best therapy for depression : Nature News & Comment
No dishonour in depression : Nature News & Comment
Study Helps Predict Response to Depression Drug – WSJ.com
and some of my earlier work on Mayberg and/or the neurology of depression here:
*Standard competing interest info: Some of the several authors report consulting relationships with pharmaceutical companies; other authors practice CBT. Mayberg declares a consulting relationship with a maker of neuromodulation instruments, for deep-brain stimulation treatments that she has experimented with in other studies. See the COI disclaimers in the study for details. I’ve written about Mayberg several other times, as noted above.
Photo appearing on homepage: Self-portrait by ndanger; license: some rights reserved.
A while back I wrote about my experience being shaken down for over $4,000 when I had to take my daughter for a simple x-ray after she hurt her foot while we were vacationing. The x-ray was negative, but the charges — for a simple 3-view x-ray, an Ace bandage, and a pair of crutches — were over $4,300. Such charges are illegal under federal laws that limit charges for out-of-state emergency visits to what it would cost to stabilize the patient in her home state, but this hospital, as they say down in Texas, flat did not care. It plunged ahead, ignoring my insurer’s offered settlement and instead sending me threats that it would report me to collection agencies if I didn’t pay up. Someone familiar with such claims told me this is common; many hospitals routinely ignore this regulation, wave aside the insurer’s silly babbling about paying rates it considers too low, and harass the patients till they pay up what they can.
The hospital has now finally agreed — what d’ye know? — to actually file the claim with the insurer. In the meantime, however, they also sent the account to a collection agency because, they wrote, I had “failed to respond” to their attempts to collect. Apparently sending them your full insurance coverage information is a failure to respond. I’m now waiting to see whether they’ll accept the payment — and undo any damage they’ve done to my credit rating. All this over a sprained foot.
Apparently I’m not alone. A new study, small but clear, finds that patients who must seek care out of network regularly face bills far larger than they should.
Four themes characterize the perspective of individuals who experienced involuntary out-of-network physician charges: (1) responsibilities and mechanisms for determining network participation are not transparent; (2) physician procedures for billing and disclosure of physician out-of-network status are inconsistent; (3) serious illness requiring emergency care or hospitalization precludes ability to choose a physician or confirm network participation; and (4) resources for mediation of involuntary charges once they occur are not available.
In plain English, that means that the system is opaque, the billing procedures and prices are all over creation, and people who have no choice but to seek care are mercilessly dunned for the money.
via Patient Experiences with Involuntary Out-of-Network Charges.
Image: Dark Alley, by deryckh, via flickr; license: some rights reserved.
All mice groom themselves to keep their fur clean, but some in a lab at Columbia University in New York have started grooming to an unusual and excessive degree. This isn’t vanity. Instead, it’s the rodent equivalent of the repetitive rituals that many people with obsessive-compulsive disorder (OCD) go through, like an irresistible urge to wash their hands or clean themselves.
But as Ed Yong relates, researchers have cured this OCD — in mice — using a wee blue light to stimulate a wee tiny part of the brain; and another bunch of researchers have found they can use the light to buzz yet another wee brain area and make the compulsive grooming happen.
Again — in mice. Which we ain’t. But go read this thing: Yong, as usual, delivers all the caveats, as well as the full measure of wonder.
Ed Yong on Making and Breaking Compulsive Behaviour – Phenomena: Not Exactly Rocket Science.
From Today’s Times: Grouping Students by Ability Regains Favor With Educators:
Though the issue is one of the most frequently studied by education scholars, there is little consensus about grouping’s effects. Some studies indicate that grouping can damage students’ self-esteem by consigning them to lower-tier groups; others suggest that it produces the opposite effect by ensuring that more advanced students do not make their less advanced peers feel inadequate. Some studies conclude that grouping improves test scores in students of all levels, others that it helps high-achieving students while harming low-achieving ones, and still others say that it has little effect.
To me it seems obvious that judicious grouping and teaching by ability is a good idea, for precisely the reasons that advocates outline in this article. Yet I’m flummoxed that in a country that has been debating this issue for two decades, as tens of millions of schoolchildren go through the system each year, we apparently lack reliable data on whether it works, or on what makes the difference between making it work and having it fail. I recognize there are advantages to having many different school districts try many different things. But I can’t help but think we’re hampered by the fragmentation of the U.S. education system and what seems a lack of national-scale programs that can run controlled classroom experiments in such a vast population. We’re running lots of little experiments but have no way to compare their results. We have tens of thousands of experiments with sample sizes of 15 or 25, but nothing to show for it but anecdotes of the sort related in this article. As a result, every district hashes this out in its own clumsy way.
The U.S. education system much resembles the U.S. healthcare system that way: massive numbers, but very little data on what works.
John Hawks, the funny, fearless, adventurous anthropologist who writes one of the richest blogs in all academia, recently read an editorial at Current Biology that, “wishy-washing its way through a non-opinion about the value of blogging in science,” worries that blogging opens the door to “criticism [that] can be harmful.” Better, the editorial writer suggested, to limit discussion of science to peer-reviewed responses.
I have little patience for the risk-averse culture of academics.
The bottom line is: People need to decide if they want to be heard, or if they want to be validated. I have long been an associate editor at PLoS ONE, and once I edited a paper that received a lot of critical commentary. That journal has a policy of open comment threads on papers, so I told disgruntled scientists to please write comments. The comments appear right with the article when anybody reads it, they appear immediately without any delay, and they can form a coherent exchange of views with authors of the article and other skeptical readers.
Some of the scientists didn’t want to submit comments, they wanted to have formal letters brought through the editorial review process. “Why?” I wrote, when you could have your comments up immediately and read by anyone who is reading the research in the first place? If you want to make an impact, I wrote, you should put your ideas up there right now.
They replied, “How would you feel if someone published something wrong about Neandertals? Wouldn’t you want to publish a formal reply?”
I wrote: “In that case, I would probably get a blog.”
What is the difference between being heard and being validated? It’s whether you are contributing to the solution or to the hindsight.
Get over to john hawks weblog to take in the whole thing. And bookmark that sucka, explore it. Seriously. If you follow only 10 science blogs — hell, only 5 — Hawks’ should be one of them.