A Funny Arsenic Smell Upstream — What questions is it fair to ask about squishy science?

Are we squeezing everything we should out of the arsenic story? Some would say so. I’m not so sure.

In a quick post-mortem yesterday on the Mono Lake bacterium, Brian Reid neatly ticks off how the “arsenic soap opera,” as he put it, “illustrates five trends in health and science communication that are likely to grow even more pronounced over time.”

He lists:

  • a longer lifespan for media discussion of research
  • the spread of scientific commentary to more channels
  • increasing scrutiny of embargoes and their use
  • a hit to the trust people have in “gatekeepers” such as peer reviewers

Along with these (on which I think he’s right), Reid calls out one more that has gone mostly overlooked:

  • “Upstream” Stories Are Becoming More Attractive: Earlier this year, I wrote a piece arguing that the future of medical/science journalism lies, in part, with more stories that are focused less on final results — which are never as definitive as they seem — and more on the process by which researchers do their work. The arsenic story, with its disputed conclusions, illustrates the risk of looking only at results, rather than the broader context of the overall research.

Alice Bell has been calling for just this sort of upstream story, and along with observers like Martin Fenner and Wired UK’s David Rowan, I agree we need to pay more attention to what’s upstream of scientific results. That’s one reason I like writing long features about science: It gives you room to examine not just product but process. Looking inside the factory helps you better understand what comes out. It’s also fascinating; you get to watch people struggle to overcome all sorts of obstacles, including themselves, to do good work.

In this case, the upstream material includes not only the work done by Wolfe-Simon and colleagues but the peer-review process. And the latter remains quite opaque. Yesterday at the American Geophysical Union, a panel hosted by journalist David Harris examined the whole arsenic-bug affair, and Alexandra Witze, in rapid-fire live-tweet coverage of the session, indicated that Charlie Petit, a smart, seasoned science journalist who now runs the Knight Science Journalism Tracker at MIT, said that peer review worked fine in this case.

Given the room for error in contemporaneous live-tweet reporting, Petit may have said something more nuanced. But my own reporting on peer review and on this paper* leaves me thinking that Science slipped up in its review. Many people who know the field have told me and others that they felt Science should have asked for a mass spectrometry assay of the bug’s DNA. Such an assay would have been valuable, for (so I’m told) it could have shown for certain whether the DNA contained no arsenic; and if it did turn up arsenic, it would have provided strong but not definitive evidence that the DNA used it. That is, it could have definitively proven the researchers wrong in their assertion that the DNA included arsenic, but could not have proven them absolutely right. (This, I’m told, is because there’s essentially room for a false positive result for the arsenic but not for a false negative.)

A mass spec, in short, was a critical test that was doable within a week to a month and that would have answered many of the questions now raising doubts about an extraordinary claim. To request or demand it would not have been extraordinary; in fact, it would apparently have been asking for ordinary but decisive evidence about an extraordinary claim — just the sort of thing you’d want in order to prevent all this kerfuffle.

And why not ask? It’s not as if the authors would have taken the paper elsewhere. They wanted a top journal — and no way is NASA going across the pond to Nature with this paper. (Neither would they have pulled it from Science and gone elsewhere if Science had insisted; but even if the researchers had suggested they would, that’d be no reason for Science to give in.) Perhaps there’s some good reason, or even a bad reason that looked good at the time, that Science didn’t ask for a mass spec. But as yet we’ve heard no reason at all. We just know that something fell out of the canoe upstream, and the only people who saw it aren’t talking. I’d love to hear a good explanation, and if I do, I’ll shut up about the peer review. Till then I’ll consider publishing this paper without more evidence a mystifyingly bad decision, one that should be explained so we all understand what went amiss.

How else might we go upstream here? We could learn more about the research itself, and I’d hope someone is working on that. (If I were a freelancer in the US with time on my hands right now, I’d be pitching that story ferociously.) I find it intriguing — both disturbing and sort of charming — that despite the numerous problems raised about this paper, most critics, especially among journalists, have so far been pretty gentle about pressing for details about how, for instance, the researchers’ theoretical assumptions may have created faults in their methods. (Possibly I’ve missed such speculation; if so, give a shout in the comments or at davidadobbs AT gmailDOTcom.) I understand and respect the calls to not get personal. And indeed there’s no reason to get personal in the sense of questioning character. But it does make sense to look at how assumptions and beliefs and theoretical frameworks can create mistakes or lapses, because seeing how that happens helps others avoid similar mistakes. It’s like learning to rinse your beakers thoroughly because failing to do so once embarrassed you; it’s a teachable moment.

As I said in the Guardian podcast the other day, it’s fine to have hunches or even to want to see a certain result — as long as your work looks like that of someone who doesn’t share your assumptions. You should in fact look like you’re trying to disprove your point — because only by trying and failing to falsify it can you show that it holds up.

This doesn’t look like that sort of work, and it’s perfectly legitimate to ask why: to ask what went wrong, and to ask whether and why assumptions or hubris or haste or just plain bad luck may have led to error or premature publication. (Note I am NOT talking about fraud or even its faintest whiff here; I’m talking about the good old human fallibility that we’re all prey to.) It was, after all, our money being spent. But more important, asking what went funny here can teach everyone valuable lessons. It can be hard to paddle upstream. You’re sure to meet resistance. But sometimes you just need to go there.

*For a feature I’m writing on open science and science publishing, I’ve had many conversations with journal editors and others about scientific publishing and peer review over the past few months, and I’ve had quite a few conversations and done much reading on this paper over the last couple of weeks.

Image: Big Muddy River, Murphysboro, Illinois. Courtesy National Weather Service