Hauser wake cont’d: Could the hivemind prevent fraud & misconduct?

As the ripples from the Hauser case spread, some of the feedback on my earlier post casts his position in a poor light.

From an SRPoole:

Hauser was investigated because his students accused him of *fabricating* data. Plus, the co-authors of the *Cognition* paper in question say that Hauser alone “collected and analyzed” the data. He alone is responsible. This is embarrassing to Harvard, and if it were just a matter of bookkeeping or sloppiness, it would have been dealt with quietly and we probably wouldn’t have heard about it. Also under question is a paper dating all the way back to 1995: Gordon G. Gallup Jr. of the State University of New York at Albany asked Dr. Hauser for videotapes of an experiment in which cotton-top tamarins were said to recognize themselves in a mirror. Gallup could see no evidence for this.

NIH and NSF should investigate now; when they are done, they will issue a report. When they do, this is likely to prove a case of data fabrication, the worst type of scientific misconduct. How can any of Hauser’s work be trusted after that kind of violation? Cheaters never cheat just once.

From a MonkeySkeptic:

There has been a long-standing suspicion among primatologists about Hauser’s work. The results have just turned out too good, and they have always supported the hypotheses. Some professional primatologists wonder how Hauser’s group can publish work with minimal or weak support while others’ papers are rejected. People who have seen Hauser at meetings report that he is defensive and dismissive of criticism. All of this suggests a deep underlying problem. I just hope that the whistle-blowers do not suffer for their important work. Hauser has lots of students who are in influential positions at this point – are they part of the problem or part of the solution?

Also, if your students accuse you of fabricating data, that is perhaps the most convincing ‘tell’ of all. Most graduate and undergraduate students worship and support their advisor and do everything they can to make her or him look good. If one or several students are so concerned about what their advisor is doing that they report it to institutional officials, then in my book it’s a very serious situation.

This feedback itself warrants cautious reception, as it’s anonymous. (Anonymity may be necessary, but it still calls for caution.) But if the case is anywhere near this serious – if multiple former students are accusing Hauser of outright fabrication, or if many others in the discipline have harbored grave doubts about the integrity of the data – then it returns us to the perennial question of how to curb such shenanigans.

A few years back, I wrote an essay for the New York Times Magazine about one suggested inoculation against fraud — a reform and opening of the peer-review process so that papers and studies get wider scrutiny. Presumably such scrutiny might catch problems of the sort that Hauser may be accused of here. Below is the meat of the essay. It’s a bit shocking to see how up-to-date it remains, despite the various movements toward open science:

Journal editors say they can’t prevent fraud. In an absolute sense, they’re right. But they could make fraud harder to commit. Some critics, including some journal editors, argue that it would help to open up the typically closed peer-review system, in which anonymous scientists review a submitted paper and suggest revisions. Developed after World War II, closed peer review was meant to ensure candid evaluations and elevate merit over personal connections. But its anonymity allows reviewers to do sloppy work, steal ideas or delay competitors’ publication by asking for elaborate revisions (it happens) without fearing exposure. And it catches error and fraud no better than good editors do. “The evidence against peer review keeps getting stronger,” says Richard Smith, former editor of the British Medical Journal, “while the evidence on the upside is weak.” Yet peer review has become a sacred cow, largely because passing peer review confers great prestige – and often tenure.

Lately a couple of alternatives have emerged. In open peer review, reviewers are known and thus accountable to both author and public; the journal might also publish the reviewers’ critiques as well as reader comments. A more radical alternative amounts to open-source reviewing. Here the journal posts a submitted paper online and allows not just assigned reviewers but anyone to critique it. After a few weeks, the author revises, the editors accept or reject and the journal posts all, including the editors’ rationale.

Some worry that such changes will invite a cacophony of contentious discussion. Yet the few journals using these methods find them an orderly way to produce good papers. The prestigious British Medical Journal switched to nonanonymous reviewing in 1999 and publishes reader responses at each paper’s end. “We do get a few bores” among the reader responses, says Tony Delamothe, the deputy editor, but no chaos, and the journal, he says, is richer for the exchange: “Dialogue is much better than monologue.” Atmospheric Chemistry and Physics goes a step further, using an open-source model in which any scientist who registers at the Web site can critique the submitted paper. The papers’ review-and-response sections make fascinating reading – science being made – and the papers more informative.

Open, collaborative review may seem a scary departure. But scientists might find it salutary. It stands to maintain rigor, turn review processes into productive forums and make publication less a proprietary claim to knowledge than the spark of a fruitful exchange. And if collaborative review can’t prevent fraud, it seems certain to discourage it, since shady scientists would have to tell their stretchers in public. Hwang’s fabrications, as it happens, were first uncovered in Web exchanges among scientists who found his data suspicious. Might that have happened faster if such examination were built into the publishing process? “Never underestimate competitors,” Delamothe says, for they are motivated. *Science* – and science – might have dodged quite a headache by opening Hwang’s work to wider prepublication scrutiny.

In any case, collaborative review, by forcing scientists to read their reviews every time they publish, would surely encourage humility – a tonic, you have to suspect, for a venture that gets things right only half the time.

[From Trial and Error – New York Times]

One worry about more open review – one I can relate to as a journalist – is that one’s ideas get opened up and spread around before publication. This raises worries about ownership, priority, and credit – worries that are reasonable, or at least hard to resist, in a culture that especially prizes and rewards those things and that bases tenure, not to mention fame, prestige, and all the accompanying goodies, on breaking the big theory or story. Science in that way closely parallels journalism.

Others argue that our emphasis on individual credit overlooks the collaborative nature of science to start with, and that a more honest approach (in a couple of senses of the term) is to share data far earlier in the process. Such open science, the argument goes, would a) let many eyes mine the data so we get more out of it, b) reduce duplication of effort, and c) serve as a constant check against everything from misreading data to fabricating it.

Open science isn’t the same as open peer review, though it carries the same principles further. It could thus offer more of the same hivemind checks against slop and sleight of hand. And as a story in today’s Times relates, it can create some incredibly powerful science:


August 12, 2010

Rare Sharing of Data Leads to Progress on Alzheimer’s

In 2003, a group of scientists and executives from the National Institutes of Health, the Food and Drug Administration, the drug and medical-imaging industries, universities and nonprofit groups joined in a project that experts say had no precedent: a collaborative effort to find the biological markers that show the progression of Alzheimer’s disease in the human brain.

Now, the effort is bearing fruit with a wealth of recent scientific papers on the early diagnosis of Alzheimer’s using methods like PET scans and tests of spinal fluid. More than 100 studies are under way to test drugs that might slow or stop the disease.

And the collaboration is already serving as a model for similar efforts against Parkinson’s disease. A $40 million project to look for biomarkers for Parkinson’s, sponsored by the Michael J. Fox Foundation, plans to enroll 600 study subjects in the United States and Europe.

The work on Alzheimer’s “is the precedent,” said Holly Barkhymer, a spokeswoman for the foundation. “We’re really excited.”

The key to the Alzheimer’s project was an agreement as ambitious as its goal: not just to raise money, not just to do research on a vast scale, but also to share all the data, making every single finding public immediately, available to anyone with a computer anywhere in the world.

As I noted in my first post, it should be quite interesting to see how this Hauser case plays out – and distressing as well. Along with the grim views of Hauser’s operations excerpted above, I’ve received private messages – and read blog posts – from people who worked with him and admire him greatly; some good people are feeling a lot of pain, both personal and professional. However Hauser ends up looking after all this, it’s dreadful to think of the many people who have worked with him or drawn on his work and whose own work and histories are now compromised.

If this does turn out to be a huge case, it will hardly be unprecedented. Big scandals pop up every two or three years, and the pattern repeats itself with distressing familiarity. (Read Judson’s “The Great Betrayal” and you’ll know what I mean.) With several open-peer-review journals now operating, we should soon have a track record that lets us compare misconduct rates in those journals with rates in traditional-peer-review journals.

If such data shows that open peer review decreases misconduct, will the big journals heed the findings?

Related posts at Neuron Culture:

Marc Hauser, monkey business, and the sine waves of science

Science bloggers diversify the news – w Hauser affair as case study

Watchdogs, sniff this: What investigative science journalism can investigate

More fraud — or more light?

Errors, publishing, and power
