Free Science, One Paper at a Time (Neuron Culture Moving Party Track 6)

~

To get an idea of the paper’s excess weight, go to Cambridge, England, and find Mark Patterson. Patterson is a scientific-publishing old hand gone rogue. He formerly worked at two of the biggest scientific publishing companies, Elsevier and Nature Publishing Group (NPG), each of which puts out scores of journals. A few years ago he moved to the staff at PLoS.* Patterson is now director of publishing there, and since he joined, PLoS has leveraged open-science principles to become one of the world’s biggest publishers of peer-reviewed science and the biggest single publisher of biomedical literature. Readers like it because they get free access to good science. Researchers like it because their work reaches more readers and colleagues. PLoS’s success greatly heartens open-science advocates — and unsettles the traditional publishers.

To describe what PLoS is doing differently, Patterson likes to talk about how its most innovative journal, PLoS One, handles four essential functions of science that are currently wrapped up in the scientific paper: registration, certification, dissemination, and preservation. The current publishing regime, he argues, binds these functions too tightly to the conventional scientific paper — even though some of them can be met more efficiently by other means.

So what are these functions?

Registration is essentially a scientific claim of discovery — a marker crediting a particular researcher with an idea or finding. The current system registers these contributions via a paper’s submission date. Certification is quality control: ensuring a paper is solid science. It is traditionally done via peer review. Dissemination means getting the stuff out there — publication and distribution, in printed journals or online. And preservation, or archiving, involves maintaining the papers and their citations to create a breadcrumb trail other researchers can later follow back to an idea or finding.

“The current journal system does all four of those things,” says Patterson. “But it doesn’t necessarily do them all well. The trick is finding a system that gets each of these done most efficiently, sometimes by other means, instead of having them all held by the publisher.” He and others contend that science would gain both speed and rigor by “unbundling” some of these functions from the paper and doing them in new ways.

PLoS loosens things up mainly in distribution and quality control. All of its journals are open-access — that is, free to read. Instead of making every would-be reader either buy a journal subscription or pay a per-article price of $15 to $50, PLoS collects a fee from the researcher to publish — usually about $1,400 — and then publishes the paper online, free to all. The author fee is substantial, but it is a small addition to the other costs of doing science, and it performs the essential function of getting the work out there. It’s Panizzi’s dream realized: every poor schoolchild — or at least every schoolchild with web access — can read PLoS. Researchers like this, and it works: a recent study found that, on average, papers and data published open-access receive more citations than those behind paywalls.
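A rough back-of-envelope comparison makes the economics plainer. The fee and per-article prices below come from the figures above; the annual project budget is a purely hypothetical number for illustration, not a figure from PLoS or any funder:

```python
# Back-of-envelope comparison of author-pays vs. reader-pays costs.
# The $1,400 fee and ~$30 per-article price come from the text above;
# the annual project budget is a hypothetical figure for illustration only.

author_fee = 1400            # one-time open-access publication fee (USD)
paywall_price = 30           # mid-range per-article reader price (USD)
project_budget = 200_000     # hypothetical annual cost of a funded project (USD)

fee_share = author_fee / project_budget
breakeven_readers = author_fee / paywall_price

print(f"Fee as share of a {project_budget:,}-dollar project: {fee_share:.1%}")
print(f"Paywalled downloads needed to match the fee: {breakeven_readers:.0f}")
# -> roughly 0.7% of the hypothetical budget; about 47 thirty-dollar downloads
```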

PLoS’s rapid growth has shaken things up. Some publishers, such as Elsevier, have responded by letting authors pay to make a paper open-access on publication. Yet commercial publishers that do this tend to retain rights that PLoS does not, and they’re less likely to release underlying data, metadata about the publications, or other associated material. And the practice creates a weird and uncertain market: you can go to, say, Neuron, and find in the same issue one paper you can download for free and another that costs $30. The difference? The authors of the latter paper didn’t pay the open-access fee.

Meanwhile, PLoS’s biggest, most cross-disciplinary journal, PLoS One, streamlines quality control in a way that’s more complex and raises more ire. The traditional route, peer review, generally involves having two or three experts evaluate the entire paper — data, methods, findings, conclusions, significance. The publisher relays these peer critiques to the author, usually with requests for either changes or clarifications. If the author answers those to the publisher’s satisfaction, the paper gets approved.

PLoS One uses a similar process but — crucially — asks its reviewers to judge only on technical merits, and not on any assessment of the paper’s novelty, significance, or impact. “The idea,” says Patterson, “is to let the importance be determined later by how much the paper’s ideas and findings and conclusions are taken up by the community. We’re letting the scientific community at large determine a paper’s value and importance, rather than just a couple of reviewers.”

This makes many people at Patterson’s old workplaces uneasy. Gerry Altmann, editor of Cognition, an Elsevier journal, and an open-minded man, doubts this sort of post-publication filter can serve the purpose. “Peer review should be about ensuring that there’s a robust fit between findings and conclusions, and that a paper sits well within the context of a discipline,” Altmann told me. “These are insidious changes.”

Can the hivemind do quality control? Patterson answers by noting that any paper’s true value — its lasting contribution — is generally decided by the scientific community even under the current system. Yet he acknowledges that, at present, few scientists actually go online to comment on or otherwise review papers after they’re published. We’re a long way from the vision of an active scientific community replacing peer review with crowd-sourced rigor and fact-checking. The hivemind apparently has better things to do. Altmann thinks it’s starry-eyed to believe that will change.

Others say researchers would take on these tasks if it were worth their while. They argue that you can make it worthwhile by giving researchers credit for a wider range of contributions to science, starting with post-publication reviews and evaluations.

This is the idea behind ORCID, a program that would give each researcher a unique, immutable digital identifier, somewhat like a permanent URL. That ID would serve like a deposit account: the researcher would accumulate reputational credit not just for papers published but also for other contributions the scientific community deems valuable. Reviews of others’ work could thus generate deposits, as could public outreach, talks, putting data online, even blogging — anything that helps science but currently goes unrewarded. This would allow hiring, tenure, grant, and awards committees to weigh a broader set of contributions to science. ORCID holds particular promise because it has already lined up buy-in from publishing giants Nature and Thomson Reuters (though it’s unclear what contributions the various stakeholders will agree to credit).
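To make the deposit-account idea concrete, here is a minimal sketch of what such a reputational ledger might look like. The field names, contribution types, and credit weights are illustrative assumptions, not part of the actual ORCID specification:

```python
# Minimal sketch of an ORCID-style reputational ledger.
# Field names, contribution types, and weights are illustrative assumptions,
# not the actual ORCID data model.

from dataclasses import dataclass, field

# Hypothetical credit weights per contribution type; a committee (or the
# community) could tune these to reward work that currently goes unrewarded.
CREDIT_WEIGHTS = {
    "paper": 10.0,
    "review": 3.0,
    "dataset": 4.0,
    "outreach": 1.0,
    "blog_post": 0.5,
}

@dataclass
class Contribution:
    kind: str          # one of the CREDIT_WEIGHTS keys
    description: str

@dataclass
class ResearcherRecord:
    orcid_id: str      # permanent identifier, e.g. "0000-0002-1825-0097"
    contributions: list[Contribution] = field(default_factory=list)

    def credit(self) -> float:
        """Total reputational credit under the current weighting."""
        return sum(CREDIT_WEIGHTS.get(c.kind, 0.0) for c in self.contributions)

record = ResearcherRecord("0000-0002-1825-0097")
record.contributions += [
    Contribution("paper", "PLoS One article"),
    Contribution("review", "Post-publication review of a colleague's paper"),
    Contribution("dataset", "Raw data deposited alongside the article"),
]
print(record.credit())   # 17.0 under the illustrative weights
```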

What would such a system look like? One idea is being developed by a team led by Luca de Alfaro, of the University of California, Santa Cruz. Working with Google, the team hopes to develop broader-based reputational metrics that are built, writes de Alfaro in a recent essay in The Scientist, “on two pillars”: tenure, grant, and similar rewards for authors of papers and their reviewers alike; and — crucially — a content-driven way of gauging the merit of both papers and reviews. Authors would get credit for work of high value, as measured by citations, re-use of data, and discussion generated. Reviewers, meanwhile, would get credit based not just on output but on how well their reviews predicted a work’s future value.

“Thus two skills would be required of a successful reviewer,” writes de Alfaro: “the ability to produce reviews later deemed by the community to be accurate, and the ability to do so early, anticipating the consensus. This is the main factor that would drive well-respected people to act as talent scouts, and to review freshly published papers, rather than piling up on works by famous authors.” De Alfaro says much of the technology to weigh such variables already exists in algorithms used at Google and (for evaluations of reviewers) Amazon.
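As a rough sketch of the content-driven idea de Alfaro describes (not his actual algorithm), reviewer credit could combine how closely a review’s score matched the paper’s eventual community valuation with how early the review was filed. The scoring formula and parameters below are assumptions for illustration only:

```python
# Rough sketch of a content-driven reviewer-reputation metric in the spirit
# of de Alfaro's proposal; the formula and parameters are illustrative
# assumptions, not his published algorithm.

def reviewer_credit(predicted: float, eventual: float,
                    days_after_publication: int,
                    horizon_days: int = 365) -> float:
    """Credit a review for predicting a paper's eventual value.

    predicted / eventual: scores on a common 0-10 scale, where `eventual`
    is the community's later valuation (citations, data re-use, discussion).
    Accuracy rewards reviews that land close to the eventual score;
    earliness rewards reviews filed soon after publication (talent scouting).
    """
    accuracy = 1.0 - abs(predicted - eventual) / 10.0          # 1.0 = perfect call
    earliness = max(0.0, 1.0 - days_after_publication / horizon_days)
    return round(accuracy * (1.0 + earliness), 3)

# An early, accurate review earns more than a late pile-on with the same score.
print(reviewer_credit(predicted=8.0, eventual=8.5, days_after_publication=14))   # 1.864
print(reviewer_credit(predicted=8.0, eventual=8.5, days_after_publication=300))  # 1.119
```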

Such a system could readily be incorporated into a program like ORCID. It could also give researchers incentives and credits — points, essentially — for public outreach or for openly sharing underlying data and details about method, both after and even before publication, so that other researchers can more easily test or use the data and methods. In short, a more flexible credit system could generate more activity in almost any area of science simply by weighting it more heavily.
