Is the Open Science Revolution For Real?

Monty Python’s rebels ponder all they must replace if they kill the Romans

The researcher rebellion against the closed research-and-publishing system, tallied most explicitly in a petition boycotting publisher Elsevier, continues to expand. (The Economist covers it here, and I covered the complaints last year in a feature.) The big question, of course, is whether this noisy riot will engender something like a real revolution. Will it replace the old regime with a new one?

That will depend on many things, but a key will be the construction of a replacement for the traditional academic publishing system that so frustrates open-science advocates. As studious rebels know, a key part of a successful revolution is building an alternate set of institutions and services — an alternate infrastructure — to offer people as and after you topple the regime: Give the oppressed good schools, healthcare, and clean water, the thinking goes, and you win their hearts and minds even as you learn to govern.

How well are the open-science revolutionaries doing on that front? To replace the traditional publishing system, they need to provide alternatives to its main functions. Those functions, as I described in my feature Free Science, One Paper at a Time, are:

Editing & review — making sure a paper is logical and intelligible; also assessing its value and significance. Review has traditionally been formal peer review.

Publication/distribution — getting the thing out there so people can read it.

Credit/reputation — ensuring that the author or authors get credit for the work.

Archiving — making the work available to future researchers.

In the current system, the journal system bundles all these functions into the paper: The journal edits the paper and puts it through peer review; publishes it in print and online (usually requiring payment, a key complaint); provides a formalized citation that the authors can put on their resumes and wave around at tenure or job application time; and keeps the work available in print and online so future scholars can hopefully find and use it.

This system’s strength lies partly in its convenience and familiarity: everyone knows how it works, and knows where to go to try to publish or find things or to see how many papers a researcher has produced. Major downsides include inefficiency and the fact that paywalls prevent wider distribution and availability to future scholars. (See my feature for more detail on that.)

What are the rebels offering to replace that system? Casting around over the last couple of days, I’ve assembled a list of tools created by the open-science community that seek to replace or amend the various parts of the conventional journal system. You can skip the indented text here if you don’t want the details. In any case, together they show that the rebels (to indulge my metaphor) have gone a long way toward creating that alternate infrastructure.

The tools below are mostly new, and they sit atop the highly significant alternative publishing systems offered by outfits like PLoS, ArXiv, and BioMedCentral, all of which are publication platforms, and the massive database of citations and discussion that Mendeley has established over the past couple of years. (Quite a few of the projects listed below also draw on Mendeley to help disseminate or evaluate papers.) Below I describe a few in terms of how they perform the four functions just outlined. (If you know of one that should be here, please email a brief description and a link to me at davidadobbs  gmail. Thanks.)

Editing and Review

The open-science movement stresses that the traditional peer review built into the current system can be done after publication rather than before. This is going on right now with the so-called “Arsenic Life” paper published in Science last year; this Chronicle of Higher Education story looks at another example — nice meta — of both pre- and post-publication review of a paper on whether Twitter mentions affect citation rates. Splitting the traditional peer-review function into parts is one premise behind PLoSOne, for instance: there a paper is rigorously reviewed for soundness of method before publication, and the hivemind is then left to evaluate its value and significance. (This is what happens long-term anyway, even with papers published in peer-reviewed journals.) Much of that evaluation can be done informally, in blog comments, separate blog posts, responding papers, and so on. But two new tools, PaperCritic and Kleenk, provide ways to do it in more centralized fashion. Faculty 1000, which has some very fine post-publication reviews by scientists, provides another model. Those reviews aren’t open-access at this point, but it’s clearly a strong model for rigorous and helpful peer review.

There’s work to do here. But a key point is that the significance of papers has never been determined by peer review, but rather by the discussion and testing that go on afterwards. This is actually more robust than ever right now, which is why many open-science advocates argue that the traditional peer-review system can be jettisoned.

Publication/Dissemination

Along with the big players like PLoSOne, which do some rigorous editing and then publish, several greased-skid publication conduits have been built lately to let scientists publish fast and fluidly, with minimal or no editing.

  • F1000 Research, just announced, could be a big player, a sort of ArXiv for biomed; a manuscript will undergo a quick quality check, to make sure it’s coherent, and then get published and archived.
  • Annotum is a publishing platform, built on WordPress, that FigShare creator Mark Hahnel calls “an open-source, open-process, open-access scholarly authoring and publishing platform” — a sort of blog platform for sharing research. Other platforms can be built atop it and probably will be. Possibly F1000 is built atop Annotum.
  • skolr.com – poster sharing

A mess of databases where data can be deposited and shared. For example:

  • http://www.barcodeoflife.org/ – open database of organism analytical fingerprints
  • http://www.treebase.org/ – open database of phylogenetic trees & data
  • Dryad – repository of biological data. Twitter: @datadryad
  • Biodiversity Heritage Library – See @rdmpage
  • http://cancercommons.org/  – database of disease models
  • http://biomarkercommons.org – database of disease biomarkers

Publication is both the simplest and the hardest nut to crack. As with blogging, it’s easy enough to just get the stuff out there; the challenge is making it findable. My guess is that models like PLoSOne and F1000 — places associated with large organizations — will become the main outlets for publications that aren’t peer-reviewed before release. In any case, this is a need fairly easily met, and these efforts can pretty soundly replace the dissemination function of the traditional publishing machinery.

Credit and Reputation

A less obvious part of the journal system. Academics live and die — or at least get hired and fired — by their publication records. The journal system is the de facto system for assigning credit for a finding: he or she who first submits a paper finding, say, that dinosaurs had feathers, is the one credited with the discovery. But open-science advocates say the reputational focus on the published paper ignores many other ways scientists contribute, such as evaluating or testing other research, talking with the public, curating databases, and so on. Thus many open-science advocates are seeking to create tools to track these other contributions.

Altmetrics, for instance, registers not just published papers but other activity around them, such as mentions in the media, other online conversations, tweets, reference-manager counts (from desktop software such as Mendeley), and other forms of scientific discussion or comment. The new FigShare offers roughly similar functions in what is arguably a slicker, more fluid interface. In essence, all this commentary and discussion provides the crucial post-publication peer review; Altmetrics and other efforts provide a way to track it. The dream is that hiring and tenure committees will look to these metrics, rather than just traditional C.V. material, to evaluate a scientist’s contributions to science.

Some others:

  • http://sciencecard.org/
  • http://readermeter.org/
  • http://total-impact.org

Archiving/Access

This is moving fast. Outfits like ArXiv, PLoS, and BioMedCentral archive and index their papers, as, presumably, the coming F1000 Research will do, and Altmetrics appears to offer a catch-all spot in which to register and archive not only papers but the commentary surrounding them. This system is not fully mature yet, but it has gone a long way fast with just a few sites, and it seems safe to say it can scale up and stabilize pretty quickly. And Google Scholar lets you find most papers, not just at the original publisher but at other sites. This one’s relatively easy.

SO WHEN DO THEY BUILD THE GUILLOTINE?

Add that all up and the revolt is looking pretty good. Does this mean they’ll swarm right over the ramparts? Hard to say. Some of the traditional publishers are far more fluid and responsive than others, and might either join or co-opt the rebellion. Others seem to be digging in for a siege. The same division splits the scientific societies that depend on journal subscriptions for revenue; some are trying to figure out how to adapt, while others, such as the American Anthropological Association, are digging in. Meanwhile, in some research sectors, such as biomedicine, tradition and/or reluctance to share valuable intellectual property seem to be discouraging revolt. The rebels still face the task of storming a very big edifice with a force that by both nature and design is decentralized.

But the pressure to change keeps building and is unlikely to stop. Some of the biggest targets are showing signs of the sort of inflexibility that marks a titan ready to fall. One source this week told me, for instance, that in the halls of one major scientific publishing concern, the senior leadership is mystified and the younger middle ranks are terrified: a telling combination.

My own take, having followed this for a couple years now, is that the surprises from here on out will more often be jerks of sudden acceleration — like this rally against Elsevier —  than unexpected stops. The past couple of weeks have shown how incredibly volatile the ground is around the traditional publishing structures and practices. As described deftly in the Economist story, a couple of missteps by Elsevier seemed to ignite an extraordinary amount of fuel. And as efforts outlined above show, the advocates of change have not just been complaining; they’ve been building a replacement for the old vehicle they want to ditch.

As I’ve covered this, I’ve often thought of what happened to the music industry a decade ago and is happening to the consumer newspaper, magazine, and book industries now. In every case, change arrived in a way that brings to mind a roller coaster as it approaches and then descends that first wild drop: the change came slowly at first, sped up almost imperceptibly for a bit — and then accelerated wildly in a long drop that would forever transform the ride. Even among the industry insiders who saw change coming, most badly underestimated the steepness of the drop they were approaching. Once the drop started in earnest, the only ones having fun were those who saw the big drop coming and, whooping, leaned into it.

__

NB: Just as I published this, I found that Michael Eisen, a leading open-science advocate, has written an interesting lament about an earlier missed chance to spread the rebellion. That’s obviously both a caution and a prod. But Eisen also feels that things are different this time:

But there are very good reasons to believe that things are different now, and that a new organized effort to deny publishers that are not serving our interests the papers on which they depend. The most obvious and important difference is that the landscape of open access publishing has changed dramatically since the original PLoS boycott. Indeed it was its failure – or more precisely the excuses colleagues gave us about why they weren’t participating – that led Pat, Harold Varmus and I to refocus PLoS as a publisher.

 H/t to Amy Harmon for calling the Eisen post to my attention.

02-03-12 1:42 pm: Made some corrections and additions. As usual, deletions are marked with strikethroughs, and additions are underlined.

 

8 Comments

  1. I think you’re understating a big factor by saying “the journal edits the paper and puts it through peer review”. The reason this issue is getting such a strong reaction is that in reality, academics edit every paper without pay as it goes through peer review; this shouldn’t leave much, if any, editing for the publisher.

    By wholly outsourcing their work (editing), not paying their workers (academics) and charging their consumers (other academics) they have made themselves pretty bad bed fellows. Furthermore, by also charging extortionate publishing fees to their only commodity (publishing academics) they have long been sowing the seeds for a revolution. 

    The bundling, unexplained increases in fees and profit margins, withdrawal of discounts for third-world universities and general unmitigated greed that have recently become so evident are just the many straws breaking the camel’s back.

    1. This is a good point to call out, one I left unexamined due to space considerations. Most journals do indeed farm this work out at no pay, which is a major bone of contention in the Elsevier boycott and the wider dissatisfaction with the current system. 

  2. Post-publication review will mean that the quality of papers will sink like a rock. Besides, peer review is done for free, so there are no obstacles to setting up a non-profit entity to coordinate it.

  3. Well clearly the post facto review system is running just fine, and really always has. Getting things into the journals in the first place is where the significant hurdles lie, particularly with heads of university departments and other people who would like to get their name in there for the prestige it can mean. And certainly the risk to prestige therein causes a lot of worry and delay in the current system as well.

    Part of our problem really lies in our extraordinary ability to archive. Now we can save much more than any group could ever read. That places a much higher priority on being able to find relevant information. It also makes reputation stand out much more, since those who come out with relevant papers will develop their own version of celebrity.

    It’s not so different. People like to pretend that because we move the deckchairs the ship will be different. It’s just that the traditional publishing houses that have been in the game for hundreds of years don’t want to have to move over for anyone. Really they have no choice in the matter, and it only remains to be seen which entities have the X factor to survive in the long term.

  4. Good for you for recognizing that scholarly publishing involves real work, real expertise, and significant investments in infrastructure.  This won’t just all happen in a post-Elsevier world– it will take some serious work and dollars to build out an alternative system, as per the Monty Python video clip.  The good news is that the top fifty research universities probably spend a half billion dollars per year on scholarly journals– and the next hundred or so aren’t chopped liver.  If we can get to the point that we’re investing in an alternative structure (as opposed to an additive system existing side-by-side with the traditional commercial commitments), our campuses should be able to muster the needed resources and expertise to create some robust and creative systems for managing this process.  The harder question is whether we can muster the will.  Good article.  

  5. I think one useful model for an alternative structure is the way specialized academic groups now put together conferences (not much different than how journals are prepared, except with the middleman cut out). Experts in the field review papers, select those best qualified to be presented at the conference, and then a conversation about the quality or impact of the papers continues after the work is released to the audience. Universities fund such events because of both the prestige and the positive impact such exchange has on future research efforts.

    Why not do it the same way with a journal? Say your academic sub-specialty has about a thousand experts. They join together as a guild of sorts, just as they do now in various interest groups. They elect volunteers–thus deciding democratically who is best qualified for these tasks–to sit on committees that will perform the same editorial and peer-review functions that journals require. Most of these people are already providing the same valuable functions for free today, to journals, as conference organizers, and so on. They can provide different levels of recognition to quality work, even as anything passing a minimum qualification is still available to the public for further review. 

    Overlooked gems can still bubble up in this process, but the guild of experts still brings the best work to public attention, just as they might do today by selecting a conference program. Competition among different groups of experts might occur, but just as we find today with top journals in a field, interest tends to coalesce around the outlets that provide consistent high quality recommendations. Universities can reward researchers whose work is recognized in this fashion, for purposes of tenure and promotion. I like this idea much better than the proposal to aggregate subsequent “activity” around a published article, which might only reflect the large-scale ignorance of people who happen to find it popular, rather than offering an assurance of quality, while great work may well be ignored by most.

  6.  I’ve accidentally hit on a point I wanted to make, which is that, while
    many view political ideologies along a straight line with two extremes,
    to me it’s more of a circle. The very far right and the very far left
    begin to resemble each other in terms of extreme rhetoric, a tendency to
    want to silence opponents and a drift towards totalitarianism. So it’s
    no surprise that Nazi Germany and Stalinist Russia start to look like
    each other.

  7. I am not sure if you are aware that WebmedCentral has published nearly a thousand articles with its post-publication peer-review model. We publish everything and reviews happen afterwards. Yes, there are some teething troubles with the quality of submissions and reviews. But we expect it to continue to improve. Our model is cheap and effective, gets the message out there quickly and is free for all to read. I would like to think we are adding some value to the community, and so far not a single researcher has been asked to pay anything for our services. We are exploring various options for keeping it sustainable, including voluntary contributions and a very basic article-processing charge.
    I do feel that resistance to our model is decreasing, and the launch of F1000 Research this year (nearly two years after our WebmedCentral) is a further endorsement of our beliefs. However, scientists won’t move en masse to this model until PubMed starts tracking it and we get an impact factor for research published in this manner. All this is not impossible. The question is: are we ready for it yet as a community?
