To get a sense of how the current system curbs science, consider a rare case in which researchers attacked a big medical problem with an open-science model. In 2004, in the United States, a network of government and private researchers, including large drug companies, used open-science principles to accelerate research into Alzheimer’s. The project, as Gina Kolata aptly described it in the New York Times last summer, “was an agreement … not just to raise money, not just to do research on a vast scale, but also to share all the data, making every single finding immediately available to anyone with a computer anywhere in the world. Before that, researchers worked separately, siloing off much of their work. Now methods and data formats were standardized. The data would immediately enter the public domain, where anyone could build on it.”
An extraordinary project ensued. The U.S. National Institute on Aging contributed over $40 million, and 20 companies and two nonprofit groups kicked in another $25 million to fund the first six years. The program produced an explosion of papers on early diagnosis and helped generate more than 100 studies to test drugs or other treatments. It greatly sped and opened the flow of findings and data. According to the New York Times, the project’s entire massive database had been downloaded more than 3,200 times by last summer, and the data sets containing images of brain scans were downloaded almost a million times. Everyone was so pleased with the results that they renewed the accord this year. And all because, as a researcher told Kolata, “we parked our egos and intellectual-property noses at the door.”
The language used here — everything entering the public domain, the dismantling of silos, the parking of egos and IP padlocks — might have been lifted from an open-science manifesto. And even Big Science appreciated the outcome. To open-science advocates, this raises a good and somewhat obvious question: Why don’t we do science like this all the time?
Part of the answer, strangely, is the very thing at the center of science: the paper. Once science’s main conduit, the paper has become its choke point.
It’s not just that the paper is slow, though that is a huge problem. A researcher who submits a paper to a traditional journal right now, for instance, won’t see the published piece for about a year. She must wait while the paper gets passed around among editors, then goes through rounds of peer review by experts in her field, who might, and often do, object not just to her methods or data but to her findings and interpretations. Finally, she must wait while it moves through an editing, layout, and publishing pipeline that itself might run anywhere from 2 to 12 weeks.
Yet the paper is not simply slow; it’s heavy. Even as increasingly data-rich science has outgrown the paper’s ability to deliver and describe all that science has to offer — its deep databases, its often elaborate methods — we’ve loaded it up needlessly with reputational weight and vital functions other than carrying data.
The paper is meant to be a conduit for the real content and currency of the science: the ideas, methods, data, and findings of the people who do science. But the tremendous publishing and commercial infrastructure built around the academic paper over the last half-century has concentrated so many functions and so much value in the journal that the paper itself, rather than the information in it, has become science’s main currency. It is the paper you must buy; the paper you must publish; the paper you must cite; the paper on which not just citations but tenure, reputation, status, and even school rankings are built.