Liveblogging Michael Nielsen’s presentation on Doing Science in the Open at the Berkman Center. Please excuse misrepresentation, misinterpretation, typos, and general stupidity (all of which are mine and mine alone).
Michael Nielsen is at the Berkman Center today to talk about the ideas in his new book, Reinventing Discovery, which explores how the Internet is changing the way we collectively tackle complex scientific problems and enabling us to collaboratively break new scientific ground. Peter Suber, introducing Nielsen, points out that the timing of this talk is particularly good, given that it’s Open Access Week.
Nielsen begins with the story of Tim Gowers (recounted in full detail in the first chapter of Reinventing Discovery). Gowers is a mathematician and a blogger who, in January 2009, decided to invite his readers to collaborate with him to solve a fairly difficult math problem. Within a little over a month, nearly 30 people contributed 800 comments to Gowers’s initial blog post, solving not only the initial problem but also a slightly harder problem. The project—which Gowers called the Polymath Project—was a success.
Nielsen wonders why collaboration of this type isn’t more common in science. He describes the “significant failure” of Qwiki, a research wiki for quantum computing. Qwiki was announced at a workshop in 2005 to somewhat mixed reactions: some people were horrified by the idea, others ambivalent, and a few enthusiastic. Those who were supportive, though, intended to engage only as readers, not as contributors. “Science is littered with examples of wikis like this,” Nielsen says: “ghost towns” of research and collaboration.
“The fundamental problem,” he argues, “is one of opportunity cost.” The publication of scientific papers is central to scientists’ careers, and one paper will do far more for a career than a slew of well-reasoned comments on a website.
So why did the Polymath Project work? Nielsen argues that the project’s success was due in large part to the fact that before the project even started, Gowers and others were discussing the papers that would result and who would be counted as an author. The project was an unconventional means to a traditional end: publication.
What we need, Nielsen claims, is a way for people to be recognized and rewarded when they contribute scientific research in unconventional formats. But how do we do this? How do we change the way the scientific community assigns value to such contributions?
Nielsen points to the Bermuda Principles, a set of accords designed to encourage the open sharing of prepublication data from the Human Genome Project. The principles stated, among other things, that any sequence data above a certain size would be released publicly within 24 hours. These principles eventually became policy, meaning that subsequent funding for gene-sequencing research was tied to an obligation to conduct that research in an open manner.
This is great, Nielsen says, but this “mandate approach” isn’t enough. Scientists need to internalize the value of sharing their data and to enthusiastically accept principles of openness and sharing as their own. We need a cultural shift.
A couple of steps in the right direction: the Journal of Visualized Experiments sends camera crews into labs to film researchers explaining their projects. The resulting videos are often far clearer than print explanations, creating an incentive for researchers to disclose information in new ways. Another journal encourages researchers to treat publication as a release of their data first, accompanied by explanatory material, rather than a paper first, with the data as a supplement. Finally, Nielsen mentions that blog posts have begun to show up in Google Scholar, along with citation statistics. This may be a good or a bad thing, he says, but it’s another way to explore the use of digital collaborative tools to enable greater openness in science.
This is a very interesting topic. One thing I find frustrating in economics, and I imagine this applies to other fields as well, is that there is no mechanism by which we can learn from each other’s mistakes. Since failed research projects are not publishable, there is no way to know whether your idea is new and unique, or whether other researchers have already had it, applied it, and found that it doesn’t work. Particularly where good ideas fail for lack of sufficient data, I believe this leads to great inefficiency.