I’m at Truthiness in Digital Media today, an event co-hosted by the Berkman Center and the MIT Center for Civic Media that “seeks to understand and address propaganda and misinformation in the new media ecosystem.” We’re playing with using Storify to track/report out from the conference. Check it out:
Update: All of the Storify sessions are now available. For more information on each session, see the agenda.
Liveblogging Felipe Heusser’s presentation on Open Government Data for Open Accountability at the Berkman Center. Please excuse misrepresentation, misinterpretation, typos, and general stupidity (all of which are mine and mine alone).
Felipe’s at the Berkman Center today to talk about transparency, open data, technology, and accountability. His talk is based on three key points:
1. Transparency is a cliché, and FOIA is outdated.
Transparency is “a very sticky word,” Felipe tells us— “a pop concept that everyone likes”—and freedom of information laws have historically been one of the most popular ways to spread transparency policies.
He shows us a map of the world, shaded according to the existence/strength of freedom of information laws. He points out that this map is not highly correlated with levels of corruption, suggesting that greater transparency doesn’t necessarily lead to more accountability. Regardless, transparency policy—a term Felipe is using today to mean freedom of information regulation—has spread rapidly.
Freedom of information has been the cornerstone of transparency policy since at least the late 18th century, when a Finnish priest who also served in the Swedish-Finnish parliament developed the first freedom of the press act. Modern freedom of information laws allow you to request information from the government and also require governments to proactively disclose certain types of information.
The mindset behind FOIA, Felipe says, is not about access to data, but about access to documentation. In the age of the web, this way of thinking is becoming increasingly obsolete, he argues. Rather than the reports offered by FOIA, data records offer us neutral facts, which are more versatile. FOIA offers only two-way, request-and-response communication, rather than an open flow of shared information. FOIA holds up a barrier between citizens and government: channels of bureaucracy in which requests can be mired for ages—up to 20 years for certain US government agencies.
2. Open data policies are necessary.
None of the above issues jibe with the way the Internet works, Felipe points out. We need open data to keep freedom of information up to date. The good news: some governments and organizations have made small steps toward open data. Data.gov is one, along with data.gov.uk, Open Kenya, World Bank data, and others.
Why does this matter? Open government data can help keep our right to freedom of information up to date by allowing us to access much of the information the government holds. Open data also allows for multiway communication and sharing—it “understands the logic of information abundance.” Open data doesn’t require as many gatekeepers: you go online, search, and download, rather than filing a formal request.
What does this mean? Open data can promote a more open accountability. Historically, government accountability has been exercised formally through clear procedures. With open data, accountability can be more informal and crowd-sourced. Rather than relying on scarce institutional watchdogs, accountability can now draw on an abundance of web-based watchdogs.
Felipe gives an overview of one of his projects, Ciudadano Inteligente. The organization’s Interest Inspector uses open data to compare the personal interests and financial ties of Chilean congressmembers to the official policies they support to uncover conflicts of interest in what Felipe calls an “ongoing accountability exercise.”
3. Stay cautious.
There’s a lot of “talking, cocktails, and pictures” about transparency and open data, Felipe says, but even more importantly, there is still a lot of work to be done. Most of the open data experiments today are coming from the local level—cities releasing data, for example—instead of growing out of a truly national, state-level commitment to transparency. Also, most of the data available online today is not all that useful: it’s images and transportation data, with hardly anything about, for example, banking and finance.
Still, the implementation of open data policies matters, Felipe says, and it matters for accountability (and potentially for business and the delivery of public services as well). Promoting the use of these policies is crucial for securing access to information.
Doc Searls asks about personal data: MyData in the UK and Google Takeout. How does this relate?
Jennifer Shkabatur challenges both Felipe’s assertion that open data is “neutral”—data sets are still structured by someone, she points out—and the idea that open data is free of gatekeepers. As Felipe noted, the datasets that do exist openly are heavily weighted toward simpler, more fluffy data instead of making information about finance, education, and health more available. She also questions whether open data forces people to rely on NGOs and other organizations that can interpret and make use of this data—how does this lead to real accountability?
Felipe responds that documents are different from data sets in that they are more structured/created/manipulated than the data itself. Neutral facts are lists of data—the number of policemen on the streets, etc. A civil servant still creates columns and files, but the numbers are neutral. Under today’s FOI laws, governments are not compelled to release this information—they only have to release documents about this information, so if the number you want is not in a report, you’re out of luck.
With respect to gatekeepers, Felipe agrees that they still exist but argues that they are fewer under open data policies than under FOI laws. Open data allows you to, for example, cross-reference data from your own government with World Bank data, which may lead to new discoveries.
Jennifer jumps back in with a question about supply and demand: journalists use FOIA to find specific information. Relying only on data sets and hoping you can cross-reference and find information shifts the situation from one that is demand-driven to one that is supply-driven.
Felipe responds that, even under the current FOI laws, the situation is still supply-driven. We haven’t seen yet whether there’s a change in supply and demand.
Sascha Meinrath points out that certain data sets are released in response to citizen demand, but he’s skeptical that the government will voluntarily release “data that really matters.”
Felipe notes that open data policies aren’t intended to replace FOIA, but rather to complement it.
Yochai Benkler notes that FOIA can still be helpful—we care less about transportation data and more about, say, whether the Department of Homeland Security is monitoring our tweets, which is information we may be able to get from contracts and other documents available through FOIA.
Yochai also points out that a lot of excitement and political/emotional energy gets poured into open data, which can make actors seem “good” when they are actually not all that good: the holding up of the US as a model of transparency by a mobilized global community is dangerous, he warns.* If you think the core of freedom of information is forcing the government to release data it doesn’t want to release, then FOIA is more important, and data is supplementary. If you believe the core is that once the data is out there, you can’t hide, then data is the key.
Felipe notes that data.gov was not the best example, just one of the first, and points to tools like Accesso Inteligente, which helps citizens file FOIA requests and tracks all questions and answers in a searchable database.
*Someone from the audience pushes back against this, pointing out that data.gov has many detractors.
Isaac Meister asks about “two-way sharing,” which seems to him to be a departure from, rather than a description of, the FOIA model. He asks what data private citizens should be encouraged to share with the government, what rights they may gain or lose by doing so, and what obligations the government may have to share that data.
An audience member (whose name I didn’t catch, sorry!) asks how well open data is being used by average citizens. Does the general public rely on others to interpret the data for them? Do they care that it exists? Who is using the data interpretation tools (like the Interest Inspector) that currently exist?
Felipe responds that yes, average citizens do still rely on intermediaries, usually the press. Ciudadano Inteligente has seen an increase in users of its applications (both general users and the press itself).
For the past two weeks I’ve been besieged by what I can only assume is the plague, and in the process, I’ve lost my voice. It started out like this:
Over the weekend, my camping buddies decided I sounded more like Sarah Michelle Gellar:
On Tuesday I turned into Kathleen Turner:
Then during a conference call yesterday, I was called Suzanne Pleshette:
But I think I sound more like this:
Other names in the running include Squeaky, Snuffles, The Snuff Creature, Schnupfi, Coughy McCougherson, and, after my pathetic attempts to communicate in hand gestures, “satanic mime” and “big flapping bird.” I have the best coworkers.
Spammers are getting more targeted, if not always more specific, these days, as evidenced by this lovely comment left on a website I run for work:
Thanks for all your valuable efforts on this blog. My daughter take interest in working on internet research and it’s really easy to see why. Almost all learn all relating to the compelling form you render informative tactics by means of this web site and therefore recommend response from people about this theme then our favorite daughter is always discovering a lot. Take advantage of the remaining portion of the new year. Your performing a powerful job.
Good luck, favorite daughter! May you continue to discover a lot about rendering informative tactics.
Liveblogging Michael Nielsen’s presentation on Doing Science in the Open at the Berkman Center. Please excuse misrepresentation, misinterpretation, typos, and general stupidity (all of which are mine and mine alone).
Michael Nielsen is at the Berkman Center today to talk about the ideas in his new book, Reinventing Discovery, which explores how the Internet is changing the way we collectively tackle complex scientific problems and enabling us to collaboratively break new scientific ground. Peter Suber, introducing Nielsen, points out that the timing of this talk is particularly good, given that it’s Open Access Week.
Nielsen begins with the story of Tim Gowers (recounted in full detail in the first chapter of Reinventing Discovery). Gowers is a mathematician and a blogger who, in January 2009, decided to invite his readers to collaborate with him to solve a fairly difficult math problem. Within a little over a month, nearly 30 people contributed 800 comments to Gowers’s initial blog post, solving not only the initial problem but also a slightly harder problem. The project—which Gowers called the Polymath Project—was a success.
Nielsen wonders why collaboration of this type isn’t more common in science. He describes the “significant failure” of Qwiki, a research wiki for quantum computing. Qwiki was announced at a workshop in 2005 to somewhat mixed reactions: some people were horrified by the idea, others ambivalent, and a few enthusiastic. Those who were supportive, though, intended to engage only as readers, not as contributors. “Science is littered with examples of wikis like this,” Nielsen says—”ghost towns” of research and collaboration.
“The fundamental problem,” he argues, “is one of opportunity cost.” The publication of scientific papers is central to scientists’ careers, and one paper will do far more for a career than a slew of well-reasoned comments on a website.
So why did the Polymath Project work? Nielsen argues that the project’s success was due in large part to the fact that before the project even started, Gowers and others were discussing the papers that would result and who would be counted as an author. The project was an unconventional means to a traditional end: publication.
What we need is a way for people to be recognized and rewarded when they contribute scientific research in unconventional formats, Nielsen claims. But how do we do this? How do we change the way the scientific community judges value?
Nielsen points to the Bermuda Principles, a set of accords designed to encourage the open sharing of pre- or unpublished data from the Human Genome Project. The principles stated, among other things, that any sequence of data larger than a certain size would be released publicly within 24 hours. These principles eventually became policy, meaning that subsequent funding for gene sequencing research was tied to an obligation to conduct this research in an open manner.
This is great, Nielsen says, but this “mandate approach” isn’t enough. Scientists need to internalize the value of sharing their data and to enthusiastically accept principles of openness and sharing as their own. We need a cultural shift.
A couple of steps in the right direction: the Journal of Visualized Experiments sends camera crews into labs to document researchers explaining their projects. The resulting videos are often far clearer than print explanations would be, creating an incentive for researchers to disclose information in new ways. Another journal encourages researchers to approach publication as an effort to release their data first, accompanied by some explanatory material, rather than their explanation/paper, with the data as supplementary. Finally, Nielsen also mentions that blog posts have begun to show up in Google Scholar, along with citation statistics. This may be a good or a bad thing, he says, but it’s another way to explore the use of digital collaborative tools to enable greater openness in science.