Category Archives: technology

things I am appreciating today

From David Weinberger’s “Copyright’s Creative Disincentive”:

It takes culture. It takes culture to build culture.

Whether it’s Walt Disney recycling the Brothers Grimm, Stephen King doing variations on a theme of Bram Stoker, or James Joyce mashing Homer up with, well, everything, there’s no innovation that isn’t a reworking of what’s already there. An innovative work without cultural roots would be literally unintelligible. So, incentives that require overly-strict restrictions on our use of cultural works directly diminish the innovativeness of that culture.

The facts are in front of us, in overwhelming abundance. The signature works of our new age are direct slaps in the face of our old assumptions about incentives. Wikipedia was created by unpaid volunteers, some of whom put in so much time that their marriages suffer. Flickr has more beautiful photos than you could look at in a lifetime. Every sixty seconds, people upload twenty hours (72,000 seconds) of video to YouTube — the equivalent of 86,000 full-length Hollywood movies being released every week. For free. The entire Bible has been translated into LOLcat (“Oh hai. In teh beginnin Ceiling Cat maded teh skiez An da Urfs, but he did not eated dem.”) by anonymous, unpaid contributors, and while that might not be your cup of tea — it is mine — it is without dispute a remarkably creative undertaking.

From Amanda French’s “Imagine a National Digital Library: I Wonder If We Can”:

…the Korean dibrary [digital library] is not just about fancy physical spaces or symbolic cartoon characters: it’s very much about providing a whole set of national library services for Korea. In September 2009, just a few months after the dibrary first opened, Korean law was altered in order to give Korean dibrarians the authority to collect and indeed responsibility for collecting Korean data from the open web. Certain kinds of data were legally required to be deposited in the national digital library so as to enable not only preservation but also “the production and distribution of alternative materials for the disabled.” Now centrally coordinated by the National Digital Library of Korea are all kinds of digital services, from training programs to inter-library loan. The dibrary is even charged with creating a “one card system that gives access to 699 public libraries nationwide,” a system scheduled to go live in 2012. And once Korea has fully nationalized as many library materials and services as it can, it’s apparently not going to stop there: last summer a meeting was held to plan a China-Japan-Korea Digital Library, an Asian digital library or portal modeled after The European Library project. To me it sounds like the second step toward the single digital library filed contentedly away in the humming systems of the starship Enterprise, waiting to be addressed with a question: “Computer . . .”

From Joachim Buwembo’s editorial in The East African, “Uganda’s runaway vote price inflation has economists baffled”:

In 2001, instead of paying heavily for votes, you could reduce the votes of your candidate’s opponent by killing off some of his voters.

The more subtle methods used could include driving an army truck through a crowd of his supporters.

In 2006, things could get a bit more direct and you could fire a sub-machinegun into a crowd of the supporters of your candidate’s rival in broad daylight in the capital city.

But come 2011, things have become more humane and it is market forces that are determining the direction of flow of votes.

Lunch at Berkman: DDoS Attacks Against Independent Media and Human Rights Sites

Liveblogging Hal Roberts, Ethan Zuckerman and Jillian York’s presentation on Distributed Denial of Service Attacks Against Independent Media and Human Rights Sites at the Berkman Center. Please excuse misrepresentation, misinterpretation, typos and general stupidity.


Hal begins by outlining the history of denial of service attacks, which “have been around as long as the Internet.” The rise of botnets allowed for distributed denial of service (DDoS) attacks, in which the attacks are coming from multiple places at the same time. Early botnets were controlled by IRC; these days, many are operated through Twitter accounts.

Ethan points out that we’re seeing a rise in botnets being used to attack each other. One of the largest Internet outages of all time — 9 hours long, in China — was caused by a botnet-fueled “turf war” between two online gaming providers.

(Interesting factoid: early DDoS defense systems grew from the needs of online gambling sites that were being attacked, who operate in a gray area and may not want to ask authorities for help defending against attacks.)

Arbor’s ATLAS, which tracks DDoS attacks worldwide, estimates that 500-1500 attacks happen per day. Hal & Ethan believe that ATLAS “only sees the big ones,” meaning the 500-1500 number is a gross underestimate.

DDoS attacks comprise a wide variety of approaches: slowloris attacks tie up a server by opening many connections and sending requests extremely slowly, exhausting its connection pool, while random incessant searches force a server to repeatedly execute database calls, using up all available resources. These two examples are application attacks that essentially “crash the box” (take down a single server). Network attacks that involve volunteers, bots, and/or amplifiers work by “clogging the pipe,” or slowing down the flow of traffic, for example by requesting huge amounts of data that flood a server.

People who face DDoS attacks have several options. One is to obtain a better machine with a higher capacity to handle requests. Another is to rent servers online in order to add resources only when they’re needed. Packet filtering can block malicious traffic (assuming it can be identified); scrubbing involves having a data center filter packets for you. Source mitigation and dynamic rerouting are used when the network itself is flooded and packet filtering and scrubbing become impractical. Both tactics involve preventing the flood of traffic from arriving, whether by stopping it in its tracks or by sending it somewhere else.
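To make the packet-filtering idea concrete, here is a minimal sketch (my own illustration, not anything from the talk) of the simplest kind of rate-based filter: count requests per source address in a sliding time window and drop sources that exceed a threshold. The class name, limits, and window size are all invented for illustration.

```ruby
# Toy sliding-window rate filter: tracks request timestamps per source IP
# and rejects a source once it exceeds the allowed rate. Real DDoS filtering
# happens far lower in the stack, but the core idea is the same.
class RateFilter
  def initialize(limit:, window_seconds:)
    @limit = limit
    @window = window_seconds
    @hits = Hash.new { |h, k| h[k] = [] } # source IP => request timestamps
  end

  # Returns true if the request should be allowed through.
  def allow?(source_ip, now = Time.now.to_f)
    timestamps = @hits[source_ip]
    timestamps.reject! { |t| t < now - @window } # expire old entries
    timestamps << now
    timestamps.size <= @limit
  end
end

filter = RateFilter.new(limit: 100, window_seconds: 60)
filter.allow?("203.0.113.9") # allowed until the source exceeds 100/minute
```

As the talk notes, the hard part in practice is the parenthetical “assuming it can be identified” — distributed attacks spread requests across thousands of sources precisely so that no single source trips a filter like this.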

All of these tactics are problematic in some way: they’re expensive (scrubbing can cost $40,000-50,000 per month), they require considerable advance planning or high-level connections, or they’re tricky to execute (the “dark arts” of DDoS defense).

“All of this is background,” Hal says. Their specific research question involves independent media and human rights sites — what kinds of DDoS attacks are used against them, and how often? How can they defend themselves?

Hal describes a “paradox” of DDoS attacks: overall, the defenses are working pretty well. Huge sites — Google, the New York Times, Facebook — are attacked often, but they manage to stay online. This is because these sites are located close to the core of the network, where around 75% of ISPs are able to respond to DDoS attacks in less than an hour, making DDoS attacks a “manageable problem.” The sites at the edge of the network are much more vulnerable, and they’re also much more likely to be attacked.

Ethan describes the case of Viet Tan, which is under DDoS attacks almost constantly — to the extent that when they put up a new web service, it is attacked within hours. As a result, Viet Tan has shifted many of their new campaigns to Blogger blogs.

Viet Tan is struggling in particular because they’re not only experiencing DDoS attacks. They also face filtering at the national level, from a government that wants to prevent people in Vietnam from accessing their site. Ethan says that 81% of sites in the study that had experienced a DDoS attack have also experienced intrusion, filtering, or another form of attack. In Viet Tan’s case, the site was being unknowingly attacked by its own target audience, many of whom were using a corrupted Vietnamese keyboard driver that allowed their computers to be used as part of a botnet attack.

One of the big problems for sites that are DDoS-ed is that their ISPs may jettison them in order to protect other sites on the same server. Of the attacked sites in the study, 55% were shut down by their ISP, while only 36% were successfully defended by their ISP.

An attack against Irrawaddy, a Burmese activist site hosted in Thailand, essentially caused all of Thailand to go offline. In response, Irrawaddy’s ISP asked it to move elsewhere. This year they faced a second, larger attack. By then they were on a stronger ISP that might have been able to protect them, but they hadn’t paid for the necessary level of protection and were shut down again.

Hal and Ethan suggest that a kind of informal social insurance is emerging online, at least among larger sites — everything is starting to cost a little bit more, with the extra cost subsidizing the sites that are attacked. The problem is that small Internet sites aren’t protected, because they’re not in the core.

Hal and Ethan wonder whether someone should build dedicated human rights hosting to protect these sites from attacks. The problem with this is that it collects all these sites into a single location, meaning any company that hosted a group of these sites would be a major target for DDoS attacks. Devising a fair pricing system in this case is tricky.

Ethan raises the issue of intermediary censorship — the constant threat that your hosting company may shut your site down for any reason (e.g., when Amazon shut down Wikileaks). This is a problem of Internet architecture, he says, and there are two solutions: building an alternative, peer-based architecture, or creating a consumer movement that puts sufficient pressure on hosting companies not to take sites down.

What Hal and Ethan ended up recommending to these sites is to have a back-up plan; to minimize dynamic pages; to have robust mirroring, monitoring and failover; to consider hosting on Blogger or a similar large site; and to avoid using the cheapest hosting provider.

Within some communities, Ethan says, a person or group emerges that is the technical contact. This person or group advocates for sites that are under attack. These “tech leaders” are connected to one another and to companies in the core that want to help. The problem is that this isn’t a particularly scalable model — a better chain needs to be established, so that problems can escalate through a team of local experts up to larger entities. In the meantime, it’s essential to increase organized public pressure on private companies not to act as intermediary censors, but rather to support these sites.

Tools for Transparency: Google Refine

Originally posted as a guest post on the Sunlight Foundation blog.

For the past six months, I’ve served as the co-director of the Technology for Transparency Network, an organization that documents the use of online and mobile technology to promote transparency and accountability around the world. One of the most common challenges the project leaders we’ve interviewed face is making sense of large amounts of data.

In countries where governments keep detailed digital records of lobbying data and education expenditures, data wrangling is a time-consuming, labor-intensive task. In countries where these records are poorly maintained, this task becomes even harder — everything from inconsistent data entry practices to simple typos can derail data analysis.

Google Refine (formerly Freebase Gridworks) is a free, open-source tool for cleaning up, combining, and connecting messy data sets. Rather than acting like a traditional spreadsheet program, Google Refine exists “for applying transformations over many existing cells in bulk, for the purpose of cleaning up the data, extending it with more data from other sources, and getting it to some form that other tools can consume.”

At its most basic level, Google Refine helps users quickly summarize, filter and edit data sets by allowing them to view patterns and to spot and correct errors quickly. More advanced features include reconciling data sets with the data repository Freebase (i.e., matching text in the set against existing database IDs), geocoding, and fetching additional information from the Web based on existing data.
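As a toy illustration of the kind of bulk transformation Refine automates (this is my own sketch, not Refine’s actual implementation), consider normalizing inconsistently entered agency names so that variants collapse into one canonical value — exactly the “inconsistent data entry practices” and “simple typos” described above:

```ruby
# Normalize a messy text entry: trim whitespace, collapse doubled spaces,
# ignore capitalization, and drop stray punctuation, so that variant
# spellings of the same value become identical.
def normalize(value)
  value.strip
       .squeeze(" ")
       .downcase
       .gsub(/[.,]/, "")
end

entries = ["Ministry of Education", " ministry of education ",
           "MINISTRY  OF EDUCATION", "Ministry of Education."]
entries.map { |e| normalize(e) }.uniq
# => ["ministry of education"]
```

Refine applies transformations like this across thousands of cells at once, and its clustering features can suggest which variants probably refer to the same underlying value.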

Though it runs through an Internet browser, Google Refine operates offline, making it attractive for those with limited bandwidth or privacy concerns — a group that includes many of the projects listed on the Technology for Transparency Network.

Google Refine isn’t going to solve the problem of poor data availability, but for those who manage to gain access to existing records, it can be a powerful tool for transparency.


SIPA Shushing Students over CableGate. Seriously?

Yesterday a friend forwarded me a link to a blog post about Wikileaks. Not surprising, given the number of Wikileaks-related blog posts that are floating around the Internet in the wake of the organization’s release of a quarter of a million U.S. Embassy cables. But this blog post was different: this blog post referenced the Columbia University School of International and Public Affairs (SIPA), from which I graduated six months ago.

The author reposts an e-mail sent from SIPA’s Office of Career Services to all current students. It reads:

From: “Office of Career Services”

Date: November 30, 2010 15:26:53 EST

To:

Hi students,

We received a call today from a SIPA alumnus who is working at the State Department. He asked us to pass along the following information to anyone who will be applying for jobs in the federal government, since all would require a background investigation and in some instances a security clearance.

The documents released during the past few months through Wikileaks are still considered classified documents. He recommends that you DO NOT post links to these documents nor make comments on social media sites such as Facebook or through Twitter. Engaging in these activities would call into question your ability to deal with confidential information, which is part of most positions with the federal government.

Office of Career Services

I’m currently happily employed at the Berkman Center for Internet & Society, but while I was at SIPA I seriously considered a career in the Foreign Service. I applied for (and was offered) a summer internship at the State Department, and I coordinated a conference on Policy Making in the Digital Age, at which the State Department’s Director of the Office of eDiplomacy and a representative of the Office of Innovative Engagement spoke.

I guess I can kiss that possible alternate career path goodbye, given that I tweeted a link yesterday to an article about CableGate. Seriously, State Department? This is all over the news. What’s more, it’s become a focal point for discussions on how digital technology is changing our expectations for government transparency (for those who’ve forgotten: the State Department is big on using tech to promote transparency in other countries. Just not here in the US?).

Seriously, SIPA? As fellow SIPA alum Ben Colmery pointed out in a comment on my Facebook wall, since when does having an opinion about a site leaking documents equate to actually leaking documents oneself? You claim to provide committed students with the necessary skills and perspectives to become responsible leaders. Apparently that means curtailing their academic freedom and teaching them how to bury their heads in the sand.

Crossposted on The Morningside Post

Update, December 6: The State Department is denying that it provided “advice to anyone beyond the State Department” regarding Wikileaks and claiming the information in the OCS email “does not represent a formal policy position.”

Tech for Transparency: New Interviews Posted

Avid readers of my blog (here’s looking at you, Rev) may remember that several months ago I announced that research was beginning for the second phase of the Technology for Transparency Network. The first phase consisted of interviews with over 30 projects around the world that are using technology to promote transparency and accountability in the government and/or private sector. Our goal in the second phase was twofold: to double the number of case studies on the site and to expand the geographic regions we covered.

Since then, I’ve been largely silent about the project — we’ve been working so hard to complete and edit the interviews that I haven’t had much time to breathe. But today I’m thrilled to announce that we have eight new case studies online, with lots more to come over the next few weeks. The case studies that have been posted so far are:

Accountability Initiative
Accountability Initiative researches and creates innovative tools to promote transparency and accountability in India’s public services.

Amatora mu Mahoro
Amatora mu Mahoro (“Peaceful Elections”) is an Ushahidi-based project created to monitor Burundi’s 2010 elections.

Association for Democratic Reforms
ADR India works to monitor national elections through country-wide SMS and helpline campaigns and an informational website.

seeks to empower citizens by helping them collectively send petitions and inquiries to government bodies.

Excelências fights corruption in the Brazilian government by publishing data about politicians and government activities online.

Golos (Voice) has introduced several online tools for better election monitoring in Russia.

Mam Prawo Wiedzieć
Mam Prawo Wiedzieć helps Polish citizens access information about their elected representatives in an easy, user-friendly way.

Pera Natin ‘to!
Pera Natin ‘to! (It’s Our Money!) encourages Filipino citizens to report times when they are asked for bribes.

In addition to continuing to post new case studies (you can subscribe to our case study feed via RSS), we’ll also be publishing our final report on both phases of the project by the end of the month. In the meantime, check out @techtransparent and our Facebook page for daily updates and our podcast for interviews with the project leaders!

Juliet Schor on “Post-Industrial Peasants”

Liveblogging Juliet Schor’s presentation “Using the Internet to ‘Save the Planet'” at the Berkman Center. Please excuse misrepresentation, misinterpretation, typos and general stupidity.


Sociology professor Juliet Schor is at the Berkman Center today to talk about how the sustainability community — both activists and practitioners — is increasingly using the Internet to “foster new lifestyles, consumption patterns and ways of producing.” Her presentation is based on her recent book Plenitude: The Economics of True Wealth, in which Schor argues that by shifting to a more sustainable way of life, we can improve both the environment and our economic situation. While writing the book, Schor says, she came to believe that the sustainability and technology communities should have a much closer relationship.

It sounds crazy — “post-industrial peasants” — but there are some very important features to that: diversity of activities and income streams is key. Putting all your eggs in the basket of one employer is riskier and riskier in times of economic uncertainty. The single income stream strategy is becoming less attractive, and diversification is smart. The reason it makes sense now in a way it wouldn’t have 50 years ago is because of technology. Technology allows a single individual or a small company to be productive in ways they couldn’t have before — access to the network, access to information. This is the next stage after “big.” The large economies of scale will be less important going forward, and small-scale efforts will become more important.

Schor starts out by describing a “dramatic collapse” in biodiversity since the 1970s, the growing ecological footprints of different countries (hint: the United States is at the top, using more than four times the world average biocapacity per person), and our collective failure to reduce global carbon dioxide emissions. (She points out that recent data shows that the best way to reduce emissions is to have economic collapse, though that’s not practical as a long-term strategy.)

Schor argues that a purely technological approach won’t halt climate change — this is also a problem of scale. According to a recent paper in Nature, we have already exceeded two of nine different “planetary boundaries” (in categories such as climate change, ocean acidification, biodiversity loss, and others), and we’re close to hitting the sustainable boundaries on a number of others. The strategy of de-materialization (reducing the “material intensity” of our energy use) has had some success, but our economic growth has “more than outweighed the decline” in material intensity. On a worldwide basis, Schor says, our material intensity has actually increased by around 45%, while North America has been a particularly egregious user of materials — our material extraction has increased by about 66% since 1980. This is largely due to our use of fossil fuels and the construction boom.

The Challenges

Schor argues that the world needs to cut its ecological impact rapidly. The problem, she says, is that we’re in the midst of an unemployment crisis. This is a disaster both economically and environmentally speaking. We also shouldn’t take any paths that worsen the distribution of wealth — there’s a negative correlation between income inequality and certain environmental indicators — or decrease human development (i.e., wealth and well-being) overall.

Plenitude: The Economic Model

Switching to green technology (a clean consumption and production system) will help, Schor says. So will improving eco-knowledge, which she defines as “open source transmission and ecological skill diffusion.” We’re “centuries behind” in terms of developing both an understanding of nature as a scarce resource and technology that would allow us to increase the productivity of that resource.

Schor points to working hours, which declined dramatically between 1870 and the 1970s (from around 3,000 to around 2,000 per year). Since 1973, however, annual hours worked have been increasing in the United States. A country’s ecological footprint rises with its average annual hours worked, even when income is held constant. Schor says that as we move forward, we need to focus on achieving productivity growth in fewer working hours, rather than by adding new hours. She wants to move hours from the “business as usual” economy to “self-providing” and green entrepreneurship. This will reduce market dependence and reliance on large corporations and provide more time for people to increase their skills, build local resilience, and help create a small-scale, low-impact sector of enterprises.

Schor provides an example in the form of permaculture (a high-productivity approach to agriculture) and urban agriculture. This form of micro-generation, which applies not only to farmers’ markets and fruits and vegetables but also to energy and homes (DIY yurts, anyone?), is low-cash and low-footprint in comparison to more market-driven methods and mechanisms. Schor is currently working on a number of other case studies, including a permaculture farm in the Netherlands and a converted soybean farm in Kansas built with fab lab technology. The Kansas farm is also trying to build a blueprint for other communities to follow. What’s cool about this, Schor says, are the low financial barriers to entry: communities can purchase the machines, and the costs of materials are low.

Schor’s also interested in the principle of sharing: couches, homes, cars, tools, etc. She says the recession has “changed the calculus of time and money,” creating an environment that fosters these sorts of sharing schemes. Another initiative that has sprung up in this environment is the transition movement, which focuses on helping communities build local resilience.

Overall, Schor says, our constraint is much more about time — we work long hours in formal jobs, which we need in order to have access to health insurance, housing, and education. We need to find ways to allow people to “delink” from these jobs, which are high footprint jobs, to allow them to do more of this kind of small-scale activity.

Tech for Transparency, v2

Today we officially launched the second phase of the Technology for Transparency Network, a Rising Voices project that documents and maps projects around the world that use online technology to promote transparency and accountability.

Technology for Transparency Network

During the first phase, which ran from January to May of this year, we mapped 37 case studies from Central & Eastern Europe, China, Latin America, South Asia, Southeast Asia and anglophone Sub-Saharan Africa. Between now and September, we’ll be nearly doubling that number and expanding our focus to include projects from the Middle East & North Africa, the former Soviet Union and francophone Africa.

Researchers from the Technology for Transparency Network present at the 2010 Global Voices Summit in Santiago, Chile. Photo courtesy of FabsY_ on Flickr.

I am psyched to be co-heading the project along with the formidable and talented Renata Avila. We’re thrilled to be working with an amazing team of researchers and advisors, including our new editorial advisor Hazel Feigenblatt. Hazel is the Media Projects Director at Global Integrity and will be working with us to make sure we interview the most innovative and exciting projects in this space.

If you have an idea for a case study, let us know! We’re currently taking suggestions in English, Spanish and Portuguese. You can also subscribe to our RSS feed to get updates when we publish new case studies, follow us on Twitter (@techtransparent) and become a fan on Facebook.

Little Brother and America as a police state

On Jer’s recommendation, I’m reading Cory Doctorow’s Little Brother, which you can and should download for free from his site.

The book is a fictional account of a high school kid — a smart, technologically skilled high school kid — who ends up on the wrong side of the Department of Homeland Security after a terrorist attack in San Francisco. As I sat in Dulles airport last night waiting for my flight back to Boston, I realized just how much information I put online and how little effort it would take the DHS to throw me in a holding cell were the American government so inclined.

I came to work this morning to news that the Senate Committee on Homeland Security and Governmental Affairs has approved the Protecting Cyberspace as a National Asset Act, which among other things gives the president the power to force ISPs and search engines to limit or shut down connections at his whim. Oh, and by the way, the ACLU has announced that “Americans have been put under surveillance or harassed by the police just for deciding to organize, march, protest, espouse unusual viewpoints and engage in normal, innocuous behaviors such as writing notes or taking photographs in public” in at least 33 states.

I’m trying not to be alarmist about this, but maybe I should be?

Learning Ruby: Recursion

One of my fellow Berkterns is teaching some of us Ruby this summer. Our first lesson included an example in which we defined an array, set a variable equal to that array, and then added that variable back into the array, effectively defining an infinite array (one that, in our case, repeated “dog, pony, show” over and over again).
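Reconstructing the lesson’s example (the variable names here are my own invention), the trick is that assigning an array to a second variable doesn’t copy it — both names point at the same object, so appending one to the other makes the array contain itself:

```ruby
# An array that contains itself: both variables reference the same object,
# so appending `animals` to `show` nests the array inside itself.
show = ["dog", "pony", "show"]
animals = show      # not a copy — same underlying array
show << animals     # the array is now its own last element

show[3].equal?(show) # => true: the nested element IS the outer array
show.inspect         # => "[\"dog\", \"pony\", \"show\", [...]]"
```

Ruby’s `inspect` is smart enough to print `[...]` instead of recursing forever, but conceptually you can keep indexing into `show[3][3][3]...` as deep as you like — hence the “infinite” array of dog, pony, show.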

“This is recursion,” our wise teacher announced. “If you don’t know what recursion is, Google it.”

We all dutifully followed his instructions and found this:

Recursion. Did you mean: recursion?

Oh, Google.

GV Sudan: Checking in with Sudan Vote Monitor

My next post is up at Global Voices Online:

On the eve of Sudan's 2010 presidential elections, I interviewed Fareed Zein, who heads the citizen election monitoring project Sudan Vote Monitor, for the Technology for Transparency Project. Zein was hopeful that the project would bring greater transparency to the country's first democratic elections in more than two decades. “There was basically no idea what was going on on the ground” during previous political events, Zein said at the time. “What we're hoping to do is shine a light and give people access to events that are occurring at remote election centers.” On Wednesday I checked in with Zein to get his thoughts on the project now that the elections have ended.

Read the interview »