
Posts Tagged ‘open data’

At our April 5th Salon in Washington, DC we had the opportunity to take a closer look at open data and privacy and discuss the intersection of the two in the framework of ‘responsible data’. Our lead discussants were Amy O’Donnell, Oxfam GB; Rob Baker, World Bank; Sean McDonald, FrontlineSMS. I had the pleasure of guest moderating.

What is Responsible Data?

We started out by defining ‘responsible data’ and some of the challenges when thinking about open data in a framework of responsible data.

The Engine Room defines ‘responsible data’ as

the duty to ensure people’s rights to consent, privacy, security and ownership around the information processes of collection, analysis, storage, presentation and reuse of data, while respecting the values of transparency and openness.

Responsible Data can be like walking a tightrope, noted our first discussant: you need to find the right balance between opening and sharing data, and doing so ethically and responsibly. “Data is inherently related to power – it can create power, redistribute it, make the powerful more powerful or further marginalize the marginalized.” Getting the right balance involves asking some key questions throughout the data lifecycle, from the design of data gathering all the way through to disposal of the data.

How can organizations be more responsible?

If an organization wants to be responsible about data throughout the data life cycle, some questions to ask include:

  • In whose interest is it to collect the data? Is it extractive or empowering? Is there informed consent?
  • What and how much do you really need to know? Is the burden of collecting and the liability of storing the data worth it when balanced with the data’s ability to represent people and allow them to be counted and served? Do we know what we’ll actually be doing with the data?
  • How will the data be collected and treated? What are the new opportunities and risks of collecting and storing and using it?
  • Why are you collecting it in the first place? What will it be used for? Will it be shared or opened? Is there a data sharing MOU and has the right kind of consent been secured? Who are we opening the data for and who will be able to access and use it?
  • What is the sensitivity of the data and what needs to be stripped out in order to protect those who provided the data?

Oxfam has developed a data deposit framework to help assess the above questions and make decisions about when and whether data can be open or shared.

(The Engine Room’s Responsible Development Data handbook offers additional guidelines and things to consider)

(See: https://wiki.responsibledata.io/Data_in_the_project_lifecycle for more about the data lifecycle)

Is ‘responsible open data’ an oxymoron?

Responsible Data policies and practices don’t work against open data, our discussant noted. Responsible Data is about developing a framework so that data can be opened and used safely. It’s about respecting the time and privacy of those who have provided us with data and reducing the risk of that data being hacked. As more data is collected digitally, and as donors begin to require organizations to hand over data collected with their funding, it’s critical to have practical resources that help staff be more responsible about data.

Some disagreed, arguing that consent can never be truly informed and that open data can never be responsible, since once data is open, all control over it is lost. “If you can’t control the way the data is used, you can’t have informed people. It’s like saying ‘you gave us permission to open your data, so if something bad happens to you, oh well….’” Informed consent is also difficult nowadays because data sets are being combined and used in ways that were not possible when informed consent was initially obtained.

Others noted that standard informed consent practices are unhelpful, as people don’t understand what might be done with their data, especially when they have low data literacy. Involving local communities and individuals in defining what data they would like to have and use could make the process more manageable and useful for those whose data we are collecting, using and storing, they suggested.

One person said that if consent to open the data was not secured initially, the data cannot be opened, say, 10 years later. Another felt that it was one thing to open data for a purpose and something entirely different to say “we’re going to open your data so people can do fun things with it, to play around with it.”

But just what data are we talking about?

USAID was questioned for requiring grantees to share data sets and for leaning towards de-identification rather than raising the standard to data anonymity. One person noted that at one point the agency had proposed a 22-step process for releasing data and even that was insufficient for protecting program participants in a risky geography because “it’s very easy to figure out who in a small community recently received 8 camels.” For this reason, exclusions are an important part of open data processes, he said.
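(Author’s note: to illustrate the gap between de-identification and anonymity that this comment points to, here is a minimal, purely hypothetical sketch in Python. All of the records, field names and the small linking function are invented for illustration and are not drawn from any real dataset or from USAID’s actual process; the point is simply that quasi-identifiers left in a ‘de-identified’ release can be joined with local knowledge to single people out.)

```python
# Hypothetical illustration: a "de-identified" release still carries
# quasi-identifiers (village, household size) that can be linked to
# background knowledge. All data below is invented.

deidentified_release = [
    {"village": "A", "household_size": 6, "assistance": "8 camels"},
    {"village": "A", "household_size": 3, "assistance": "cash transfer"},
    {"village": "B", "household_size": 4, "assistance": "8 camels"},
]

# Information an adversary could plausibly already hold: a local roster,
# word of mouth, or another open dataset -- again, entirely made up.
local_knowledge = [
    {"name": "Head of household X", "village": "A", "household_size": 6},
    {"name": "Head of household Y", "village": "A", "household_size": 3},
    {"name": "Head of household Z", "village": "B", "household_size": 4},
]

def reidentify(release, background):
    """Link records on shared quasi-identifiers; unique matches are re-identified."""
    matches = []
    for record in release:
        candidates = [
            person for person in background
            if person["village"] == record["village"]
            and person["household_size"] == record["household_size"]
        ]
        if len(candidates) == 1:  # a unique match defeats the de-identification
            matches.append((candidates[0]["name"], record["assistance"]))
    return matches

print(reidentify(deidentified_release, local_knowledge))
# -> every record is uniquely re-identified, including who received the 8 camels
```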

It’s not black or white, said another. Responsible open data is possible, but openness happens along a spectrum. You have financial data on the one end, which should be very open as the public has a right to know how its tax dollars are being spent. Human subjects research is on the other end, and it should not be totally open. (Author’s note: The Open Knowledge Foundation definition of open data says: “A key point is that when opening up data, the focus is on non-personal data, that is, data which does not contain information about specific individuals.” The distinction between personal data, such as that in household level surveys, and financial data on agency or government activities seems to be blurred or blurring in current debates around open data and privacy.) “Open data will blow up in your face if it’s not done responsibly,” he noted. “But some of the open data published via IATI (the International Aid Transparency Initiative) has led to change.”

A participant followed this comment up by sharing information from a research project conducted on stakeholders’ use of IATI data in 3 countries. When people knew that the open data sets existed they were very excited, she said. “These are countries where there is no Freedom of Information Act (FOIA), and where people cannot access data because no one will give it to them. They trusted the US Government’s data more than their own government data, and there was a huge demand for IATI data. People were very interested in who was getting what funding. They wanted information for planning, coordination, line ministries and other logistical purposes. So let’s not underestimate open data. If having open data sets means that governments, health agencies or humanitarian organizations can do a better job of serving people, that may make for a different kind of analysis or decision.”

‘Open by default’ or ‘open by demand’?

Though there are plenty of good intentions and rationales for open data, said one discussant, ‘open by default’ is a mistake. We may have quick wins with a reduction in duplication of data collection, but our experiences thus far do not merit ‘open by default’. We have not earned it. Instead, he felt that ‘open by demand’ is a better idea. “We can put out a public list of the data that’s available and see what demand for data comes in. If we are proactive on what is available and what can be made available, and we monitor requests, we can avoid putting out information that no one is interested in. This would lower the overhead on what we are releasing. It would also allow us to have a conversation about who needs this data and for what.”

One participant agreed, positing that often the only reason that we collect data is to provide proof and evidence that we’re doing our job, spending the money given to us, and tracking back. “We tend to think that the only way to provide this evidence is to collect data: do a survey, talk to people, look at website usage. But is anyone actually using this data, this evidence to make decisions?”

Is the open data honeymoon over?

“We need to do a better job of understanding the impact at a wider level,” said another participant, “and I think it’s pretty light. Talking about open data is too general. We need to be more service oriented and problem driven. The conversation is very different when you are using data to solve a particular problem and you can focus on something tangible like service delivery or efficiency. Open data is expensive and not sustainable in the current setup. We need to figure this out.”

Another person shared results from an informal study on the use of open data portals around the world. He found around 2,500 open data portals, and only 3.8% of them use https (the secure version of http). Most have very few visitors, possibly due to poor Internet access in the countries whose open data they are serving up, he said. Several exist in countries with a poor Freedom House ranking and/or in countries at the bottom end of the World Bank’s Digital Dividends report. “In other words, the portals have been built for people who can’t even use them. How responsible is this?” he asked, “And what is the purpose of putting all that data out there if people don’t have the means to access it and we continue to launch more and more portals? Where’s all this going?”
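(Author’s note: the https figure above came from an informal scan, and the actual method wasn’t shared. As a rough sketch of how such a check might be replicated, assuming one had a list of portal URLs — the URLs below are placeholders — standard-library Python along these lines would do it.)

```python
# Rough sketch: check which open data portals respond over https.
# The URL list is a placeholder; the original informal study's method
# and portal list are not known.
from urllib.parse import urlsplit
import urllib.request

portal_urls = [
    "http://data.example.gov",       # placeholder entries only
    "https://opendata.example.org",
]

def serves_https(url: str, timeout: int = 10) -> bool:
    """Return True if the portal's host responds to a request over https."""
    host = urlsplit(url).netloc
    try:
        urllib.request.urlopen(f"https://{host}", timeout=timeout)
        return True
    except Exception:
        return False

https_count = sum(serves_https(u) for u in portal_urls)
print(f"{https_count}/{len(portal_urls)} portals respond over https "
      f"({100 * https_count / len(portal_urls):.1f}%)")
```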

Are we conflating legal terms?

Legal frameworks around data ownership were debated. Some said that the data belonged to the person or agency that collected it or paid for the cost of collecting it, in terms of copyright and IP. Others said that the data belonged to the individual who provided it. (Author’s note: Participants may have been referring to different categories of data, e.g., financial data from government vs. human subjects data.) The question was raised of whether informed consent for open data in the humanitarian space is basically a ‘contract of adhesion’ (a term for a legally binding agreement between two parties wherein one side has all the bargaining power and uses it to its advantage). Asking a person to hand over data in an emergency situation in order to enroll in a humanitarian aid program is akin to holding a gun to a person’s head in order to get them to sign a contract, said one person.

There’s a world of difference between ‘published data’ and ‘openly licensed data,’ commented our third discussant. “An open license is a complete lack of control, and you can’t be responsible with something you can’t control. There are ways to be responsible about the way you open something, but once it’s open, your responsibility has left the port.” ‘Use-based licensing’ is something else, and most IP is governed by how it’s used. For example, educational institutions get free access to data because they are educational institutions; others pay, and this subsidizes the educational use of the data, he explained.

One person suggested that we could move from the idea of ‘open data’ to sub-categories related to how accessible the data would be and to whom and for what purposes. “We could think about categories like: completely open, licensed, for a fee, free, closed except for specific uses, etc.; and we could also specify for whom, whose data and for what purposes. If we use the term ‘accessible’ rather than ‘open’ perhaps we can attach some restrictions to it,” she said.

Is data an asset or a liability?

Our current framing is wrong, said one discussant. We should think of data as a toxic asset, since as soon as it’s in our books and systems it creates costs and risks. Threat modeling is a good approach, he noted. Data can cause a lot of harm to an organization – it’s a liability, and if it’s not used or stored according to local laws, an agency could be sued. “We’re far under the bar. We are not compliant with ‘safe harbor’ or ECOWAS regulations. There are libel questions and property laws that our sector is ignorant of. Our good intentions mislead us in terms of how we are doing things.” There is plenty of room to build good practice here, he noted, for example through Civic Trusts. Another participant noted that insurance underwriters are already moving into this field, meaning that they see growing liability in this space.

How can we better engage communities and the grassroots?

Some participants shared examples of how they and their organizations have worked closely at the grassroots level to engage people and communities in protecting their own privacy and using open data for their own purposes. Threat modeling is an approach that helps improve data privacy and security, said one. “When we do threat modeling, we treat the data that we plan to collect as a potential asset. At each step of collection, storage, sharing process – we ask, ‘how will we protect those assets? What happens if we don’t share that data? If we don’t collect it? If we don’t delete it?’”

In one case, she worked with very vulnerable women working on human rights issues and together the group put together an action plan to protect its data from adversaries. The threats that they had predicted actually happened and the plan was put into action. Threat modeling also helps to “weed the garden once you plant it,” she said, meaning that it helps organizations and individuals keep an eye on their data, think about when to delete data, pay attention to what happens after data’s opened and dedicate some time for maintenance rather than putting all their attention on releasing and opening data.

More funding needs to be made available for data literacy for those whose data has been collected and/or opened. We also need to help people think about what data is of use to them. One person recalled hearing people involved in the creation of the Kenya Open Government Data portal say that the entire process was a waste of time because of low levels of use of any of the data. There are examples, however, of people using open data and verifying it at the community level. In one instance, high school students found the data on all the so-called grocery stores in their community and went one by one to check them, identifying that some were actually liquor stores selling potato chips, not real grocery stores. Having this information and engaging with it can be powerful for local communities’ advocacy work.

Are we the failure here? What are we going to do about it?

One discussant felt that ‘data’ and ‘information’ are often and easily conflated. “Data alone is not power. Information is data that is contextualized into something that is useful.” This brings into question the value of having so many data portals, and so much risk, when so little is being done to turn data into information that is useful to the people our sector says it wants to support and empower.

He gave the example of the Weather Channel, a business built around open data sets that are packaged and broadcast, which just got purchased for $2 billion. Channels like radio that would have provided information to the poor were not purchased, only the web assets, meaning that those who benefit are not the disenfranchised. “Our organizations are actually just like the Weather Channel – we are intermediaries who are interested in taking and using open data for public good.”

As intermediaries, we can add value in the dissemination of this open data, he said. If we have the skills, the intention and the knowledge to use it responsibly, we have a huge opportunity here. “However our enlightened intent has not yet turned this data into information and knowledge that communities can use to improve their lives, so are we the failure here? And if so, what are we doing about it? We could immediately begin engaging communities and seeing what is useful to them.” (See this article for more discussion on how ‘open’ may disenfranchise the poor.)

Where to from here?

Some points raised that merit further discussion and attention include:

  • There is little demand for or use of open data (such as government data and finances), and preparing and maintaining data sets is costly – ‘open by demand’ may be a more appropriate approach than ‘open by default.’
  • There is a good deal of disagreement about whether data can be opened responsibly. Some of this disagreement may stem from a lack of clarity about what kind of data we are talking about when we talk about open data.
  • Personal data and human subjects data that was never foreseen to be part of “open data” is potentially being opened, bringing with it risks for those who share it as well as for those who store it.
  • Informed consent for personal/human subject data is a tricky concept and it’s not clear whether it is even possible in the current scenario of personal data being ‘opened’ and the lack of control over how it may be used now or in the future, and the increasing ease of data re-identification.
  • We may want to look at data as a toxic asset rather than a beneficial one, because of the liabilities it brings.
  • Rather than a blanket “open” categorization, sub-categorizations that restrict data sets in different ways might be a possibility.
  • The sector needs to improve its understanding of the legal frameworks around data and data collection, storage and use or it may start to see lawsuits in the near future.
  • Work on data literacy and community involvement in defining what data is of interest and is collected, as well as threat modeling together with community groups is a way to reduce risk and improve data quality, demand and use; but it’s a high-touch activity that may not be possible for every kind of organization.
  • As data intermediaries, we need to do a much better job as a sector to see what we are doing with open data and how we are using it to provide services and contextualized information to the poor and disenfranchised. This is a huge opportunity and we have not done nearly enough here.

The Technology Salon is conducted under Chatham House Rule so attribution has not been made in this post. If you’d like to attend future Salons, sign up here.

 

Read Full Post »

This post was originally published on the Open Knowledge Foundation blog

A core theme that the Open Development track covered at September’s Open Knowledge Conference was Ethics and Risk in Open Development. There were more questions than answers in the discussions, summarized below, and the Open Development working group plans to further examine these issues over the coming year.

Informed consent and opting in or out

Ethics around ‘opt in’ and ‘opt out’ when working with people in communities with fewer resources, lower connectivity, and/or less of an understanding about privacy and data are tricky. Yet project implementers have a responsibility to work to the best of their ability to ensure that participants understand what will happen with their data in general, and what might happen if it is shared openly.

There are some concerns around how these decisions are currently being made and by whom. Can an NGO make the decision to share or open data from/about program participants? Is it OK for an NGO to share ‘beneficiary’ data with the private sector in return for funding to help make a program ‘sustainable’? What liabilities might donors or program implementers face in the future as these issues develop?

Issues related to private vs. public good need further discussion, and there is no one right answer because concepts and definitions of ‘private’ and ‘public’ data change according to context and geography.

Informed participation, informed risk-taking

The ‘do no harm’ principle is applicable in emergency and conflict situations, but is it realistic to apply it to activism? There is concern that organizations implementing programs that rely on newer ICTs and open data are not ensuring that activists have enough information to make an informed choice about their involvement. At the same time, assuming that activists don’t know enough to decide for themselves can come across as paternalistic.

As one participant at OK Con commented, “human rights and accountability work are about changing power relations. Those threatened by power shifts are likely to respond with violence and intimidation. If you are trying to avoid all harm, you will probably not have any impact.” There is also the concept of transformative change: “things get worse before they get better. How do you include that in your prediction of what risks may be involved? There also may be a perception gap in terms of what different people consider harm to be. Whose opinion counts and are we listening? Are the right people involved in the conversations about this?”

A key point is that whoever assumes the risk needs to be involved in assessing that potential risk and deciding what the actions should be — but people also need to be fully informed. With new tools coming into play all the time, can people be truly ‘informed’, and are outsiders who come in with new technologies doing a good enough job of facilitating discussions about possible implications and risk with those who will face the consequences? Are community members and activists themselves included in risk analysis, assumption testing, threat modeling and risk mitigation work? Is there a way to predict the likelihood of harm? For example, can we determine whether releasing ‘x’ data will likely lead to ‘y’ harm happening? How can participants, practitioners and program designers get better at identifying and mitigating risks?

When things get scary…

Even when risk analysis is conducted, it is impossible to predict or foresee every possible way that a program can go wrong during implementation. Then the question becomes what to do when you are in the middle of something that is putting people at risk or leading to extremely negative unintended consequences. Who can you call for help? What do you do when there is no mitigation possible and you need to pull the plug on an effort? Who decides that you’ve reached that point? This is not an issue that exclusively affects programs that use open data, but open data may create new risks with which practitioners, participants and activists have less experience, thus the need to examine it more closely.

Participants felt that there is not enough honest discussion on this aspect. There is a pop culture of ‘admitting failure’ but admitting harm is different because there is a higher sense of liability and distress. “When I’m really scared shitless about what is happening in a project, what do I do?” asked one participant at the OK Con discussion sessions. “When I realize that opening data up has generated a huge potential risk to people who are already vulnerable, where do I go for help?” We tend to share our “cute” failures, not our really dismal ones.

Academia has done some work around research ethics, informed consent, human subjects research and the use of Institutional Review Boards (IRBs). What aspects of this can or should be applied to mobile data gathering, crowdsourcing, open data work and the like? What about when citizens are their own source of information and they voluntarily share data without a clear understanding of what happens to the data, or what the possible implications are?

Do we need to think about updating and modernizing the concept of IRBs? A major issue is that many people who are conducting these kinds of data collection and sharing activities using new ICTs are unaware of research ethics and IRBs and don’t consider what they are doing to be ‘research’. How can we broaden this discussion and engage those who may not be aware of the need to integrate informed consent, risk analysis and privacy awareness into their approaches?

The elephant in the room

Despite our good intentions to do better planning and risk management, one big problem is donors, according to some of the OK Con participants. Do donors require enough risk assessment and mitigation planning in their program proposal designs? Do they allow organizations enough time to develop a well-thought-out and participatory Theory of Change along with a rigorous risk assessment together with program participants? Are funding recipients required to report back on risks and how they played out? As one person put it, “talk about failure is currently more like a ‘cult of failure’ and there is no real learning from it. Systematically we have to report up the chain on money and results and all the good things happening, and no one up at the top really wants to know about the bad things. The most interesting learning doesn’t get back to the donors or permeate across practitioners. We never talk about all the work-arounds and backdoor negotiations that make development work happen. This is a serious systemic issue.”

Greater transparency can actually be a deterrent to talking about some of these complexities, because “the last thing donors want is more complexity as it raises difficult questions.”

Reporting upwards to government representatives in Parliament or Congress leads to a continued aversion to any failures or ‘bad news’. Though funding recipients are urged to be innovative, they still need to hit numeric targets so that the international aid budget can be defended in government spaces. Thus, the message is mixed: “Make sure you are learning and recognizing failure, but please don’t put anything too serious in the final report.” There is awareness that rigid program planning doesn’t work and that we need to be adaptive, yet we are asked to “put it all into a log frame and make sure the government aid person can defend it to their superiors.”

Where to from here?

It was suggested that monitoring and evaluation (M&E) could be used as a tool for examining some of these issues, but M&E needs to be seen as a learning component, not only an accountability one. M&E needs to feed into the choices people are making along the way, and linking it in well during program design may be one way to build in a more adaptive and iterative approach. M&E should force practitioners to ask themselves the right questions as they design programs and as they assess them throughout implementation. Theory of Change might help, and an ethics-based approach could be introduced as well to raise these questions about risk and privacy and ensure that they are addressed from the start of an initiative.

Practitioners have also expressed the need for additional resources to help them predict and manage possible risk: case studies, a safe space for sharing concerns during implementation, people who can help when things go pear-shaped, a menu of methodologies, a set of principles or questions to ask during program design, or even an ICT4D Implementation Hotline or a forum for questions and discussion.

These ethical issues around privacy and risk are not exclusive to Open Development. Similar issues were raised last week at the Open Government Partnership Summit sessions on whistleblowing, privacy, and safeguarding civic space, especially in light of the Snowden case. They were also raised at last year’s Technology Salon on Participatory Mapping.

A number of groups are looking more deeply into this area, including the Capture the Ocean Project, The Engine Room, IDRC’s research network, the Open Technology Institute, Privacy International, GSMA, those working on “Big Data,” those in the Internet of Things space, and others.

I’m looking forward to further discussion with the Open Development working group on all of this in the coming months, and will also be putting a little time into mapping out existing initiatives and identifying gaps when it comes to these cross-cutting ethics, power, privacy and risk issues in open development and other ICT-enabled data-heavy initiatives.

Please do share information, projects, research, opinion pieces and more if you have them!

Read Full Post »

Bruce Lee explains why many open data and technology-led initiatives go wrong.

(See minute 1.14).

Read Full Post »


This past Monday I had the opportunity to join Engineers without Borders (EWB) in Calgary, Canada, at their Annual Policy Forum on Global Development to discuss “How can open government contribute to community and economic development?”

Morning panels covered some examples of open government initiatives from Finland, Ghana and Canada. In the afternoon we heard about some of the challenges with open data, open government and the International Aid Transparency Initiative. Table discussions followed both of the panels. The group was a mix of Canadian and African government representatives, people from organizations and groups working in different countries on open government and open data initiatives, and young people who are connected with EWB. The session was under Chatham House Rule in order to encourage frank conversation.

Drawing from such documents as the Open Government Partnership’s Open Government Declaration, Harlan Yu and David G. Robinson’s “The New Ambiguity of ‘Open Government’,” Beth Noveck’s “What’s in a Name? Open Gov and Good Gov” and Nathaniel Heller’s “A Working Definition of ‘Open Government’,” the following definition of Open Government was used to frame the discussions.

EWB Definition of Open Government

Below (in a very-much-longer-than-you-are-supposed-to-write-in-a-blogpost summary) are the highlights and points I found interesting and useful as related to Open Development, Open Data, Open Government and the International Aid Transparency Initiative (IATI).

1.  Participation thresholds need to be as low as possible for people to participate and engage in open government or open data initiatives. You need to understand well what engagement tools are most useful or comfortable for different groups. In some places, to engage the public you can use tools such as etherpad, wiki platforms, google docs, open tools and online collaboration spaces. In other places and with other populations, regardless of what country, you may be more successful with face-to-face methods or with traditional media like television and radio, but these need to be enhanced with different types of feedback methods like phone calls or surveys or going house to house so that your information is not only traveling one way. Community organizing skills are key to this work, regardless of whether the tools are digital or not.

2.  Literacy remains a huge challenge hindering access to information and citizen engagement in holding government accountable in many countries. This is why face-to-face engagement is important, as well as radio and more popular or broad-based communication channels. One participant asked “how can you make open government a rural, rather than an urban only, phenomenon?” This question resonated for participants from all countries.

3.  Language is still a critical issue. Language poses a big challenge for these kinds of initiatives, from the grassroots level to the global level, within and among countries, for citizens, governments, and anyone trying to share or collect data or information. It was noted that all the countries who have published data to IATI are publishing in English. All the IATI Standards are in English, as is the entire support system for IATI. As one participant noted, this raises the question of who the information in IATI is actually designed for and serving, and who the expected users are. Open data initiatives should consider the implications of the language they publish in, both politically and practically.

4.  Open data can serve to empower the already empowered. As one speaker noted, “the idea that everyone has the potential to make use of open data is simply not true.” Access to digital infrastructure and educational resources may be missing, meaning that many do not have the ability to access, interpret or use data for their own purposes. Governments can also manipulate data and selectively release data that serves their own interests. Some questioned government motives, citing the example of a government that released “data” saying its unemployment rate was 10% when “everyone knew this to be false, and people grumbled but we did not feel empowered to challenge that statement.” Concern was expressed over the lack of an independent body or commission in some countries to oversee open data and open government processes. Some did not trust the government bodies currently in charge of collecting and opening information, saying that due to politics, they would never release any information that made their party or their government look bad.

5.  Privacy rights can be exploited if data is opened without data protection laws and effort to build capacity around how to make certain data anonymous. Citizens may also not be aware of what rights are being violated, so this should also be addressed.

6.  Too much open data discussion takes place without a power analysis, as one participant commented, making some of the ideas around open data and open government somewhat naïve. “Those who have the greatest stake will be the most determined to push their point of view and to make sure it prevails.”

7.  Open data needs to become open data 2.0. According to one participant, open data is still mostly one-way information delivery. In some cases there isn’t even any delivery – information is opened on a portal but no one knows it’s there or what it refers to or why it would be useful. When will open data, open government and open aid become more of a dialogue? When will data be released that answers questions that citizens have, rather than the government deciding what it will release? The importance of working with community groups to strengthen their capacity to ask questions and build critical consciousness to question the data was emphasized. A counterpoint was that government is not necessarily there to start collecting information or creating data sets according to public demand. Governments collect certain data to help them function.

8.  Intermediaries working on open government should be careful of real or perceived bias. Non-profits have their own agendas, and ‘open data’ and ‘open information’ is not immune to being interpreted in non-objective ways. Those working on civic engagement initiatives need to be careful that they are not biased in their support for citizen initiatives. One presenter who works on a platform that encourages citizens to be involved in petitioning new laws for contemplation in Parliament said “Our software is open source so that anyone can set up a similar process to compete with us if they feel we are biased towards one or another type of agenda.”

9.  Technology-based engagement tools change who is participating. Whether in Finland, Canada, Ghana or Malawi, it’s critical to think about reaching those who are not active already online, those who are not the typical early adopters. To reach a broader public, one speaker noted “We are going to remote places, doing events in smaller towns and cities to see how people want to influence and take part in this. Making sure the website is accessible and understandable.”

10. Technological platforms are modifying how political parties and democratic processes operate. This may or may not be a good thing. Normally priorities arise and are discussed within political parties. Will people now bypass the party process and use ‘direct democracy’ channels if they are passionate about an issue but do not want to enter into negotiation around it? Will this weaken political processes or longer-standing democratic processes? One speaker considered this change to be positive. People are not happy with being able to vote only every 4 years, and they want opportunities to participate between election cycles and a direct voice in how priorities are decided. Others questioned whether bypassing official processes can lead to less participation and more apathy overall on national issues. Some questioned whether, within fairly long-standing democracies, open data will have any real impact, considering existing levels of apathy and the lack of political participation.

11. Strong information, statistical, monitoring and evaluation systems are critical for open data and open government processes and to ensure more effective management of development results. This is still a challenge for some countries that need to review their mechanisms and improve their tools and processes for data collection and dissemination. If there is no data, or no current data, there is not much point in opening it. In addition, there are capacity and technical competency challenges within institutions in some countries. One participant mentioned a lack of current government geological information about gold and oil deposits that weakens government capacity to negotiate with the private sector extraction industry and ensure partnerships and earnings will contribute to national development. In addition more evidence is needed on the impact, use, and outcomes of open data. At the moment it’s quite difficult to say with any real authority what the outcomes and impact of open data and open government have been.

12. IATI (International Aid Transparency Initiative) needs more partners. Government representatives noted that they are opening their data, but they can only open the data they possess. In order for data on aid to be useful, more data is needed, especially that of NGOs who are implementing programs. Not many NGOs have published their information to the IATI standard at this point. “The really interesting thing will be when we can start mashing up and mapping out the different kinds of information,” as one speaker noted, “for example, this is the goal of the Open Aid Partnership. It will involve combining information from the donor, development indicators from the World Bank, and country information, and this will open up amazing possibilities once this is all geo-coded.” There are reporting challenges related to IATI and open government data, however, because at times countries and NGOs do not see the benefits of reporting – it feels like just one more top-down administrative burden. There are also issues with donor governments reporting their committed intentions and amounts, recipient governments reporting back, and communications with citizens on both sides (donor and recipient countries). One example that was reported to be enjoying some success was the multi-donor budget support initiative in Ghana, where development partners and government work together to establish development indicators and commitments. If the government delivers on the indicators, the development partners will then provide them with the funding. Development partners can also earmark funding to particular areas if there is government agreement.

13. We need more accountability towards ‘beneficiaries’. Currently many of these initiatives are perceived as being focused on donors and donor publics. As one participant noted, “the interesting thing is less about government and more about getting regular people involved in these processes. When you engage the public you’ll engage government leaders in thinking they will need to change to respond to what citizens are asking for.” Another noted that the essential issue is the link between transparency/accountability and citizens and their own governments. In addition, as one participant asked, “How can you strengthen capacity among citizens to ask the right questions about the data that’s being opened?” For example, citizens may ask about the number of schools being built, but not about the quality of education being provided. Public education was a strong focus of discussions around citizen engagement during the policy forum.

14. Should citizens be consulted on everything? This, however, was one big question. The public at large may not understand the ramifications of its own deep misunderstandings of particular issues, and may provide input from a viewpoint that lacks scientific evidence or fact. “It’s one thing to have an opinion about whether your child should be able to drink energy drinks before age 16, it’s another to input about technical programs like the best policy for green energy,” commented one group.

15. Can citizens really have greater participation if government is still in control of the data? This was another big question. An example was given of an open consultative process that became unwieldy for a local government, which then shut down the consultation process and changed the nature of the documents to ‘administrative’ and therefore no longer open. Others asked why governments pat themselves on the back over being part of the Open Government Partnership yet do not have Freedom of Information Acts (FOIA), or prosecute those who open data in alternative ways, such as Bradley Manning and Aaron Swartz.

16. If citizens don’t get a response from government (or if they don’t like the response, or feel it’s biased or manipulated), apathy and cynicism will increase. It’s important to make sure that ‘open government’ is not just a box that gets ticked off, but rather a long-term change in mentality of those in power and deeper expectations and efforts by citizens for openness and participation in conversations of national importance.

The conclusion was that Open Government is somewhat of a paradox, rooted in aims that are not necessarily new. Open Government strives to enable leaders to create change and transform their own lives and those of people in their communities. It is a complex process that involves many actors and multiple conflicting goals and interests. It’s also something new that we are all learning about and experimenting with, but we are very impatient to know what works and what the impact is. In the room, the feeling was one of ‘radical pragmatism,’ as one participant put it. Open Government is a big idea that represents a big change. It’s something that can transform communities at the global level and there is a great deal of hope and excitement around it. At the same time, we need to acknowledge the challenges associated with it in order to address them and move things forward.

I’ll do a follow-up post with the points I made during the panel, as this post is clearly way too long already. Kudos if you are still reading, and a huge thanks to the organizers and participants in the EWB policy forum.

Read Full Post »