
Archive for the ‘open data’ Category

Development, humanitarian and human rights organizations increasingly collect and use digital data at the various stages of their programming. This type of data has the potential to yield great benefit, but it can also increase individual and community exposure to harm and privacy risks. How can we as a sector better balance data collection and open data sharing with privacy and security, especially when it involves the most vulnerable?

A number of donors, humanitarian and development organizations (including Oxfam, CRS, UN bodies and others) have developed or are in the process of developing guidelines to help them to be more responsible about collection, use, sharing and retention of data from those who participate in their programs.

I’m part of a team (including mStar, Sonjara, Georgetown University, the USAID Global Development Lab, and an advisory committee that includes several shining stars from the ‘responsible data’ movement) that is conducting research on existing practices, policies, systems, and legal frameworks through which international development data is collected, used, shared, and released. Based on this research, we’ll develop ‘responsible data’ practice guidelines for USAID that aim to help:

  • Mitigate privacy and security risks for beneficiaries and others
  • Improve performance and development outcomes through use of data
  • Promote transparency, accountability and public good through open data

The plan is to develop draft guidelines and then to test their application on real programs.

We are looking for digital development projects to assess how our draft guidelines would work in real-world settings. Once the projects are selected, members of the research team will visit them to better understand “on-the-ground” contexts and project needs. We’ll apply the draft practice guidelines to each case with the goal of identifying which parts of the guidelines are useful/applicable and where the gaps are. We’ll also capture feedback from the project management team and partners on implications for project costs and timelines, and we’ll document existing digital data-related good practices and lessons. These findings will help further refine USAID’s Responsible Data Practice guidelines.

What types of projects are we looking for?

  • Ongoing or recently concluded projects that are using digital technologies to collect, store, analyze, manage, use and share individuals’ data.
  • Cases where data collected is sensitive or may put project participants at risk.
  • The project should have informal or formal processes for privacy/security risk assessment and mitigation, especially with respect to field implementation of the digital technologies listed above. These processes may be implicit or explicit (i.e., documented in writing), and they may include formal review processes conducted by ethics review boards or institutional review boards (IRBs).
  • All sectors of international development and all geographies are welcome to submit case studies. We are looking for diversity in context and programming.
  • We prefer case studies from USAID-funded projects but are open to receiving case studies from other donor-supported projects.

If you have a project or an activity that falls into the above criteria, please let us know here. We welcome multiple submissions from one organization; just reuse the form for each proposed case study.

Please submit your projects by February 15, 2017.

And please share this call with others who may be interested in contributing case studies.

Click here to submit your case study.

Also feel free to get in touch with me if you have questions about the project or the call!

 

Read Full Post »

Over the past 4 years I’ve had the opportunity to look more closely at the role of ICTs in Monitoring and Evaluation practice (and the privilege of working with Michael Bamberger and Nancy MacPherson in this area). When we started out, we wanted to better understand how evaluators were using ICTs in general, how organizations were using ICTs internally for monitoring, and what was happening overall in the space. A few years into that work we published the Emerging Opportunities paper that aimed to be somewhat of a landscape document or base report upon which to build additional explorations.

As a result of this work, in late April I had the pleasure of talking with the OECD-DAC Evaluation Network about the use of ICTs in Evaluation. I drew from a new paper on The Role of New ICTs in Equity-Focused Evaluation: Opportunities and Challenges that Michael, Veronica Olazabal and I developed for the Evaluation Journal. The core points of the talk are below.

*****

In the past two decades there have been 3 main explosions that impact on M&E: a device explosion (mobiles, tablets, laptops, sensors, dashboards, satellite maps, Internet of Things, etc.); a social media explosion (digital photos, online ratings, blogs, Twitter, Facebook, discussion forums, WhatsApp groups, co-creation and collaboration platforms, and more); and a data explosion (big data, real-time data, data science and analytics moving into the field of development, capacity to process huge data sets, etc.). This new ecosystem is something that M&E practitioners should be tapping into and understanding.

In addition to these ‘explosions,’ there’s been a growing emphasis on documentation of the use of ICTs in Evaluation, alongside a greater thirst for understanding how, when, where and why to use ICTs for M&E. We’ve held and attended large gatherings on ICTs and Monitoring, Evaluation, Research and Learning (MERL Tech). And in the past year or two, it seems the development and humanitarian fields can’t stop talking about the potential of “data” – small data, big data, inclusive data, real-time data for the SDGs, etc. – and the possible roles for ICT in collecting, analyzing, visualizing, and sharing that data.

The field has advanced in many ways. But as the tools and approaches develop and shift, so do our understandings of the challenges. Concern around more data and “open data” and the inherent privacy risks has caught up with the enthusiasm about the possibilities of new technologies in this space. Likewise, there is more in-depth discussion about methodological challenges, bias and unintended consequences when new ICT tools are used in Evaluation.

Why should evaluators care about ICT?

There are 2 core reasons that evaluators should care about ICTs. Reason number one is practical. ICTs help address real world challenges in M&E: insufficient time, insufficient resources and poor quality data. And let’s be honest – ICTs are not going away, and evaluators need to accept that reality at a practical level as well.

Reason number two is both professional and personal. If evaluators want to stay abreast of their field, they need to be aware of ICTs. If they want to improve evaluation practice and influence better development, they need to know if, where, how and why ICTs may (or may not) be of use. Evaluation commissioners need to have the skills and capacities to know which new ICT-enabled approaches are appropriate for the type of evaluation they are soliciting and whether the methods being proposed are going to lead to quality evaluations and useful learnings. One trick to using ICTs in M&E is understanding who has access to what tools, devices and platforms already, and what kind of information or data is needed to answer what kinds of questions or to communicate which kinds of information. There is quite a science to this and one size does not fit all. Evaluators, because of their critical thinking skills and social science backgrounds, are very well placed to take a more critical view of the role of ICTs in Evaluation and in the worlds of aid and development overall and help temper expectations with reality.

Though ICTs are being used across all phases of the program cycle (research/diagnosis and consultation, design and planning, implementation and monitoring, evaluation, reporting/sharing/learning), there is plenty of hype in this space.


There is certainly a place for ICTs in M&E, if introduced with caution and clear analysis about where, when and why they are appropriate and useful, and evaluators are well-placed to take a lead in identifying and trialing what ICTs can offer to evaluation. If they don’t, others are going to do it for them!

Promising areas

There are four key areas (I’ll save the nuance for another time…) where I see a lot of promise for ICTs in Evaluation:

1. Data collection. Here I’d divide it into 3 kinds of data collection, plus a note on ‘real-time’ data, which the latter two normally also provide:

  • Structured data gathering – where enumerators or evaluators go out with mobile devices to collect specific types of data (whether quantitative or qualitative).
  • Decentralized data gathering – where the focus is on self-reporting or ‘feedback’ from program participants or research subjects.
  • Data ‘harvesting’ – where data is gathered from existing online sources like social media sites, WhatsApp groups, etc.
  • Real-time data – which aims to provide data in a much shorter time frame, normally as monitoring, but these data sets may be useful for evaluators as well.

2. New and mixed methods. These are areas that Michael Bamberger has been looking at quite closely. New ICT tools and data sources can contribute to more traditional methods. But triangulation still matters.

  • Improving construct validity – enabling a greater number of data sources at various levels that can contribute to better understanding of multi-dimensional indicators (for example, looking at changes in the volume of withdrawals from ATMs, records of electronic purchases of agricultural inputs, satellite images showing lorries traveling to and from markets, and the frequency of Tweets that contain the words hunger or sickness).
  • Evaluating complex development programs – tracking complex and non-linear causal paths and implementation processes by combining multiple data sources and types (for example, participant feedback plus structured qualitative and quantitative data, big data sets/records, census data, social media trends and input from remote sensors).
  • Mixed methods approaches and triangulation – using traditional and new data sources (for example, using real-time data visualization to provide clues on where additional focus group discussions might need to be done to better understand the situation or improve data interpretation).
  • Capturing wide-scale behavior change – using social media data harvesting and sentiment analysis to better understand widespread changes in perceptions, attitudes and stated behaviors, and to analyze how these shift over time.
  • Combining big data and real-time data – these emerging approaches may become valuable for identifying potential problems and emergencies that need further exploration using traditional M&E approaches.

3. Data Analysis and Visualization. This is an area that is less advanced than the data collection area – often it seems we’re collecting more and more data but still not really using it! Some interesting things here include:

  • Big data and data science approaches – there’s a growing body of work exploring how to use predictive analytics to help define what programs might work best in which contexts and with which kinds of people. (How this connects to evaluation is still being worked out, and there are lots of ethical aspects to think about here too — most of us don’t like the idea of predictive policing, and in some ways you could end up with outcomes quite different from what was intended.) With big data, you’ll often have a hypothesis and you’ll go looking for patterns in huge data sets, whereas with evaluation you normally have particular questions and you design a methodology to answer them — it’s interesting to think about how these two approaches are going to combine.
  • Data Dashboards – these are becoming very popular as people try to work out how to do a better job of using the data that is coming into their organizations for decision making. There are some efforts at pulling data from community level all the way up to UN representatives, for example, the global level consultations that were done for the SDGs or using “near real-time data” to share with board members. Other efforts are more focused on providing frontline managers with tools to better tweak their programs during implementation.
  • Meta-evaluation – some organizations are working on ways to better draw conclusions from what we are learning from evaluation around the world and to better visualize these conclusions to inform investments and decision-making.

4. Equity-focused Evaluation. As digital devices and tools become more widespread, there is hope that they can enable greater inclusion and broader voice and participation in the development process. There are still huge gaps, however — in some parts of the world, 23% fewer women than men have access to mobile phones — and when you talk about Internet access, the gap is much, much bigger. But there are cases where greater participation in evaluation processes is being sought through mobile. When this is balanced with other methods to ensure that we’re not excluding the very poorest or those without access to a mobile phone, it can help to broaden out the pool of voices we are hearing from. Some examples are:

  • Equity-focused evaluation / participatory evaluation methods – some evaluators are seeking to incorporate more real-time (or near real-time) feedback loops where participants provide direct feedback via SMS or voice recordings.
  • Using mobile to directly access participants through mobile-based surveys.
  • Enhancing data visualization for returning results back to the community and supporting community participation in data interpretation and decision-making.

Challenges

Alongside all the potential, of course there are also challenges. I’d divide these into 3 main areas:

1. Operational/institutional

Some of the biggest challenges to improving the use of ICTs in evaluation are institutional or related to institutional change processes. In focus groups I’ve done with different evaluators in different regions, this was emphasized as a huge issue. Specifically:

  • Potentially heavy up-front investment costs, training efforts, and/or maintenance costs if adopting/designing a new system at wide scale.
  • Tech- or tool-driven M&E processes – often these are also donor driven. This happens because tech is perceived as cheaper, easier, scalable and objective, and because people and management are under a lot of pressure to “be innovative.” Sometimes this leads to an over-reliance on digital data and remote data collection, with time spent developing tools and looking at data sets on a laptop rather than spending time ‘on the ground’ to observe and engage with local organizations and populations.
  • Little attention to institutional change processes, organizational readiness, and the capacity needed to incorporate new ICT tools, platforms, systems and processes.
  • Bureaucracy may mean that decisions happen far from the ground and that there is little capacity to make quick decisions, even when real-time data is available. Data and analysis may be provided frequently to decision-makers sitting at headquarters, or to local staff who do not hold decision-making power and must wait on orders from on high to adapt or change their program approaches and methods.
  • Swinging too far towards digital due to a lack of awareness that digital most often needs to be combined with human approaches. Digital technology always works better when combined with human interventions (such as visits to prepare people to use the technology, or making sure that gatekeepers — e.g., a husband or mother-in-law, in the case of women — are on board). A main message from the World Bank’s 2016 World Development Report “Digital Dividends” is that digital technology must always be combined with what the Bank calls “analog” (a.k.a. “human”) approaches.

2. Methodological

Some of the areas that Michael and I have been looking at relate to how the introduction of ICTs could address issues of bias, rigor, and validity — yet how, at the same time, ICT-heavy methods may actually just change the nature of those issues or create new issues, as noted below:

  • Selection and sample bias – you may be reaching more people, but you’re still going to be leaving some people out. Who is left out of mobile phone or ICT access/use? Typical respondents are male, educated, urban. How representative are these respondents of all ICT users and of the total target population?
  • Data quality and rigor – you may have an over-reliance on self-reporting via mobile surveys; lack of quality control ‘on the ground’ because it’s all being done remotely; enumerators may game the system if there is no personal supervision; there may be errors and bias in algorithms and logic in big data sets or analysis because of non-representative data or hidden assumptions.
  • Validity challenges – if there is a push to use a specific ICT-enabled evaluation method or tool when it is not the right one for the job, the evaluation design may simply not be valid.
  • Fallacy of large numbers (in cases of national level self-reporting/surveying) — you may think that because a lot of people said something that it’s more valid, but you might just be reinforcing the viewpoints of a particular group. This has been shown clearly in research by the World Bank on public participation processes that use ICTs.
  • ICTs often favor extractive processes that do not involve local people and local organizations or provide benefit to participants/local agencies — data is gathered and sent ‘up the chain’ rather than shared or analyzed in a participatory way with local people or organizations. Not only is this disempowering, it may also harm data quality: if people see no benefit from providing data, they may see little point in providing it carefully, or at all.
  • There’s often a failure to identify unintended consequences or biases arising from use of ICTs in evaluation — What happens when you introduce tablets for data collection? What happens when you collect GPS information on your beneficiaries? What risks might you be introducing or how might people react to you when you are carrying around some kind of device?

3. Ethical and Legal

This is an area that I’m very interested in — especially as some donors have started asking for the raw data sets from any research, studies or evaluations that they are funding, and when these kinds of data sets are ‘opened’ there are all sorts of ramifications. There is quite a lot of heated discussion happening here. I was happy to see that DFID has just conducted a review of ethics in evaluation. Some of the core issues include:

  • Changing nature of privacy risks – issues here include privacy and protection of data; changing informed consent needs for digital data/open data; new risks of data leaks; and lack of institutional policies with regard to digital data.
  • Data rights and ownership – here there are issues with proprietary data sets; data ownership when there are public-private partnerships; the idea of ‘data philanthropy’ when it’s not clear whose data is being donated; personal data used ‘for the public good’; open data/open evaluation/transparency; poor care taken when vulnerable people provide personally identifiable information; household data sets ending up in the hands of those who might abuse them; and the increasing impossibility of data anonymization, given that crossing data sets often makes re-identification easier than imagined.
  • Moving decisions and interpretation of data away from ‘the ground’ and upwards to the head office/the donor.
  • Little funding for trialing and testing the validity of new ICT-enabled approaches, or for documenting what is working or not working, where, why and how, in order to develop good practice for new ICTs in evaluation.

Recommendations: 12 tips for better use of ICTs in M&E

Despite the rapid changes in the field in the 2 years since we first wrote our initial paper on ICTs in M&E, most of our tips for doing it better still hold true.

  1. Start with a high-quality M&E plan (not with the tech).
    • But also learn about the new tech-related possibilities that are out there so that you’re not missing out on something useful!
  2. Ensure design validity.
  3. Determine whether and how new ICTs can add value to your M&E plan.
    • It can be useful to bring in a trusted tech expert in this early phase so that you can find out if what you’re thinking is possible and affordable – but don’t let them talk you into something that’s not right for the evaluation purpose and design.
  4. Select or assemble the right combination of ICT and M&E tools.
    • You may find one off the shelf, or you may need to adapt or build one. This is a really tough decision, which can take a very long time if you’re not careful!
  5. Adapt and test the process with different audiences and stakeholders.
  6. Be aware of different levels of access and inclusion.
  7. Understand motivation to participate, incentivize in careful ways.
    • This includes motivation for both program participants and for organizations where a new tech-enabled tool/process might be resisted.
  8. Review/ensure privacy and protection measures, risk analysis.
  9. Try to identify unintended consequences of using ICTs in the evaluation.
  10. Build in ways for the ICT-enabled evaluation process to strengthen local capacity.
  11. Measure what matters – not what a cool ICT tool allows you to measure.
  12. Use and share the evaluation learnings effectively, including through social media.

 

 

Read Full Post »

At our April 5th Salon in Washington, DC we had the opportunity to take a closer look at open data and privacy and discuss the intersection of the two in the framework of ‘responsible data’. Our lead discussants were Amy O’Donnell, Oxfam GB; Rob Baker, World Bank; Sean McDonald, FrontlineSMS. I had the pleasure of guest moderating.

What is Responsible Data?

We started out by defining ‘responsible data’ and discussing some of the challenges that arise when thinking about open data in a framework of responsible data.

The Engine Room defines ‘responsible data’ as

the duty to ensure people’s rights to consent, privacy, security and ownership around the information processes of collection, analysis, storage, presentation and reuse of data, while respecting the values of transparency and openness.

Responsible Data can be like walking a tightrope, noted our first discussant: you need to find the right balance between opening and sharing data, all the while being ethical and responsible. “Data is inherently related to power – it can create power, redistribute it, make the powerful more powerful or further marginalize the marginalized. Getting the right balance involves asking some key questions throughout the data lifecycle, from design of the data gathering all the way through to disposal of the data.”

How can organizations be more responsible?

If an organization wants to be responsible about data throughout the data life cycle, some questions to ask include:

  • In whose interest is it to collect the data? Is it extractive or empowering? Is there informed consent?
  • What and how much do you really need to know? Is the burden of collecting and the liability of storing the data worth it when balanced with the data’s ability to represent people and allow them to be counted and served? Do we know what we’ll actually be doing with the data?
  • How will the data be collected and treated? What are the new opportunities and risks of collecting and storing and using it?
  • Why are you collecting it in the first place? What will it be used for? Will it be shared or opened? Is there a data sharing MOU and has the right kind of consent been secured? Who are we opening the data for and who will be able to access and use it?
  • What is the sensitivity of the data and what needs to be stripped out in order to protect those who provided the data?

Oxfam has developed a data deposit framework to help assess the above questions and make decisions about when and whether data can be open or shared.

(The Engine Room’s Responsible Development Data handbook offers additional guidelines and things to consider)

(See: https://wiki.responsibledata.io/Data_in_the_project_lifecycle for more about the data lifecycle)

Is ‘responsible open data’ an oxymoron?

Responsible Data policies and practices don’t work against open data, our discussant noted. Responsible Data is about developing a framework so that data can be opened and used safely. It’s about respecting the time and privacy of those who have provided us with data and reducing the risk of that data being hacked. As more data is collected digitally and donors begin to require organizations to hand over data collected with their funding, it’s critical to have practical resources that help staff be more responsible about data.

Some disagreed that consent could be truly informed and that open data could ever be responsible, since once data is open, all control over it is lost. “If you can’t control the way the data is used, you can’t have informed people. It’s like saying ‘you gave us permission to open your data, so if something bad happens to you, oh well….’” Informed consent is also difficult nowadays because data sets are being used together and in ways that were not possible when informed consent was initially obtained.

Others noted that standard informed consent practices are unhelpful, as people don’t understand what might be done with their data, especially when they have low data literacy. Involving local communities and individuals in defining what data they would like to have and use could make the process more manageable and useful for those whose data we are collecting, using and storing, they suggested.

One person said that if consent to open data was not secured initially, the data cannot be opened, say, 10 years later. Another felt that it was one thing to open data for a purpose and something entirely different to say “we’re going to open your data so people can do fun things with it, to play around with it.”

But just what data are we talking about?

USAID was questioned for requiring grantees to share data sets and for leaning towards de-identification rather than raising the standard to data anonymity. One person noted that at one point the agency had proposed a 22-step process for releasing data and even that was insufficient for protecting program participants in a risky geography because “it’s very easy to figure out who in a small community recently received 8 camels.” For this reason, exclusions are an important part of open data processes, he said.
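(Author’s note: to illustrate the point about de-identification versus anonymity, here is a minimal, hypothetical sketch in Python — all of the names, attributes and figures below are invented. The idea is simply that when a ‘de-identified’ data set still contains quasi-identifiers such as location and household size, crossing it with another available data set can re-identify the people in it.)

```python
# Hypothetical illustration only: de-identification removes names from a
# distribution record, but quasi-identifiers (village, household size) remain.
deidentified_distributions = [
    {"village": "Village A", "household_size": 9, "benefit": "8 camels"},
    {"village": "Village A", "household_size": 4, "benefit": "2 goats"},
]

# A second, openly available list (e.g., a local registry) still carries names
# alongside those same quasi-identifiers.
public_registry = [
    {"name": "Household head X", "village": "Village A", "household_size": 9},
    {"name": "Household head Y", "village": "Village A", "household_size": 4},
]

# Crossing the two data sets on the shared attributes re-identifies recipients.
for record in deidentified_distributions:
    matches = [
        person for person in public_registry
        if person["village"] == record["village"]
        and person["household_size"] == record["household_size"]
    ]
    if len(matches) == 1:  # a unique match means the record is re-identified
        print(f"{matches[0]['name']} likely received {record['benefit']}")
```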

It’s not black or white, said another. Responsible open data is possible, but openness happens along a spectrum. You have financial data on the one end, which should be very open as the public has a right to know how its tax dollars are being spent. Human subjects research is on the other end, and it should not be totally open. (Author’s note: The Open Knowledge Foundation definition of open data says: “A key point is that when opening up data, the focus is on non-personal data, that is, data which does not contain information about specific individuals.” The distinction between personal data, such as that in household level surveys, and financial data on agency or government activities seems to be blurred or blurring in current debates around open data and privacy.) “Open data will blow up in your face if it’s not done responsibly,” he noted. “But some of the open data published via IATI (the International Aid Transparency Initiative) has led to change.”

A participant followed this comment up by sharing information from a research project conducted on stakeholders’ use of IATI data in 3 countries. When people knew that the open data sets existed they were very excited, she said. “These are countries where there is no Freedom of Information Act (FOIA), and where people cannot access data because no one will give it to them. They trusted the US Government’s data more than their own government data, and there was a huge demand for IATI data. People were very interested in who was getting what funding. They wanted information for planning, coordination, line ministries and other logistical purposes. So let’s not underestimate open data. If having open data sets means that governments, health agencies or humanitarian organizations can do a better job of serving people, that may make for a different kind of analysis or decision.”

‘Open by default’ or ‘open by demand’?

Though there are plenty of good intentions and rationales for open data, said one discussant, ‘open by default’ is a mistake. We may have quick wins with a reduction in duplication of data collection, but our experiences thus far do not merit ‘open by default’. We have not earned it. Instead, he felt that ‘open by demand’ is a better idea. “We can put out a public list of the data that’s available and see what demand for data comes in. If we are proactive on what is available and what can be made available, and we monitor requests, we can avoid putting out information that no one is interested in. This would lower the overhead on what we are releasing. It would also allow us to have a conversation about who needs this data and for what.”

One participant agreed, positing that often the only reason that we collect data is to provide proof and evidence that we’re doing our job, spending the money given to us, and tracking back. “We tend to think that the only way to provide this evidence is to collect data: do a survey, talk to people, look at website usage. But is anyone actually using this data, this evidence to make decisions?”

Is the open data honeymoon over?

“We need to do a better job of understanding the impact at a wider level,” said another participant, “and I think it’s pretty light. Talking about open data is too general. We need to be more service oriented and problem driven. The conversation is very different when you are using data to solve a particular problem and you can focus on something tangible like service delivery or efficiency. Open data is expensive and not sustainable in the current setup. We need to figure this out.”

Another person shared results from an informal study on the use of open data portals around the world. He found around 2,500 open data portals, and only 3.8% of them use https (the secure version of http). Most have very few visitors, possibly due to poor Internet access in the countries whose open data they are serving up, he said. Several exist in countries with a poor Freedom House ranking and/or in countries at the bottom end of the World Bank’s Digital Dividends report. “In other words, the portals have been built for people who can’t even use them. How responsible is this?” he asked, “And what is the purpose of putting all that data out there if people don’t have the means to access it and we continue to launch more and more portals? Where’s all this going?”
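(Author’s note: as a rough sketch of how an informal scan like this might be run — assuming you already have a list of portal URLs; the ones below are placeholders — a few lines of Python can check which portals end up being served over https. This is illustrative only, not the methodology of the study mentioned above.)

```python
# Illustrative sketch: check which open data portal URLs are served over https.
# The URLs below are placeholders, not real portals.
import requests

portal_urls = [
    "http://data.example-country-a.org",
    "http://opendata.example-country-b.org",
]

https_count = 0
for url in portal_urls:
    try:
        # Follow redirects so a portal that upgrades http -> https counts as https.
        response = requests.head(url, allow_redirects=True, timeout=10)
        if response.url.startswith("https://"):
            https_count += 1
    except requests.RequestException:
        print(f"Could not reach {url}")

share = 100 * https_count / len(portal_urls)
print(f"{https_count} of {len(portal_urls)} portals served over https ({share:.1f}%)")
```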

Are we conflating legal terms?

Legal frameworks around data ownership were debated. Some said that the data belonged to the person or agency that collected it or paid for the cost of collecting it, in terms of copyright and IP. Others said that the data belonged to the individual who provided it. (Author’s note: Participants may have been referring to different categories of data, e.g., financial data from government vs. human subjects data.) The question was raised of whether informed consent for open data in the humanitarian space is basically a ‘contract of adhesion’ (a term for a legally binding agreement between two parties wherein one side has all the bargaining power and uses it to its advantage). Asking a person to hand over data in an emergency situation in order to enroll in a humanitarian aid program is akin to holding a gun to a person’s head in order to get them to sign a contract, said one person.

There’s a world of difference between ‘published data’ and ‘openly licensed data,’ commented our third discussant. “An open license is a complete lack of control, and you can’t be responsible with something you can’t control. There are ways to be responsible about the way you open something, but once it’s open, your responsibility has left the port.” ‘Use-based licensing’ is something else, and most IP is governed by how it’s used. For example, educational institutions get free access to data because they are educational institutions; others pay, and this subsidizes the educational use of the data, he explained.

One person suggested that we could move from the idea of ‘open data’ to sub-categories related to how accessible the data would be and to whom and for what purposes. “We could think about categories like: completely open, licensed, for a fee, free, closed except for specific uses, etc.; and we could also specify for whom, whose data and for what purposes. If we use the term ‘accessible’ rather than ‘open’ perhaps we can attach some restrictions to it,” she said.

Is data an asset or a liability?

Our current framing is wrong, said one discussant. We should think of data as a toxic asset, since as soon as it’s in our books and systems it creates proactive costs and proactive risks. Threat modeling is a good approach, he noted. Data can cause a lot of harm to an organization – it’s a liability, and if it’s not used or stored according to local laws, an agency could be sued. “We’re far under the bar. We are not compliant with ‘safe harbor’ or ECOWAS regulations. There are libel questions and property laws that our sector is ignorant of. Our good intentions mislead us in terms of how we are doing things.” There is plenty of room to build good practice here, he noted, for example through Civic Trusts. Another participant noted that insurance underwriters are already moving into this field, meaning that they see growing liability in this space.

How can we better engage communities and the grassroots?

Some participants shared examples of how they and their organizations have worked closely at the grassroots level to engage people and communities in protecting their own privacy and using open data for their own purposes. Threat modeling is an approach that helps improve data privacy and security, said one. “When we do threat modeling, we treat the data that we plan to collect as a potential asset. At each step of the collection, storage and sharing process, we ask: ‘How will we protect those assets? What happens if we don’t share that data? If we don’t collect it? If we don’t delete it?’”

In one case, she worked with very vulnerable women working on human rights issues and together the group put together an action plan to protect its data from adversaries. The threats that they had predicted actually happened and the plan was put into action. Threat modeling also helps to “weed the garden once you plant it,” she said, meaning that it helps organizations and individuals keep an eye on their data, think about when to delete data, pay attention to what happens after data’s opened and dedicate some time for maintenance rather than putting all their attention on releasing and opening data.
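(Author’s note: a minimal sketch of how this kind of lifecycle-based threat modeling could be written down so that the same questions get asked for every data asset. The stages and questions below are illustrative examples, not a prescribed framework.)

```python
# Illustrative only: a simple checklist structure for threat modeling a data
# asset at each stage of the data lifecycle. Stages and questions are examples.
LIFECYCLE_QUESTIONS = {
    "collection": [
        "Do we actually need this data? What happens if we don't collect it?",
        "Has informed consent been obtained for this specific use?",
    ],
    "storage": [
        "How will we protect this asset (encryption, access controls)?",
        "Who can access it, and is that access logged?",
    ],
    "sharing/opening": [
        "What happens if we share or open it -- and if we don't?",
        "What needs to be stripped out before sharing to protect people?",
    ],
    "retention/disposal": [
        "When will we delete it, and who is responsible for doing so?",
    ],
}

def review_asset(asset_name: str) -> None:
    """Print the threat-model checklist for one data asset, stage by stage."""
    print(f"Threat-model review for: {asset_name}")
    for stage, questions in LIFECYCLE_QUESTIONS.items():
        print(f"  [{stage}]")
        for question in questions:
            print(f"    - {question}")

review_asset("household survey responses")
```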

More funding needs to be made available for data literacy for those whose data has been collected and/or opened. We also need to help people think about what data is of use to them. One person recalled hearing people involved in the creation of the Kenya Open Government Data portal say that the entire process was a waste of time because of low levels of use of any of the data. There are examples, however, of people using open data and verifying it at community level. In one instance, high school students found the data on all the so-called grocery stores in their community and went one by one to check them out, identifying that some of these were actually liquor stores selling potato chips, not actual grocery stores. Having this information and engaging with it can be powerful for local communities’ advocacy work.

Are we the failure here? What are we going to do about it?

One discussant felt that ‘data’ and ‘information’ are often and easily conflated. “Data alone is not power. Information is data that is contextualized into something that is useful.” This brings into question the value of having so many data portals, and so much risk, when so little is being done to turn data into information that is useful to the people our sector says it wants to support and empower.

He gave the example of the Weather Channel, a business built around open data sets that are packaged and broadcast, which just got purchased for $2 billion. Channels like radio that would have provided information to the poor were not purchased, only the web assets, meaning that those who benefit are not the disenfranchised. “Our organizations are actually just like the Weather Channel – we are intermediaries who are interested in taking and using open data for public good.”

As intermediaries, we can add value in the dissemination of this open data, he said. If we have the skills, the intention and the knowledge to use it responsibly, we have a huge opportunity here. “However our enlightened intent has not yet turned this data into information and knowledge that communities can use to improve their lives, so are we the failure here? And if so, what are we doing about it? We could immediately begin engaging communities and seeing what is useful to them.” (See this article for more discussion on how ‘open’ may disenfranchise the poor.)

Where to from here?

Some points raised that merit further discussion and attention include:

  • There is little demand or use of open data (such as government data and finances) and preparing and maintaining data sets is costly – ‘open by demand’ may be a more appropriate approach than ‘open by default.’
  • There is a good deal of disagreement about whether data can be opened responsibly. Some of this disagreement may stem from a lack of clarity about what kind of data we are talking about when we talk about open data.
  • Personal data and human subjects data that was never foreseen to be part of “open data” is potentially being opened, bringing with it risks for those who share it as well as for those who store it.
  • Informed consent for personal/human subject data is a tricky concept and it’s not clear whether it is even possible in the current scenario of personal data being ‘opened’ and the lack of control over how it may be used now or in the future, and the increasing ease of data re-identification.
  • We may want to look at data as a toxic asset rather than a beneficial one, because of the liabilities it brings.
  • Rather than a blanket “open” categorization, sub-categorizations that restrict data sets in different ways might be a possibility.
  • The sector needs to improve its understanding of the legal frameworks around data and data collection, storage and use or it may start to see lawsuits in the near future.
  • Work on data literacy, community involvement in defining what data is of interest and is collected, and threat modeling together with community groups are ways to reduce risk and improve data quality, demand and use; but these are high-touch activities that may not be possible for every kind of organization.
  • As data intermediaries, we need to do a much better job as a sector to see what we are doing with open data and how we are using it to provide services and contextualized information to the poor and disenfranchised. This is a huge opportunity and we have not done nearly enough here.

The Technology Salon is conducted under the Chatham House Rule, so attribution has not been made in this post. If you’d like to attend future Salons, sign up here.

 

Read Full Post »


Photo: Duncan Edwards, IDS.

A 2010 review of impact and effectiveness of transparency and accountability initiatives, conducted by Rosie McGee and John Gaventa of the Institute of Development Studies (IDS), found a prevalence of untested assumptions and weak theories of change in projects, programs and strategies. This week IDS is publishing their latest Bulletin titled “Opening Governance,” which offers a compilation of evidence and contributions focusing specifically on Technology in Transparency and Accountability (Tech for T&A).

It has a good range of articles that delve into critical issues in the Tech for T&A and Open Government spaces; help to clarify concepts and design; explore gender inequity as related to information access; and unpack the ‘dark side’ of digital politics, algorithms and consent.

In the opening article, editors Duncan Edwards and Rosie McGee (both currently working with the IDS team that leads the Making All Voices Count Research, Learning and Evidence component) give a superb in-depth review of the history of Tech for T&A and outline some of the challenges that have stemmed from ambiguous or missing conceptual frameworks and a proliferation of “buzzwords and fuzzwords.”

They unpack the history of and links between concepts of “openness,” “open development,” “open government,” “open data,” “feedback loops,” “transparency,” “accountability,” and “ICT4D (ICT for Development)” and provide some examples of papers and evidence that could help to recalibrate expectations among scholars and practitioners (and amongst donors, governments and policy-making bodies, one hopes).

The editors note that conceptual ambiguity continues to plague the field of Tech for T&A, causing technical problems because it hinders attempts to demonstrate impact; and creating political problems “because it clouds the political and ideological differences between projects as different as open data and open governance.”

The authors hope to stoke debate and promote the existing evidence in order to tone down the buzz. Likewise, they aim to provide greater clarity to the Tech for T&A field by offering concrete conclusions stemming from the evidence that they have reviewed and digested.

Download the Opening Governance report here.

 

 

 

 

Read Full Post »

It’s been two weeks since we closed out the M&E Tech Conference in DC and the Deep Dive in NYC. For those of you who missed it or who want to see a quick summary of what happened, here are some of the best tweets from the sessions.

We’re compiling blog posts and related documentation and will be sharing more detailed summaries soon. In the meantime, enjoy a snapshot!

https://twitter.com/neuguy/status/515134807672909826

https://twitter.com/dalgoso/status/515136050793291776

https://twitter.com/neuguy/status/515166952378343425

https://twitter.com/neuguy/status/515184242595487744

https://twitter.com/schmutzie/status/515215243388014592

https://twitter.com/prefontaine/status/515222154670252032

https://twitter.com/richmanmax/status/515576201084411904

https://twitter.com/sandhya_c_rao/status/516343304448131072

https://twitter.com/dalgoso/status/519879358370955264

Read Full Post »


I spent last week in Berlin at the Open Knowledge Festival – a great place to talk ‘open’ everything and catch up on what is happening in this burgeoning area that crosses through the fields of data, science, education, art, transparency and accountability, governance, development, technology and more.

One session was on Power, politics, inclusion and voice, and it encouraged participants to dig deeper into those 4 aspects of open data and open knowledge. The organizers kicked things off by asking us to get into small groups and talk about power. Our group was assigned the topic of “feeling powerless” and we shared personal experiences of when we had felt powerless. There were several women in my group, many of whom, unsurprisingly, recounted experiences that felt gendered.

The concept of ‘mansplaining’ came up. Mansplaining (according to Wikipedia) is a term that describes when a man speaks to a woman with the assumption that she knows less than he does about the topic being discussed because she is female. ‘Mansplaining is different from other forms of condescension because mansplaining is rooted in the assumption that, in general, a man is likely to be more knowledgeable than a woman.’

From there, we got into the tokenism we’d seen in development programs that say they want ‘participation’ but really don’t care to include the viewpoints of the participants. One member of our group talked about the feelings of powerlessness development workers create when they are dismissive of indigenous knowledge and assume they know more than the poor in general. “Like when they go out and explain climate change to people who have been farming their entire lives,” she said.

A lightbulb went off. It’s the same attitude as ‘mansplaining,’ but seen in development workers. It’s #devsplaining.

So I made a hashtag (of course) and tried to come up with a definition.

Devsplaining – when a development worker, academic, or someone who generally has more power within the ‘development industry’ speaks condescendingly to someone with less power. The devsplainer assumes that he/she knows more and has more right to an opinion because of his/her position and power within the industry. Devsplaining is rooted in the assumption that, in general, development workers are likely to be more knowledgeable about the lives and situations of the people who participate in their programs/research than the people themselves are.

What do people think? Any good examples?

 

 

Read Full Post »

Debate and thinking around data, ethics and ICTs have been growing and expanding a lot lately, which makes me very happy!

Coming up on May 22 in NYC, the engine room, Hivos, the Berkman Center for Internet and Society, and Kurante (my newish gig) are organizing the latest in a series of events as part of the Responsible Data Forum.

The event will be hosted at ThoughtWorks and it is in-person only. Space is limited, so if you’d like to join us, let us know soon by filling in this form. 

What’s it all about?

This particular Responsible Data Forum event is an effort to map the ethical, legal, privacy and security challenges surrounding the increased use and sharing of data in development programming. The Forum will aim to explore the ways in which these challenges are experienced in project design and implementation, as well as when project data is shared or published in an effort to strengthen accountability. The event will be a collaborative effort to begin developing concrete tools and strategies to address these challenges, which can be further tested and refined with end users at events in Amsterdam and Budapest.

We will explore the responsible data challenges faced by development practitioners in program design and implementation.

Some of the use cases we’ll consider include:

  • projects collecting data from marginalized populations, aspiring to respect a do no harm principle, but also to identify opportunities for informational empowerment
  • project design staff seeking to understand and manage the lifespan of project data from collection, through maintenance, utilization, and sharing or destruction.
  • project staff that are considering data sharing or joint data collection with government agencies or corporate actors
  • project staff who want to better understand how ICT4D will impact communities
  • projects exploring the potential of popular ICT-related mechanisms, such as hackathons, incubation labs or innovation hubs
  • projects wishing to use development data for research purposes, and crafting responsible ways to use personally identifiable data for academic purposes
  • projects working with children under the age of 18, struggling to balance the need for data to improve programming approaches with demands for higher levels of protection for children

By gathering a significant number of development practitioners grappling with these issues, the Forum aims to pose practical and critical questions about the use of data and ICTs in development programming. Through collaborative sessions and group work, the Forum will identify common pressing issues for which there might be practical and feasible solutions. The Forum will focus on prototyping specific tools and strategies to respond to these challenges.

What will be accomplished?

Some outputs from the event may include:

  • Tools and checklists for managing responsible data challenges for specific project modalities, such as SMS surveys, constructing national databases, or social media scraping and engagement.
  • Best practices and ethical controls for data sharing agreements with governments, corporate actors, academia or civil society
  • Strategies for responsible program development
  • Guidelines for data-driven projects dealing with communities with limited representation or access to information
  • Heuristics and frameworks for understanding anonymity and re-identification of large development data sets
  • Potential policy interventions to create greater awareness and possibly consider minimum standards

Hope to see some of you on the 22nd! Sign up here if you’re interested in attending, and read more about the Responsible Data Forum here.

 

Read Full Post »

This is a cross post from Heather Leson, Community Engagement Director at the Open Knowledge Foundation. The original post appeared here on the School of Data site.

by Heather Leson

What is the currency of change? What can coders (consumers) do with IATI data? How can suppliers deliver the data sets? Last week I had the honour of participating in the Open Data for Development Codeathon and the International Aid Transparency Initiative Technical Advisory Group meetings. IATI’s goal is to make information about aid spending easier to access, use, and understand. It was great that these events were back-to-back to push a big picture view.

My big takeaways included similar themes that I have learned on my open source journey:

You can talk about open data [insert tech or OS project] all you want, but if you don’t have an interactive community (including mentorship programmes), an education strategy, engagement/feedback loops plan, translation/localization plan and a process for people to learn how to contribute, then you build a double-edged barrier: barrier to entry and barrier for impact/contributor outputs.

Currency

About the Open Data in Development Codeathon

At the Codeathon close, Mark Surman, Executive Director of the Mozilla Foundation, gave us a call to action to make the web. Well, in order to create a world of data makers, I think we should run aid and development processes through this mindset. What is the currency of change? I hear many people talking about theory of change and impact, but I’d like to add ‘currency’. This is not only about money; it is about using the best brainpower and best energy sources to solve real-world problems in smart ways. I think if we heed Mark’s call to action with a “yes, and”, then we can rethink how we approach complex change. Every single industry is suffering from the same issue: how to deal with the influx of supply and demand in information. We need to change how we approach the problem. Combined events like these give a window into tackling problems in a new format. It is not about the next greatest app, but more about asking: how can we learn from the Webmakers and build with each other in our respective fields and networks?

Ease of Delivery

The IATI community/network is very passionate about moving the ball forward on releasing data. During the sessions, it was clear that the attendees see some gaps and are already working to fill them. The new IATI website is set up to grow with a Community component. The feedback from each of the sessions was distilled by the IATI TAG and Civil Society Guidance groups to share with the IATI Secretariat.

In the Open Data in Development, Impact of Open Data in Developing Countries, and CSO Guidance sessions, we discussed some key items about sharing, learning, and using IATI data. Farai Matsika, with the International HIV/AIDS Alliance, was particularly poignant in reminding us of IATI’s CSO purpose – we need to share data with those we serve.

Country edits IATI

One of the biggest themes was data ethics. As we rush to ask NGOs and CSOs to release data, what are some of the data pitfalls? Anahi Ayala Iaccuci of Internews and Linda Raftree of Plan International USA both reminded participants that data needs to be anonymized to protect those at risk. Ms. Iaccuci asked that we consider the complex nature of sharing both sides of the open data story – successes and failures. As well, she advised: don’t create trust, but think about who people are trusting. Turning this model around is key to rethinking assumptions. I would add to her point: trust and sharing are currency and will add to the success measures of IATI. If people don’t trust the IATI data, they won’t share and use it.

Anne Crowe of Privacy International frequently asked attendees to consider the ramifications of opening data. It is clear that the IATI TAG does not curate the data that NGOS and CSOs share. Thus it falls on each of these organizations to learn how to be data makers in order to contribute data to IATI. Perhaps organizations need a lead educator and curator to ensure the future success of the IATI process, including quality data.

I think that School of Data and the Partnership for Open Data have a huge part to play with IATI. My colleague Zara Rahman is collecting user feedback for the Open Development Toolkit, and Katelyn Rogers is leading the Open Development mailing list. We collectively want to help people become data makers and consumers who can effectively achieve their development goals using open data. This also means tackling the ongoing questions about data quality and data ethics.


Here are some additional resources shared during the IATI meetings.

Read Full Post »

This is a guest post from Anna Crowe, Research Officer on the Privacy in the Developing World Project, and Carly Nyst, Head of International Advocacy at Privacy International, a London-based NGO working on issues related to technology and human rights, with a focus on privacy and data protection. Privacy International’s new report, Aiding Surveillance, which covers this topic in greater depth, was released this week.

by Anna Crowe and Carly Nyst


New technologies hold great potential for the developing world, and countless development scholars and practitioners have sung the praises of technology in accelerating development, reducing poverty, spurring innovation and improving accountability and transparency.

Worryingly, however, privacy is presented as a luxury that creates barriers to development, rather than as a key aspect of sustainable development. This perspective needs to change.

Privacy is not a luxury, but a fundamental human right

New technologies are being incorporated into development initiatives and programmes relating to everything from education to health and elections, and in humanitarian initiatives, including crisis response, food delivery and refugee management. But many of the same technologies being deployed in the developing world with lofty claims and high price tags have been extremely controversial in the developed world. Expansive registration systems, identity schemes and databases that collect biometric information including fingerprints, facial scans, iris information and even DNA, have been proposed, resisted, and sometimes rejected in various countries.

The deployment of surveillance technologies by development actors, foreign aid donors and humanitarian organisations, however, is often conducted in the complete absence of the type of public debate or deliberation that has occurred in developed countries. Development actors rarely consider target populations’ opinions when approving aid programmes. Important strategy documents such as the UN Office for the Coordination of Humanitarian Affairs’ Humanitarianism in the Network Age and the UN High-Level Panel on the Post-2015 Development Agenda’s A New Global Partnership: Eradicate Poverty and Transform Economies through Sustainable Development give little space to the possible impact that adopting new technologies or data analysis techniques could have on individuals’ privacy.

Some of this trend can be attributed to development actors’ systematic failure to recognise the risks to privacy that development initiatives present. However, it also reflects an often unspoken view that the right to privacy must necessarily be sacrificed at the altar of development – that privacy and development are conflicting, mutually exclusive goals.

The assumptions underpinning this view are as follows:

  • that privacy is not important to people in developing countries;
  • that the privacy implications of new technologies are not significant enough to warrant special attention;
  • and that respecting privacy comes at a high cost, endangering the success of development initiatives and creating unnecessary work for development actors.

These assumptions are deeply flawed. While it should go without saying, privacy is a universal right, enshrined in numerous international human rights treaties, and matters to all individuals, including those living in the developing world. The vast majority of developing countries have explicit constitutional requirements to ensure that their policies and practices do not unnecessarily interfere with privacy. The right to privacy guarantees individuals a personal sphere, free from state interference, and the ability to determine who has information about them and how it is used. Privacy is also an “essential requirement for the realization of the right to freedom of expression”. It is not an “optional” right that only those living in the developed world deserve to see protected. To presume otherwise ignores the humanity of individuals living in various parts of the world.

Technologies undoubtedly have the potential to dramatically improve the provision of development and humanitarian aid and to empower populations. However, the privacy implications of many new technologies are significant and are not well understood by many development actors. The expectations that are placed on technologies to solve problems need to be significantly circumscribed, and the potential negative implications of technologies must be assessed before their deployment. Biometric identification systems, for example, may assist in aid disbursement, but if they also wrongly exclude whole categories of people, then the objectives of the original development intervention have not been achieved. Similarly, border surveillance and communications surveillance systems may help a government improve national security, but may also enable the surveillance of human rights defenders, political activists, immigrants and other groups.

Asking humanitarian actors to protect and respect privacy rights must not be distorted as requiring inflexible and impossibly high standards that would derail development initiatives if put into practice. Privacy is not an absolute right and may be limited, but only where limitation is necessary, proportionate and in accordance with law. The crucial aspect is to actually undertake an analysis of the technology and its privacy implications and to do so in a thoughtful and considered manner. For example, if an intervention requires collecting personal data from those receiving aid, the first step should be to ask what information is necessary to collect, rather than just applying a standard approach to each programme. In some cases, this may mean additional work. But this work should be considered in light of the contribution that upholding human rights and the rule of law makes to development and to producing sustainable outcomes. And in some cases, respecting privacy can also mean saving lives, as information falling into the wrong hands could spell tragedy.

A new framing

While there is an increasing recognition among development actors that more attention needs to be paid to privacy, it is not enough to merely ensure that a programme or initiative does not actively harm the right to privacy; instead, development actors should aim to promote rights, including the right to privacy, as an integral part of achieving sustainable development outcomes. Development is not just, or even mostly, about accelerating economic growth. The core of development is building capacity and infrastructure, advancing equality, and supporting democratic societies that protect, respect and fulfill human rights.

The benefits of development and humanitarian assistance can be delivered without unnecessary and disproportionate limitations on the right to privacy. The challenge is to improve access to and understanding of technologies, ensure that policymakers and the laws they adopt respond to the challenges and possibilities of technology, and generate greater public debate to ensure that rights and freedoms are negotiated at a societal level.

Technologies can be built to satisfy both development and privacy.

Download the Aiding Surveillance report.

Read Full Post »

This post was originally published on the Open Knowledge Foundation blog

A core theme that the Open Development track covered at September’s Open Knowledge Conference was Ethics and Risk in Open Development. There were more questions than answers in the discussions, summarized below, and the Open Development working group plans to further examine these issues over the coming year.

Informed consent and opting in or out

The ethics of ‘opt in’ and ‘opt out’ when working with people in communities with fewer resources, lower connectivity, and/or less understanding of privacy and data are tricky. Yet project implementers have a responsibility to work to the best of their ability to ensure that participants understand what will happen with their data in general, and what might happen if it is shared openly.

There are some concerns around how these decisions are currently being made and by whom. Can an NGO make the decision to share or open data from/about program participants? Is it OK for an NGO to share ‘beneficiary’ data with the private sector in return for funding to help make a program ‘sustainable’? What liabilities might donors or program implementers face in the future as these issues develop?

Issues related to private vs. public good need further discussion, and there is no one right answer because concepts and definitions of ‘private’ and ‘public’ data change according to context and geography.

Informed participation, informed risk-taking

The ‘do no harm’ principle is applicable in emergency and conflict situations, but is it realistic to apply it to activism? There is concern that organizations implementing programs that rely on newer ICTs and open data are not ensuring that activists have enough information to make an informed choice about their involvement. At the same time, assuming that activists don’t know enough to decide for themselves can come across as paternalistic.

As one participant at OK Con commented, “human rights and accountability work are about changing power relations. Those threatened by power shifts are likely to respond with violence and intimidation. If you are trying to avoid all harm, you will probably not have any impact.” There is also the concept of transformative change: “things get worse before they get better. How do you include that in your prediction of what risks may be involved? There also may be a perception gap in terms of what different people consider harm to be. Whose opinion counts and are we listening? Are the right people involved in the conversations about this?”

A key point is that whoever assumes the risk needs to be involved in assessing that potential risk and deciding what the actions should be — but people also need to be fully informed. With new tools coming into play all the time, can people be truly ‘informed’, and are outsiders who come in with new technologies doing a good enough job of facilitating discussions about possible implications and risk with those who will face the consequences? Are community members and activists themselves included in risk analysis, assumption testing, threat modeling and risk mitigation work? Is there a way to predict the likelihood of harm? For example, can we determine whether releasing ‘x’ data will likely lead to ‘y’ harm happening? How can participants, practitioners and program designers get better at identifying and mitigating risks?

When things get scary…

Even when risk analysis is conducted, it is impossible to predict or foresee every possible way that a program can go wrong during implementation. Then the question becomes what to do when you are in the middle of something that is putting people at risk or leading to extremely negative unintended consequences. Who can you call for help? What do you do when there is no mitigation possible and you need to pull the plug on an effort? Who decides that you’ve reached that point? This is not an issue that exclusively affects programs that use open data, but open data may create new risks with which practitioners, participants and activists have less experience, thus the need to examine it more closely.

Participants felt that there is not enough honest discussion on this aspect. There is a pop culture of ‘admitting failure’ but admitting harm is different because there is a higher sense of liability and distress. “When I’m really scared shitless about what is happening in a project, what do I do?” asked one participant at the OK Con discussion sessions. “When I realize that opening data up has generated a huge potential risk to people who are already vulnerable, where do I go for help?” We tend to share our “cute” failures, not our really dismal ones.

Academia has done some work around research ethics, informed consent, human subject research and the use of Institutional Review Boards (IRBs). What aspects of this can or should be applied to mobile data gathering, crowdsourcing, open data work and the like? What about when citizens are their own source of information and they voluntarily share data without a clear understanding of what happens to the data, or what the possible implications are?

Do we need to think about updating and modernizing the concept of IRBs? A major issue is that many people who are conducting these kinds of data collection and sharing activities using new ICTs are unaware of research ethics and IRBs and don’t consider what they are doing to be ‘research’. How can we broaden this discussion and engage those who may not be aware of the need to integrate informed consent, risk analysis and privacy awareness into their approaches?

The elephant in the room

Despite our good intentions to do better planning and risk management, one big problem is donors, according to some of the OK Con participants. Do donors require enough risk assessment and mitigation planning in their program proposal designs? Do they allow organizations enough time to develop a well-thought-out and participatory Theory of Change along with a rigorous risk assessment together with program participants? Are funding recipients required to report back on risks and how they played out? As one person put it, “talk about failure is currently more like a ‘cult of failure’ and there is no real learning from it. Systematically we have to report up the chain on money and results and all the good things happening, and no one up at the top really wants to know about the bad things. The most interesting learning doesn’t get back to the donors or permeate across practitioners. We never talk about all the work-arounds and backdoor negotiations that make development work happen. This is a serious systemic issue.”

Greater transparency can actually be a deterrent to talking about some of these complexities, because “the last thing donors want is more complexity as it raises difficult questions.”

Reporting upwards to government representatives in Parliament or Congress leads to continued aversion to any failures or ‘bad news’. Though funding recipients are urged to be innovative, they still need to hit numeric targets so that the international aid budget can be defended in government spaces. Thus, the message is mixed: “Make sure you are learning and recognizing failure, but please don’t put anything too serious in the final report.” There is awareness that rigid program planning doesn’t work and that we need to be adaptive, yet we are asked to “put it all into a log frame and make sure the government aid person can defend it to their superiors.”

Where to from here?

It was suggested that monitoring and evaluation (M&E) could be used as a tool for examining some of these issues, but M&E needs to be seen as a learning component, not only an accountability one. M&E needs to feed into the choices people are making along the way, and linking it in well during program design may be one way to include a more adaptive and iterative approach. M&E should force practitioners to ask themselves the right questions as they design programs and as they assess them throughout implementation. Theory of Change might help, and an ethics-based approach could be introduced as well to raise these questions about risk and privacy and ensure that they are addressed from the start of an initiative.

Practitioners have also expressed the need for additional resources to help them predict and manage possible risk: case studies, a safe space for sharing concerns during implementation, people who can help when things go pear-shaped, a menu of methodologies, a set of principles or questions to ask during program design, or even an ICT4D Implementation Hotline or a forum for questions and discussion.

These ethical issues around privacy and risk are not exclusive to Open Development. Similar issues were raised last week at the Open Government Partnership Summit sessions on whistle blowing, privacy, and safeguarding civic space, especially in light of the Snowden case. They were also raised at last year’s Technology Salon on Participatory Mapping.

A number of groups are looking more deeply into this area, including the Capture the Ocean Project, The Engine Room, IDRC’s research network, The Open Technology Institute, Privacy International, GSMA, those working on “Big Data,” those in the Internet of Things space, and others.

I’m looking forward to further discussion with the Open Development working group on all of this in the coming months, and will also be putting a little time into mapping out existing initiatives and identifying gaps when it comes to these cross-cutting ethics, power, privacy and risk issues in open development and other ICT-enabled data-heavy initiatives.

Please do share information, projects, research, opinion pieces and more if you have them!

Read Full Post »
