
Archive for the ‘monitoring and evaluation’ Category

Earlier this month I attended the African Evaluators’ Conference (AfrEA) in Cameroon as part of the Technology and Evaluation stream organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation.

A first post about ICTs and M&E at the AfrEA Conference covered some of the deliberations around using or not using ICTs, and how we can learn and share more as institutions and evaluators. I’ve written previously about barriers and challenges to using ICTs in M&E of international development programs (see the list of posts at the bottom of this one). Many of those same conversations came up at AfrEA, so I won’t repeat them here. What I did want to capture and share are a few interesting design and implementation thoughts from the various ICT and M&E sessions. Here goes:

1) Asking questions via ICT may lead to more honest answers. Some populations are still not familiar with smartphones and tablets, and this makes some people shy and quiet, while it makes others more curious and animated to participate. Some people worry that mobiles, laptops and tablets create distance between the enumerator and the person participating in a survey. On the other hand, I’m hearing more and more examples of cases where using ICTs for surveying actually allows for a greater sense of personal privacy and more honest answers. I first heard about this several years ago in relation to children and youth in the US and Canada seeking psychological or reproductive health counseling. They seemed to feel more comfortable asking questions about sensitive issues via online chats (as opposed to asking a counselor or doctor face-to-face) because they felt anonymous. The same is true for telephone inquiries.

In the case of evaluations, someone suggested that rather than creating distance, a mobile or tablet can actually create an opportunity for privacy. For example, if a sensitive question comes up in a survey, an enumerator can hand the mobile phone to the person being interviewed and look away while they enter their answer and hit enter, in the same way that waiters in some countries will swipe your ATM card and politely look away while you enter your PIN. The key is building people’s trust in these methods so they can be sure their answers are secure.
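To make the hand-over pattern concrete, here is a minimal sketch of how a digital survey flow might flag certain questions for self-administration. This is purely illustrative: the field names, prompts and console mechanics are my own assumptions, not taken from any real survey tool.

```python
# Illustrative sketch of a survey flow with a self-administered mode for
# sensitive questions, mirroring the 'hand the device over' practice.
# Field names and question texts are hypothetical.

SURVEY = [
    {"field": "household_size", "text": "How many people live in this household?", "sensitive": False},
    {"field": "open_defecation", "text": "Does anyone in this household defecate outdoors? (y/n)", "sensitive": True},
]

def administer(survey):
    responses = {}
    for q in survey:
        if q["sensitive"]:
            # Enumerator hands the device to the respondent and looks away,
            # as with a PIN pad; the answer is never spoken aloud.
            print("[Enumerator: please pass the device to the respondent and look away.]")
        responses[q["field"]] = input(q["text"] + " ")
        if q["sensitive"]:
            print("[Respondent: please return the device to the enumerator.]")
    return responses

if __name__ == "__main__":
    print(administer(SURVEY))
```

The design point is simply that the ‘sensitive’ flag lives in the survey definition, so the privacy step happens consistently rather than at each enumerator’s discretion.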

At a Salon on Feb 28, I heard about mobile polling being used to ask men in the Democratic Republic of Congo about sexual assault against men. There was a higher recorded affirmative rate when the question was answered via a mobile survey than when it had been asked in other settings or through other means. This of course makes sense: when a reporter or surveyor comes around asking whether men have been victims of rape, no one wants to say so publicly. In a situation of violence, it’s impossible to know whether a perpetrator might be standing in the crowd watching someone being interviewed, and clearly shame and stigma also prevent people from answering openly.

Another example at the AfrEA Tech Salon was a comparison study done by an organization in a slum area in Accra. Five enumerators who spoke local languages conducted Water, Sanitation and Hygiene (WASH) surveys by mobile phone using Open Data Kit (an open source survey application), and the responses were compared with the same survey done on paper. When people were asked in person by enumerators if they defecated outdoors, affirmative answers were very low. When people were asked the same question via a voice-based mobile phone survey, 26% of respondents reported open defecation.

2) Risk of collecting GPS coordinates. We had a short discussion on the pluses and minuses of using GPS and collecting geolocation data in monitoring and evaluation. One issue that came up was safety for enumerators who carry GPS devices. Some people highlighted that GPS devices can put staff and enumerators at risk of abuse from organized criminal bands, military groups, or government authorities, especially in areas with high levels of conflict and violence. This makes me think that if geographic information is needed in these cases, it might be better to use a mobile phone application that collects GPS coordinates rather than a conspicuous smartphone or a dedicated GPS unit (for example, one could try out PoiMapper, which works on feature phones).

In addition, evaluators emphasized that we need to think through whether GPS data are really necessary at the household level. It is tempting to collect all the information we possibly can, but we can never truly assure anyone that their information will not be de-anonymized somehow in the near or distant future, and in extremely high-risk areas this can expose people to serious danger. Many organizations do not have high-level security for their data, so it may be better to collect community- or district-level data than household locations. Some evaluators said they use ‘tricks’ to anonymize the geographical data, like pinpointing a location a few miles off, but others felt this was not nearly enough to guarantee anonymity.
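As a rough illustration of what such a ‘trick’ looks like, and why it is a weak guarantee on its own, here is a hypothetical sketch of coordinate jittering. The displacement radius and the example point are my own assumptions, not anyone’s recommendation.

```python
# Hypothetical sketch of geographic 'jittering': displace household
# coordinates by a random offset before sharing. As evaluators noted,
# this alone does not guarantee anonymity -- repeated releases, sparse
# populations, or auxiliary data can still allow re-identification.
import math
import random

def jitter(lat, lon, max_km=3.0):
    """Shift a point up to max_km in a random direction."""
    distance_km = random.uniform(0, max_km)
    bearing = random.uniform(0, 2 * math.pi)
    dlat = (distance_km * math.cos(bearing)) / 111.0  # ~111 km per degree of latitude
    dlon = (distance_km * math.sin(bearing)) / (111.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

print(jitter(5.6037, -0.1870))  # an example point in Accra, displaced a few km
```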

3) Devices can create unforeseen operational challenges at the micro-level. One organization working in rural Ghana, which collects feedback about government performance through mobile surveys where people press a number to select an answer (press 1 for….), found that some phones were set to lock when a call was answered. Respondents were pressing buttons to answer, but their responses did not register because phones were locked, or registered incorrectly because the person was entering their PIN to unlock the phone. Others noted that when planning training for enumerators or community members who will use their own devices for data collection, we cannot forget that every model of phone is slightly different. This adds quite a lot of time to the training, as each model needs to be explained to trainees. (There are a huge number of other challenges related to devices, but these were two that I had not thought of before.)

4) Motivation in the case of poor capacity to respond. An organization interested in tracking violence in a highly volatile area wanted to take reports of violence, but did not have a way to ensure that an INGO, humanitarian organization or government authority would respond if or when violence was reported. This is a known issue: it is difficult to encourage reporting when responsiveness is low. To keep people engaged, this organization thanks people immediately for reporting and then sends peace messages and encouragement 2-3 times per week. Participants in the program have appreciated these ongoing messages, and participation has remained steady even though immediate help has not been provided as a result of reporting.

5) Mirroring physical processes with tech. One way to help digital tools gain acceptance and become more user-friendly is to design them to mirror paper processes or other physical processes that people are already familiar with. For example, one organization shared their design process for a mobile application for village savings and loan (VSL) groups. Because security is a big concern among VSL members, the groups typically keep cash in a box with three padlocks, and three elected members must be present and agree to open the box in order to conduct any transaction. To mimic this, the VSL mobile application requires three PINs to access mobile money or make transactions. What’s more, the app sends everyone in the VSL group an SMS notification whenever the three PIN holders carry out a transaction. This makes the mobile app even more secure than the original physical lock-box, because everyone knows what is happening with the money at all times.
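A minimal sketch of how the three-padlock logic might translate into software is below. The class, phone numbers and PINs are all hypothetical, and a real mobile money integration would of course look different; the point is that a withdrawal requires three distinct valid PINs and that every member is notified.

```python
# Hypothetical sketch of the three-padlock pattern in software: a withdrawal
# succeeds only if three distinct elected members enter valid PINs, and every
# group member receives an SMS notification. Names and numbers are invented.
import hashlib

def pin_hash(pin):
    return hashlib.sha256(pin.encode()).hexdigest()

class VslGroup:
    def __init__(self, members, signatories):
        self.members = members  # phone numbers of all group members
        self.pin_hashes = {m: pin_hash(p) for m, p in signatories.items()}  # 3 elected members

    def withdraw(self, amount, entered_pins):  # entered_pins: {member: pin}
        valid = [m for m, p in entered_pins.items() if self.pin_hashes.get(m) == pin_hash(p)]
        if len(set(valid)) < 3:
            return False  # fewer than three valid signatories: the box stays locked
        for member in self.members:  # software mirror of the SMS broadcast
            print(f"SMS to {member}: {amount} withdrawn, authorized by {', '.join(valid)}")
        return True

group = VslGroup(
    members=["+233200000001", "+233200000002", "+233200000003", "+233200000004"],
    signatories={"+233200000001": "1111", "+233200000002": "2222", "+233200000003": "3333"},
)
group.withdraw(50, {"+233200000001": "1111", "+233200000002": "2222", "+233200000003": "3333"})
```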

****

As I mentioned in part 1 of this post, some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward.

Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that are missing!

Previous posts on ICTs and M&E on this blog:

Read Full Post »

I attended the African Evaluators’ Conference (AfrEA) in Cameroon last week as part of the Technology and Evaluation strand organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation. The strand was a fantastic opportunity for learning, sharing and understanding more about the context, possibilities and realities of using ICTs in monitoring and evaluation (M&E). We heard from a variety of evaluators, development practitioners, researchers, tool-developers, donors, and private sector and government folks. Judging by the well-attended sessions, there is a huge amount of interest in ICTs and M&E.

Rather than repeat what I’ve written in other posts (see links at the bottom), I’ll focus here on some of the more relevant, interesting, and/or new information from the AfrEA discussions. This first post will go into institutional issues and the ‘field’ of ICTs and M&E. A second post will cover design and operational tips I learned, or was reminded of, at AfrEA.

1) We tend to get stuck on data collection – Like other areas (I’m looking at you, Open Data), conversations tend to revolve around collecting data. We need to get beyond that and think more about why we are collecting data and what we are going to do with it (and do we really need all this data?). The evaluation field also needs to explore all the other ways it could be using ICTs for M&E, going beyond mobile phones and surveys. Collecting data is clearly a necessary part of M&E, but those data still need to be analyzed. As a participant from a data visualization firm said, there are so many ways you can use ICTs: they help you make sense of things, tag sentiment, visualize data and make data-based decisions. Others mentioned that ICTs can help us share data with various stakeholders, improve sampling in Randomized Control Trials (RCTs), conduct quality checks on massive data sets, and manage staff who are working on data collection. Using big data, we can do analyses we never could have imagined before. We can open and share our data, and stop collecting the same data from the same people multiple times. We can use ICTs to share back what we’ve learned with evaluation stakeholders, governments, the public, and donors. The range of uses of ICTs is huge, yet the discussion tends to get stuck on mobile surveys and data collection, and we need to start thinking beyond that.

2) ICTs are changing how programs are implemented and how M&E is done — When a program already uses ICTs, data collection can be built into the digital device itself (e.g., tracking user behavior via cookies, tests and quizzes), as one evaluator working on tech and education programs noted. As more programs integrate digital tools, it may become easier to collect monitoring and evaluation data with less effort. Along those lines, an evaluator looking at a large-scale mobile-based agricultural information system asked about approaches to conducting M&E that do not rely on enumerators and traditional M&E methods. In his program, because the farmers who signed up for the mobile information service do not live in the same geographical community, traditional M&E approaches do not seem feasible, and ICT-based approaches look like a logical answer. There is little documentation within the international development evaluation community, however, on how an evaluator might design an evaluation in this type of situation. (I am guessing there may be some insights from market research, and possibly from the transparency and accountability sectors and people working on “feedback loops”.)
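As a small illustration of what ‘built-in’ data collection can mean in practice, here is a sketch of passive event logging for a hypothetical mobile learning or information service. The event names, fields and file format are assumptions for illustration only.

```python
# Hypothetical sketch: 'built-in' monitoring for a mobile learning app.
# Instead of sending enumerators, the program logs usage events and quiz
# results as they happen, for later aggregation and analysis.
import json
import time

def log_event(user_id, event, **fields):
    """Append one monitoring record to a local JSON-lines file."""
    record = {"user": user_id, "event": event, "ts": time.time(), **fields}
    with open("m_and_e_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

# Events a tech-and-education or agricultural information program might capture:
log_event("farmer-0042", "lesson_opened", lesson="soil_prep")
log_event("farmer-0042", "quiz_completed", lesson="soil_prep", score=0.8)
```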

3) Moving beyond one-off efforts — Some people noted that mobile data gathering is still done mostly at the project level. Efforts tend to be short-term and one-off, and the data collected are not well integrated into management information systems or national-level processes. (Here we may reference the infamous map of mHealth pilots in Uganda, and note the possibility of ICT-enabled M&E in other sectors going the same route.) Numerous small pilots may be problematic if the goal is to institutionalize mobile data gathering into M&E at a wider level and do a better job of supporting and strengthening large-scale systems.

4) Sometimes ICTs are not the answer, even if you want them to be – One presenter (who considered himself a tech enthusiast) went into careful detail about his organization’s process of deciding not to use tablets for a complex evaluation across four countries with multiple indicators. In the end, the evaluation itself was too complex, and the team was not able to find the right tool for the job. The organization looked at simple, mid-range and highly complex applications and tools and, after testing them all, opted out. Each possible tool presented a set of challenges that meant it was not a vast improvement over paper-based data collection, and the up-front costs and training were too expensive and lengthy to make the switch to digital tools worthwhile. In addition, the team felt that face-to-face dynamics in the community, and having access to notes and written observations in the margins of a paper survey, would enable them to conduct a better evaluation. Some tablets are beginning to enable more interactivity and better survey design, but not yet in a way that made them a viable option for this evaluation. I liked how thorough and in-depth the organization’s decision-making process was.

Other colleagues also commented that the tech tools are still not quite ‘there’ yet for M&E. Even top-of-the-line business solutions are generally found to be somewhat clunky. Million-dollar models are not relevant for the environments development evaluators work in; in addition to their high cost, they often have too many features or require too much training. There are some excellent mid-range tools designed for the environment, but many lack vital features such as availability in multiple languages. Simple tools that are more easily accessible and understandable without a lot of training are not sophisticated enough to conduct a large-scale data collection exercise. One person I talked with suggested that the private sector will eventually develop appropriate tools, and the not-for-profit sector will then adopt them. She felt that those of us who are interested in ICTs in M&E are slightly ahead of the curve and will need to wait a few years until the tools are more widespread and common. Many people attending the Tech and M&E sessions at AfrEA made the point that use of ICTs in M&E will get easier and cheaper as the field develops, tools become more advanced, appropriate, user-friendly and widely tested, and networks, platforms and infrastructure improve in less-connected rural areas.

5) Need for documentation, evaluation and training on use of ICTs in M&E – Some evaluators felt that ICTs are only suitable for routine data collection as part of an ongoing program, but not good for large-scale evaluations. Others pointed out that the notions of ‘ICT for M&E’ and ‘mobile data collection/mobile surveys’ are often used interchangeably, and evaluation practitioners need to look at the multiple ways that ICTs can be used in the wider field of M&E. ICTs are not just useful for moving from paper surveys to mobile data gathering. An evaluator working on a number of RCTs mentioned that his group relies on ICTs for improving samples, reducing bias, and automatically checking data quality.

There was general agreement that M&E practitioners need resources, opportunities for more discussion, and capacity strengthening on the multiple ways that ICTs may be able to support M&E. One evaluator noted that civil society organizations have a tendency to rush into things, hit a brick wall, and then cross their arms and say, “well, this doesn’t work” (in this case, ICTs for M&E). With training and capacity, and as more experience and documentation are gained, he felt that ICTs could have a huge role in making M&E more efficient and effective.

One evaluator, however, questioned whether having better, cheaper, higher quality data is actually leading to better decisions and outcomes. Another evaluator asked for more evidence of what works, when, with whom and under what circumstances so that evaluators could make better decisions around use of ICTs in M&E. Some felt that a decision tree or list of considerations or key questions to think through when integrating ICTs into M&E would be helpful for practitioners. In general, it was agreed that ICTs can help overcome some of our old challenges, but that they inevitably bring new challenges. Rather than shy away from using ICTs, we should try to understand these new challenges and find ways to overcome/work around them. Though the mHealth field has done quite a bit of useful research, and documentation on digital data collection is growing, use of ICTs is still relatively unexplored in the wider evaluation space.

6) There is no simple answer. One of my takeaways from all the sessions was that many M&E specialists are carefully considering options, and thinking quite a lot about which ICTs for what, whom, when and where rather than deciding from the start that ICTs are ‘good and beneficial’ or ‘bad and not worth considering.’ This is really encouraging, and to be expected of a thoughtful group like this. I hope to participate in more discussions of this nature that dig into the nuances of introducing ICTs into M&E.

Some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward. The “field” of ICTs in M&E is quite broad, however, and there are many ways to slice the cake. Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that you think are missing!

(Part 2 of this post)

Previous posts on ICTs and M&E:

Read Full Post »

The NYC Technology Salon on February 28th examined the connection between bigger, better data and resilience. We held morning and afternoon Salons due to the high response rate for the topic. Jake Porway (DataKind), Emmanuel Letouzé (Harvard Humanitarian Initiative) and Elizabeth Eagen (Open Society Foundations) were our lead discussants for the morning. Max Shron (Data Strategy) joined Emmanuel and Elizabeth for the afternoon session.

This post summarizes key discussions from both Salons.

What the heck do we mean by ‘big data’?

The first question at the morning Salon was: What precisely do we mean by the term ‘big data’? Participants and lead discussants had varying definitions. One way of thinking about big data is that it comprises small bits of unintentionally produced ‘data exhaust’ (website cookies, cellphone data records, etc.) that add up to a dataset. In this case, the term big data refers to the quality and nature of the data: non-sampled data that are messy, noisy and unstructured. The mindset that goes with big data is one of ‘turning mess into meaning.’

Some Salon participants understood big data as datasets that are too large to be stored, managed and analyzed via conventional database technologies or on normal computers. One person suggested dropping the adjective ‘big,’ forgetting about size, and instead considering the impact of the data’s contribution to understanding. For example, if there were absolutely no data on something and 1,000 data points were contributed, this might have a greater impact than adding another 10,000 data points to an existing set of 10 million.

The point here was that when the emphasis is on big (understood as size and/or volume), someone with a small data set (for example, one that fits into an Excel sheet) might feel inadequate, yet their contribution may actually be ‘bigger’ than a physically larger data set (aha! it’s not the size of the paintbrush…). There was a suggestion that instead of talking about big data we should talk about smart data.

How can big data support development?

Two frameworks were shared for thinking about big data in development. One from UN Global Pulse considers that big data can improve a) real-time awareness, b) early warning and c) real-time monitoring. Another looks at big data being used for three kinds of analysis: a) descriptive (providing a summary of something that has already happened), b) predictive (likelihood and probability of something occurring in the future), and c) diagnostic (causal inference and understanding of the world).

What’s the link between big data and resilience?

‘Resilience’ as a concept is contested, difficult to measure and complex. In its simplest definition, resilience can be thought of as the ability to bounce back or bounce forward. (For an interesting discussion on whether we should be talking about sustainability or resilience, see this piece.) One discussant noted that global processes and structures are not working well for the poor, as evidenced by continuing cycles of poverty and glaring wealth inequalities. In this view, people are poor as a result of being more exposed and vulnerable to shocks; at the same time, their poverty increases their vulnerability, making it difficult to escape a cycle in which, over time, small and large shocks deplete assets. An assets-based model of resilience would help individuals, families and communities who are hit by a shock in one sphere — financial, human, capital, social, legal and/or political — to draw on the assets within another sphere to bounce back or forward.

Big data could support this type of assets-based model of resilience by predicting when a shock might happen, or by helping poor and vulnerable people predict it themselves and prepare for it. Big data analytics, if accessible to the poor, could help them increase their chances of making better decisions now and for the future. Big data, then, should be made accessible and available to communities so that they can self-organize, decrease their own exposure to shocks and hazards, and increase their ability to bounce back and bounce forward. Big data could also help various actors develop a better understanding of the human ecosystem and contribute to increasing resilience.

Can ivory tower big data approaches contribute to resilience?

The application of big data approaches to efforts that aim to increase resilience and better understand human ecosystems often comes at things from the wrong angle, according to one discussant. We are increasingly seeing situations where a decision is made at the top by people who know how to crunch data yet have no way of really understanding the meaning of the data in the local context. In these cases, the impact of data on resilience will be low, because resilience can only truly be created and supported at the local level. Instead of large organizations thinking about how they can use data from afar to ‘rescue’ or ‘help’ the poor, organizations should be working together with communities in crisis (or supporting local or nationally based intermediaries to facilitate this process) so that communities can discuss and pull meaning from the data, contextualize it and use it to help themselves. Communities can also become better informed about what data exist about them and more aware of how these data might be used.

For the human rights community, for example, the story is about how people successfully use data to advocate for their own rights, with less emphasis on large data sets. The goal is to get data to citizens and communities, to support groups to define and use data locally, and to think about what the data can tell them about the advocacy path they could take to achieve a particular goal.

Can data really empower people?

To better understand the opportunities and challenges of big data, we need to unpack questions related to empowerment. Who has the knowledge? The access? Who can use the data? Salon participants emphasized that change doesn’t come by merely having data. Rather, it’s about using big data as an advocacy tool to tell the world to change processes and to put things normally left unsaid on the table for discussion and action. It is also about decisions and getting ‘big data’ to the ‘small world,’ e.g., the local level. According to some, this should be the priority of ‘big data for development’ actors over the next 5 years.

Some participants at the Salon felt that data on their own do not empower individuals; others noted that knowing your credit score or tracking how much you are eating or exercising can indeed be empowering. In addition, the process of gathering data can help communities understand their own realities better, build their self-esteem and analytical capacities, and contribute to achieving a more level playing field when they are advocating for their rights or for a budget or service. As one Salon participant said, most communities have information but are not perceived to have data unless they collect it using ‘Western’ methods. Having data to support and back information, opinions and demands can serve communities in negotiations with entities that wield more power. (See the book “Who Counts? The Power of Participatory Statistics” on how to work with communities to create ‘data’ from participatory approaches.)

On the other hand, data are not enough if there is no political will to make change to respond to the data and to the requests or demands being made based on the data. As one Salon participant said: “giving someone a data set doesn’t change politics.”

Should we all jump on the data bandwagon?

Both discussants and participants made a plea to ‘practice safe statistics!’ Human rights organizations wander in and out of statistics and don’t really understand how they work, said one person. ‘You wouldn’t go to court without a lawyer, so don’t try to use big data unless you can ensure it’s valid and you know how to manage it.’ If organizations plan to work with data, they should have statisticians and/or data scientists on staff, or on call as partners and collaborators. Lack of basic statistical literacy is a huge issue among the general population, within many organizations, and among thought leaders and journalists, and this can be dangerous.

As big data becomes more trendy, the risk of misinterpretation is growing, and we need to pay more attention to the responsible use of statistics and data or we may end up harming people through bad decisions. ‘Everyone thinks they are experts who can handle statistics – bias, collection, correlation – these days,’ commented one discussant. ‘As a general rule, no matter how many times you say the data show possible correlation, not causality, the public will understand that there is causality.’ And generally, he noted, ‘when people look at data, they believe them as truth because they include numbers, statistics, science.’ Greater statistical literacy could help people not just read or access data and information but use them wisely, understand and question how data are interpreted, and detect political or other biases. What’s more, organizations today are asking questions about big data that have been on statisticians’ minds for a very long time, so reaching out to those who understand these issues can help avoid repeating mistakes and re-learning lessons that have already been well documented.
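That warning about correlation and causality is easy to state and easy to forget. As a toy illustration (mine, not from the Salon), two completely unrelated trending series will often show a strong correlation:

```python
# Toy demonstration that correlation readily appears without causation:
# two independent random walks frequently correlate strongly.
# Standard library only; the seed and series length are arbitrary.
import random

random.seed(7)

def random_walk(n):
    x, path = 0.0, []
    for _ in range(n):
        x += random.gauss(0, 1)
        path.append(x)
    return path

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    na = sum((x - ma) ** 2 for x in a) ** 0.5  # sqrt of summed squared deviations
    nb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (na * nb)

a, b = random_walk(500), random_walk(500)
print(f"Correlation between two unrelated series: {pearson(a, b):+.2f}")
```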

This poor statistical literacy becomes a serious ethical issue when data are used to determine funding or actions that affect people’s lives, or when they are shared openly, accidentally, or in ways that are unethical. In addition, privacy and protection are critical when using and working with data about people, especially when the data involve vulnerable populations. Organizations can face legal action and liability suits if their data put people in harm’s way, as one Salon participant noted. ‘An organization could even be accused of manslaughter… and I’m speaking from experience,’ she added.

What can we do to move forward?

Some potential actions for moving forward included:

  • Emphasizing to donors that having big data does not mean that community-level processes related to data collection, interpretation, analysis, and ownership should be eliminated to cut costs;
  • Evaluations and literature/documentation on the effectiveness of different tools and methods, and when and in which contexts they might be applicable, including cost-benefit analyses of using big data and evaluations of its impact on development and on communities when combined with community-level processes versus used alone, without community involvement — practitioners’ gut feeling is that big data without community involvement is irresponsible and ineffective in terms of resilience, and it would be good to have evidence to validate or disprove this;
  • More and better tools and resources to support data collection, visualization and use, and to help organizations with risk analysis, privacy impact assessments, and strategies and planning around the use of big data; case studies and a place to share and engage with peers; creation of a ‘cookbook’ to help organizations understand the ingredients, tools and processes of using data/big data in their work;
  • ‘Normative conventions’ on how big data should be used to avoid falling into tech-driven dystopia;
  • Greater capacity for ‘safe statistics’ among organizations;
  • A community space where frank and open conversations around data/big data can occur in an ongoing way with the right range of people and cross-section of experiences and expertise from business, data, organizations, etc.

In conclusion?

We touched upon all types of data and various levels of data usage for a huge range of purposes at the two Salons. One closing thought was around the importance of having a solid idea of what questions we are trying to answer before moving on to collecting data, and then understanding what data collection methods are adequate for our purpose, which ICT tools are right for which data collection and interpretation methods, what will be done with the data and what the purpose of collecting them is, how we’ll interpret them, and how data will be shared, with whom, and in what format.

See this growing list of resources related to Data and Resilience here and add yours!

Thanks to participants and lead discussants for the fantastic exchange, and a big thank you to ThoughtWorks for hosting us at their offices for this Salon. Thanks also to Hunter Goldman, Elizabeth Eagen and Emmanuel Letouzé for their support developing this Salon topic, and to Somto Fab-Ukozor for support with notes and the summary. Salons are held under Chatham House Rule, therefore no attribution has been made in this post. If you’d like to attend future Salons, sign up here!

Read Full Post »

This is a cross post from Heather Leson, Community Engagement Director at the Open Knowledge Foundation. The original post appeared here on the School of Data site.

by Heather Leson

What is the currency of change? What can coders (consumers) do with IATI data? How can suppliers deliver the data sets? Last week I had the honour of participating in the Open Data for Development Codeathon and the International Aid Transparency Initiative (IATI) Technical Advisory Group meetings. IATI’s goal is to make information about aid spending easier to access, use, and understand. It was great that these events were back-to-back, which helped push a big-picture view.

My big takeaways included similar themes that I have learned on my open source journey:

You can talk about open data [insert tech or OS project] all you want, but if you don’t have an interactive community (including mentorship programmes), an education strategy, an engagement/feedback loop plan, a translation/localization plan, and a process for people to learn how to contribute, then you build a double-edged barrier: a barrier to entry and a barrier to impact/contributor outputs.

Currency

About the Open Data in Development Codeathon

At the Codeathon close, Mark Surman, Executive Director of the Mozilla Foundation, gave us a call to action to make the web. Well, in order to create a world of data makers, I think we should run aid and development processes through this mindset. What is the currency of change? I hear many people talking about theory of change and impact, but I’d like to add ‘currency’. This is not only about money; it is about using the best brainpower and best energy sources to solve real-world problems in smart ways. I think if we heed Mark’s call to action with a “yes, and”, then we can rethink how we approach complex change. Every single industry is suffering from the same issue: how to deal with the influx of supply and demand in information. We need to change how we approach the problem. Combined events like these give a window into tackling problems in a new format. It is not about the next greatest app, but more about asking: how can we learn from the Webmakers and build with each other in our respective fields and networks?

Ease of Delivery

The IATI community/network is very passionate about moving the ball forward on releasing data. During the sessions, it was clear that the attendees see some gaps and are already working to fill them. The new IATI website is set up to grow with a community component. The feedback from each of the sessions was distilled by the IATI TAG and Civil Society Guidance groups to share with the IATI Secretariat.

In the Open Data in Development, Impact of Open Data in Developing Countries, and CSO Guidance sessions, we discussed some key items about sharing, learning, and using IATI data. Farai Matsika, with the International HIV/AIDS Alliance, was particularly poignant in reminding us of IATI’s CSO purpose: we need to share data with those we serve.


One of the biggest themes was data ethics. As we rush to ask NGOs and CSOs to release data, what are some of the data pitfalls? Anahi Ayala Iacucci of Internews and Linda Raftree of Plan International USA both reminded participants that data need to be anonymized to protect those at risk. Ms. Iacucci asked that we consider the complex nature of sharing both sides of the open data story – successes and failures. She also advised: don’t create trust, but think about who people are trusting. Turning this model around is key to rethinking assumptions. I would add to her point: trust and sharing are currency and will add to the success measures of IATI. If people don’t trust the IATI data, they won’t share and use it.

Anne Crowe of Privacy International frequently asked attendees to consider the ramifications of opening data. It is clear that the IATI TAG does not curate the data that NGOs and CSOs share; thus it falls on each of these organizations to learn how to be data makers in order to contribute data to IATI. Perhaps organizations need a lead educator and curator to ensure the future success of the IATI process, including quality data.

I think that School of Data and the Partnership for Open Data have a huge part to play with IATI. My colleague Zara Rahman is collecting user feedback for the Open Development Toolkit, and Katelyn Rogers is leading the Open Development mailing list. We collectively want to help people become data makers and consumers who can effectively achieve their development goals using open data. This also means tackling the ongoing questions about data quality and data ethics.


Here are some additional resources shared during the IATI meetings.

Read Full Post »

This is a cross-post from Tom Murphy, editor of the aid blog A View From the Cave. The original article can be found on Humanosphere. The post summarizes discussions at our November 21st New York City Technology Salon: Are Mobile Money Cash Grants the Future of Development? If you’d like to join us for future Salons, sign up here.

by Tom Murphy

Decades ago, some of the biggest NGOs simply gave away money to individuals in communities. People lined up and were just given cash.

The once popular form of aid went out of fashion, but it is now making a comeback.

Over time, coordination became extremely difficult. Traveling from home to home cost time and money for the NGO, and the same problem existed for recipients when they had to go to a central location. More significant was the shift in development thinking that said giving handouts was causing long-term damage.

The backlash against ‘welfare queens’ in the US, UK and elsewhere during the 1980s was reflected in international development programming. The problem was that it was all based on unproven theories of change and anecdotal evidence, rather than hard evidence.

Half a decade later, new research shows that just giving people money can be an effective way to build assets and even incomes. The findings were covered by major players like NPR and the Economist.

While exciting and promising, cash transfers are not a new tool in the development utility belt.

Various forms of transfers have emerged over the past decade. Food vouchers were used by the World Food Programme when responding to the 2011 famine in the Horn of Africa. Like food stamps in the US, the vouchers allowed people to buy food from local markets and get exactly what they needed while supporting the local economy.

The differences have sparked a sometimes heated debate within the development community as to what the findings about cash transfers mean going forward. A Technology Salon conversation hosted at ThoughtWorks in New York City last week featured some of the leading researchers and players in the cash transfer sector.

The salon-style conversation featured Columbia University professor and popular aid blogger Chris Blattman, GiveDirectly co-founder and UCSD researcher Paul Niehaus, and Plan USA CEO Tessie San Martin. The ensuing discussion, operating under the Chatham House Rule of no attribution, featured representatives from large NGOs, microfinance organizations and UN agencies.

Research from Kenya, Uganda and Liberia shows both the promise and shortcomings of cash transfers. For example, giving out cash in addition to training was successful in generating employment in northern Uganda. Another program, with the backing of the Ugandan government, saw success with the cash alone.

Some have argued that cash transfers should be the new benchmark for development and aid programs. Advocates in the discussion made the case that programs should be evaluated in terms of impact and cost-effectiveness against simply giving people cash.

That idea saw some resistance. The research from Liberia, for example, showed that money given to street youth would not be wasted, but it was not sufficient to generate long-lasting employment or income. There are capacity problems and much larger issues that probably cannot be addressed by cash alone.

An additional concern is the unintended negative consequences caused by cash transfers. One example given was that of refugees in Syria, where money labeled for rent was distributed to families. Despite warnings not to label the transfer, the program went ahead.

As a result, rents increased. The money intended to help reduce the cost incurred by rent was rendered largely useless. One participant raised the concern that cash transfers in such a setting could be ‘taxed’ by rebels or government fighters. There is a potential that aid organizations could help fund fighting by giving unrestricted cash.

The discussion made it clear that the applications of cash transfers are far more nuanced than they might appear. Kenya saw success in part because of the ease of sending money to people through mobile phones. Newer programs in India, for example, rely on what are essentially ATM cards.

Impacts, admitted practitioners, can go beyond simple incomes. Care has been taken to make sure that cash transfer programs do not dramatically change social structures in ways that cause problems for the community and recipients. In one case, giving women cash allowed them to participate in the local markets, a benefit to everyone except the existing shop oligarchs.

Governments in low and middle-income countries are seeing increasing pressure to establish social programs. The success of cash transfer programs in Brazil and Mexico indicate that it can be an effective way to lift people out of poverty. Testing is underway to bring about more efficient and context appropriate cash transfer schemes.

An important component in the re-emergence of cash transfers is looking back to previous efforts, said one NGO official. That official’s organization is systematically revisiting communities where it used to work in order to see what happened ten years later. The idea is to learn what impact the programs may or may not have had on those communities in order to inform future initiatives.

“Lots of people have concerns about cash, but we should have concerns about all the programs we are doing,” said a participant.

The lessons from the cash transfer research show that there is an increasing need for better evidence across development and aid programs. Researchers in the group argued that the ease of doing evaluations is improving.

Read the “Storified” version of the Technology Salon on Mobiles and Cash Transfers here.

Read Full Post »

This post was originally published on the Open Knowledge Foundation blog

A core theme that the Open Development track covered at September’s Open Knowledge Conference was Ethics and Risk in Open Development. There were more questions than answers in the discussions, summarized below, and the Open Development working group plans to further examine these issues over the coming year.

Informed consent and opting in or out

Ethics around ‘opt in’ and ‘opt out’ when working with people in communities with fewer resources, lower connectivity, and/or less of an understanding about privacy and data are tricky. Yet project implementers have a responsibility to work to the best of their ability to ensure that participants understand what will happen with their data in general, and what might happen if it is shared openly.

There are some concerns around how these decisions are currently being made and by whom. Can an NGO make the decision to share or open data from/about program participants? Is it OK for an NGO to share ‘beneficiary’ data with the private sector in return for funding to help make a program ‘sustainable’? What liabilities might donors or program implementers face in the future as these issues develop?

Issues related to private vs. public good need further discussion, and there is no one right answer because concepts and definitions of ‘private’ and ‘public’ data change according to context and geography.

Informed participation, informed risk-taking

The ‘do no harm’ principle is applicable in emergency and conflict situations, but is it realistic to apply it to activism? There is concern that organizations implementing programs that rely on newer ICTs and open data are not ensuring that activists have enough information to make an informed choice about their involvement. At the same time, assuming that activists don’t know enough to decide for themselves can come across as paternalistic.

As one participant at OK Con commented, “human rights and accountability work are about changing power relations. Those threatened by power shifts are likely to respond with violence and intimidation. If you are trying to avoid all harm, you will probably not have any impact.” There is also the concept of transformative change: “things get worse before they get better. How do you include that in your prediction of what risks may be involved? There also may be a perception gap in terms of what different people consider harm to be. Whose opinion counts and are we listening? Are the right people involved in the conversations about this?”

A key point is that whoever assumes the risk needs to be involved in assessing that potential risk and deciding what the actions should be — but people also need to be fully informed. With new tools coming into play all the time, can people be truly ‘informed’, and are outsiders who come in with new technologies doing a good enough job of facilitating discussions about possible implications and risk with those who will face the consequences? Are community members and activists themselves included in risk analysis, assumption testing, threat modeling and risk mitigation work? Is there a way to predict the likelihood of harm? For example, can we determine whether releasing ‘x’ data will likely lead to ‘y’ harm happening? How can participants, practitioners and program designers get better at identifying and mitigating risks?

When things get scary…

Even when risk analysis is conducted, it is impossible to predict or foresee every possible way that a program can go wrong during implementation. Then the question becomes what to do when you are in the middle of something that is putting people at risk or leading to extremely negative unintended consequences. Who can you call for help? What do you do when there is no mitigation possible and you need to pull the plug on an effort? Who decides that you’ve reached that point? This is not an issue that exclusively affects programs that use open data, but open data may create new risks with which practitioners, participants and activists have less experience, thus the need to examine it more closely.

Participants felt that there is not enough honest discussion on this aspect. There is a pop culture of ‘admitting failure’ but admitting harm is different because there is a higher sense of liability and distress. “When I’m really scared shitless about what is happening in a project, what do I do?” asked one participant at the OK Con discussion sessions. “When I realize that opening data up has generated a huge potential risk to people who are already vulnerable, where do I go for help?” We tend to share our “cute” failures, not our really dismal ones.

Academia has done some work around research ethics, informed consent, human subject research and the use of Institutional Review Boards (IRBs). What aspects of this can or should be applied to mobile data gathering, crowdsourcing, open data work and the like? What about when citizens are their own source of information and they voluntarily share data without a clear understanding of what happens to the data, or what the possible implications are?

Do we need to think about updating and modernizing the concept of IRBs? A major issue is that many people who are conducting these kinds of data collection and sharing activities using new ICTs are unaware of research ethics and IRBs and don’t consider what they are doing to be ‘research’. How can we broaden this discussion and engage those who may not be aware of the need to integrate informed consent, risk analysis and privacy awareness into their approaches?

The elephant in the room

Despite our good intentions to do better planning and risk management, one big problem is donors, according to some of the OK Con participants. Do donors require enough risk assessment and mitigation planning in their program proposal designs? Do they allow organizations enough time to develop a well-thought-out and participatory Theory of Change along with a rigorous risk assessment together with program participants? Are funding recipients required to report back on risks and how they played out? As one person put it, “talk about failure is currently more like a ‘cult of failure’ and there is no real learning from it. Systematically we have to report up the chain on money and results and all the good things happening, and no one at the top really wants to know about the bad things. The most interesting learning doesn’t get back to the donors or permeate across practitioners. We never talk about all the work-arounds and backdoor negotiations that make development work happen. This is a serious systemic issue.”

Greater transparency can actually be a deterrent to talking about some of these complexities, because “the last thing donors want is more complexity as it raises difficult questions.”

Reporting upwards to government representatives in Parliament or Congress leads to continued aversion to any failures or ‘bad news’. Though funding recipients are urged to be innovative, they still need to hit numeric targets so that the international aid budget can be defended in government spaces. Thus, the message is mixed: “Make sure you are learning and recognizing failure, but please don’t put anything too serious in the final report.” There is awareness that rigid program planning doesn’t work and that we need to be adaptive, yet we are asked to “put it all into a log frame and make sure the government aid person can defend it to their superiors.”

Where to from here?

It was suggested that monitoring and evaluation (M&E) could be used as a tool for examining some of these issues, but M&E needs to be seen as a learning component, not only an accountability one. M&E needs to feed into the choices people are making along the way, and linking it in well during program design may be one way to build in a more adaptive and iterative approach. M&E should force practitioners to ask themselves the right questions as they design programs and as they assess them throughout implementation. A Theory of Change might help, and an ethics-based approach could be introduced as well to raise these questions about risk and privacy and ensure that they are addressed from the start of an initiative.

Practitioners have also expressed the need for additional resources to help them predict and manage possible risk: case studies, a safe space for sharing concerns during implementation, people who can help when things go pear-shaped, a menu of methodologies, a set of principles or questions to ask during program design, or even an ICT4D Implementation Hotline or a forum for questions and discussion.

These ethical issues around privacy and risk are not exclusive to Open Development. Similar issues were raised last week at the Open Government Partnership Summit sessions on whistle blowing, privacy, and safeguarding civic space, especially in light of the Snowden case. They were also raised at last year’s Technology Salon on Participatory Mapping.

A number of groups are looking more deeply into this area, including the Capture the Ocean Project, The Engine Room, IDRC’s research network, The Open Technology Institute, Privacy International, GSMA, those working on “Big Data,” those in the Internet of Things space, and others.

I’m looking forward to further discussion with the Open Development working group on all of this in the coming months, and will also be putting a little time into mapping out existing initiatives and identifying gaps when it comes to these cross-cutting ethics, power, privacy and risk issues in open development and other ICT-enabled data-heavy initiatives.

Please do share information, projects, research, opinion pieces and more if you have them!

Read Full Post »

This is a cross-post by Duncan Edwards from the Institute of Development Studies. Duncan and I collaborated on some sessions for the Open Development stream at September’s Open Knowledge Conference, and we are working on a few posts to sum up what we discussed there and highlight some lingering thoughts on open development and open data. This post was originally published on the Open Knowledge Foundation blog on October 21, 2013

by Duncan Edwards

I’ve had a lingering feeling of unease that things were not quite right in the world of open development and ICT4D (Information and communication technology for development), so at September’s Open Knowledge Conference in Geneva I took advantage of the presence of some of the world’s top practitioners in these two areas to explore the question: How does “openness” really effect change within development?

Inspiration for the session came from a number of conversations I’ve had over the last few years. My co-conspirator/co-organiser of the OKCon side event “Reality check: Ethics and Risk in Open Development,” Linda Raftree, had also been feeling uncomfortable with the framing of many open development projects, assumptions being made about how “openness + ICTs = development outcomes,” and a concern that risks and privacy were not being adequately considered. We had been wondering whether the claims made by Open Development enthusiasts were substantiated by any demonstrable impact. For some reason, as soon as you introduce the words “open data” and “ICT,” good practice in development gets thrown out the window in the excitement to reach “the solution”.

A common narrative in many “open” development projects goes along the lines of “provide access to data/information –> some magic occurs –> we see positive change.” In essence, because of the newness of this field, we only know what we THINK happens, we don’t know what REALLY happens because there is a paucity of documentation and evidence.

It’s problematic that we often use the terms data, information, and knowledge interchangeably, because:

  • Data is NOT knowledge.
  • Data is NOT information.
  • Information is NOT knowledge.
  • Knowledge IS what you know. It’s the result of information you’ve consumed, your education, your culture, beliefs, religion, experience – it’s intertwined with the society within which you live.

Data cake metaphor developed by Mark Johnstone.

Understanding and thinking through how we get from the “openness” of data, to how this affects how and what people think, and consequently how they MIGHT act, is critical in whether “open” actually has any additional impact.

At Wednesday’s session, panellist Matthew Smith from the International Development Research Centre (IDRC) talked about the commonalities across various open initiatives. Matthew argued that a larger Theory of Change (ToC) around how ‘open’ leads to change on a number of levels could allow practitioners to draw out common points. The basic theory we see in open initiatives is “put information out, get a feedback loop going, see change happen.” But open development can be sliced in many ways, and we tend to work in silos when talking about openness. We have open educational resources, open data, open government, open science, etc. We apply ideas and theories of openness in a number of domains but we are not learning across these domains.

We explored the theories of change underpinning two active programmes that incorporate a certain amount of “openness” in their logic. Simon Colmer from the Knowledge Services department at the Institute of Development Studies outlined his department’s theory of change of how research evidence can help support decision-making in development policy-making and practice. Erik Nijland from HIVOS presented elements of the theory of change that underpins the Making All Voices Count programme, which looks to increase the links between citizens and governments to improve public services and deepen democracy. Both of these ToCs assume that because data/information is accessible, people will use it within their decision-making processes.

They also both assume that intermediaries play a critical role in analysis, translation, interpretation, and contextualisation of data and information to ensure that decision makers (whether citizens, policy actors, or development practitioners) are able to make use of it. Although access is theoretically open, in practice even mediated access is not equal – so how might this play out in respect to marginalised communities and individuals?

What neither ToC really does is unpack who these intermediaries are. What are their politics? What are their drivers for mediating data and information? What is the effect of this? A common assumption is that intermediaries are somehow neutral and unbiased – does this assumption really hold true?

What many open data initiatives do not consider is what happens after people are able to access and internalise open data and information. How do people act once they know something? As Vanessa Herringshaw from the Transparency and Accountability Initiative said in the “Raising the Bar for ambition and quality in OGP” session, “We know what transparency should look like but things are a lot less clear on the accountability end of things”.

There are a lot of unanswered questions. Do citizens have the agency to take action? Who holds power? What kind of action is appropriate or desirable? Who is listening? And if they are listening, do they care?

Linda finished up the panel by raising some questions around the assumptions that people make decisions based on information rather than on emotion, and that there is a homogeneous “public” or “community” that is waiting for data/information upon which to base their opinions and actions.

So as a final thought, here’s my (perhaps clumsy) 2013 update on Gil Scott-Heron’s 1970 song “The Revolution Will Not Be Televised”:

“The revolution will NOT be in Open data,
It will NOT be in hackathons, data dives, and mobile apps,
It will NOT be broadcast on Facebook, Twitter, and YouTube,
It will NOT be live-streamed, podcast, and available on catch-up
The revolution will not be televised”

Scott-Heron’s point, which holds true today, was that “the revolution,” or change, starts in the head. We need to think carefully about how we get far beyond access to data.

Look out for a second post coming soon on Theories of Change in Open, and a third post on ethics and risk in open data and open development.

And if you’re interested in joining the conversation, sign up to our Open Development mailing list.


According to the latest GSMA statistics, nearly 50% of people in the developing world own a mobile phone and almost 70% have access to one. With mobile access increasing daily, opportunities to use mobiles in development initiatives continue to grow. The area of Mobiles for Development (M4D) has attracted investment from all sides, including mobile operators, entrepreneurs, investors and international development agencies – all working to generate social impact and improve wellbeing at the base of the pyramid. However, efforts to scale M4D initiatives and make them sustainable have largely failed.

Our July 25th NYC Technology Salon examined the topic of Scaling M4D. Lead discussants Corina Gardner from the GSMA’s Mobile Development Intelligence (MDI) unit and Sean McDonald from FrontlineSMS joined us to kick off the conversation, which was hosted at the Rockefeller Foundation.

The Salon started off with key points from the MDI report “Scaling Mobile for Development: A developing world opportunity,” which highlights sustainable and scaled impact as the central challenge (and opportunity) in M4D over the next five years. Because the GSMA is commercially focused (it is made up of telecommunication industry members), business models that can achieve both revenue goals and added value for customers are a core concern. GSMA is interested in finding M4D business models that convince industry to re-invest and replicate. However, not many such examples exist.

Business models at the base of the pyramid (BOP) tend to be different from what the industry is used to. If scale is defined as the number of people reached with a service, and the population being reached has little money, then the only clear business model is one built on a huge customer base. Given that international development agencies also want to achieve scale with development programs, there is good potential overlap in M4D. In order to achieve good impact, it’s critical to think through what BOP users want and need, and what offers real value to them for their limited resources.

Innovative vehicles are needed for investing in M4D. Currently, M4D financing tends to take two distinct paths: international development funding and venture capital (VC) funding. Hiccups occur because the two operate very differently and do not always work well together. International development funds and processes do not move as quickly as technology-based funds, and there is low tolerance for uncertainty and a desire for initial proof of potential impact, adoption and uptake. On the VC side, there is a desire for a light overhead structure modeled after Silicon Valley; however, in African countries, for example, there is often little existing infrastructure, meaning a heavier structure and a slower process. In addition, the exit strategy may not be clear. A worst-case scenario is when one of the two types of financing bodies is interested in investing, yet both walk away when they see the other at the table.

Though very few examples of M4D at scale exist, some elements brought up during the Salon that need to be considered include:

  • User-centric design. It is critical to understand the community and the end user’s needs, demands, and payment capacity. Both the private sector and international development agencies have existing approaches to developing M4D initiatives that focus on understanding local context and on consultation and engagement with users, but the two sectors use different language to describe these approaches and often talk past each other without connecting on their commonalities. According to one discussant, the best and most user-friendly design is that with the lowest barrier to access, the simplest technology, and the cleanest interface and configurability, so that people can build in more complexity if needed. Such designs also tend to be the most replicable, an important element of scale. Iterative design and getting prototypes in front of users for feedback are needed, and this can be a challenge in M4D programs being done within typical international development cycles of planning and funding.

User data. Users at the base of the pyramid are both financially poor and “data poor” and companies cannot create products for users that they know nothing about. Mobile can help gather data on user behaviors. This data can be used to inform business models, create products and services of value for BOP users, and to create revenue streams. One key question is that of how the data can be better used to benefit the BOP more broadly.

  • Understanding what ‘scale’ means for different parties. For mobile operators, scale is important because it is linked to numbers, volume and revenue. However, this is not the element that matters most for those working in international development, where impact may be a more important measure of success. Uptake of an M4D service may be due to advertising rather than to any measurable impact on the life of a user. The difference needs to be understood and better analyzed and documented before success, scale, or impact is claimed. One measure of success is improved and sustained functioning of broader systems — and mobile may be only one small piece of a well-functioning development program, information ecosystem, or service delivery effort. As one discussant noted, “I don’t care if someone uses mobile banking or branch banking, so long as they are banking.” The mobile device may not be the central piece; it may be an additional access point for people who were formerly left out of these systems. In addition, “reaching” people is different from “influencing” people, and the latter will likely have more of an impact. Trust is critical in these efforts to influence, and often that takes more than a mobile connection.

  • Infrastructure. The case for improved networks, coverage, and other infrastructure (electricity, for example) needs to be made to operators and government. The urban-rural infrastructure divide is a global issue, not just one in so-called ‘developing economies.’ For example, using 4G and a credit card, someone in the DRC can order a product on Amazon; however, Amazon will not be able to deliver that product. Similarly, someone can report poor government services via a mobile phone, but until infrastructure and governance improve, there may be no response.

  • Payment. Operators make a mistake when they give away free SMS to NGOs, said one discussant. Instead, having to pay a small amount (either as an NGO or as an end user) means that much more care is taken with what is communicated. “If it costs 5 cents to send a message, you will not spam people.” This is also critical for building in sustainability, and it is where the best ROI tends to be found in technology-influenced programming. More thought and research are needed on payment and on sustainable, scalable models.

  • Due diligence. A challenge in the M4D space is the high incidence of people seeing a problem, thinking no one has addressed it, and jumping in to build their own solution. This wastes money and time and creates churn. It is important to do research and to layer onto and build from other people’s ideas and existing solutions. One problem with the idea of due diligence, according to a participant, is that it means different things to different people. In technology it means “you have a problem; what is the cheapest and most robust solution?”, but in the field of international development, context discovery takes a very long time and requires multidisciplinary knowledge and awareness that go far beyond technology. There is also a need to consider whether technology (as opposed to non-digital efforts) is the most viable solution for the information and communication situation at hand. ‘Horizontal due diligence’ (looking at partnerships) and due diligence with regard to maximizing existing systems are also needed.

  • Monitoring and Evaluation (M&E). M&E is currently sub-par on most M4D initiatives, said one participant. Organizations are often doing their own M&E rather than having a third party conduct external M&E. There is also a lack of comparative data on M4D programs, and M&E is often attempted at the end of a project rather than built in from the start. A greater presence of academia is needed in M4D work, it was noted, and we also need more qualitative data, as the current emphasis is on the quantitative data that are collected more easily via mobiles. One benefit of M4D programs is the ability to digitize and intelligently store data from the very start; this is the way to show scale and impact, said another participant. However, data need to be well used and refined, and available to the right person or people at the right time. Greater respect for and understanding of privacy and ethical issues, along with helping people to understand and steward their users’ data, are also critically important and need more attention.

Salons function under Chatham House Rule, thus no attribution has been made. Sign up here to join the Technology Salon mailing list to receive invitations for future events in NYC, DC, San Francisco, London and Nairobi!


This is a cross-post from Tessie San Martin, CEO of Plan International USA. Tessie’s original post is published on the Plan International USA blog. For more on the status of the International Aid Transparency Initiative (IATI) in the US and information on which donors sit where on the Transparency Index, visit Publish What You Fund.

Over 40 governments, along with UN organizations and the World Bank, have committed to a common standard and time schedule for publishing aid information under the International Aid Transparency Initiative (IATI).  There are high expectations for this initiative. The ultimate objective is to increase the effectiveness of donor assistance, making aid work for those whom we are trying to help and contributing to accelerated development outcomes on the ground. IATI is good news for increased accountability, can help improve coordination, and provides a space for engaging donors, communities, governments and the general public in a broader development dialogue.

Secretary of State Clinton signed on behalf of the US Government in November 2011. While US engagement has been very welcome, US Government performance in terms of actually executing IATI has left much to be desired. Publish What You Fund, an organization helping to ensure governments are held to their initial aid transparency commitments, ranked only one out of six US agencies (the MCC) in the ‘fair’ category in terms of execution. Recently, organizations like Oxfam and ONE have rightly questioned the US Government’s commitment and progress, and exhorted the Obama administration to make full compliance with the IATI standard a priority.

But with all the attention focused on how the USG is performing, what are INGOs doing about IATI? After all, governments can only open access to the data they have. Official development assistance is an ever smaller proportion of total aid flows, so having INGOs — particularly, and at the very least, the largest global INGOs — also committed to this process is vital to the success of the Initiative.

What are INGOs doing about IATI? The answer is: not much.

Very few INGOs have committed to publishing their information to the IATI standard.  INGOs that have complied are doing so primarily because a donor is requiring it.  For example, DfID, the UK foreign aid agency, has such a requirement and, as a result, the UK has the largest number of INGOs in compliance.  The US Government has not imposed this requirement on US-based INGOs and it is not likely to do so in the future.  It is therefore not surprising that US-based INGOs have not shown much interest in IATI.

This is a lost opportunity for everyone.  Accountability and transparency are as relevant to the private and the non-profit side of development assistance as they are to the public side.

At Plan International, an INGO with offices in almost 70 countries, it is not surprising that the part of our organization making the fastest strides in this area is our office in the United Kingdom. As an important recipient of DfID money, they were instructed to do so. In the US, though Plan International USA is not a major recipient of USG funding, we believe that making the investment to comply with the IATI reporting format and timelines is good development practice; we are thus committed to publishing to IATI in the next year. How can we effectively preach transparency and increased accountability to our recipient communities and to the governments with which we are working, yet not commit to something as eminently commonsensical as uniform formats, comparable data sets and systematic reporting frequencies?
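For readers wondering what “uniform formats and comparable data sets” look like in practice: IATI data is published as structured XML. Below is a minimal, purely illustrative Python sketch of the skeleton of an IATI-style activity record. The element names follow the general shape of the standard, but the real schema (see iatistandard.org) requires attributes, codelists and many more elements than shown here, and the identifier and all text values are invented.

```python
import xml.etree.ElementTree as ET

# Purely illustrative skeleton: the real IATI schema requires version
# attributes, codelists and many more elements than shown. The
# identifier and text values below are entirely made up.
activities = ET.Element("iati-activities")
activity = ET.SubElement(activities, "iati-activity")
ET.SubElement(activity, "iati-identifier").text = "XX-EXAMPLE-12345"
ET.SubElement(activity, "title").text = "Hypothetical WASH project"
ET.SubElement(activity, "description").text = (
    "A made-up activity record, published in a uniform, machine-readable "
    "format so that it can be compared with records from other publishers."
)

# Serialize to a string; a real publisher would validate this against
# the IATI schema before posting it to the IATI Registry.
print(ET.tostring(activities, encoding="unicode"))
```

The point of the exercise is not the code itself but the discipline it represents: once every publisher structures records the same way, anyone can aggregate and compare aid flows across organizations.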

We are not Pollyannaish about the task. Like all INGOs pondering whether and how to comply with IATI, we have many concerns, including the costs of complying and what compliance will do to our overhead (and therefore to ratings like our Charity Navigator score). We have established an internal project code so we can better capture, track and understand the costs involved in this initiative. And we are evaluating where we draw the line in terms of the size of the projects on which we should be reporting, balancing costs against the desire to maximize disclosure (it is also worth remembering that rating agencies themselves are placing increasing emphasis on transparent reporting, so rating concerns may ultimately support a move towards greater IATI compliance).

As we have moved forward, we have had many issues to address, including privacy concerns, since a fair bit of Plan’s internal documentation was not written with the idea that it would one day be shared with the public.  Publishing some information may pose security risks for minority or political groups being supported.  These situations have been contemplated by IATI already, however, and there are valid exemptions for sensitive data.  We have also learned that there are many resources to help INGOs navigate the IATI compliance waters.  These resources are not well known to US INGOs, and need to be better publicized. Plan in the US, of course, is also benefiting from the research and hard work our UK office has done to comply with DfID’s mandate, allowing us to start off on a strong foundation of organizational experience.

I am convinced that IATI is not just good development practice but also makes good business sense. At the same time, it is worth remembering that IATI is not the entire solution.  IATI is designed to improve upward accountability to donors and taxpayers.  It is not designed explicitly to improve accountability to the children and communities with which we are partnering and whom we serve. And, as the ultimate goal is improved aid effectiveness, data must be accompanied by better information about goals, methodologies and approaches.  We also need to get better at sharing not just successes but failures within our federation and across all development organizations.

Despite all the shortcomings, IATI is a good start.  And as we push the US Government to do better, INGOs need to be pushing themselves to do better as well.


At Catholic Relief Services’ annual ICT4D meeting in March 2013, I worked with Jill Hannon from the Rockefeller Foundation’s Evaluation Office to organize three sessions on the use of ICT for Monitoring and Evaluation (ICTME). The sessions covered the benefits (known and perceived) of using ICTs for M&E, the challenges and barriers organizations face when doing so, and some lessons and advice on how to integrate ICTs into the M&E process.

Our lead discussants in the three sessions included: Stella Luk (Dimagi), Guy Sharrack (CRS), Mike Matarasso (CRS), David McAfee (HNI/Datawinners), Mark Boots (Votomobile), and Teressa Trusty (USAID’s IDEA/Mobile Solutions). In addition, we drew from the experiences and expertise of some 60 people who attended our two round table sessions.

Benefits of integrating ICTs into the M&E process

Some of the potential benefits of integrating ICTs mentioned by the various discussants and participants in the sessions included:

  • More rigorous, higher quality data collection and more complete data
  • Reduction in required resources (time, human, money) to collect, aggregate and analyze data
  • Reduced complexity where data systems are simplified, and thus increased productivity and efficiency
  • Combined information sources and types and integration of free form, qualitative data with quantitative data
  • Broader general feedback from a wider public via ICT tools like SMS; inclusion of new voices in the feedback process, elimination of the middleman to empower communities
  • Better cross-sections of information, information comparisons; better coordination and cross-comparing if standard, open formats are used
  • Trend-spotting with visualization tools
  • Greater data transparency and data visibility, easier data auditing
  • Real-time or near real-time feedback “up the chain” that enables quicker decision-making, adaptive management, improved allocation of limited resources based on real-time data, quicker communication of decisions/changes back to field-level staff, faster response to donors and better learning
  • Real-time feedback “down the ladder” that allows for direct citizen/beneficiary feedback, and complementing of formal M&E with other social monitoring approaches
  • Scale, greater data security and archiving, and less environmental impact
  • Better user experience for staff as well as skill enhancement and job marketability and competitiveness of staff who use the system

Barriers and challenges of integrating ICTs into M&E processes

A number of challenges and barriers were also identified, including:

  • A lack of organizational capacity to decide when to use ICTs in M&E, for what, and why, and to choose the right ICT (if any) for the situation. Organizations may find it difficult to get beyond collecting the data to making better use of data for decision-making and coordination. There is often low staff capacity, low uptake of ICT tools and resistance to change.
  • A tendency to focus on surveys and to pay less attention to other types of M&E input, such as qualitative input. Scaling the analysis of large-scale qualitative feedback is also a challenge: “How do you scale qualitative feedback to 10,000 people or more? People can give their feedback in a number of languages by voice. How do you mine that data?”
  • The temptation to offload excessive data collection to frontline staff without carefully selecting what data is actually going to be used and useful for them or for other decision-makers.
  • M&E is often tacked on at the end of a proposal design. The same is true for ICT. Both ICT and M&E need to be considered and “baked in” to a process from the very beginning.
  • ICT-based M&E systems have dropped the ball on sharing data back. “Clinics in Ghana collect a lot of information that gets aggregated and moved up the chain. What doesn’t happen is sharing that information back with the clinic staff so that they can see what is happening in their own clinic and why. We need to do a better job of giving information back to people and closing the loop.” This step is also important for accountability back to communities. On the whole, we need to be less extractive. (A simple sketch of what closing the loop might look like in practice follows this list.)
  • Available tools are not always exactly right, and no tool seems to provide everything an organization needs, making it difficult to choose the right tool. There are too many solutions, many of which are duplicative, and often the feature sets and the usability of these tools are both poor. There are issues with sustainability and ongoing maintenance and development of M&E platforms.
  • Common definitions for data types and standards for data formatting are needed. The lack of interoperability among ICT solutions also causes challenges. As a field, we don’t do enough linking of systems together to see a bigger picture of which programs are doing what, where and who they are impacting and how.
  • Security and privacy are not adequately addressed. Many organizations and technology providers are unaware of the ethical implications of collecting data via new tools and channels. Many organizations are unclear about the ethical standards for research, versus information that is offered up by different constituents or “beneficiaries” (e.g., information provided by people participating in programs that use SMS or that collect information through SMS-based surveys), versus monitoring and evaluation information. It is also unclear what the rules are for information collected by private companies, who this information can be shared with, and what privacy laws mean for ICT-enabled M&E and other types of data collection. If there are too many barriers to collecting information, however, the amount of information collected will be reduced; a balance needs to be found. The information that telecommunications companies hold is something to think about when considering privacy and consent issues, especially in situations of higher vulnerability and risk. (UNOCHA has recently released a report that may be useful.)
  • Not enough is understood about the motivations and incentives for staff or community members to participate or share data: “Where does my information go? Do I see the results? Why should I participate? Is anyone responding to my input?” In addition, the common issues of cost, access, capacity, language, literacy, and cultural barriers are very much present in attempts to collect information directly from community members. Another question is that of inclusion: does ICT-enabled data collection or surveying leave certain groups out? (See this study on intrinsic vs extrinsic motivation for feedback.)
  • Donors often push or dictate the use of ICT when it is perhaps not the most useful choice for the situation. In addition, there is normally not enough time during the proposal process for organizations to work on buy-in and good design of an ICT-enabled M&E system. There is often a demand from the top for excessive data collection without an understanding of the effort required to collect it, or of the time/resource trade-offs when excessive data collection leads to less time spent on program implementation. “People making decisions in the capital want to add all these new questions and information, and that can be a challenge… What data are valuable to collect? Who will respond to them? Who will use them as the project goes forward?”
  • There seems to be a focus on top-down, externally created solutions rather than building on local systems and strengths or supporting local organizations or small businesses to strengthen their ICTME capacities. “Can strengthening local capacity be an objective in its own right? Are donors encouraging agencies to develop vertical ICTME solutions without strengthening local systems and partners?”
  • A results-based, data-driven focus can bias efforts toward what is countable and leave out complex development processes whose impacts are more difficult to count or measure.
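
To make the “closing the loop” point above more concrete, here is a minimal sketch in Python of how records collected via mobile might be aggregated per clinic and turned into a short summary to hand back to the clinic that produced the data, rather than only being rolled up the chain. All field names, clinic names and figures are hypothetical, and the delivery channel (print-out, notice board, SMS) is a separate design decision.

```python
from collections import defaultdict

# Hypothetical monthly records as they might arrive from a mobile data
# collection tool; the field names and figures are illustrative only.
records = [
    {"clinic": "Clinic A", "month": "2013-03", "visits": 120, "stockout_days": 2},
    {"clinic": "Clinic A", "month": "2013-04", "visits": 95, "stockout_days": 6},
    {"clinic": "Clinic B", "month": "2013-03", "visits": 80, "stockout_days": 0},
    {"clinic": "Clinic B", "month": "2013-04", "visits": 110, "stockout_days": 1},
]

def clinic_summaries(records):
    """Aggregate records per clinic instead of only rolling them up the chain."""
    by_clinic = defaultdict(list)
    for r in records:
        by_clinic[r["clinic"]].append(r)

    summaries = {}
    for clinic, rows in by_clinic.items():
        total_visits = sum(r["visits"] for r in rows)
        avg_stockouts = sum(r["stockout_days"] for r in rows) / len(rows)
        summaries[clinic] = (
            f"{clinic}: {total_visits} visits over {len(rows)} months; "
            f"average {avg_stockouts:.1f} stock-out days per month."
        )
    return summaries

# Each summary goes back to the clinic that produced the data, so staff
# can see what is happening in their own clinic, not just feed the chain.
for summary in clinic_summaries(records).values():
    print(summary)
```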

Lessons and good practice for integrating ICTs into M&E processes

ICT is not a silver bullet – it presents its own set of challenges. But a number of good practices surfaced:

  • The use of ICTs for M&E is not just a technology issue, it’s a people and processes issue too, and it is important to manage the change carefully. It’s also important to keep an open mind that ICT4D to support M&E might not always be the best use of scarce resources – there may be more pressing priorities for a project. Getting influential people on your side to support the cause and help leverage funding and support is critical. It’s also important to communicate goals and objectives clearly, and provide incentives to make sure ICTs are successfully adopted. The trick is keeping up with technology advances to improve the system, but also keeping your eye on the ball.
  • When designing an ICTME effort, clarity of purpose and a holistic picture of the project M&E system are needed in order to review options for where ICT4D can best fit. Don’t start with the technology. Start with the M&E purpose and goals and focus on the business need, not the gadgets. Have a detailed understanding of M&E data requirements and data flows as a first step. Follow those with iterative discussions with ICT staff to specify the ICT4D solution requirements.
  • Select an important but modest project to start with and pilot in one location – get it right and work out the glitches before expanding to a second tier of pilots or expanding widely. Have a fully functional model to share for broad buy-in and collect some hard data during the pilot to convince people of adoption. The first ICT initiative will be the most important.  If it is successful, use of ICTs will likely spread throughout an organization.  If the first initiative fails, it can significantly push back the adoption of ICTs in general. For this reason, it’s important to use your best people for the first effort. Teamwork and/or new skill sets may be required to improve ICT-enabled M&E. The “ICT4D 2.0 Manifesto” talks about a tribrid set of skills needed for ICT-enabled programs.
  • Don’t underestimate the need for staff training and ongoing technical assistance to ensure a positive user experience, particularly when starting out. Agencies need to find the right balance between being able to provide support for a limited number of ICT solutions versus the need to support ongoing local innovation.  It’s also important to ask for help when needed.  The most successful M&E projects are led by competent managers who seek out resources both inside and outside their organizations.
  • Good ICT-enabled M&E comes from a partnership among program, M&E and ICT staff, with technical support both internal and external to the organization. Having a solid training curriculum and a good help desk are important. In addition, in-built capacity to design the original architecture and to maintain and adjust the system is a good idea. A lead business owner and manager for the system need to be in place, as well as global- and local-level pioneers and strong leadership (with budget!) to do testing and piloting. At the local level, it is important to have an energetic and savvy local M&E pioneer who has a high level of patience and understands technology.
  • At the community level, a key piece is understanding who you need to hear from for effective M&E and ensuring that ICT tools are accessible to all. It’s also critical to understand who you are ignoring or not reaching with any tool or process. Are women and children left out? What about income level? Those who are not literate?
  • Organizations should also take care that they are not replacing or obliterating existing human responsibilities for evaluation. For example, at the community level in Ghana, Assembly Members currently hold the responsibility for representing citizen concerns. An ICT-enabled feedback loop might undermine this responsibility if it seeks direct-from-citizen evaluation input. The issues of trust and the human-to-human link also need consideration: ICT cannot and should not be a replacement for everything. New ICT tools can increase the number of people and factors evaluated, not just increase the efficiency of existing evaluations.
  • Along the same lines, it’s important not to duplicate existing information systems, create parallel systems or fragment the government’s own systems. Organizations should be strengthening local government systems and working with government to use the information to inform policy and help with decision-making and implementation of programs.
  • Implementers need to think about the direction of information flow: is it valuable to share results “upward” as well as “downward”? It is possible to integrate local decision-making into a system. Systems can be created that allow for immediate local-level decision-making based on survey input, with key survey questions linked to indicators that allow for immediate discussion and solutions to improve service provision (see the sketch after this list).
  • Also, the potential political and social implications of greater openness in information flows need to be considered. Will local, regional and national governments embrace the openness and transparency that ICTs offer? Are donors and NGOs potentially putting people at risk?
  • For best results, pick a feasible and limited number of quality indicators and think through how frontline workers will be motivated to collect the data. Excessive data collection will interfere with or impede service delivery. Make sure managers are capable of handling and analyzing data that comes in and reacting to it, or there is no point in collecting it. It’s important to not only think about what data you want, but how this data will be used. Real-time data collected needs to be actionable. Be sure that those submitting data understand what data they have submitted and can verify its accuracy. Mobile data collection needs to be integrated into real processes and feedback loops. People will only submit information or reports if they see that someone cares about those reports and does something about them.
  • Collecting data through mobile technology may change the behavior being monitored or tracked. One participant commented that when his organization implemented an ICT-based system to track staff performance, people started doing unnecessary activities so that they could tick off the system boxes rather than doing what they knew should be done for better program impact.
  • At the practical level, tips include having robust options for connectivity and power solutions, testing the technology in the field in a real situation, securing reduced costs from vendors through bulk purchasing and master agreements, and using standard vendor tools instead of custom building. It’s good to keep the system as simple, efficient and effective as possible and to avoid redundancy or the addition of features that don’t truly offer more functionality.
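
As a rough illustration of the point above about linking survey questions to indicators, here is a minimal Python sketch of how incoming survey answers might trigger an immediate local-level flag for discussion, rather than waiting for a central analysis cycle. The indicator names, thresholds and responses are all hypothetical, and a real system would sit inside whatever survey platform an organization already uses.

```python
# Hypothetical indicator rules; the names and thresholds are invented.
# "min" rules flag when the share of "yes" answers falls below the
# threshold; "max" rules flag when it rises above it.
INDICATORS = {
    "water_point_functional": {"threshold": 0.80, "direction": "min"},
    "wait_time_over_2h": {"threshold": 0.25, "direction": "max"},
}

def flag_indicators(responses):
    """Compare the share of 'yes' answers per question against each
    indicator's threshold and return locally actionable flags."""
    flags = []
    for question, rule in INDICATORS.items():
        answers = [r[question] for r in responses if question in r]
        if not answers:
            continue
        share_yes = sum(answers) / len(answers)
        below_min = rule["direction"] == "min" and share_yes < rule["threshold"]
        above_max = rule["direction"] == "max" and share_yes > rule["threshold"]
        if below_min or above_max:
            flags.append(
                f"{question}: {share_yes:.0%} vs threshold {rule['threshold']:.0%}"
            )
    return flags

# Example: three household responses (1 = yes, 0 = no).
responses = [
    {"water_point_functional": 1, "wait_time_over_2h": 1},
    {"water_point_functional": 0, "wait_time_over_2h": 1},
    {"water_point_functional": 1, "wait_time_over_2h": 0},
]
for flag in flag_indicators(responses):
    print("Needs local discussion:", flag)
```

The design point is that the flag is produced where the data is collected, so a local team can discuss and act on it immediately instead of waiting for results to travel up the chain and back.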

Thanks to all our participants and lead discussants at the sessions!

Useful information and guides on ICTME:

Mobile-based technology for monitoring and evaluation: A reference guide for project managers, M&E specialists, researchers, donors

3 Reports on mobile data collection

Other posts on ICTs for M&E:

12 tips on using ICTs for social monitoring and accountability

11 points on strengthening local capacity to use new ICTs for M&E

10 tips on using new ICTs for qualitative M&E

Using participatory video for M&E

ICTs and M&E at the South Asia Evaluators’ Conclave
