
Archive for the ‘monitoring and evaluation’ Category

It’s been two weeks since we closed out the M&E Tech Conference in DC and the Deep Dive in NYC. For those of you who missed it or who want to see a quick summary of what happened, here are some of the best tweets from the sessions.

We’re compiling blog posts and related documentation and will be sharing more detailed summaries soon. In the meantime, enjoy a snapshot!

Read Full Post »

Today as we jump into the M&E Tech conference in DC (we’ll also have a Deep Dive on the same topic in NYC next week), I’m excited to share a report I’ve been working on for the past year or so with Michael Bamberger: Emerging Opportunities in a Tech-Enabled World.

The past few years have seen dramatic advances in the use of hand-held devices (phones and tablets) for program monitoring and for survey data collection. Progress has been slower with respect to the application of ICT-enabled devices for program evaluation, but this is clearly the next frontier.

In the paper, we review how ICT-enabled technologies are already being applied in program monitoring and in survey research. We also review areas where ICTs are starting to be applied in program evaluation and identify new areas in which new technologies can potentially be applied. The technologies discussed include hand-held devices for quantitative and qualitative data collection and analysis, data quality control, GPS and mapping devices, environmental monitoring, satellite imaging and big data.

While the technological advances and the rapidly falling costs of data collection and analysis are opening up exciting new opportunities for monitoring and evaluation, the paper also cautions that more attention should be paid to basic quality control questions that evaluators normally ask about representativity of data and selection bias, data quality and construct validity. The ability to use techniques such as crowd sourcing to generate information and feedback from tens of thousands of respondents has so fascinated researchers that concerns about the representativity or quality of the responses have received less attention than is the case with conventional instruments for data collection and analysis.

Some of the challenges include: selectivity bias and weak sample design; M&E processes being driven by the requirements of the technology; over-reliance on simple quantitative data; low institutional capacity to introduce ICTs and resistance to change; and issues of privacy.

None of this is intended to discourage the introduction of these technologies, as the authors fully recognize their huge potential. One of the most exciting areas concerns the promotion of a more equitable society through simple and cost-effective monitoring and evaluation systems that give voice to previously excluded sectors of the target populations, and that offer opportunities for promoting gender equality in access to information. The application of these technologies, however, needs to be on a sound methodological footing.

The last section of the paper offers some tips and ideas on how to integrate ICTs into M&E practice and potential pitfalls to avoid. Many of these were drawn from Salons and discussions with practitioners, given that there is little solid documentation or evidence related to the use of ICTs for M&E.

Download the full paper here! 

Read Full Post »

Earlier this month I attended the African Evaluators’ Conference (AfrEA) in Cameroon as part of the Technology and Evaluation stream organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation.

A first post about ICTs and M&E at the AfrEA conference went into some of the deliberations around using or not using ICTs and how we can learn and share more as institutions and evaluators. I’ve written previously about barriers and challenges with using ICTs in M&E of international development programs (see the list of posts at the bottom of this one). Many of these same conversations came up at AfrEA, so I won’t write about them again here. What I did want to capture and share were a few interesting design and implementation thoughts from the various ICT and M&E sessions. Here goes:

1) Asking questions via ICT may lead to more honest answers. Some populations are still not familiar with smart phones and tablets, and this makes some people shy and quiet, while it makes others more curious and animated to participate. Some people worry that mobiles, laptops and tablets create distance between the enumerator and the person participating in a survey. On the other hand, I’m hearing more and more examples of cases where using ICTs for surveying actually allows for a greater sense of personal privacy and more honest answers. I first heard about this several years ago in relation to children and youth in the US and Canada seeking psychological or reproductive health counseling. They seemed to feel more comfortable asking questions about sensitive issues via online chats (as opposed to asking a counselor or doctor face-to-face) because they felt anonymous. The same is true for telephone inquiries.

In the case of evaluations, someone suggested that rather than a mobile or tablet creating distance, a device can actually create an opportunity for privacy. For example, if a sensitive question comes up in a survey, an enumerator can hand the person being interviewed the mobile phone and look away while they provide their answer and hit enter, in the same way that waiters in some countries will swipe your ATM card and politely look away while you enter your PIN. The key is building people’s trust in these methods, so that they can be sure their answers are secure.

At a Salon on Feb 28, I heard about mobile polling being used to ask men in the Democratic Republic of Congo about sexual assault against men. There was a higher recorded affirmative rate when the question was answered via a mobile survey than when it had been asked in other settings or through other means. This of course makes sense: when a reporter or surveyor comes around asking whether men have been victims of rape, no one wants to say so publicly. It’s impossible to know in a situation of violence whether a perpetrator might be standing in the crowd watching someone being interviewed, and clearly shame and stigma also prevent people from answering openly.

Another example at the AfrEA Tech Salon was a comparison study done by an organization in a slum area in Accra. Five enumerators who spoke local languages conducted Water, Sanitation and Hygiene (WASH) surveys by mobile phone using Open Data Kit (an open source survey application), and the responses were compared with the same survey done on paper. When people were asked in person by enumerators if they defecated outdoors, affirmative answers were very low. When people were asked the same question via a voice-based mobile phone survey, 26% of respondents reported open defecation.
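To make the mode-effect comparison concrete, here is a minimal sketch of a pooled two-proportion z-test of the kind one might run on such data. This is illustrative only: the sample sizes and the paper-survey rate below are hypothetical, since the study shared at the Salon reported only the 26% figure for the voice-based survey.

```python
import math

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> float:
    """Z statistic for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 26% affirmative via voice survey vs. 5% on paper.
z = two_proportion_ztest(x1=52, n1=200, x2=10, n2=200)
print(f"z = {z:.2f}")  # |z| > 1.96 would suggest a real mode effect at the 5% level
```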

2) Risk of collecting GPS coordinates. We had a short discussion on the pluses and minuses of using GPS and collecting geolocation data in monitoring and evaluation. One issue that came up was safety for enumerators who carry GPS devices. Some people highlighted that GPS devices can put staff/enumerators at risk of abuse from organized crime gangs, military groups, or government authorities, especially in areas with high levels of conflict and violence. This makes me think that if geographic information is needed in these cases, it might be better to use a mobile phone application that collects GPS coordinates rather than a fancy smartphone or an actual GPS unit (for example, one could try out PoiMapper, which works on feature phones).

In addition, evaluators emphasized that we need to think through whether GPS data are really necessary at the household level. It is tempting to always collect all the information that we possibly can, but we can never truly assure anyone that their information will not be de-anonymized somehow in the near or distant future, and in extremely high-risk areas this is a serious concern. Many organizations do not have high-level security for their data, so it may be better to collect community or district level data than household locations. Some evaluators said they use ‘tricks’ to anonymize the geographical data, like pinpointing the location a few miles off, but others felt this was not nearly enough to guarantee anonymity.
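As a rough illustration of the kinds of ‘tricks’ mentioned above, the sketch below shows two common geomasking approaches: truncating coordinate precision (which effectively aggregates households to a grid) and displacing each point by a random offset. The coordinates are illustrative, and, as the discussion cautioned, neither approach on its own guarantees anonymity.

```python
import math
import random

def truncate_coords(lat: float, lon: float, decimals: int = 2) -> tuple:
    """Coarsen a point to roughly 1-km grid cells (2 decimal places of a degree)."""
    return (round(lat, decimals), round(lon, decimals))

def jitter_coords(lat: float, lon: float, max_km: float = 5.0) -> tuple:
    """Displace a point up to max_km in a random direction."""
    deg = random.uniform(0, max_km) / 111.0          # ~111 km per degree of latitude
    bearing = random.uniform(0, 2 * math.pi)
    new_lat = lat + deg * math.cos(bearing)
    new_lon = lon + deg * math.sin(bearing) / math.cos(math.radians(lat))
    return (new_lat, new_lon)

household = (5.5600, -0.2057)   # an illustrative point in Accra
print(truncate_coords(*household))
print(jitter_coords(*household))
```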

3) Devices can create unforeseen operational challenges at the micro-level. One organization doing mobile phone surveys in rural Ghana to collect feedback about government performance found that some phones were set to lock when a call was answered. People were pressing buttons to respond to phone surveys (“press 1 for…”), but their answers did not register because phones were locked, or answers registered incorrectly because the person was entering their PIN to unlock the phone. Others noted that when planning for training of enumerators or community members who will use their own devices for data collection, we cannot forget that every model of phone is slightly different. This adds quite a lot of time to the training, as each model of phone needs to be explained to trainees. (There are a huge number of other challenges related to devices, but these were two that I had not thought of before.)

4) Motivation in the case of poor capacity to respond. An organization interested in tracking violence in a highly volatile area wanted to take reports of violence, but did not have a way to ensure that there would be a response from an INGO, humanitarian organization or government authority if/when violence was reported. This is a known issue: it is difficult to encourage reporting if responsiveness is low. To keep people engaged, this organization thanks people immediately for reporting and then sends peace messages and encouragement 2-3 times per week. Participants in the program have appreciated these ongoing messages, and participation has remained steady, even though immediate help has not been provided as a result of reporting.

5) Mirroring physical processes with tech. One way to help digital tools gain more acceptance and to make them more user-friendly is to design them to mirror paper processes or other physical processes that people are already familiar with. For example, one organization shared their design process for a mobile application for village savings and loan (VSL) groups. Because security is a big concern among VSL members, the groups typically keep cash in a box with 3 padlocks. Three elected members must be present and agree to open and remove money from the box in order to conduct any transaction. To mimic this, the VSL mobile application requires 3 PINs to access mobile money or make transactions. What’s more, the app sends everyone in the VSL group an SMS notification when the 3 people with the PINs carry out a transaction. This makes the mobile app even more secure than the original physical lock-box, because everyone knows what is happening with the money at all times.
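As a toy sketch of the authorization logic described above: a withdrawal only proceeds when three distinct members present valid PINs, and every member of the group is notified when it does. The class and the SMS stub are hypothetical; the actual application was not described at this level of detail.

```python
import hashlib

def _hash_pin(name: str, pin: str) -> str:
    return hashlib.sha256(f"{name}:{pin}".encode()).hexdigest()

def send_sms(member: str, text: str) -> None:
    print(f"SMS to {member}: {text}")   # stand-in for a real SMS gateway

class VslGroup:
    REQUIRED_APPROVALS = 3              # mirrors the three padlocks on the cash box

    def __init__(self, member_pins: dict):
        # Store salted PIN hashes rather than raw PINs.
        self._pins = {name: _hash_pin(name, pin) for name, pin in member_pins.items()}

    def withdraw(self, amount: int, approvals: dict) -> bool:
        valid = {n for n, p in approvals.items() if self._pins.get(n) == _hash_pin(n, p)}
        if len(valid) < self.REQUIRED_APPROVALS:
            return False                # not enough distinct, valid approvals
        for member in self._pins:       # broadcast, so the whole group knows
            send_sms(member, f"Withdrawal of {amount} approved by {sorted(valid)}")
        return True
```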

****

As I mentioned in part 1 of this post, some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward.

Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that are missing!

Previous posts on ICTs and M&E on this blog:

Read Full Post »

I attended the African Evaluators’ Conference (AfrEA) in Cameroon last week as part of the Technology and Evaluation strand organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation. The strand was a fantastic opportunity for learning, sharing and understanding more about the context, possibilities and realities of using ICTs in monitoring and evaluation (M&E). We heard from a variety of evaluators, development practitioners, researchers, tool-developers, donors, and private sector and government folks. Judging by the well-attended sessions, there is a huge amount of interest in ICTs and M&E.

Rather than repeat what I’ve written in other posts (see links at the bottom), I’ll focus here on some of the more relevant, interesting, and/or new information from the AfrEA discussions. This first post will go into institutional issues and the ‘field’ of ICTs and M&E. A second post will cover design and operational tips I learned or was reminded of at AfrEA.

1) We tend to get stuck on data collection – Like other areas (I’m looking at you, Open Data), conversations tend to revolve around collecting data. We need to get beyond that and think more about why we are collecting data and what we are going to do with it (and do we really need all this data?). The evaluation field also needs to explore all the other ways it could be using ICTs for M&E, going beyond mobile phones and surveys. Collecting data is clearly a necessary part of M&E, but those data still need to be analyzed. As a participant from a data visualization firm said, there are so many ways you can use ICTs – they help you make sense of things, you can tag sentiment, you can visualize data and make data-based decisions. Others mentioned that ICTs can help us to share data with various stakeholders, improve sampling in RCTs (Randomized Control Trials), conduct quality checks on massive data sets, and manage staff who are working on data collection. Using big data, we can do analyses we never could have imagined before. We can open and share our data, and stop collecting the same data from the same people multiple times. We can use ICTs to share back what we’ve learned with evaluation stakeholders, governments, the public, and donors. The range of uses of ICTs is huge, yet the discussion tends to get stuck on mobile surveys and data collection, and we need to start thinking beyond that.

2) ICTs are changing how programs are implemented and how M&E is done — When a program already uses ICTs, data collection can be built in through the digital device itself (e.g., tracking user behavior, cookies, and tests and quizzes), as one evaluator working on tech and education programs noted. As more programs integrate digital tools, it may become easier to collect monitoring and evaluation data with less effort. Along those lines, an evaluator looking at a large-scale mobile-based agricultural information system asked about approaches to conducting M&E that do not rely on enumerators and traditional M&E methods. In his program, because the farmers who signed up for the mobile information service do not live in the same geographical community, traditional M&E approaches do not seem feasible, and ICT-based approaches look like a logical answer. There is little documentation within the international development evaluation community, however, on how an evaluator might design an evaluation in this type of situation. (I am guessing there may be some insights from market research and possibly from the transparency and accountability sectors, and among people working on “feedback loops”.)
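For a service like that, one plausible starting point (my speculation, not something presented at the conference) is to treat the service’s own usage logs as monitoring data, since the "device" is already collecting them. A minimal sketch, with hypothetical log fields:

```python
from collections import Counter
from datetime import date

# Hypothetical usage log emitted by the mobile information service itself.
log = [
    ("farmer01", date(2014, 3, 1), "requested_price_info"),
    ("farmer01", date(2014, 3, 8), "requested_price_info"),
    ("farmer02", date(2014, 3, 2), "requested_weather_info"),
]

active_users = {farmer for farmer, _, _ in log}
requests_per_user = Counter(farmer for farmer, _, _ in log)
print(f"{len(active_users)} active users")
print(requests_per_user.most_common(3))   # a simple engagement indicator
```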

3) Moving beyond one-off efforts — Some people noted that mobile data gathering is still done mostly at the project level. Efforts tend to be short-term and one-off. The data collected are not well integrated into management information systems or national level processes. (Here we may reference the infamous map of mHealth pilots in Uganda, and note the possibility of ICT-enabled M&E in other sectors going this same route.) Numerous small pilots may be problematic if the goal is to institutionalize mobile data gathering into M&E at the wider level and do a better job of supporting and strengthening large-scale systems.

4) Sometimes ICTs are not the answer, even if you want them to be – One presenter (who considered himself a tech enthusiast) went into careful detail about his organization’s process of deciding not to use tablets for a complex evaluation across 4 countries with multiple indicators. In the end, the evaluation itself was too complex, and the team was not able to find the right tool for the job. The organization looked at simple, mid-range and highly complex applications and tools and after testing them all, opted out. Each possible tool presented a set of challenges that meant the tool was not a vast improvement over paper-based data collection, and the up-front costs and training were too expensive and lengthy to make the switch to digital tools worthwhile. In addition, the team felt that face-to-face dynamics in the community and having access to notes and written observations in the margins of a paper survey would enable them to conduct a better evaluation. Some tablets are beginning to enable more interactivity and better design for surveys, but not yet in a way that made them a viable option for this evaluation. I liked how the organization went through a very thorough and in-depth process to make this decision.

Other colleagues also commented that the tech tools are still not quite ‘there’ yet for M&E. Even top-of-the-line business solutions are generally found to be somewhat clunky. Million-dollar models are not relevant for the environments that development evaluators work in; in addition to their high cost, they often have too many features or require too much training. There are some excellent mid-range tools designed for the environment, but many lack vital features such as availability in multiple languages. Simple tools that are more easily accessible and understandable without a lot of training are not sophisticated enough to conduct a large-scale data collection exercise. One person I talked with suggested that the private sector will eventually develop appropriate tools, and the not-for-profit sector will then adopt them. She felt that those of us who are interested in ICTs in M&E are slightly ahead of the curve and need to wait a few years until the tools are more widespread and common. Many people attending the Tech and M&E sessions at AfrEA made the point that use of ICTs in M&E will get easier and cheaper as the field develops, tools become more advanced, appropriate, user-friendly and widely tested, and networks, platforms and infrastructure improve in less-connected rural areas.

5) Need for documentation, evaluation and training on use of ICTs in M&E – Some evaluators felt that ICTs are only suitable for routine data collection as part of an ongoing program, but not good for large-scale evaluations. Others pointed out that the notions of ‘ICT for M&E’ and ‘mobile data collection/mobile surveys’ are often used interchangeably, and evaluation practitioners need to look at the multiple ways that ICTs can be used in the wider field of M&E. ICTs are not just useful for moving from paper surveys to mobile data gathering. An evaluator working on a number of RCTs mentioned that his group relies on ICTs for improving samples, reducing bias, and automatically checking data quality.
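The post does not say which checks that RCT group automated, but the general idea of automatic data-quality checking is straightforward: run rule-based validations over survey records as they arrive rather than after fieldwork ends. A minimal sketch with illustrative field names and thresholds:

```python
def check_record(record: dict) -> list:
    """Return quality flags for one survey record (rules are illustrative)."""
    flags = []
    if not 0 <= record.get("age", -1) <= 110:
        flags.append("age out of plausible range")
    if record.get("interview_minutes", 0) < 5:
        flags.append("suspiciously short interview")
    if record.get("children", 0) > record.get("household_size", 0):
        flags.append("more children than household members")
    return flags

def check_batch(records: list) -> dict:
    """Flag problem records and duplicate submissions across a batch."""
    issues, seen = {}, set()
    for r in records:
        flags = check_record(r)
        if r["id"] in seen:
            flags.append("duplicate submission")
        seen.add(r["id"])
        if flags:
            issues[r["id"]] = flags
    return issues
```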

There was general agreement that M&E practitioners need resources, opportunities for more discussion, and capacity strengthening on the multiple ways that ICTs may be able to support M&E. One evaluator noted that civil society organizations have a tendency to rush into things, hit a brick wall, and then cross their arms and say, “well, this doesn’t work” (in this case, ICTs for M&E). With training and capacity building, and as more experience and documentation are gained, he considered that ICTs could have a huge role in making M&E more efficient and effective.

One evaluator, however, questioned whether having better, cheaper, higher quality data is actually leading to better decisions and outcomes. Another evaluator asked for more evidence of what works, when, with whom and under what circumstances so that evaluators could make better decisions around use of ICTs in M&E. Some felt that a decision tree or list of considerations or key questions to think through when integrating ICTs into M&E would be helpful for practitioners. In general, it was agreed that ICTs can help overcome some of our old challenges, but that they inevitably bring new challenges. Rather than shy away from using ICTs, we should try to understand these new challenges and find ways to overcome/work around them. Though the mHealth field has done quite a bit of useful research, and documentation on digital data collection is growing, use of ICTs is still relatively unexplored in the wider evaluation space.

6) There is no simple answer. One of my takeaways from all the sessions was that many M&E specialists are carefully considering options, and thinking quite a lot about which ICTs for what, whom, when and where rather than deciding from the start that ICTs are ‘good and beneficial’ or ‘bad and not worth considering.’ This is really encouraging, and to be expected of a thoughtful group like this. I hope to participate in more discussions of this nature that dig into the nuances of introducing ICTs into M&E.

Some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward. The “field” of ICTs in M&E is quite broad, however, and there are many ways to slice the cake. Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that you think are missing!

(Part 2 of this post)

Previous posts on ICTs and M&E:

Read Full Post »

The NYC Technology Salon on February 28th examined the connection between bigger, better data and resilience. We held morning and afternoon Salons due to the high response rate for the topic. Jake Porway (DataKind), Emmanuel Letouzé (Harvard Humanitarian Initiative) and Elizabeth Eagen (Open Society Foundations) were our lead discussants for the morning. Max Shron (Data Strategy) joined Emmanuel and Elizabeth for the afternoon session.

This post summarizes key discussions from both Salons.

What the heck do we mean by ‘big data’?

The first question at the morning salon was: What precisely do we mean by the term ‘big data’? Participants and lead discussants had varying definitions. One way of thinking about big data is that it is composed of small bits of unintentionally produced ‘data exhaust’ (website cookies, cellphone data records, etc.) that add up to a dataset. In this case, the term big data refers to the quality and nature of the data, and we think of non-sampled data that are messy, noisy and unstructured. The mindset that goes with big data is one of ‘turning mess into meaning.’

Some Salon participants understood big data as datasets that are too large to be stored, managed and analyzed via conventional database technologies or managed on normal computers. One person suggested dropping the adjective ‘big,’ forgetting about the size, and instead considering the impact of the contribution of the data to understanding. For example, if there were absolutely no data on something and 1000 data points were contributed, this might have a greater impact than adding another 10,000 data points to an existing set of 10 million.

The point here was that when the emphasis is on big (understood as size and/or volume), someone with a small data set (for example, one that fits into an Excel sheet) might feel inadequate, yet their data contribution may be actually ‘bigger’ than a physically larger data set (aha! it’s not the size of the paintbrush…). There was a suggestion that instead of talking about big data we should talk about smart data.

How can big data support development?

Two frameworks were shared for thinking about big data in development. One from UN Global Pulse considers that big data can improve a) real-time awareness, b) early warning and c) real-time monitoring. Another looks at big data being used for three kinds of analysis: a) descriptive (providing a summary of something that has already happened), b) predictive (likelihood and probability of something occurring in the future), and c) diagnostic (causal inference and understanding of the world).

What’s the link between big data and resilience?

‘Resilience’ as a concept is contested, difficult to measure and complex. In its simplest definition, resilience can be thought of as the ability to bounce back or bounce forward. (For an interesting discussion on whether we should be talking about sustainability or resilience, see this piece). One discussant noted that global processes and structures are not working well for the poor, as evidenced by continuing cycles of poverty and glaring wealth inequalities. In this view, people are poor as a result of being more exposed and vulnerable to shocks; at the same time, their poverty increases their vulnerability, and it is difficult to escape from the cycle in which, over time, small and large shocks deplete assets. An assets-based model of resilience would help individuals, families and communities who are hit by a shock in one sphere — financial, human, capital, social, legal and/or political — to draw on the assets within another sphere to bounce back or forward.

Big data could help this type of assets-based model of resilience by predicting when a shock might happen (or helping poor and vulnerable people predict it themselves) and preparing for it. Big data analytics, if accessible to the poor, could help them to increase their chances of making better decisions now and for the future. Big data, then, should be made accessible and available to communities so that they can self-organize, decrease their own exposure to shocks and hazards, and increase their ability to bounce back and bounce forward. Big data could also help various actors to develop a better understanding of the human ecosystem and contribute to increasing resilience.

Can ivory tower big data approaches contribute to resilience?

The application of big data approaches to efforts that aim to increase resilience and better understand human ecosystems often comes at things from the wrong angle, according to one discussant. We are increasingly seeing situations where a decision is made at the top by people who know how to crunch data yet have no way of really understanding the meaning of the data in the local context. In these cases, the impact of data on resilience will be low, because resilience can only truly be created and supported at the local level. Instead of large organizations thinking about how they can use data from afar to ‘rescue’ or ‘help’ the poor, organizations should be working together with communities in crisis (or supporting local or nationally based intermediaries to facilitate this process) so that communities can discuss and pull meaning from the data, contextualize it and use it to help themselves. Communities can also be better informed about what data exist about them and more aware of how these data might be used.

For the Human Rights community, for example, the story is about how people successfully use data to advocate for their own rights, and there is less emphasis on large data sets. Rather, the goal is to get data to citizens and communities. It’s to support groups to define and use data locally and to think about what the data can tell them about the advocacy path they could take to achieve a particular goal.

Can data really empower people?

To better understand the opportunities and challenges of big data, we need to unpack questions related to empowerment. Who has the knowledge? The access? Who can use the data? Salon participants emphasized that change doesn’t come by merely having data. Rather it’s about using big data as an advocacy tool to tell the world to change processes and to put things normally left unsaid on the table for discussion and action. It is also about decisions and getting ‘big data’ to the ‘small world,’ e.g., the local level. According to some, this should be the priority of ‘big data for development’ actors over the next 5 years.

Though some participants at the Salon felt that data on their own do not empower individuals, others noted that knowing your credit score or tracking how much you are eating or exercising can indeed be empowering. In addition, the process of gathering data can help communities understand their own realities better, build their self-esteem and analytical capacities, and contribute to achieving a more level playing field when they are advocating for their rights or for a budget or service. As one Salon participant said, most communities have information but are not perceived to have data unless they collect it using ‘Western’ methods. Having data to support and back information, opinions and demands can serve communities in negotiations with entities that wield more power. (See the book “Who Counts? The Power of Participatory Statistics” on how to work with communities to create ‘data’ from participatory approaches.)

On the other hand, data are not enough if there is no political will to make change to respond to the data and to the requests or demands being made based on the data. As one Salon participant said: “giving someone a data set doesn’t change politics.”

Should we all jump on the data bandwagon?

Both discussants and participants made a plea to ‘practice safe statistics!’ Human rights organizations wander in and out of statistics and don’t really understand how it works, said one person. ‘You wouldn’t go to court without a lawyer, so don’t try to use big data unless you can ensure it’s valid and you know how to manage it.’ If organizations plan to work with data, they should have statisticians and/or data scientists on staff or on call as partners and collaborators. Lack of basic statistical literacy is a huge issue amongst the general population and within many organizations, thought leaders, and journalists, and this can be dangerous.

As big data becomes more trendy, the risk of misinterpretation is growing, and we need to pay more attention to the responsible use of statistics and data, or we may end up harming people through bad decisions. ‘Everyone thinks they are experts who can handle statistics – bias, collection, correlation’ these days. And ‘as a general rule, no matter how many times you say the data show possible correlation, not causality, the public will understand that there is causality,’ commented one discussant. And generally, he noted, ‘when people look at data, they believe them as truth because they include numbers, statistics, science.’ Greater statistical literacy could help people to not just read or access data and information but to use them wisely, to understand and question how data are interpreted, and to detect political or other biases. What’s more, organizations today are asking questions about big data that have been on statisticians’ minds for a very long time, so reaching out to those who understand these issues can be useful to avoid repeating mistakes and re-learning lessons that have already been well documented.
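That correlation-versus-causality point is easy to demonstrate with a few lines of simulation: two variables driven by a shared third factor correlate strongly even though neither causes the other. A minimal sketch (the scenario is invented; statistics.correlation requires Python 3.10+):

```python
import random
import statistics

random.seed(1)

# A hidden confounder (say, neighborhood wealth) drives both observed variables.
wealth = [random.gauss(0, 1) for _ in range(1000)]
ice_cream_spending = [w + random.gauss(0, 0.5) for w in wealth]
school_scores = [w + random.gauss(0, 0.5) for w in wealth]

r = statistics.correlation(ice_cream_spending, school_scores)
print(f"r = {r:.2f}")   # strong correlation, yet neither variable causes the other
```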

This poor statistical literacy becomes a serious ethical issue when data are used to determine funding or actions that impact people’s lives, or when they are shared openly, accidentally or in ways that are unethical. In addition, privacy and protection are critical elements in using and working with data about people, especially when the data involve vulnerable populations. Organizations can face legal action and liability suits if their data put people in harm’s way, as one Salon participant noted. ‘An organization could even be accused of manslaughter… and I’m speaking from experience,’ she added.

What can we do to move forward?

Some potential actions for moving forward included:

  • Emphasizing to donors that having big data does not mean that, in order to cut costs, community-level processes related to data collection, interpretation, analysis, and ownership should be eliminated;
  • Evaluations and literature/documentation on the effectiveness of different tools and methods, and on when and in which contexts they might be applicable, including cost-benefit analyses of using big data and evaluation of its impact on development and on communities when combined with community-level processes versus used alone, without community involvement. Practitioners’ gut feeling is that big data without community involvement is irresponsible and ineffective in terms of resilience, and it would be good to have evidence to help validate or disprove this;
  • More and better tools and resources to support data collection, visualization and use and to help organizations with risk analysis, privacy impact assessments, strategies and planning around use of big data; case studies and a place to share and engage with peers, creation of a ‘cook book’ to help organizations understand the ingredients, tools, processes of using data/big data in their work;
  • ‘Normative conventions’ on how big data should be used to avoid falling into tech-driven dystopia;
  • Greater capacity for ‘safe statistics’ among organizations;
  • A community space where frank and open conversations around data/big data can occur in an ongoing way with the right range of people and cross-section of experiences and expertise from business, data, organizations, etc.

In conclusion?

We touched upon all types of data and various levels of data usage for a huge range of purposes at the two Salons. One closing thought was around the importance of having a solid idea of what questions we are trying to answer before moving on to collecting data, and then understanding what data collection methods are adequate for our purpose, which ICT tools are right for which data collection and interpretation methods, what will be done with the data and what the purpose of collecting them is, how we will interpret them, and how the data will be shared, with whom, and in what format.

See this growing list of resources related to Data and Resilience here and add yours!

Thanks to participants and lead discussants for the fantastic exchange, and a big thank you to ThoughtWorks for hosting us at their offices for this Salon. Thanks also to Hunter Goldman, Elizabeth Eagen and Emmanuel Letouzé for their support developing this Salon topic, and to Somto Fab-Ukozor for support with notes and the summary. Salons are held under Chatham House Rule, therefore no attribution has been made in this post. If you’d like to attend future Salons, sign up here!

Read Full Post »

This is a cross post from Heather Leson, Community Engagement Director at the Open Knowledge Foundation. The original post appeared here on the School of Data site.

by Heather Leson

What is the currency of change? What can coders (consumers) do with IATI data? How can suppliers deliver the data sets? Last week I had the honour of participating in the Open Data for Development Codeathon and the International Aid Transparency Initiative Technical Advisory Group meetings. IATI’s goal is to make information about aid spending easier to access, use, and understand. It was great that these events were back-to-back to push a big picture view.

My big takeaways included similar themes that I have learned on my open source journey:

You can talk about open data [insert tech or OS project] all you want, but if you don’t have an interactive community (including mentorship programmes), an education strategy, engagement/feedback loops plan, translation/localization plan and a process for people to learn how to contribute, then you build a double-edged barrier: barrier to entry and barrier for impact/contributor outputs.

Currency

About the Open Data in Development Codeathon

At the Codeathon close, Mark Surman, Executive Director of the Mozilla Foundation, gave us a call to action to make the web. Well, in order to create a world of data makers, I think we should run aid and development processes through this mindset. What is the currency of change? I hear many people talking about theory of change and impact, but I’d like to add ‘currency’. This is not only about money; this is about using the best brainpower and best energy sources to solve real world problems in smart ways. I think if we heed Mark’s call to action with a “yes, and”, then we can rethink how we approach complex change. Every single industry is suffering from the same issue: how to deal with the influx of supply and demand in information. We need to change how we approach the problem. Combined events like these give a window into tackling problems in a new format. It is not about the next greatest app, but more about asking: how can we learn from the Webmakers and build with each other in our respective fields and networks?

Ease of Delivery

The IATI community/network is very passionate about moving the ball forward on releasing data. During the sessions, it was clear that the attendees see some gaps and are already working to fill them. The new IATI website is set up to grow with a Community component. The feedback from each of the sessions was distilled by the IATI TAG and Civil Society Guidance groups to share with the IATI Secretariat.

In the Open Data in Development, Impact of Open Data in Developing Countries, and CSO Guidance sessions, we discussed some key items about sharing, learning, and using IATI data. Farai Matsika, with International HIV/Aids Alliance, was particularly poignant reminding us of IATI’s CSO purpose – we need to share data with those we serve.


One of the biggest themes was data ethics. As we rush to ask NGOs and CSOs to release data, what are some of the data pitfalls? Anahi Ayala Iacucci of Internews and Linda Raftree of Plan International USA both reminded participants that data need to be anonymized to protect those at risk. Ms. Iacucci asked that we consider the complex nature of sharing both sides of the open data story – successes and failures. As well, she advised: don’t create trust, but think about who people are trusting. Turning this model around is key to rethinking assumptions. I would add to her point: trust and sharing are currency and will add to the success measures of IATI. If people don’t trust the IATI data, they won’t share and use it.

Anne Crowe of Privacy International frequently asked attendees to consider the ramifications of opening data. It is clear that the IATI TAG does not curate the data that NGOs and CSOs share. Thus it falls on each of these organizations to learn how to be data makers in order to contribute data to IATI. Perhaps organizations need a lead educator and curator to ensure the future success of the IATI process, including quality data.

I think that School of Data and the Partnership for Open Data have a huge part to play with IATI. My colleague Zara Rahman is collecting user feedback for the Open Development Toolkit, and Katelyn Rogers is leading the Open Development mailing list. We collectively want to help people become data makers and consumers so that they can effectively achieve their development goals using open data. This also means tackling the ongoing questions about data quality and data ethics.


Here are some additional resources shared during the IATI meetings.

Read Full Post »

This is a cross-post from Tom Murphy, editor of the aid blog A View From the Cave. The original article can be found on Humanosphere. The post summarizes discussions at our November 21st New York City Technology Salon: Are Mobile Money Cash Grants the Future of Development? If you’d like to join us for future Salons, sign up here.

by Tom Murphy

Decades ago, some of the biggest NGOs simply gave away money to individuals in communities. People lined up and were just given cash.

The once popular form of aid went out of fashion, but it is now making a comeback.

Over time, coordination became extremely difficult. Traveling from home to home costs time and money for the NGO and the same problem exists for recipients when they have to go to a central location. More significant was the shift in development thinking that said giving hand outs was causing long term damage.

The backlash against ‘welfare queens’ in the US, UK and elsewhere during the 1980s was reflected in international development programming. The problem was that it was all based on unproven theories of change and anecdotal evidence, rather than hard evidence.

Half a decade later, new research shows that just giving people money can be an effective way to build assets and even incomes. The findings were covered by major players like NPR and the Economist.

While exciting and promising, cash transfers are not a new tool in the development utility belt.

Various forms of transfers have emerged over the past decade. Food vouchers were used by the World Food Programme when responding to the 2011 famine in the Horn of Africa. Like food stamps in the US, people could go buy food from local markets and get exactly what they need while supporting the local economy.

The differences have sparked a sometimes heated debate within the development community as to what the findings about cash transfers mean going forward. A Technology Salon conversation hosted at ThoughtWorks in New York City last week featured some of the leading researchers and players in the cash transfer sector.

The salon-style conversation featured Columbia University professor and popular aid blogger Chris Blattman, GiveDirectly co-founder and UCSD researcher Paul Niehaus, and Plan USA CEO Tessie San Martin. The ensuing discussion, operating under the Chatham House Rule of no attribution, featured representatives from large NGOs, microfinance organizations and UN agencies.

Research from Kenya, Uganda and Liberia shows both the promise and the shortcomings of cash transfers. For example, giving out cash in addition to training was successful in generating employment in Northern Uganda. Another program, with the backing of the Ugandan government, saw success with the cash alone.

Cash transfers have been put forward as a new benchmark for development and aid programs. Advocates in the discussion made the case that programs should be evaluated in terms of impact and cost-effectiveness against simply giving people cash.

That idea saw some resistance. The research from Liberia, for example, showed that money given to street youth was not wasted, but it was not sufficient to generate long-lasting employment or income. There are capacity problems and much larger issues that probably cannot be addressed by cash alone.

An additional concern is the unintended negative consequences caused by cash transfers. One example given was that of refugees in Syria. Money distributed to families was labeled for rent. Despite warnings not to label the transfer, the program went ahead.

As a result, rents increased. The money intended to help reduce the cost incurred by rent was rendered largely useless. One participant raised the concern that cash transfers in such a setting could be ‘taxed’ by rebels or government fighters. There is a potential that aid organizations could help fund fighting by giving unrestricted cash.

The discussion made it clear that the applications of cash transfers are far more nuanced than they might appear. Kenya saw success in part because of the ease of sending money to people through mobile phones. Newer programs in India, for example, rely on what are essentially ATM cards.

Impacts, admitted practitioners, can go beyond simple incomes. Care has been taken to make sure that implementing cash transfer programs does not dramatically change social structures in ways that cause problems for the community and recipients. In one case, giving women cash allowed them to participate in the local markets, a benefit to everyone except the existing shop oligarchs.

Governments in low and middle-income countries are seeing increasing pressure to establish social programs. The success of cash transfer programs in Brazil and Mexico indicate that it can be an effective way to lift people out of poverty. Testing is underway to bring about more efficient and context appropriate cash transfer schemes.

An important component in the re-emergence of cash transfers is looking back at previous efforts, said one NGO official. That official’s organization is systematically revisiting communities where the NGO used to work in order to see what happened ten years later. The idea is to learn what impacts the programs may or may not have had on those communities in order to inform future initiatives.

“Lots of people have concerns about cash, but we should have concerns about all the programs we are doing,” said a participant.

The lessons from the cash transfer research show that there is an increasing need for better evidence across development and aid programs. Researchers in the group argued that the ease of doing evaluations is improving.

Read the “Storified” version of the Technology Salon on Mobiles and Cash Transfers here.

Read Full Post »

