Posts Tagged ‘evaluation’

The private sector has been using dashboards for quite some time, but international development organizations face challenges when it comes to identifying the right data dashboards and accompanying systems for decision-making.

Our May 29th, 2015, Technology Salon (sponsored by The Rockefeller Foundation) explored data dashboards and data visualization for improved decision making with lead discussants John DeRiggi, Senior Data Architect, DAI; Shawna Hoffman, Associate Manager, Evaluation and Learning at The MasterCard Foundation; Stephanie Evergreen, Evergreen Data.

In short, we learned at the Salon that most organizations are struggling with the data dashboard process. There are a number of reasons that dashboards fail. They may never get off the ground, they may not deliver what was promised, they may deliver but no one uses them, or they may deliver but the data is poor and bad decisions are made. Using data for better decision-making is an ongoing process – not a task or product to complete and then relegate to automation. Just getting a dashboard up and running doesn't guarantee that it's a success – it's critical to look deeper to see whether the data and its visualization have actually improved decisions and how. As with any ICT tool, user-centered design and ongoing iteration are key. Successful dashboards are organized, useful, include targets, and show trends and predictions. Organizational culture and change management are critical in the process.

Points discussed in detail*:

1) Ask whether you actually need a dashboard

The first question to ask is whether a dashboard is needed or possible. One discussant, who specializes in data visualization, noted that she’s often brought in because someone wants to do data visualization, and she then needs to work backwards with the organization through a number of other preparatory steps before getting to the part on data visualization. It’s critical to have data dashboard discussions with different parts of the organization in order to understand real needs and expectations. Often people will say they need a dashboard because they want to make better decisions, noted another lead discussant. “But what kind of decisions, and what information is needed to make those decisions? Where does that information come from? Who will get it?”

2) Define the audience and type of dashboard

People often think that they can create one dashboard that will fulfill everyone’s needs. As one discussant put it, they will say the audience for the dashboard is “everyone – all decision makers at all levels!” In reality most organizations will need several dashboards for different levels of decision-making. It’s important to know who will own it, use it, keep it up, and collect the data. Will it be internal or externally facing? Discussing all of this is a key part of the process of thinking through the dashboard. As one discussant outlined, dashboards can be strategic, analytical or operational. But it’s difficult for them to be all three at once. So organizations need to come to a clear understanding of their data and decision-making needs. What information, if available, would help different teams at different levels with their decision making? One dashboard can’t be everything to everyone. Creating a charter that outlines what the dashboard project is and what it aims to do is a way to help avoid mission creep, said one discussant.

3) Work with users to develop your dashboard

To start off the process, it's important to clearly identify the audience and find out what they need – don't assume you know, recommended one discussant. But also, as a Salon participant pointed out, don't assume that they know either. Have a conversation where their expertise and yours come together. "The higher up you go, the less people may understand about data. One idea is to just take the 'data' out of the conversation. Ask decision-makers what questions they are trying to answer, what problems they are trying to solve. Then find out how to collect and visualize the data that helps them answer their questions," suggested another participant. Create ownership and accountability at all levels – with users, with staff who will input the data, with project managers, with grantees – you need cooperation from all levels, noted others. Clear buy-in will also help with data quality. If people see the results of their data coming out in a data visualization, they may be more inclined to provide quality data. One way to involve users is to gather different teams to talk about their data and to create 'entity relationship models' together. "People can get into the weeds, and then you can build a vocabulary for the organization. Then you can use that model to build the system and create commonality across it," said one discussant. Another idea is to create paper prototypes of dashboards with users so that they can envision them better.
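To make the 'entity relationship model' idea concrete, here is a minimal sketch of what such a model might look like once teams agree on shared entities. The entity names and fields below are hypothetical examples, not ones discussed at the Salon.

```python
# Hypothetical entity relationship model expressed as Python dataclasses.
# A real model would emerge from the cross-team workshops described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Participant:
    participant_id: str
    community: str

@dataclass
class Activity:
    activity_id: str
    activity_type: str                      # from an agreed shared vocabulary
    participant_ids: List[str] = field(default_factory=list)

@dataclass
class Program:
    program_id: str
    name: str
    activities: List[Activity] = field(default_factory=list)
```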

4) Dashboards help people engage with the data they’ve collected

A dashboard is a window into your data, said one participant. In some cases, seeing their data visualized can help staff to see that they have been providing poor quality data. “People didn’t realize how bad their data was until they saw their dashboard,” said one discussant. Another noted that people may disagree with what the data tells them in the dashboard and feel motivated to provide better data. On the other hand, they may realize that their data was actually good, and instead they need to improve ineffective programs. A danger is that putting a dashboard on top of bad data shines a light on the data, said one participant, and this might create an incentive for people to manipulate their data.

5) Don’t be over-ambitious

Align the dashboard with indicators that link to strategic goals and directions and stay focused, recommended one discussant. There is often a temptation to over-complicate with tons of data and visuals. But extraneous data leads to misinterpretation or distraction. Dashboards should make complex data available in an accessible way to users, she said. You can always make more visuals if needed, but you want a concise story told in the data and visuals that you’re depicting. Determine what is useful, productive and credible and leave out what is exciting but extraneous. “Don’t try to have 30 indicators.”

6) Be clear about your data categories and indicators

Rolling up data from a large number of different programs into a dashboard is a huge challenge, especially if different sites or programs are using different data models. For example, if one program describes an activity as a 'workshop' and another uses 'training session,' said one discussant, you have a problem. A Salon participant explained that her organization started with shallow but important common denominators across programs. Over time they aim to go deeper to begin looking at outcomes and impact.
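One low-tech way to tackle the 'workshop' versus 'training session' problem is an explicit mapping from each program's local labels to a shared vocabulary, applied before data is rolled up. A minimal sketch, with invented terms and categories:

```python
# Hypothetical mapping from program-specific activity labels to a
# shared vocabulary, so data can be rolled up across programs.
ACTIVITY_SYNONYMS = {
    "workshop": "training",
    "training session": "training",
    "home visit": "outreach",
}

def normalize_activity(label: str) -> str:
    """Map a local label to the shared category; flag unknowns for review."""
    return ACTIVITY_SYNONYMS.get(label.strip().lower(), "UNMAPPED")

assert normalize_activity("Workshop") == "training"
assert normalize_activity("community forum") == "UNMAPPED"
```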

7) Think through how you’ll sustain the dashboard and related system(s)

One discussant said that her organization established three different teams to work on the dashboard process: a) Metrics: Where do we have credible, representative data? Where do we have indicators but no data? b) Plumbing: Where are the data sources? How do they feed into each other? Who is responsible, and can this be aggregated up? c) Visualization: What visual would help different decision makers make their decisions? Depending on where the organization is in its stage of readiness and its existing staff capacities, different combinations of skill sets may be required to supplement existing ones. Data experts can help teams understand what is possible, yet program or management teams and other dashboard users also need to be involved so that they can identify the questions they are trying to answer with the data and the dashboard.

8) Don’t underestimate the time/resources needed for a functional dashboard

People may not realize that you can't make a dashboard without data to support it, noted one participant. "It's like a PowerPoint presentation… a PowerPoint doesn't just appear out of nowhere. It's a result of conversations, research, data, design and more. But for some reason, people think a dashboard will just magically create itself out of thin air." People also seem to think you can create and launch a dashboard and then put it on autopilot, but that is not the case. The dashboard will need constant changes and iteration, and there will be continual work to keep it up. The questions being asked will also likely change over time, so the dashboard may need to shift to take this into consideration. Time will be required to get buy-in for the dashboard and its use. One Salon participant said that in her former organization, they met quarterly to present, use and discuss the dashboard, and it took about two years for it to become useful and for people to become invested in it. It's very important, said one participant, to ensure that management knows that the dashboard is not a static thing – it will need ongoing attention and management.

9) Be selective when it comes to the technology

People tend to think that dashboards are just visual, said a Salon participant. They think they are really cool business solution platforms. Often senior leadership has been pitched something really expensive and complicated, with all kinds of bells and whistles, and they may think that is what they need. It's important to know where your organization is in terms of capacity before determining which technology would be the best fit, however, noted one discussant. She counseled organizations to use whatever they have on hand rather than bringing in new software that takes people six months to learn how to use. Simple Excel-based dashboards might be the best place to start, she said.

10) Legacy systems can be combined with new data viz capabilities

One discussant shared how his company's information system, which was set up over 15 years ago, did not allow for the creation of APIs. This meant that the team could not build derivative software products from their massive existing database. It was too expensive to replace the entire system, and building modules to replace parts of it would have fragmented the user experience. So the team built a thin web service layer on top of the existing system. This exposed the data in friendly web formats from which developers could build interactive products.
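The post doesn't say which technologies the team used, but as one possible shape for such a layer, here is a minimal sketch of a read-only JSON endpoint over a legacy database. Flask, SQLite, and the table and column names are all assumptions made for illustration.

```python
# Sketch of a thin web service layer over a legacy database. The schema
# (a "results" table with indicator/value/period columns) is invented;
# the original system is not described in the post.
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/indicators/<program_id>")
def indicators(program_id):
    conn = sqlite3.connect("legacy.db")
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        "SELECT indicator, value, period FROM results WHERE program_id = ?",
        (program_id,),
    ).fetchall()
    conn.close()
    # Expose legacy records in a friendly web format (JSON) that
    # dashboard and visualization tools can consume.
    return jsonify([dict(r) for r in rows])

if __name__ == "__main__":
    app.run()
```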

11) Be realistic about “real time” and “data quality”

One question that came up was around the level of evidence needed to make good decisions. Having perfect data served up into a perfect visualization is utopian, said one Salon participant. The idea is that we could have 'real time' data to inform our decisions, she explained, yet it's hard to quality check data so quickly. "So at what level can we say we'll make decisions based on a level of certainty – is it when we feel 80% of the data is good quality? Do we need to lower that to 60% so that we have timely data? Is that too low?" Another question was around the kinds of decisions that require 'real time' data versus those that could be made based on data that is 3 to 6 months old. Salon participants said this will depend on the kind of program and the type of decision. The sector in which one is working may also determine the level of comfort with real time data and with data quality – for example, the humanitarian sector may need more timely data and accept a lower level of verification, whereas the development sector may be the opposite.
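One way to make the '80% vs. 60%' question operational is to compute a simple quality score over incoming records and label the dashboard view accordingly. A minimal sketch, assuming invented checks and an invented threshold; the Salon did not settle on a number.

```python
# Sketch: gate a "real time" dashboard view on a data quality score.
def quality_score(records):
    """Fraction of records passing basic checks (non-null, in range)."""
    def ok(r):
        return r.get("value") is not None and 0 <= r["value"] <= 100
    return sum(ok(r) for r in records) / max(len(records), 1)

def dashboard_status(records, threshold=0.8):
    score = quality_score(records)
    return "verified" if score >= threshold else f"provisional ({score:.0%} checked)"

print(dashboard_status([{"value": 42}, {"value": None}, {"value": 7}]))
# -> provisional (67% checked)
```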

Another point was that dashboards should include error bars and available metadata, as well as in some cases a link to raw data for those who want to dig into the data and understand what is behind the dashboard. Sometimes the dashboard process will highlight that there is simply not much quality data available for some programs in some countries. This can be an opportunity to work with staff on the ground to strengthen capacity to collect it.
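For the suggestion to include error bars, a short matplotlib sketch (all numbers invented) shows how uncertainty can be displayed alongside the trend rather than hidden behind a clean-looking line:

```python
# Sketch: show uncertainty on a dashboard chart rather than bare points.
import matplotlib.pyplot as plt

periods = ["Q1", "Q2", "Q3", "Q4"]
values = [52, 58, 61, 66]       # e.g., % of households reached (invented)
errors = [6, 5, 8, 4]           # e.g., survey margin of error (invented)

plt.errorbar(periods, values, yerr=errors, fmt="o-", capsize=4)
plt.ylabel("% of households reached (± margin of error)")
plt.title("Indicator trend with uncertainty")
plt.savefig("indicator_trend.png")
```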

12) Relax

As one discussant said, "much of the concern about data quality is related to our own hang-ups as data nerds and what we feel comfortable putting out there for people to use to make decisions. We always say 'we need more research.'" But here the context is different. "Stakeholders and management want the answer. We need to just put the data out there with some caveats to help them." One way to offer more context for a dashboard is to create a dashboard report that provides some narrative alongside the visualization. Dashboards should also show trends, not only what has happened already, she said. People need to see trends towards the future so that decisions can be made. It was also pointed out that a dashboard shouldn't be the only basis for decisions. Like a car dashboard, these data dashboards signal that something is changing, but you still need to look under the hood to see what it is. The dashboard should trigger questions – it should be a launch pad for discussion.

13) Organizational culture is a huge part of this process

The internal culture and people's attitudes towards data are embedded into how an organization operates, noted one Salon participant. This varies depending on the type of organization – an evaluation-focused organization vs. a development organization vs. a contractor vs. a humanitarian organization, for example. Outside consultants can help you to build a dashboard, but it will be critical to have someone managing organizational change on the inside who knows the current culture and where the organization is aiming to go with the dashboard process. The process is getting easier, however. Many organizations are thirsty for data now, noted one lead discussant. "Often the research or evaluation team creates a dashboard and sends it to the management team, and then everyone loves it and wants one. People are ready for it now."

More resources on data dashboards and visualization.

Special thanks to our lead discussants and to our hosts for this Salon! If you’d like to join our Salon discussions in the future, sign up at the Technology Salon site.

*Salons run under Chatham House Rule, so no attribution has been made in this post.


Today as we jump into the M&E Tech conference in DC (we’ll also have a Deep Dive on the same topic in NYC next week), I’m excited to share a report I’ve been working on for the past year or so with Michael Bamberger: Emerging Opportunities in a Tech-Enabled World.

The past few years have seen dramatic advances in the use of hand-held devices (phones and tablets) for program monitoring and for survey data collection. Progress has been slower with respect to the application of ICT-enabled devices for program evaluation, but this is clearly the next frontier.

In the paper, we review how ICT-enabled technologies are already being applied in program monitoring and in survey research. We also review areas where ICTs are starting to be applied in program evaluation and identify new areas in which new technologies can potentially be applied. The technologies discussed include hand-held devices for quantitative and qualitative data collection and analysis, data quality control, GPS and mapping devices, environmental monitoring, satellite imaging and big data.

While the technological advances and the rapidly falling costs of data collection and analysis are opening up exciting new opportunities for monitoring and evaluation, the paper also cautions that more attention should be paid to basic quality control questions that evaluators normally ask about representativity of data, selection bias, data quality and construct validity. The ability to use techniques such as crowdsourcing to generate information and feedback from tens of thousands of respondents has so fascinated researchers that concerns about the representativity or quality of the responses have received less attention than is the case with conventional instruments for data collection and analysis.

Some of the challenges include the potential for selectivity bias in sample design, M&E processes driven by the requirements of the technology, over-reliance on simple quantitative data, low institutional capacity to introduce ICTs, resistance to change, and issues of privacy.

None of this is intended to discourage the introduction of these technologies; we fully recognize their huge potential. One of the most exciting areas concerns the promotion of a more equitable society through simple and cost-effective monitoring and evaluation systems that give voice to previously excluded sectors of the target populations and that offer opportunities for promoting gender equality in access to information. The application of these technologies, however, needs to be on a sound methodological footing.

The last section of the paper offers some tips and ideas on how to integrate ICTs into M&E practice and potential pitfalls to avoid. Many of these were drawn from Salons and discussions with practitioners, given that there is little solid documentation or evidence related to the use of ICTs for M&E.

Download the full paper here! 


Earlier this month I attended the African Evaluators’ Conference (AfrEA) in Cameroon as part of the Technology and Evaluation stream organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation.

A first post about ICTs and M&E at the AfrEA Conference went into some of the deliberations around using or not using ICTs and how we can learn and share more as institutions and evaluators. I've written previously about barriers and challenges with using ICTs in M&E of international development programs (see the list of posts at the bottom of this one). Many of these same conversations came up at AfrEA, so I won't repeat them here. What I did want to capture and share were a few interesting design and implementation thoughts from the various ICT and M&E sessions. Here goes:

1) Asking questions via ICT may lead to more honest answers. Some populations are still not familiar with smart phones and tablets, and this makes some people shy and quiet, yet it makes others more curious and animated to participate. Some people worry that mobiles, laptops and tablets create distance between the enumerator and the person participating in a survey. On the other hand, I'm hearing more and more examples of cases where using ICTs for surveying actually allows for a greater sense of personal privacy and more honest answers. I first heard about this several years ago in relation to children and youth in the US and Canada seeking psychological or reproductive health counseling. They seemed to feel more comfortable asking questions about sensitive issues via online chats (as opposed to asking a counselor or doctor face-to-face) because they felt anonymous. The same is true for telephone inquiries.

In the case of evaluations, someone suggested that rather than a mobile or tablet creating distance, a device can actually create an opportunity for privacy. For example, if a sensitive question comes up in a survey, an enumerator can hand the person being interviewed the mobile phone and look away while they provide their answer and hit enter, in the same way that waiters in some countries will swipe your ATM card and politely look away while you enter your PIN. The key is building people's trust in these methods so they can be sure their answers are secure.

At a Salon on Feb 28, I heard about mobile polling being used to ask men in the Democratic Republic of Congo about sexual assault against men. There was a higher recorded affirmative rate when the question was answered via a mobile survey than when the question had been asked in other settings or through other means. This of course makes sense, considering that often when a reporter or surveyor comes around asking whether men have been victims of rape, no one wants to say publicly. It's impossible to know in a situation of violence if a perpetrator might be standing around in the crowd watching someone getting interviewed, and clearly shame and stigma also prevent people from answering openly.

Another example at the AfrEA Tech Salon, was a comparison study done by an organization in a slum area in Accra. Five enumerators who spoke local languages conducted Water, Sanitation and Hygiene (WASH) surveys by mobile phone using Open Data Kit (an open source survey application) and the responses were compared with the same survey done on paper.  When people were asked in person by enumerators if they defecated outdoors, affirmative answers were very low. When people were asked the same question via a voice-based mobile phone survey, 26% of respondents reported open defecation.

2) Risk of collecting GPS coordinates. We had a short discussion on the pluses and minuses of using GPS and collecting geolocation data in monitoring and evaluation. One issue that came up was safety for enumerators who carry GPS devices. Some people highlighted that GPS devices can put staff/enumerators at risk of abuse from organized crime bands, military groups, or government authorities, especially in areas with high levels of conflict and violence. This makes me think that if geographic information is needed in these cases, it might be good to use a mobile phone application that collects GPS rather than a fancy smartphone or an actual GPS unit (for example, one could try out PoiMapper, which works on feature phones).

In addition, evaluators emphasized that we need to think through whether GPS data is really necessary at the household level. It is tempting to always collect all the information that we possibly can, but we can never truly assure anyone that their information will not be de-anonymized somehow in the near or distant future, and in extremely high risk areas this can be a huge risk. Many organizations do not have high-level security for their data, so it may be better to collect community or district level data than household locations. Some evaluators said they use 'tricks' to anonymize the geographical data, like pinpointing the location a few miles off, but others felt this was not nearly enough to guarantee anonymity.
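For what it's worth, the 'pinpointing the location a few miles off' trick could look like the sketch below. As the discussants cautioned, random displacement alone is not a guarantee of anonymity, especially in sparsely populated areas, so treat this as one layer of protection at best.

```python
# Sketch of random displacement ("jitter") of household coordinates.
# As participants noted, this alone is NOT sufficient anonymization.
import math, random

def jitter(lat, lon, max_km=3.0):
    """Return coordinates displaced by up to max_km in a random direction."""
    r = max_km * math.sqrt(random.random())      # uniform over the disc
    theta = random.uniform(0, 2 * math.pi)
    dlat = (r / 111.0) * math.cos(theta)         # ~111 km per degree latitude
    dlon = (r / (111.0 * math.cos(math.radians(lat)))) * math.sin(theta)
    return lat + dlat, lon + dlon

print(jitter(5.6037, -0.1870))  # e.g., a point near Accra
```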

3) Devices can create unforeseen operational challenges at the micro-level. One organization collecting feedback about government performance in rural Ghana via phone surveys (press 1 for…) found that some phones were set to lock when a call was answered. People were pressing buttons to respond, but their answers either did not register because the phone was locked, or registered incorrectly because the person was entering their PIN to unlock the phone. Others noted that when planning for training of enumerators or community members who will use their own devices for data collection, we cannot forget the fact that every model of phone is slightly different. This adds quite a lot of time to the training, as each different model of phone needs to be explained to trainees. (There are a huge number of other challenges related to devices, but these were two that I had not thought of before.)

4) Motivation in the case of poor capacity to respond. An organization interested in tracking violence in a highly volatile area wanted to collect reports of violence, but did not have a way to ensure that there would be a response from an INGO, humanitarian organization or government authority if/when violence was reported. This is a known issue — the difficulty of encouraging reporting when responsiveness is low. To keep people engaged, this organization thanks people immediately for reporting and then sends peace messages and encouragement 2-3 times per week. Participants in the program have appreciated these ongoing messages, and participation has remained steady, regardless of the fact that immediate help has not been provided as a result of reporting.

5) Mirroring physical processes with tech. One way to help digital tools gain more acceptance and to make them more user-friendly is to design them to mirror paper processes or other physical processes that people are already familiar with. For example, one organization shared their design process for a mobile application for village savings and loan (VSL) groups. Because security is a big concern among VSL members, the groups typically keep cash in a box with 3 padlocks. Three elected members must be present and agree to open and remove money from the box in order to conduct any transaction. To mimic this, the VSL mobile application requires 3 PINs to access mobile money or make transactions. What's more, the app sends everyone in the VSL group an SMS notification when the 3 people with the PINs carry out a transaction, making the mobile app even more secure than the original physical lock-box, because everyone knows what is happening with the money all the time.
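The post doesn't describe how the app was built; purely as an illustration of the 'three approvers plus notify everyone' rule, here is a toy sketch. The names, PINs and the SMS step are all invented, and a real mobile-money app would verify PINs server-side.

```python
# Toy sketch of the 3-PIN authorization rule for a VSL group account.
import hashlib

def _h(pin):
    return hashlib.sha256(pin.encode()).hexdigest()

GROUP = {
    "members": ["Ama", "Kofi", "Esi", "Yaw"],
    "approvers": {"Ama": _h("1111"), "Kofi": _h("2222"), "Esi": _h("3333")},
}

def authorize(entries):
    """entries: list of (name, pin). Needs 3 distinct valid approvers."""
    valid = {n for n, p in entries if GROUP["approvers"].get(n) == _h(p)}
    if len(valid) < 3:
        return False
    for member in GROUP["members"]:          # notify the whole group
        print(f"SMS to {member}: transaction approved by {sorted(valid)}")
    return True

authorize([("Ama", "1111"), ("Kofi", "2222"), ("Esi", "3333")])
```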

****

As I mentioned in part 1 of this post, some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward.

Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that are missing!

Previous posts on ICTs and M&E on this blog:


I attended the African Evaluators’ Conference (AfrEA) in Cameroon last week as part of the Technology and Evaluation strand organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation. The strand was a fantastic opportunity for learning, sharing and understanding more about the context, possibilities and realities of using ICTs in monitoring and evaluation (M&E). We heard from a variety of evaluators, development practitioners, researchers, tool-developers, donors, and private sector and government folks. Judging by the well-attended sessions, there is a huge amount of interest in ICTs and M&E.

Rather than repeat what I've written in other posts (see links at the bottom), I'll focus here on some of the more relevant, interesting, and/or new information from the AfrEA discussions. This first post will go into institutional issues and the 'field' of ICTs and M&E. A second post will talk about design and operational tips I learned or was reminded of at AfrEA.

1) We tend to get stuck on data collection – Like other areas (I'm looking at you, Open Data), conversations tend to revolve around collecting data. We need to get beyond that and think more about why we are collecting data and what we are going to do with it (and do we really need all this data?). The evaluation field also needs to explore all the other ways it could be using ICTs for M&E, going beyond mobile phones and surveys. Collecting data is clearly a necessary part of M&E, but those data still need to be analyzed. As a participant from a data visualization firm said, there are so many ways you can use ICTs – they help you make sense of things, you can tag sentiment, you can visualize data and make data-based decisions. Others mentioned that ICTs can help us to share data with various stakeholders, improve sampling in RCTs (Randomized Control Trials), conduct quality checks on massive data sets, and manage staff who are working on data collection. Using big data, we can do analyses we never could have imagined before. We can open and share our data, and stop collecting the same data from the same people multiple times. We can use ICTs to share back what we've learned with evaluation stakeholders, governments, the public, and donors. The range of uses of ICTs is huge, yet the discussion tends to get stuck on mobile surveys and data collection, and we need to start thinking beyond that.

2) ICTs are changing how programs are implemented and how M&E is done — When a program already uses ICTs, data collection can be built in through the digital device itself (e.g., tracking user behavior, cookies, and via tests and quizzes), as one evaluator working on tech and education programs noted. As more programs integrate digital tools, it may become easier to collect monitoring and evaluation data with less effort. Along those lines, an evaluator looking at a large-scale mobile-based agricultural information system asked about ways of conducting M&E that do not rely on enumerators and traditional methods. In his program, because the farmers who signed up for the mobile information service do not live in the same geographical community, traditional M&E approaches do not seem feasible, and ICT-based approaches look like a logical answer. There is little documentation within the international development evaluation community, however, on how an evaluator might design an evaluation in this type of situation. (I am guessing there may be some insights from market research and possibly from the transparency and accountability sectors, and among people working on "feedback loops".)

3) Moving beyond one-off efforts — Some people noted that mobile data gathering is still done mostly at the project level. Efforts tend to be short-term and one-off. The data collected is not well-integrated into management information systems or national level processes. (Here we may reference the infamous map of mHealth pilots in Uganda, and note the possibility of ICT-enabled M&E in other sectors going this same route). Numerous small pilots may be problematic if the goal is to institutionalize mobile data gathering into M&E at the wider level and do a better job of supporting and strengthening large-scale systems.

4) Sometimes ICTs are not the answer, even if you want them to be – One presenter (who considered himself a tech enthusiast) went into careful detail about his organization’s process of deciding not to use tablets for a complex evaluation across 4 countries with multiple indicators. In the end, the evaluation itself was too complex, and the team was not able to find the right tool for the job. The organization looked at simple, mid-range and highly complex applications and tools and after testing them all, opted out. Each possible tool presented a set of challenges that meant the tool was not a vast improvement over paper-based data collection, and the up-front costs and training were too expensive and lengthy to make the switch to digital tools worthwhile. In addition, the team felt that face-to-face dynamics in the community and having access to notes and written observations in the margins of a paper survey would enable them to conduct a better evaluation. Some tablets are beginning to enable more interactivity and better design for surveys, but not yet in a way that made them a viable option for this evaluation. I liked how the organization went through a very thorough and in-depth process to make this decision.

Other colleagues also commented that the tech tools are still not quite 'there' yet for M&E. Even top-of-the-line business solutions are generally found to be somewhat clunky. Million-dollar models are not relevant for environments that development evaluators are working in; in addition to their high cost, they often have too many features or require too much training. There are some excellent mid-range tools that are designed for the environment, but many lack vital features such as availability in multiple languages. Simple tools that are more easily accessible and understandable without a lot of training are not sophisticated enough to conduct a large-scale data collection exercise. One person I talked with suggested that the private sector will eventually develop appropriate tools, and the not-for-profit sector will then adopt them. She felt that those of us who are interested in ICTs in M&E are slightly ahead of the curve and need to wait a few years until the tools are more widespread and common. Many people attending the Tech and M&E sessions at AfrEA made the point that use of ICTs in M&E would get easier and cheaper as the field develops, tools get more advanced/appropriate/user-friendly and widely tested, and networks/platforms/infrastructure improve in less-connected rural areas.

5) Need for documentation, evaluation and training on use of ICTs in M&E – Some evaluators felt that ICTs are only suitable for routine data collection as part of an ongoing program, but not good for large-scale evaluations. Others pointed out that the notions of ‘ICT for M&E’ and ‘mobile data collection/mobile surveys’ are often used interchangeably, and evaluation practitioners need to look at the multiple ways that ICTs can be used in the wider field of M&E. ICTs are not just useful for moving from paper surveys to mobile data gathering. An evaluator working on a number of RCTs mentioned that his group relies on ICTs for improving samples, reducing bias, and automatically checking data quality.

There was general agreement that M&E practitioners need resources, opportunities for more discussion, and capacity strengthening on the multiple ways that ICTs may be able to support M&E. One evaluator noted that civil society organizations have a tendency to rush into things, hit a brick wall, and then cross their arms and say, “well, this doesn’t work” (in this case, ICTs for M&E). With training and capacity, and as more experience and documentation is gained, he considered that ICTs could have a huge role in making M&E more efficient and effective.

One evaluator, however, questioned whether having better, cheaper, higher quality data is actually leading to better decisions and outcomes. Another evaluator asked for more evidence of what works, when, with whom and under what circumstances so that evaluators could make better decisions around use of ICTs in M&E. Some felt that a decision tree or list of considerations or key questions to think through when integrating ICTs into M&E would be helpful for practitioners. In general, it was agreed that ICTs can help overcome some of our old challenges, but that they inevitably bring new challenges. Rather than shy away from using ICTs, we should try to understand these new challenges and find ways to overcome/work around them. Though the mHealth field has done quite a bit of useful research, and documentation on digital data collection is growing, use of ICTs is still relatively unexplored in the wider evaluation space.

6) There is no simple answer. One of my takeaways from all the sessions was that many M&E specialists are carefully considering options, and thinking quite a lot about which ICTs for what, whom, when and where rather than deciding from the start that ICTs are ‘good and beneficial’ or ‘bad and not worth considering.’ This is really encouraging, and to be expected of a thoughtful group like this. I hope to participate in more discussions of this nature that dig into the nuances of introducing ICTs into M&E.

Some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward. The “field” of ICTs in M&E is quite broad, however, and there are many ways to slice the cake. Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that you think are missing!

(Part 2 of this post)

Previous posts on ICTs and M&E:


This is a cross-post from Tom Murphy, editor of the aid blog A View From the Cave. The original article can be found on Humanosphere. The post summarizes discussions at our November 21st New York City Technology Salon: Are Mobile Money Cash Grants the Future of Development? If you'd like to join us for future Salons, sign up here.

by Tom Murphy

Decades ago, some of the biggest NGOs simply gave away money to individuals in communities. People lined up and were just given cash.

The once-popular form of aid went out of fashion, but it is now making a comeback.

Over time, coordination became extremely difficult. Traveling from home to home cost time and money for the NGO, and the same problem existed for recipients when they had to go to a central location. More significant was the shift in development thinking that said giving handouts was causing long-term damage.

The backlash against 'welfare queens' in the US, UK and elsewhere during the 1980s was reflected in international development programming. The problem was that it was all based on unproven theories of change and anecdotal evidence, rather than hard evidence.

Decades later, new research shows that just giving people money can be an effective way to build assets and even incomes. The findings were covered by major players like NPR and the Economist.

While exciting and promising, cash transfers are not a new tool in the development utility belt.

Various forms of transfers have emerged over the past decade. Food vouchers were used by the World Food Programme when responding to the 2011 famine in the Horn of Africa. Like food stamps in the US, the vouchers let people buy food from local markets and get exactly what they needed while supporting the local economy.

The differences have sparked a sometimes heated debate within the development community as to what the findings about cash transfers mean going forward. A Technology Salon conversation, hosted at ThoughtWorks in New York City last week, featured some of the leading researchers and players in the cash transfer sector.

The salon-style conversation featured Columbia University's Chris Blattman, a popular aid blogger; GiveDirectly co-founder and UCSD researcher Paul Niehaus; and Plan USA CEO Tessie San Martin. The ensuing discussion, operating under the Chatham House Rule of no attribution, featured representatives from large NGOs, microfinance organizations and UN agencies.

Research from Kenya, Uganda and Liberia shows both the promise and shortcomings of cash transfers. For example, giving out cash in addition to training was successful in generating employment in Northern Uganda. Another program, with the backing of the Ugandan government, saw success with the cash alone.

Cash transfers have been put forward as the new benchmark for development and aid programs. Advocates in the discussion made the case that programs should be evaluated, in terms of impact and cost-effectiveness, against just giving people cash.

That idea saw some resistance. The research from Liberia, for example, showed that money given to street youth would not be wasted, but it was not sufficient to generate long-lasting employment or income. There are capacity problems and much larger issues that probably cannot be addressed by cash alone.

An additional concern is the unintended negative consequences caused by cash transfers. One example given was that of refugees in Syria, where money distributed to families was labeled for rent. Despite warnings not to label the transfer, the program went ahead.

As a result, rents increased, and the money intended to help cover the cost of rent was rendered largely useless. One participant raised the concern that cash transfers in such a setting could be 'taxed' by rebels or government fighters. There is a potential that aid organizations could help fund fighting by giving unrestricted cash.

The discussion made it clear that the applications of cash transfers are far more nuanced than they might appear. Kenya saw success in part because of the ease of sending money to people through mobile phones. Newer programs in India, for example, rely on what are essentially ATM cards.

Impacts, practitioners admitted, can go beyond simple incomes. Care has been taken to make sure that cash transfer programs do not dramatically change social structures in ways that cause problems for the community and recipients. In one case, giving women cash allowed them to participate in the local markets, a benefit to everyone except the existing shop oligarchs.

Governments in low and middle-income countries are seeing increasing pressure to establish social programs. The success of cash transfer programs in Brazil and Mexico indicates that they can be an effective way to lift people out of poverty. Testing is underway to bring about more efficient and context-appropriate cash transfer schemes.

An important component in the re-emergence of cash transfers is looking back at previous efforts, said one NGO official. That official's organization is systematically revisiting communities where it used to work in order to see what happened ten years later. The idea is to learn what impact the programs may or may not have had on those communities in order to inform future initiatives.

“Lots of people have concerns about cash, but we should have concerns about all the programs we are doing,” said a participant.

The lessons from the cash transfer research show that there is an increasing need for better evidence across development and aid programs. Researchers in the group argued that the ease of doing evaluations is improving.

Read the “Storified” version of the Technology Salon on Mobiles and Cash Transfers here.


This is a cross post from Tessie San Martin, CEO of Plan International USA. It was originally posted on the Plan USA blog, titled Old Roads to New Directions. We'll have Tessie, Chris Blattman and Paul Niehaus from GiveDirectly joining us in NYC for our November Technology Salon on Cash Transfers. More info on that soon!
There has been a lot of chatter in the mainstream media about unconditional cash transfers (UCTs) lately. See, for example, recent pieces in The New York Times and The Atlantic, and a much-discussed segment on NPR. Most media pieces also mentioned an organization called GiveDirectly that does just this. The idea, touted as an important innovation in development, is simplicity itself: give cash directly to poor people who need it, without strings.

GiveDirectly leverages the low costs of mobile money to deliver cash transfers to poor households in select African countries. Initial results are encouraging. The money is not being spent on “sin goods”. On the contrary, it is being – for the most part – directed into productive investment that helps these poor families get ahead.

It is worth noting the differences between UCTs and CCTs (conditional cash transfers). CCT programs provide cash payments to poor households, but they impose conditions on recipients before they get the money, mostly related to children's health care and education (e.g., enrolling the kids in school). UCTs impose no such conditions. This is why there is such enthusiasm about UCTs: 'no conditions' means such programs tend to be cheaper to administer. At least that is the theory. Note that UCTs and CCTs are similar in that neither places any conditions on how the money (once obtained) is spent.

This posting is focused on UCTs because of the current buzz around them. Although they are showing impressive results, let’s be realistic about the potential and limitations of UCTs. There is a lot that we do not know about the conditions under which UCT schemes lead to sustainable poverty reduction. Nor are we clear about how such programs can be scaled effectively. To the credit of organizations like GiveDirectly, they have partnered with Innovations for Poverty Action to carefully evaluate the results of their actions through rigorous randomized control trials.

It is worth noting that GiveDirectly is doing more than just sending cash to the poor; they are also spending resources on carefully identifying, evaluating and selecting beneficiaries, and on monitoring and evaluation. This leads me to the first of three points I think are worth making about UCTs.

First, the idea behind UCTs may be simple, but the more successful UCT schemes are complex. The “U” in UCTs does not mean that all you are doing is giving poor people money and stepping back. Research done by ODI and funded by the UK’s Department for International Development (DfID) suggests that UCTs work best when accompanied by information, education and communication efforts, careful targeting and selection of participants, and constant feedback and interaction. In other words, you need to consider who will be selected, what complementary efforts/services will enable and facilitate a good response, and you need to constantly invest in citizen feedback channels that allow you to learn and adapt as better information about program impact comes in. This is not much different than what a good INGO needs to do in order to deliver effective programming (UCT or not).

Second, the media coverage ignores how much variation exists among UCT schemes. As the World Bank's Berk Ozler has highlighted, there is a world of difference between "waking up one morning and finding $500 in your M-PESA account" (GiveDirectly) and the interventions being carried out in Liberia for unemployed youth, or what the DfID-funded ODI studies describe. Again, it is too early to tell what kinds of effects on poverty reduction we can expect from such schemes, and we are miles away from understanding how scheme design details are related to sustainable paths out of poverty.

This leads me to a third set of questions: for whom are UCTs working? How do program results compare in urban vs. rural areas, or for different income levels? We have years of data on CCTs, particularly a lot of data from Mexico, Brazil and other middle-income countries where these programs have been scaled up nationally. Yes, CCTs have problems (what development and social safety net programs do not?). But there is plenty of research demonstrating the conditions under which CCTs work. UCTs are much less well studied.

But the importance of these innovations, as Chris Blattman has already said, is that it forces (or should force) development organizations and donors to think about “top and bottom lines.” In other words, is what we are doing working? And even if it is working, at what cost? More importantly, we should always ask: are there other options for delivering the same (or similar) results more cost effectively?

As the CEO of a child sponsorship organization, I am drawn to the idea of UCTs. In fact, our initial child sponsorship efforts decades ago bear important similarities to today’s UCT programs. But Plan (like most other child sponsorship organizations) stepped away from such direct transfers, as concerns with sustainability and dependency grew. It is perhaps time to take a new look at the evidence around cash transfers, invest in reviewing results of past sponsorship programs and the lessons learned from that experience that may be applicable to a new generation of UCTs.

In the private sector, publicly quoted companies live and die by the share price, and the pressure to innovate and stay ahead is always present. For public charities like Plan, the rewards – and risks – of innovation are much less clear. But ignoring disruptive technologies and innovations, and failing to continuously push to experiment and learn will lead to irrelevancy. The jury may be out on UCTs, but they need to be taken seriously. GiveDirectly and others like it are pushing us all to do better.


At Catholic Relief Services' annual ICT4D meeting in March 2013, I worked with Jill Hannon from the Rockefeller Foundation's Evaluation Office to organize 3 sessions on the use of ICT for Monitoring and Evaluation (ICTME). The sessions covered the benefits (known and perceived) of using ICTs for M&E, the challenges and barriers organizations face when doing so, and some lessons and advice on how to integrate ICTs into the M&E process.

Our lead discussants in the three sessions included: Stella Luk (Dimagi), Guy Sharrack (CRS), Mike Matarasso (CRS), David McAfee (HNI/Datawinners), Mark Boots (Votomobile), and Teressa Trusty (USAID’s IDEA/Mobile Solutions). In addition, we drew from the experiences and expertise of some 60 people who attended our two round table sessions.

Benefits of integrating ICTs into the M&E process

Some of the potential benefits of integrating ICTs mentioned by the various discussants and participants in the sessions included:

  • More rigorous, higher quality data collection and more complete data
  • Reduction in required resources (time, human, money) to collect, aggregate and analyze data
  • Reduced complexity if data systems are simplified; thus increased productivity and efficiency
  • Combined information sources and types and integration of free form, qualitative data with quantitative data
  • Broader general feedback from a wider public via ICT tools like SMS; inclusion of new voices in the feedback process, elimination of the middleman to empower communities
  • Better cross-sections of information, information comparisons; better coordination and cross-comparing if standard, open formats are used
  • Trend-spotting with visualization tools
  • Greater data transparency and data visibility, easier data auditing
  • Real-time or near real-time feedback “up the chain” that enables quicker decision-making, adaptive management, improved allocation of limited resources based on real-time data, quicker communication of decisions/changes back to field-level staff, faster response to donors and better learning
  • Real-time feedback “down the ladder” that allows for direct citizen/beneficiary feedback, and complementing of formal M&E with other social monitoring approaches
  • Scale, greater data security and archiving, and less environmental impact
  • Better user experience for staff as well as skill enhancement and job marketability and competitiveness of staff who use the system

Barriers and challenges of integrating ICTs into M&E processes

A number of challenges and barriers were also identified, including:

  • A lack of organizational capacity to decide when to use ICTs in M&E, for what, and why, and deciding on the right ICT (if any) for the situation. Organizations may find it difficult to get beyond collecting the data to better use of data for decision-making and coordination. There is often low staff capacity, low uptake of ICT tools and resistance to change.
  • A tendency to focus on surveys and less attention to other types of M&E input, such as qualitative input. Scaling analysis of large-scale qualitative feedback is also a challenge: “How do you scale qualitative feedback to 10,000 people or more? People can give their feedback in a number of languages by voice. How do you mine that data?”
  • The temptation to offload excessive data collection to frontline staff without carefully selecting what data is actually going to be used and useful for them or for other decision-makers.
  • M&E is often tacked on at the end of a proposal design. The same is true for ICT. Both ICT and M&E need to be considered and “baked in” to a process from the very beginning.
  • ICT-based M&E systems have dropped the ball on sharing data back. "Clinics in Ghana collect a lot of information that gets aggregated and moved up the chain. What doesn't happen is sharing that information back with the clinic staff so that they can see what is happening in their own clinic and why. We need to do a better job of giving information back to people and closing the loop." This step is also important for accountability back to communities. On the whole, we need to be less extractive.
  • Available tools are not always exactly right, and no tool seems to provide everything an organization needs, making it difficult to choose the right tool. There are too many solutions, many of which are duplicative, and often the feature sets and the usability of these tools are both poor. There are issues with sustainability and ongoing maintenance and development of M&E platforms.
  • Common definitions for data types and standards for data formatting are needed. The lack of interoperability among ICT solutions also causes challenges. As a field, we don’t do enough linking of systems together to see a bigger picture of which programs are doing what, where and who they are impacting and how.
  • Security and privacy are not adequately addressed. Many organizations or technology providers are unaware of the ethical implications of collecting data via new tools and channels. Many organizations are unclear about the ethical standards for research versus information that is offered up by different constituents or "beneficiaries" (e.g., information provided by people participating in programs that use SMS or collect information through SMS-based surveys) versus monitoring and evaluation information. It is also unclear what the rules are for information collected by private companies, who this information can be shared with, and what privacy laws mean for ICT-enabled M&E and other types of data collection. If there are too many barriers to collecting information, however, the amount of information collected will be reduced; a balance needs to be found. The information that telecommunications companies hold is something to think about when considering privacy and consent issues, especially in situations of higher vulnerability and risk. (UNOCHA has recently released a report that may be useful.)
  • Not enough is understood about motivation and incentives for staff or community members to participate or share data. "Where does my information go? Do I see the results? Why should I participate? Is anyone responding to my input?" In addition, the common issues of cost, access, capacity, language, literacy, and cultural barriers are very much present in attempts to collect information directly from community members. Another question is that of inclusion: does ICT-enabled data collection or surveying leave certain groups out? (See this study on intrinsic vs extrinsic motivation for feedback.)
  • Donors often push or dictate the use of ICT when it’s perhaps not the most useful for the situation. In addition there is normally not enough time during proposal process for organizations to work on buy-in and good design of an ICT-enabled M&E system. There is often a demand from the top for excessive data collection without an understanding of the effort required to collect it, and time/resource trade-offs for excessive data collection when it leads to less time spent on program implementation. “People making decisions in the capital want to add all these new questions and information and that can be a challenge… What data are valuable to collect? Who will respond to them? Who will use them as the project goes forward?”
  • There seems to be a focus on top-down, externally created solutions rather than building on local systems and strengths or supporting local organizations or small businesses to strengthen their ICTME capacities. “Can strengthening local capacity be an objective in its own right? Are donors encouraging agencies to develop vertical ICTME solutions without strengthening local systems and partners?”
  • A results-based, data-driven focus can bias efforts toward the countable and leave out complex development processes whose impacts are more difficult to count or measure.

Lessons and good practice for integrating ICTs into M&E processes

ICT is not a silver bullet – it presents its own set of challenges. But a number of good practices surfaced:

  • The use of ICTs for M&E is not just a technology issue, it’s a people and processes issue too, and it is important to manage the change carefully. It’s also important to keep an open mind that ICT4D to support M&E might not always be the best use of scarce resources – there may be more pressing priorities for a project. Getting influential people on your side to support the cause and help leverage funding and support is critical. It’s also important to communicate goals and objectives clearly, and provide incentives to make sure ICTs are successfully adopted. The trick is keeping up with technology advances to improve the system, but also keeping your eye on the ball.
  • When designing an ICTME effort, clarity of purpose and a holistic picture of the project M&E system are needed in order to review options for where ICT4D can best fit. Don’t start with the technology. Start with the M&E purpose and goals and focus on the business need, not the gadgets. Have a detailed understanding of M&E data requirements and data flows as a first step. Follow those with iterative discussions with ICT staff to specify the ICT4D solution requirements.
  • Select an important but modest project to start with and pilot in one location – get it right and work out the glitches before expanding to a second tier of pilots or expanding widely. Have a fully functional model to share for broad buy-in and collect some hard data during the pilot to convince people of adoption. The first ICT initiative will be the most important.  If it is successful, use of ICTs will likely spread throughout an organization.  If the first initiative fails, it can significantly push back the adoption of ICTs in general. For this reason, it’s important to use your best people for the first effort. Teamwork and/or new skill sets may be required to improve ICT-enabled M&E. The “ICT4D 2.0 Manifesto” talks about a tribrid set of skills needed for ICT-enabled programs.
  • Don’t underestimate the need for staff training and ongoing technical assistance to ensure a positive user experience, particularly when starting out. Agencies need to find the right balance between being able to provide support for a limited number of ICT solutions versus the need to support ongoing local innovation.  It’s also important to ask for help when needed.  The most successful M&E projects are led by competent managers who seek out resources both inside and outside their organizations.
  • Good ICT-enabled M&E comes from a partnership between program, M&E and ICT staff, together with technical support internal and external to the organization. Having a solid training curriculum and a good help desk are important. In addition, in-built capacity for original architecture design and for maintaining and adjusting the system is a good idea. A lead business owner and manager for the system need to be in place, as well as global and local level pioneers and strong leadership (with budget!) to do testing and piloting. At the local level, it is important to have an energetic and savvy local M&E pioneer who has a high level of patience and understands technology.
  • At the community level, a key piece is understanding who you need to hear from for effective M&E and ensuring that ICT tools are accessible to all. It’s also critical to understand who you are ignoring or not reaching with any tool or process. Are women and children left out? What about income level? Those who are not literate?
  • Organizations should also take care that they are not replacing or obliterating existing human responsibilities for evaluation. For example, at the community level in Ghana, Assembly Members currently have the responsibility for representing citizen concerns. An ICT-enabled feedback loop might undermine this responsibility if it seeks direct-from-citizen evaluation input. The issues of trust and the human-human link also need consideration. ICT cannot and should not be a replacement for everything. New ICT tools can increase the number of people and factors evaluated, not just increase the efficiency of existing evaluations.
  • Along the same lines, it’s important not to duplicate existing information systems, create parallel systems or fragment the government’s own systems. Organizations should be strengthening local government systems and working with government to use the information to inform policy and help with decision-making and implementation of programs.
  • Implementers need to think about the direction of information flow. "Is it valuable to share results 'upward' and 'downward'? It is possible to integrate local decision-making into a system." Systems can be created that allow for immediate local-level decision-making based on survey input. Key survey questions can be linked to indicators that allow for immediate discussion and solutions to improve service provision (see the sketch after this list).
  • Also, the potential political and social implications of greater openness in information flows needs to be considered. Will local, regional and national government embrace the openness and transparency that ICTs offer? Are donors and NGOs potentially putting people at risk?
  • For best results, pick a feasible and limited number of quality indicators and think through how frontline workers will be motivated to collect the data. Excessive data collection will interfere with or impede service delivery. Make sure managers are capable of handling and analyzing data that comes in and reacting to it, or there is no point in collecting it. It’s important to not only think about what data you want, but how this data will be used. Real-time data collected needs to be actionable. Be sure that those submitting data understand what data they have submitted and can verify its accuracy. Mobile data collection needs to be integrated into real processes and feedback loops. People will only submit information or reports if they see that someone cares about those reports and does something about them.
  • Collecting data through mobile technology may change the behavior being monitored or tracked. One participant commented that when his organization implemented an ICT-based system to track staff performance, people started doing unnecessary activities so that they could tick off the system boxes rather than doing what they knew should be done for better program impact.
  • At the practical level, tips include having robust options for connectivity and power solutions, testing the technology in the field in a real situation, securing reduced costs from vendors through bulk purchasing and master agreements, and using standard vendor tools instead of custom building. It's good to keep the system as simple, efficient and effective as possible and to avoid redundancy or the addition of features that don't truly offer more functionality.
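As flagged in the point on information flow above, here is a toy sketch of linking key survey questions to indicator thresholds that trigger immediate local-level discussion. The questions, cutoffs and aggregation rules are all invented for illustration.

```python
# Toy sketch: link survey questions to indicator thresholds that
# trigger immediate local discussion. Questions and cutoffs invented.
THRESHOLDS = {
    "clinic_wait_minutes": 60,    # flag if average wait exceeds 60 minutes
    "stockout_reported": 0.2,     # flag if >20% of 0/1 responses report stockouts
}

def flag_indicators(responses):
    """responses: list of dicts keyed by question id."""
    flags = []
    for question, cutoff in THRESHOLDS.items():
        vals = [r[question] for r in responses if question in r]
        if not vals:
            continue
        score = sum(vals) / len(vals)  # mean for numbers, share for 0/1 answers
        if score > cutoff:
            flags.append(f"{question}: {score:.2f} exceeds {cutoff} - discuss locally")
    return flags

print(flag_indicators([
    {"clinic_wait_minutes": 75, "stockout_reported": 0},
    {"clinic_wait_minutes": 90, "stockout_reported": 1},
]))
```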

Thanks to all our participants and lead discussants at the sessions!

Useful information and guides on ICTME:

Mobile-based technology for monitoring and evaluation: A reference guide for project managers, M&E specialists, researchers, donors

3 Reports on mobile data collection

Other posts on ICTs for M&E:

12 tips on using ICTs for social monitoring and accountability

11 points on strengthening local capacity to use new ICTs for M&E

10 tips on using new ICTs for qualitative M&E

Using participatory video for M&E

ICTs and M&E at the South Asia Evaluators’ Conclave

