
Posts Tagged ‘monitoring and evaluation’

Today as we jump into the M&E Tech conference in DC (we’ll also have a Deep Dive on the same topic in NYC next week), I’m excited to share a report I’ve been working on for the past year or so with Michael Bamberger: Emerging Opportunities in a Tech-Enabled World.

The past few years have seen dramatic advances in the use of hand-held devices (phones and tablets) for program monitoring and for survey data collection. Progress has been slower with respect to the application of ICT-enabled devices for program evaluation, but this is clearly the next frontier.

In the paper, we review how ICT-enabled technologies are already being applied in program monitoring and in survey research. We also review areas where ICTs are starting to be applied in program evaluation and identify new areas where they could potentially be applied. The technologies discussed include hand-held devices for quantitative and qualitative data collection and analysis, data quality control, GPS and mapping devices, environmental monitoring, satellite imaging, and big data.

While the technological advances and the rapidly falling costs of data collection and analysis are opening up exciting new opportunities for monitoring and evaluation, the paper also cautions that more attention should be paid to the basic quality control questions that evaluators normally ask about the representativity of data, selection bias, data quality, and construct validity. The ability to use techniques such as crowdsourcing to generate information and feedback from tens of thousands of respondents has so fascinated researchers that concerns about the representativity or quality of the responses have received less attention than they do with conventional instruments for data collection and analysis.
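
To make the representativity concern concrete, here is a minimal sketch, in Python with entirely made-up numbers (not from the paper), of the kind of check an evaluator might run on a crowd-sourced sample: compare the sample's composition against census shares and derive post-stratification weights.

```python
# Illustrative check of crowd-sourced sample representativity.
# All group names, shares, and counts are invented for demonstration.

# Population shares for a grouping variable (e.g., from a census).
population_shares = {"urban": 0.35, "rural": 0.65}

# Respondents in the crowd-sourced sample, by the same groups.
sample_counts = {"urban": 8200, "rural": 1800}
total_responses = sum(sample_counts.values())

for group, pop_share in population_shares.items():
    sample_share = sample_counts[group] / total_responses
    # Post-stratification weight: up/down-weights each response so the
    # weighted sample matches the population distribution.
    weight = pop_share / sample_share
    print(f"{group}: sample {sample_share:.0%} vs population {pop_share:.0%}"
          f" -> weight {weight:.2f}")
```

Weighting like this can correct a known imbalance, but it cannot fix selection effects you never observe, which is exactly the paper's caution.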

Some of the challenges include: selection bias and weak sample design; M&E processes driven by the requirements of the technology; over-reliance on simple quantitative data; low institutional capacity to introduce ICTs and resistance to change; and privacy concerns.

None of this is intended to discourage the introduction of these technologies, as the authors fully recognize their huge potential. One of the most exciting areas concerns the promotion of a more equitable society through simple and cost-effective monitoring and evaluation systems that give voice to previously excluded sectors of the target populations and that offer opportunities for promoting gender equality in access to information. The application of these technologies, however, needs to be on a sound methodological footing.

The last section of the paper offers tips and ideas on how to integrate ICTs into M&E practice and flags potential pitfalls to avoid. Many of these were drawn from Salons and discussions with practitioners, given that there is little solid documentation or evidence related to the use of ICTs for M&E.

Download the full paper here! 


New technologies are changing the nature of monitoring and evaluation, as discussed in our previous Salon on the use of ICTs in M&E. However, the use of new technologies in M&E efforts can seem daunting or irrelevant to those working in low-resource settings, especially if there is little experience or low existing capacity with these new tools and approaches.

What is the role of donors and other intermediaries in strengthening local capacity in communities and development partners to use new technologies to enhance monitoring and evaluation efforts?

On August 30, the Rockefeller Foundation and the Community Systems Foundation (CSF) joined up with the Technology Salon NYC to host the second in a series of 3 Salons on the use of ICTs in monitoring and evaluating development outcomes and to discuss just this question. Our lead discussants were: Revati Prasad from Internews, Tom O’Connell from UNICEF and Jake Watson from the International Rescue Committee. (Thanks Jake for stepping in at the last minute!)

We started off with the comment that "Many of us are faced with the 'I' word – in other words, having to demonstrate impact on the ground. But how can we do that if we are 4 levels removed from where change is happening?" How can organizations and donors, or those sitting in offices in Washington, DC or New York City, support grantees and local offices to feed back more quickly and more accurately? From this question, the conversation flowed in a number of directions, generating the suggestions below.

1) Determine what works locally

Donors shouldn’t be coming in to say “here’s what works.” Instead, they should be creating local environments for innovation. Rather than pushing things down to people, we need to start thinking from the eyes of the community and incorporate that into how we think and what we do. One participant confirmed that idea with a concrete example: “We went in with ideas – wouldn’t SMS be great… but it became clear that SMS was not the right tool; it was voice. So we worked to establish a hotline. This has connected [the population] with services; it also connects with a database that came from [their] own needs, and it tracks what they want to track.” As discussed in the last Salon, however, incentive and motivation are critical. “Early on, even though indicators were set by the community, there was no direct incentive to report.” Once the hotline’s call center connected reporting to access to services, people were more motivated to report.

2) Produce local, not national-level information

If you want to leverage technology for local decision-making, you need local-level information, not broad national-level information. You also need to recognize that the data will be messy. As one participant said, we need to get away from the idea of ‘imperfect’ data and instead ask: is the information good enough to enable us to reach that child who wasn’t reached before? We need to stop thinking of knowledge as discrete chunks that endure for 3-4 years. We are actually processing information all the time. We can help managers think of information as something to filter and use constantly, and we can give them tools to filter information, create simpler dashboards, see bottlenecks, and combine different channels of information to make decisions.
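
As a toy illustration of that ‘filter and combine’ idea, the sketch below merges two hypothetical local reporting channels and flags facilities where coverage falls below a threshold – the kind of simple, local bottleneck view a manager could act on. The file names, column layout, and 80% threshold are all invented.

```python
import csv

# Hypothetical inputs, one row per facility from two reporting channels:
#   registrations.csv: facility,children_registered
#   visits.csv:        facility,children_reached
def load(path, value_col):
    with open(path, newline="") as f:
        return {row["facility"]: int(row[value_col])
                for row in csv.DictReader(f)}

registered = load("registrations.csv", "children_registered")
reached = load("visits.csv", "children_reached")

# Flag bottlenecks: facilities reaching fewer than 80% of registered children.
for facility, total in sorted(registered.items()):
    coverage = reached.get(facility, 0) / total if total else 0.0
    flag = "BOTTLENECK" if coverage < 0.80 else "ok"
    print(f"{facility}: {coverage:.0%} of registered children reached [{flag}]")
```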

3) Remember why you are using ICTs in M&E

We should be doing M&E in order to achieve better results and leveraging technologies to achieve better impact for communities. Often, however, we end up doing it for the donor. “Donors get really excited about this multicolored thing with 50,000 graphs, but the guy on the ground doesn’t use a bit of it. We need to let go,” commented one participant. “I don’t need to know what the district manager knows. I need to know that he or she has a system in place that works for him or her. My job is to support local staff to have that system working. We need to focus on helping people do their jobs.”

4) Excel might be your ‘killer app’

Worldwide, the range of capacities is huge. Sometimes ICT sounds very sexy, but the greatest success might be teaching people how to use Excel, how to use databases to track human rights violations and domestic violence, or how to set up a front end and a data entry system in a local language.
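
In that spirit, ‘appropriate technology’ can be as simple as producing files staff already know how to open. Below is a minimal sketch, assuming the openpyxl Python library and invented case fields, of generating an Excel case log programmatically.

```python
# Minimal sketch: write case records to an Excel workbook with openpyxl.
# The field names and rows are invented for illustration.
from openpyxl import Workbook

records = [
    ("2012-08-01", "District A", "domestic violence", "referred"),
    ("2012-08-03", "District B", "rights violation", "under review"),
]

wb = Workbook()
ws = wb.active
ws.title = "Case log"
ws.append(["date", "location", "case type", "status"])  # header row
for record in records:
    ws.append(record)

wb.save("case_log.xlsx")  # opens directly in Excel
```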

5) Technology capacity doesn’t equal M&E capacity

One participant noted that her organization is working with a technology hub that has very good tech skills but lacks capacity in development and M&E. Their work over the past year has been less about using technology and more about working with the hub to develop these other capacities: how to conduct focus groups, surveys, and network analysis, and how to develop toolkits and guides. There’s often excitement on the ground – ‘We can get data in 48 hours! Wow! Let’s go!’ However, creating good M&E surveys to be used via technology tools is difficult. One participant noted that finding local expertise in this area is not easy, especially considering staff turnover. “We don’t always have M&E experts on the ground.” In addition, “there is an art to polls and survey trees, especially when trying to take them from English into other languages. How do you write a primer for staff to create meaningful questions?”
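
To see why the ‘art of polls and survey trees’ gets harder across languages, here is a minimal sketch of a branching poll represented as plain data. Every question, answer code, and translation is a placeholder (not from the Salon), but it shows how each branch multiplies the translation and testing work.

```python
# Minimal sketch of a two-language survey tree: each node carries the
# question text per language plus the node each answer leads to.
# All questions, codes, and translations are invented for illustration.
survey = {
    "q1": {
        "text": {"en": "Did you visit the clinic this month? (1=yes 2=no)",
                 "fr": "Avez-vous visité la clinique ce mois-ci ? (1=oui 2=non)"},
        "next": {"1": "q2", "2": "end"},
    },
    "q2": {
        "text": {"en": "Were you charged a fee? (1=yes 2=no)",
                 "fr": "Avez-vous payé des frais ? (1=oui 2=non)"},
        "next": {"1": "end", "2": "end"},
    },
}

def run(lang):
    node = "q1"
    while node != "end":
        answer = input(survey[node]["text"][lang] + " ").strip()
        # An unrecognized answer repeats the current question instead of crashing.
        node = survey[node]["next"].get(answer, node)

# run("fr")  # every branch must be translated and re-tested per language
```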

6) Find the best level for ICTs to support the process

ICTs are not always the best tool at the community or district level, given issues of access, literacy, capacity, connectivity, electricity, etc. Participants instead mentioned working in blended ways, e.g., doing traditional data collection and using ICTs to analyze the data, compile it, produce localized reports, and work with the community to interpret the information for better decision-making. Others use hand-drawn maps, examine issues from the community angle, and then incorporate that into digital literacy and expression work, using new technology tools to tell and document the communities’ stories.

7) Discover the shadow systems and edge of network

One participant noted that people will comply and move data through the system as requested from on high, but they simultaneously develop their own ways of tracking information that are actually useful to them. By discovering these ‘shadow systems’, you can see what is really useful. This ‘edge of network’ is where people with whom headquarters doesn’t have contact live and work. We rely on much of their information to build M&E systems, yet we don’t consult and work with them often enough. Understanding this ‘edge of network’ is critical to designing and developing good M&E systems and supporting local-level M&E for better information and decision-making.

8) The devil is in the details

There are many M&E tools to choose from, and each has its pros and cons. Participants mentioned KoBo, RapidSMS, Nokia Data Gathering, FrontlineSMS, and EpiSurveyor. While there is a benefit to getting cleaner data in real time, there will always be post-processing tasks. The data can, however, be thrown on a dashboard for better decision-making. Still, challenges exist. For example, in Haiti, as one participant commented, there is a 10% electrification rate, so solar is required. “It’s difficult to get a local number with Clickatell [an SMS gateway]; you can only get an international number, and getting a local one is very complicated. If you go that route, you need a project coordinator. And if you are using SMS, how do you top up the beneficiaries so that they can reply? The few pennies it costs for people to reply are a deterrent. Yet working with telecom providers is very time-consuming and expensive in any country. Training local staff is an issue – trying to train everyone on the ICT package that you are giving them. You can’t take anything for granted. People usually don’t have experience with these systems.” Literacy is another stumbling block, so some organizations are looking at Interactive Voice Response (IVR) and trying to build a way for it to be rapidly deployed.
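
On the post-processing point: even when an SMS tool delivers data in real time, replies arrive as free text and must be normalized before they can feed a dashboard. A small sketch, with hypothetical reply formats and coding rules:

```python
# Hypothetical post-processing of raw SMS survey replies into clean values.
# The accepted tokens and the manual-review rule are invented for illustration.
YES = {"y", "yes", "oui", "1"}
NO = {"n", "no", "non", "2"}

def normalize(reply):
    token = reply.strip().lower().rstrip(".!")
    if token in YES:
        return True
    if token in NO:
        return False
    return None  # unparseable: queue for manual review

for raw in ["YES", " oui ", "2", "maybe next week"]:
    print(repr(raw), "->", normalize(raw))
```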

9) Who is the M&E for?

Results are one thing, but as one participant noted, “part of results measuring means engaging communities in saying whether the results are good for them.” Another participant commented that Ushahidi maps are great and donors love them. But in CAR, for example, there is 1% internet penetration and maybe 9% of the people text. “If you are creating a crisis map about the incidence of violence, your humanitarian actors may access it, and it may improve service delivery, but it is in no way useful for people on the ground. There is reliance on technology, but how to make it useful for local communities is still the big question…. It’s hard to talk about citizen engagement and citizen awareness if you are not reaching citizens because they don’t have access to technology.” And “what about the opportunity cost for the poor?” asked one participant. “Time is restricted. CSOs push things down to the people least able to use the time for participation. There is a cost to participation, yet we assume participation is a global good. The poorest are really scraping for time and resources. ‘Who is the data for?’ is still a huge question. Often it’s ‘here’s what we’re going to do for you’ rather than meeting with people first, asking what’s wrong, then listening and asking what they would like to do about it, and listening some more.”

10) Reaching the ‘unreachable’

Reaching and engaging the poorest is still difficult, and the truly unreached will require very different approaches. “We’re really very much spoke-to-hub,” said one participant. “This is not enough. How can we innovate and resolve this?” Another emphasized the need to find out who’s not part of the conversation – who is left out or not present when these community discussions take place. “You might find out that adolescent girls with mobility issues are not there. You can ask those with whom you are consulting if they know of someone who is not at the meeting. You need to figure out how to reach the invisible members of the community.” However, as noted, “we also have to protect them. Sometimes identifying people can expose them. There is no clear answer.”

11) Innovation or building on what’s already there?

So will INGOs and donors continue to try to adapt old survey ideas to new technology tools? And will this approach survive much longer? “Aren’t we mostly looking for information that we can act on? Are we going to keep sending teams out all the time, or will we begin to work with information we can access differently? Can we release ourselves from that dependence on survey teams?” Some felt that ‘data exhaust’ might be one way of getting information differently, for example an approach like Google Flu Trends. But others noted the difficulty of getting information from non-online populations, who are the majority. In addition, with these new ICT-based methods, there is still a question about representativeness and coverage. Integrated approaches, where ICTs are married with traditional methods, seem to be the key. This raises the question: “Is innovation really better than building up what’s already there?” as one participant commented. “We need to ask – does it add value? Is it better than what is already there? If it does add perceived value locally, then how do we ensure that it comes to some kind of result? We need to keep our eye on the results we want to achieve. We need to be more results-oriented and do reality checks. We need to constantly ask ourselves: are we listening to folks?”

In conclusion

There is much to think about in this emerging area of ICTs and monitoring and evaluation. Join us for the third Salon in the series on October 17, where we’ll continue the discussion. If you are not yet on the Technology Salon mailing list, you can sign up here. A summary of the first Salon in the series is here. (A summary of the October 17th Salon is here.)

Salons are run under the Chatham House Rule, so no attribution has been made.


I was chatting with my pal Ernst Suur from War Child Holland and he alerted me to this instructional video they’ve made about using participatory video for monitoring and evaluation in Northern Uganda.

“A PhD student working on video diaries with children commented: ‘What fascinates me is that there is a whole lot of data that they can give me, that they can share, but I cannot always see what it is, or know how to ask for it, or know it exists, and that is what the video diary does.'”

“Video diaries can enhance our transparency, as youth can communicate directly with a wide variety of people, providing field workers, project officers, headquarters staff and even donors with tangible insights into progress made at the individual level.”

Take a look. It’s worth it.

Hint: Flip video cameras are fantastic for filming, but the Flip video format is incompatible with a lot of (most?) video editing software. According to Ernst, if you’re using Flip video cameras and need to edit, Windows Live Movie Maker is the way to go.
