
Posts Tagged ‘evaluator’

Earlier this month I attended the African Evaluators’ Conference (AfrEA) in Cameroon as part of the Technology and Evaluation stream organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation.

A first post about ICTs and M&E at the AfrEA Conference went into some of the deliberations around using or not using ICTs and how we can learn and share more as institutions and evaluators. I’ve written previously about barriers and challenges with using ICTs in M&E of international development programs (see the list of posts at the bottom of this one). Many of those same conversations came up at AfrEA, so I won’t repeat them here. What I did want to capture and share were a few interesting design and implementation thoughts from the various ICT and M&E sessions. Here goes:

1) Asking questions via ICT may lead to more honest answers. Some populations are still not familiar with smartphones and tablets, and while this makes some people shy and quiet, it makes others more curious and animated to participate. Some people worry that mobiles, laptops and tablets create distance between the enumerator and the person participating in a survey. On the other hand, I’m hearing more and more examples of cases where using ICTs for surveying actually allows for a greater sense of personal privacy and more honest answers. I first heard about this several years ago in relation to children and youth in the US and Canada seeking psychological or reproductive health counseling. They seemed to feel more comfortable asking questions about sensitive issues via online chat (as opposed to asking a counselor or doctor face-to-face) because they felt anonymous. The same is true for telephone inquiries.

In the case of evaluations, someone suggested that rather than creating distance, a mobile or tablet can actually create an opportunity for privacy. For example, if a sensitive question comes up in a survey, an enumerator can hand the device to the person being interviewed and look away while they select their answer and hit enter, in the same way that waiters in some countries will swipe your ATM card and politely look away while you enter your PIN. The key is building people’s trust in these methods so they can be sure their answers are secure.

At a Salon on Feb 28, I heard about mobile polling being used to ask men in the Democratic Republic of Congo about sexual assault against men. The recorded affirmative rate was higher when the question was answered via a mobile survey than when it had been asked in other settings or through other means. This makes sense: when a reporter or surveyor comes around asking whether men have been victims of rape, no one wants to answer publicly. In a situation of violence, it is impossible to know whether a perpetrator might be standing in the crowd watching someone being interviewed, and shame and stigma also prevent people from answering openly.

Another example at the AfrEA Tech Salon was a comparison study done by an organization in a slum area of Accra. Five enumerators who spoke local languages conducted Water, Sanitation and Hygiene (WASH) surveys by mobile phone using Open Data Kit (an open source survey application), and the responses were compared with the same survey done on paper. When people were asked in person by enumerators whether they defecated outdoors, affirmative answers were very low. When people were asked the same question via a voice-based mobile phone survey, 26% of respondents reported open defecation.

2) Risk of collecting GPS coordinates. We had a short discussion on the pluses and minuses of using GPS and collecting geolocation data in monitoring and evaluation. One issue that came up was safety for enumerators who carry GPS devices. Some people highlighted that GPS devices can put staff and enumerators at risk of abuse from organized crime groups, military groups, or government authorities, especially in areas with high levels of conflict and violence. This makes me think that if geographic information is needed in these cases, it might be better to use a mobile phone application that collects GPS than a fancy smartphone or a dedicated GPS unit (for example, one could try out PoiMapper, which works on feature phones).

In addition, evaluators emphasized that we need to think through whether GPS data is really necessary at the household level. It is tempting to collect all the information we possibly can, but we can never truly assure anyone that their information will not be de-anonymized somehow in the near or distant future, and in extremely high-risk areas this can be a huge danger. Many organizations do not have high-level security for their data, so it may be better to collect community- or district-level data than household locations. Some evaluators said they use ‘tricks’ to anonymize the geographical data, like pinpointing the location a few miles off, but others felt this was not nearly enough to guarantee anonymity.
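To make the jittering ‘trick’ concrete, here’s a quick Python sketch (entirely hypothetical, not any organization’s actual method) that offsets a household point by a random distance and bearing. As noted above, this alone is not nearly enough to guarantee anonymity:

```python
import math
import random

EARTH_RADIUS_KM = 6371.0

def jitter_coordinates(lat, lon, max_offset_km=3.0):
    """Offset a point by a random distance and bearing up to max_offset_km.

    Caution: jittering alone does not guarantee anonymity. Combined with
    village names, timestamps or repeat visits, points can often be
    re-identified, which is why some evaluators felt this was insufficient.
    """
    distance_km = random.uniform(0, max_offset_km)
    bearing = random.uniform(0, 2 * math.pi)
    dlat = math.degrees((distance_km / EARTH_RADIUS_KM) * math.cos(bearing))
    dlon = math.degrees((distance_km / EARTH_RADIUS_KM) * math.sin(bearing)
                        / math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# e.g., blur a household location near Accra to community-level precision
print(jitter_coordinates(5.6037, -0.1870))
```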

3) Devices can create unforeseen operational challenges at the micro-level. One organization working in rural Ghana to collect feedback about government performance via mobile phone surveys (“press 1 for…”) found that some phones were set to lock when a call was answered. People were pressing buttons to respond, but their answers did not register because the phone was locked, or registered incorrectly because the person was entering a PIN to unlock the phone. Others noted that when planning training for enumerators or community members who will use their own devices for data collection, we cannot forget that every model of phone is slightly different. This adds quite a lot of time to the training, as each model needs to be explained to trainees. (There are a huge number of other challenges related to devices, but these were two that I had not thought of before.)
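As an illustration of the locked-phone problem, here’s a tiny hypothetical Python sketch of how an IVR backend might guard against it, treating anything other than a single valid keypress (such as a multi-digit PIN entered to unlock the phone) as a non-response to re-prompt rather than an answer to record:

```python
VALID_CHOICES = {"1", "2", "3"}  # e.g., 1 = yes, 2 = no, 3 = repeat question

def interpret_keypress(digits):
    """Interpret the DTMF digits received for one IVR survey question.

    A locked phone may send nothing at all, or send an unlock sequence
    like "4852"; rejecting anything outside the valid single-digit set
    keeps such input from being recorded as an answer.
    """
    if len(digits) == 1 and digits in VALID_CHOICES:
        return {"status": "answered", "choice": digits}
    return {"status": "reprompt", "choice": None}

assert interpret_keypress("1") == {"status": "answered", "choice": "1"}
assert interpret_keypress("4852") == {"status": "reprompt", "choice": None}
```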

4) Motivation in the case of poor capacity to respond. An organization interested in tracking violence in a highly volatile area wanted to collect reports of violence, but did not have a way to ensure that an INGO, humanitarian organization or government authority would respond if and when violence was reported. This is a known issue: it is difficult to encourage reporting when responsiveness is low. To keep people engaged, this organization thanks people immediately for reporting and then sends peace messages and encouragement two to three times per week. Participants in the program have appreciated these ongoing messages, and participation has remained steady even though reporting has not resulted in immediate help.

5) Mirroring physical processes with tech. One way to help digital tools gain acceptance and to make them more user-friendly is to design them to mirror paper processes or other physical processes that people are already familiar with. For example, one organization shared its design process for a mobile application for village savings and loan (VSL) groups. Because security is a big concern among VSL members, groups typically keep cash in a box with three padlocks, and the three elected key-holders must be present and agree to open the box in order to conduct any transaction. To mimic this, the VSL mobile application requires three PINs to access mobile money or make transactions. What’s more, the app sends everyone in the VSL group an SMS notification when the three PIN-holders carry out a transaction, making the mobile app even more secure than the original physical lock-box: everyone knows what is happening with the money at all times.
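As a rough sketch of how that three-PIN rule might look in code (my own hypothetical design, not the actual application), the logic below releases funds only when all three officer PINs are entered, and then notifies every group member by SMS:

```python
import hashlib

def _hash_pin(pin):
    # PINs should never be stored in plain text; hashing them is a minimum.
    return hashlib.sha256(pin.encode()).hexdigest()

class VslAccount:
    """Hypothetical sketch of the three-PIN village savings and loan rule."""

    def __init__(self, officer_pins, member_phones):
        self.officer_pin_hashes = {_hash_pin(p) for p in officer_pins}
        self.member_phones = member_phones  # everyone is notified, always

    def transact(self, amount, entered_pins, send_sms):
        # All three elected officers must enter their PINs, mirroring the
        # three physical padlocks on the cash box.
        if not self.officer_pin_hashes <= {_hash_pin(p) for p in entered_pins}:
            raise PermissionError("all three officer PINs are required")
        for phone in self.member_phones:
            send_sms(phone, f"Group transaction of {amount} approved by 3 officers")
```

The SMS broadcast is what makes the digital version stronger than the lock-box: the padlocks only controlled access, while the notifications also create a shared audit trail.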

****

As I mentioned in part 1 of this post, some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward.

Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that are missing!

Previous posts on ICTs and M&E on this blog:


I attended the African Evaluators’ Conference (AfrEA) in Cameroon last week as part of the Technology and Evaluation strand organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation. The strand was a fantastic opportunity for learning, sharing and understanding more about the context, possibilities and realities of using ICTs in monitoring and evaluation (M&E). We heard from a variety of evaluators, development practitioners, researchers, tool-developers, donors, and private sector and government folks. Judging by the well-attended sessions, there is a huge amount of interest in ICTs and M&E.

Rather than repeat what I’ve written in other posts (see links at the bottom), I’ll focus here on some of the more relevant, interesting, and/or new information from the AfrEA discussions. This first post will go into institutional issues and the ‘field’ of ICTs and M&E. A second post will cover design and operational tips I learned or was reminded of at AfrEA.

1) We tend to get stuck on data collection – Like other areas (I’m looking at you, Open Data), conversations tend to revolve around collecting data. We need to get beyond that and think more about why we are collecting data and what we are going to do with it (and whether we really need all of it). The evaluation field also needs to explore all the other ways it could be using ICTs for M&E, beyond mobile phones and surveys. Collecting data is clearly a necessary part of M&E, but those data still need to be analyzed. As a participant from a data visualization firm said, there are so many ways you can use ICTs: to make sense of things, tag sentiment, visualize data and make data-based decisions. Others mentioned that ICTs can help us share data with various stakeholders, improve sampling in RCTs (randomized controlled trials), conduct quality checks on massive data sets, and manage staff who are working on data collection. With big data, we can do analyses we never could have imagined before. We can open and share our data, and stop collecting the same data from the same people multiple times. We can use ICTs to share what we’ve learned back with evaluation stakeholders, governments, the public, and donors. The range of uses of ICTs is huge, yet the discussion tends to get stuck on mobile surveys and data collection, and we need to start thinking beyond that.

2) ICTs are changing how programs are implemented and how M&E is done — When a program already uses ICTs, data collection can be built into the digital device itself (e.g., tracking user behavior, cookies, and tests and quizzes), as one evaluator working on tech and education programs noted. As more programs integrate digital tools, it may become easier to collect monitoring and evaluation data with less effort. Along those lines, an evaluator looking at a large-scale mobile-based agricultural information system asked about approaches to conducting M&E that do not rely on enumerators and traditional methods. In his program, because the farmers who signed up for the mobile information service do not live in the same geographical community, traditional M&E approaches are not feasible and ICT-based approaches look like a logical answer. There is little documentation within the international development evaluation community, however, on how an evaluator might design an evaluation in this type of situation. (I am guessing there may be some insights from market research, and possibly from the transparency and accountability sectors and people working on “feedback loops”.)

3) Moving beyond one-off efforts — Some people noted that mobile data gathering is still done mostly at the project level. Efforts tend to be short-term and one-off, and the data collected are not well integrated into management information systems or national-level processes. (Here we may reference the infamous map of mHealth pilots in Uganda, and note the possibility of ICT-enabled M&E in other sectors going the same route.) Numerous small pilots may be problematic if the goal is to institutionalize mobile data gathering into M&E more widely and do a better job of supporting and strengthening large-scale systems.

4) Sometimes ICTs are not the answer, even if you want them to be – One presenter (a self-described tech enthusiast) went into careful detail about his organization’s process of deciding not to use tablets for a complex evaluation across four countries with multiple indicators. In the end, the evaluation itself was too complex, and the team was not able to find the right tool for the job. The organization looked at simple, mid-range and highly complex applications and tools, and after testing them all, decided against every one. Each tool presented challenges that meant it was not a clear improvement over paper-based data collection, and the up-front costs and training were too expensive and lengthy to make the switch to digital worthwhile. In addition, the team felt that face-to-face dynamics in the community, and having access to notes and written observations in the margins of a paper survey, would enable them to conduct a better evaluation. Some tablets are beginning to enable more interactivity and better survey design, but not yet in a way that made them a viable option for this evaluation. I liked how thorough and in-depth the organization’s decision-making process was.

Other colleagues also commented that the tech tools are still not quite ‘there’ yet for M&E. Even top-of-the-line business solutions are generally somewhat clunky. Million-dollar models are not relevant to the environments development evaluators work in; in addition to their high cost, they often have too many features or require too much training. There are some excellent mid-range tools designed for these environments, but many lack vital features such as availability in multiple languages. Simple tools that are accessible and understandable without a lot of training are not sophisticated enough for a large-scale data collection exercise. One person I talked with suggested that the private sector will eventually develop appropriate tools and the not-for-profit sector will then adopt them; she felt that those of us interested in ICTs in M&E are slightly ahead of the curve and need to wait a few years until the tools are more widespread and common. Many people attending the Tech and M&E sessions at AfrEA made the point that using ICTs in M&E will get easier and cheaper as the field develops, as tools become more advanced, appropriate, user-friendly and widely tested, and as networks, platforms and infrastructure improve in less-connected rural areas.

5) Need for documentation, evaluation and training on use of ICTs in M&E – Some evaluators felt that ICTs are only suitable for routine data collection as part of an ongoing program, not for large-scale evaluations. Others pointed out that the notions of ‘ICT for M&E’ and ‘mobile data collection/mobile surveys’ are often used interchangeably, and that evaluation practitioners need to look at the multiple ways ICTs can be used across the wider field of M&E. ICTs are not just useful for moving from paper surveys to mobile data gathering. An evaluator working on a number of RCTs mentioned that his group relies on ICTs for improving samples, reducing bias, and automatically checking data quality.
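The automated quality checks he described are straightforward to script once data arrive digitally. Here is a minimal Python sketch (illustrative rules only; real checks would be derived from the actual questionnaire):

```python
def quality_flags(record, seen_ids):
    """Return automated quality-check flags for one digital survey record.

    Hypothetical rules for illustration: duplicate detection, range checks
    and a minimum plausible interview duration.
    """
    flags = []
    if record["respondent_id"] in seen_ids:
        flags.append("duplicate respondent")
    if not 15 <= record.get("age", -1) <= 99:
        flags.append("age out of plausible range")
    if record.get("interview_minutes", 0) < 10:
        flags.append("interview suspiciously short")
    seen_ids.add(record["respondent_id"])
    return flags

seen = set()
for rec in [{"respondent_id": "a1", "age": 34, "interview_minutes": 22},
            {"respondent_id": "a1", "age": 200, "interview_minutes": 3}]:
    print(rec["respondent_id"], quality_flags(rec, seen))
```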

There was general agreement that M&E practitioners need resources, opportunities for more discussion, and capacity strengthening on the multiple ways ICTs may be able to support M&E. One evaluator noted that civil society organizations have a tendency to rush into things, hit a brick wall, and then cross their arms and say, “well, this doesn’t work” (in this case, ICTs for M&E). With training and capacity strengthening, and as more experience and documentation accumulate, he felt that ICTs could play a huge role in making M&E more efficient and effective.

One evaluator, however, questioned whether having better, cheaper, higher quality data is actually leading to better decisions and outcomes. Another evaluator asked for more evidence of what works, when, with whom and under what circumstances so that evaluators could make better decisions around use of ICTs in M&E. Some felt that a decision tree or list of considerations or key questions to think through when integrating ICTs into M&E would be helpful for practitioners. In general, it was agreed that ICTs can help overcome some of our old challenges, but that they inevitably bring new challenges. Rather than shy away from using ICTs, we should try to understand these new challenges and find ways to overcome/work around them. Though the mHealth field has done quite a bit of useful research, and documentation on digital data collection is growing, use of ICTs is still relatively unexplored in the wider evaluation space.

6) There is no simple answer. One of my takeaways from all the sessions was that many M&E specialists are carefully considering options, and thinking quite a lot about which ICTs for what, whom, when and where rather than deciding from the start that ICTs are ‘good and beneficial’ or ‘bad and not worth considering.’ This is really encouraging, and to be expected of a thoughtful group like this. I hope to participate in more discussions of this nature that dig into the nuances of introducing ICTs into M&E.

Some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward. The “field” of ICTs in M&E is quite broad, however, and there are many ways to slice the cake. Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that you think are missing!

(Part 2 of this post)

Previous posts on ICTs and M&E:


At the Community of Evaluators’ Evaluation Conclave last week, Jill Hannon from Rockefeller Foundation’s Evaluation Office and I organized a session on ICTs for Monitoring and Evaluation (M&E) as part of our efforts to learn what different organizations are doing in this area and better understand some of the challenges. We’ll do a couple of similar sessions at the Catholic Relief Services ICT4D Conference in Accra next week, and then we’ll consolidate what we’ve been learning.

Key points raised at this session covered experiences with ICTs in M&E and with ICT4D more generally, including:

ICTs have their advantages, including ease of data collection (especially as compared to carrying around paper forms); ability to collect and convey information from a large and diversely spread population through solutions like SMS; real-time or quick processing of information and ease of feedback; improved decision-making; and administration of large programs and funding flows from the central to the local level.

Capacity is lacking in the use of ICTs for M&E. In the past, the benefits of ICTs had to be sold. Now the benefits seem clear, but there is not enough rigor in the process of selecting and using ICTs. Many organizations would like to use ICTs but do not know how, or whom to approach to learn. A key struggle is tailoring ICTs to suit M&E needs and goals and ensuring that the tools selected are the right ones for the job and the user. Organizations have a hard time deciding whether it is appropriate to use ICTs, and once they decide, they have trouble determining which solutions are right for their particular goals. People commonly start with the technology, rather than considering what problem they want the technology to help resolve. Often the person developing the M&E framework does not understand ICT, and the person developing the ICT does not understand M&E. There is a need to further develop the capacities of M&E professionals who are using ICT systems. Many ICT solutions exist, but organizations don’t know what questions to ask about them, and there is not enough information available in an easily understandable format to help them make decisions.

Mindsets can derail ICT-related efforts. Threats and fears around transparency can create resistance among employees to adopting new ICT tools for M&E. In some cases, lack of political will makes it difficult to bring about institutional change. Earlier experiences of failure with ICTs (e.g., stolen or broken PCs or PDAs) can also ruin the appetite for trying again. One complaint was that some government employees nearing retirement age will participate in training as a perk or to collect per diem, yet are uninterested in actually learning any new ICT skills. This can take opportunities away from younger staff who may have a real interest in learning and implementing new approaches.

Privacy needs further study and care. It is not clear whether those who provide information via the Internet, SMS, etc. understand how it will be used, and organizations often do not do a good job of explaining. Lack of knowledge about, and trust in, the privacy of their responses can affect people’s willingness to respond and the accuracy of their answers. More effort needs to be made to guarantee privacy and build trust. Technological solutions such as data encryption can be implemented, but human behavior is likely the bigger challenge. Paper surveys with sensitive information often pile up in a room where anyone can see them; in the same way, people do not take care to keep data collected via ICTs safe, for example, they often share passwords. Organizations and agencies need to take privacy more seriously.
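On the technological side, encrypting collected data at rest is relatively easy. Here is a minimal sketch using Python’s third-party cryptography package (one possible approach among many; the record shown is invented):

```python
from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key must be stored separately from the data and never
# shared around; the human-behavior problem is the harder one to solve.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"respondent_id": "a1", "reported_violence": true}'
encrypted = fernet.encrypt(record)     # safe to store or transmit
decrypted = fernet.decrypt(encrypted)  # only possible with the key
assert decrypted == record
```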

Institutional Review Boards (IRBs) are missing in smaller organizations. Normally an IRB process helps a researcher ensure that a survey is not overly personal or potentially traumatizing, that data encryption is in place, and that data are sanitized. But these systems are usually only established in large organizations, not in small, local ones, leaving room for ethics breaches.

Information flows need quite a lot of thought, as unintended consequences can derail a project. One participant told of a community health initiative that helped women track their menstrual cycles to determine when they were pregnant; the women were then sent information and reminders about prenatal care via SMS. The program ran into problems because the designers did not take into account that some women would miscarry, and women who had miscarried kept receiving reminders, which was traumatic for them. Another participant gave an example of a program that publicized the mobile number of a staff member at a local NGO supporting women victims of violence, so that women who faced violence could call to report it. The owner of the mobile phone was overwhelmed by the number of calls, often at night, and would switch the mobile off, meaning no response was available to the women trying to report violence. The organization therefore moved to IVR (interactive voice response), which resolved the overload problem; however, with IVR there was still no human response for the women who reported violence.
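A common design lesson from both stories is to model status changes explicitly before messages start flowing. A hypothetical Python sketch of the kind of guard the prenatal SMS program lacked (field names invented for illustration):

```python
def should_send_prenatal_reminder(subscriber):
    """Message only subscribers who are still opted in and still pregnant.

    A status field updated when a woman reports a miscarriage or opts out
    would have prevented the traumatic reminders described above.
    """
    return subscriber["status"] == "pregnant" and not subscriber["opted_out"]

subscribers = [
    {"phone": "+10000000001", "status": "pregnant", "opted_out": False},
    {"phone": "+10000000002", "status": "miscarried", "opted_out": False},
]
to_message = [s for s in subscribers if should_send_prenatal_reminder(s)]
assert len(to_message) == 1  # the second subscriber is never messaged
```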

Research needs to be done before embarking on the use of ICTs. A participant working with women in rural areas mentioned that her organization had planned to use mobile games for an education and awareness campaign. They first conducted research on gender roles and parity and found that the women actually had no command over phones: husbands or sons owned them, and women had access only when the men were around. So the organization did not proceed with the mobile games aspect of the project.

Literacy is an issue that can be overcome. Literacy is a concern; however, there are many creative ways around literacy challenges, such as the use of symbols. A programme in an urban slum used symbols on hand-held devices for a poverty and infrastructure mapping exercise. In Nepal, an organization tried SMS weather reports, but most people did not have mobiles and could not read SMS, so the organization instead sent an SMS to a couple of farmers in the community who could read, and they would then draw weather symbols on a large billboard. IVR is another commonly used tool in South Asia.

Qualitative data collection using ICTs should not be forgotten. There is often a focus on surveys, and people forget about the power of collecting qualitative data through video, audio, photos and drawings on mobiles and tablets. A number of tools can be used for participatory monitoring and evaluation processes. For example, baseline data can be collected through video; tagging can be used to help sort content; video and audio files can be linked with text; and change and decision-making can be captured through video vignettes. People can take their own photos to indicate importance or value. Some participatory rural appraisal techniques can be done on a tablet with a big screen. Climate change and other visual data can be captured with tablets or phones or through digital maps. Photographs and GPS are powerful tools for validation and authentication; however, care needs to be taken when using maps with those who may not easily orient themselves to an aerial view. One caution is that some of these initiatives are “boutique” designs that can be quite expensive, making scale difficult. As Android devices and tablets become cheaper and more available, these kinds of solutions may become easier to implement.

Ubiquity and uptake are not the same thing. Even if mobile phones are “everywhere,” it does not mean people will use them to do what organizations or evaluators want them to do. This is true for citizen feedback programs, said one participant, especially when there is a lack of response to reports: “It’s not just an issue of literacy or illiteracy, it’s about culture. It’s about not complaining, about not holding authorities accountable due to community pressures. Some people may not feed back because they are aware of the consequences of complaining, and this goes beyond simple access to and use of technology.” In addition, returning collected data to the community in a format they can understand and use for their own purposes is important. A participant observed that when evaluators go to a community to collect baseline, outcome or impact data, it is morally exploitative if they do not report the findings back to the community. When communities are not sure what they get out of the exercise, the credibility of the feedback mechanism is undermined; unless people see value in participating, they will not be willing to give their information or feedback. At the same time, responding to citizen or beneficiary feedback can itself skew that feedback: “When people imagine a response will get them something, their feedback will be based on what they expect to get.”

There has not been enough evaluation of ICT-enabled efforts. A participant noted that despite apparent successes, there are huge open questions about the use of ICTs in development initiatives: How effective has branchless banking been? How effective is citizen feedback? How are we evaluating the effectiveness of these ICT tools? And how do these programs impact different stakeholders? Some may be excited by these projects, whereas others feel threatened by them.

Training and learning opportunities are needed. The session ended, yet the question of where evaluators can obtain additional guidance and support for using ICTs in M&E processes lingered. CLEAR South Asia has produced a guide on mobile data collection, and we’ll be on the lookout for additional resources and training opportunities to share, for example, this series of reports on Mobile Data Collection in Africa from the World Wide Web Foundation, or the online course Using ICT Tools for Effective Monitoring, Impact Evaluation and Research, available through the Development Cafe.

Thanks to Mitesh Thakkar from Fieldata, Sanjay Saxena from Total Synergy Consulting, Syed Ali Asjad Naqvi from the Center for Economic Research in Pakistan (CERP) and Pankaj Chhetri from Equal Access Nepal for participating as lead discussants at the session; Siddhi Mankad from Catalyst Management Services Pvt. Ltd for serving as rapporteur; and Rockefeller Foundation’s Evaluation Office for supporting this effort.

We used the Technology Salon methodology for the session, including the Chatham House Rule; therefore, no attribution has been made in this summary post.

Other sessions in this series of Salons on ICTs and M&E:

12 tips on using ICTs for social monitoring and accountability

11 points on strengthening local capacity to use new ICTs for M&E

10 tips on using new ICTs for qualitative M&E

In addition, here’s a post on how War Child Uganda is using participatory video for M&E
