At the Community of Evaluators’ Evaluation Conclave last week, Jill Hannon from Rockefeller Foundation’s Evaluation Office and I organized a session on ICTs for Monitoring and Evaluation (M&E) as part of our efforts to learn what different organizations are doing in this area and better understand some of the challenges. We’ll do a couple of similar sessions at the Catholic Relief Services ICT4D Conference in Accra next week, and then we’ll consolidate what we’ve been learning.
Key points raised at this session covered experiences with ICTs in M&E and with ICT4D more generally, including:
ICTs have their advantages, including ease of data collection (especially as compared to carrying around paper forms); ability to collect and convey information from a large and diversely spread population through solutions like SMS; real-time or quick processing of information and ease of feedback; improved decision-making; and administration of large programs and funding flows from the central to the local level.
Capacity is lacking in the use of ICTs for M&E. In the past, the benefits of ICTs had to be sold. Now, the benefits seem to be clear, but there is not enough rigor in the process of selecting and using ICTs. Many organizations would like to use ICT but do not know how or whom to approach to learn. A key struggle is tailoring ICTs to suit M&E needs and goals and ensuring that the tools selected are the right ones for the job and the user. Organizations have a hard time deciding whether it is appropriate to use ICTs, and once they decide, they have trouble determining which solutions are right for their particular goals. People commonly start with the technology, rather than considering what problem they want the technology to help resolve. Often the person developing the M&E framework does not understand ICT, and the person developing the ICT does not understand M&E. There is a need to further develop the capacities of M&E professionals who are using ICT systems. Many ICT solutions exist, but organizations don't know what questions to ask about them, and there is not enough information available in an easily understandable format to help them make decisions.
Mindsets can derail ICT-related efforts. Threats and fears around transparency can create resistance among employees to adopting new ICT tools for M&E. In some cases, a lack of political will makes it difficult to bring about institutional change. Earlier experiences of failure when using ICTs (e.g., stolen or broken PCs or PDAs) can also ruin the appetite for trying ICTs again. One complaint was that some government employees nearing retirement age will participate in training as a perk or to collect per diem, yet be uninterested in actually learning any new ICT skills. This can take away opportunities from younger staff who may have a real interest in learning and implementing new approaches.
Privacy needs further study and care. It is not clear whether those who provide information through the Internet, SMS, etc., understand how it is going to be used, and organizations often do not do a good job of explaining. Lack of knowledge about, and trust in, the privacy of their responses can affect people's willingness to respond or the accuracy of what they report. More effort needs to be made to guarantee privacy and build trust. Technological solutions to privacy, such as data encryption, can be implemented, but human behavior is likely the bigger challenge. Paper surveys with sensitive information often get piled up in a room where anyone can see them. In the same way, people do not take care to keep data collected via ICTs safe; for example, they often share passwords. Organizations and agencies need to take privacy more seriously.
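As a concrete illustration of the kind of technological safeguard mentioned above, here is a minimal sketch of pseudonymizing respondent identifiers before storing survey responses. This uses a salted hash rather than encryption (a deliberate simplification for illustration); all names and the example phone number are hypothetical, and a real deployment would need key management and legal review that this sketch does not cover.

```python
import hashlib
import secrets

# Project-wide secret salt, generated once and stored separately from
# the survey data (hypothetical setup; never store it in the dataset).
SALT = secrets.token_bytes(16)

def pseudonymize(respondent_id: str, salt: bytes = SALT) -> str:
    """Replace a respondent identifier (e.g. a phone number) with a
    salted SHA-256 digest, so stored records cannot be trivially
    linked back to the person without the separately held salt."""
    return hashlib.sha256(salt + respondent_id.encode("utf-8")).hexdigest()

# The stored record keeps the pseudonym, not the raw phone number.
record = {"respondent": pseudonymize("+977-9800000000"), "response": "yes"}
```

The same respondent always maps to the same pseudonym, so responses can still be linked across survey rounds, but anyone who sees the dataset without the salt sees only opaque digests.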
Institutional Review Boards (IRBs) are missing in smaller organizations. Normally an IRB review helps a researcher ensure that a survey is not overly personal or potentially traumatizing, that data encryption is in place, and that data are sanitized. But these systems are usually established only in large organizations, not in small, local ones, leaving room for ethics breaches.
Information flows need quite a lot of thought, as unintended consequences may derail a project. One participant told of a community health initiative that helped women track their menstrual cycles to determine when they were pregnant. The women were sent information and reminders through SMS on prenatal care. The program ran into problems because the designers did not take into account that some women would miscarry. Women who had miscarried kept getting reminders after their miscarriage, which was traumatic for them. Another participant gave an example of a program that publicized the mobile number of a staff member at a local NGO that supported women victims of violence, so that women who faced violence could call to report it. The owner of the mobile phone was overwhelmed with the number of calls, often at night, and would switch the mobile off, meaning no response was available to the women trying to report violence. The organization therefore moved to IVR (interactive voice response), which resolved the overload problem; however, with IVR there was still no response to the women who reported violence.
Research needs to be done prior to embarking on use of ICTs. A participant working with women in rural areas mentioned that her organization planned to use mobile games for an education and awareness campaign. They conducted research first on gender roles and parity and found that actually women had no command over phones. Husbands or sons owned them and women had access to them only when the men were around, so they did not proceed with the mobile games aspect of the project.
Literacy is an issue that can be overcome. Literacy is a concern; however, there are many creative solutions to overcome literacy challenges, such as the use of symbols. A programme in an urban slum used symbols on hand-held devices for a poverty and infrastructure mapping exercise. In Nepal, an organization tried using SMS weather reports, but most people did not have mobiles and could not read SMS. So the organization instead sent an SMS to a couple of farmers in the community who could read, and who would then draw weather symbols on a large billboard. IVR is another commonly used tool in South Asia.
Qualitative data collection using ICTs should not be forgotten. There is often a focus on surveys, and people forget about the power of collecting qualitative data through video, audio, photos, drawings on mobiles and tablets, and other such possibilities. A number of tools can be used for participatory monitoring and evaluation processes. For example, baseline data can be collected through video; tagging can be used to help sort content; video and audio files can be linked with text; and change and decision-making can be captured through video vignettes. People can take their own photos to indicate importance or value. Some participatory rural appraisal techniques can be done on a tablet with a big screen. Climate change and other visual data can be captured with tablets or phones or through digital maps. Photographs and GPS are powerful tools for validation and authentication; however, care needs to be taken when using maps with those who may not easily orient themselves to an aerial map. One caution is that some of these kinds of initiatives are “boutique” designs that can be quite expensive, making scale difficult. As Android devices and tablets become increasingly cheaper and more available, these kinds of solutions may become easier to implement.
Ubiquity and uptake are not the same thing. Even if mobile phones are “everywhere,” it does not mean people will use them to do what organizations or evaluators want them to do. This is true for citizen feedback programs, said one participant, especially when there is a lack of response to reports. “It’s not just an issue of literacy or illiteracy, it’s about culture. It’s about not complaining, about not holding authorities accountable due to community pressures. Some people may not feed back because they are aware of the consequences of complaining, and this goes beyond simple access and use of technology.” In addition, returning collected data to the community in a format they can understand and use for their own purposes is important. A participant observed that when evaluators go to the community to collect data for baseline, outcome, impact, etc., from a moral standpoint it is exploitative if they do not report the findings back to the community. Communities are not sure of what they get back from the exercise, and this undermines the credibility of the feedback mechanism. Unless people see value in participation, they will not be willing to give their information or feedback. However, it’s important to note that responses to citizen or beneficiary feedback can also skew it: “When people imagine a response will get them something, their feedback will be based on what they expect to get.”
There has not been enough evaluation of ICT-enabled efforts. A participant noted that despite apparent success, there are huge challenges with the use of ICTs in development initiatives: How effective has branchless banking been? How effective is citizen feedback? How are we evaluating the effectiveness of these ICT tools? And what about how these programs impact different stakeholders? Some may be excited by these projects, whereas others are threatened.
Training and learning opportunities are needed. The session ended, yet the question of where evaluators can obtain additional guidance and support for using ICTs in M&E processes lingered. CLEAR South Asia has produced a guide on mobile data collection, and we’ll be on the lookout for additional resources and training opportunities to share, for example this series of reports on Mobile Data Collection in Africa from the World Wide Web Foundation or this online course Using ICT Tools for Effective Monitoring, Impact Evaluation and Research available through the Development Cafe.
Thanks to Mitesh Thakkar from Fieldata, Sanjay Saxena from Total Synergy Consulting, Syed Ali Asjad Naqvi from the Center for Economic Research in Pakistan (CERP) and Pankaj Chhetri from Equal Access Nepal for participating as lead discussants at the session; Siddhi Mankad from Catalyst Management Services Pvt. Ltd for serving as rapporteur; and Rockefeller Foundation’s Evaluation Office for supporting this effort.
We used the Technology Salon methodology for the session, including the Chatham House Rule; therefore, no attribution has been made in this summary post.
Other sessions in this series of Salons on ICTs and M&E:
12 tips on using ICTs for social monitoring and accountability
11 points on strengthening local capacity to use new ICTs for M&E
10 tips on using new ICTs for qualitative M&E
In addition, here’s a post on how War Child Uganda is using participatory video for M&E