Posts Tagged ‘monitoring’

Earlier this month I attended the African Evaluators’ Conference (AfrEA) in Cameroon as part of the Technology and Evaluation stream organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation.

A first post about ICTs and M&E at the AfrEA Conference went into some of the deliberations around using or not using ICTs and how we can learn and share more as institutions and evaluators. I’ve written previously about barriers and challenges with using ICTs in M&E of international development programs (see the list of posts at the bottom of this one). Many of these same conversations came up at AfrEA, so I won’t write about these again here. What I did want to capture and share were a few interesting design and implementation thoughts from the various ICT and M&E sessions. Here goes:

1) Asking questions via ICT may lead to more honest answers. Some populations are still not familiar with smartphones and tablets, and this makes some people shy and quiet, yet it makes others more curious and animated to participate. Some people worry that mobiles, laptops and tablets create distance between the enumerator and the person participating in a survey. On the other hand, I’m hearing more and more examples of cases where using ICTs for surveying actually allows for a greater sense of personal privacy and more honest answers. I first heard about this several years ago in relation to children and youth in the US and Canada seeking psychological or reproductive health counseling. They seemed to feel more comfortable asking questions about sensitive issues via online chats (as opposed to asking a counselor or doctor face-to-face) because they felt anonymous. The same is true for telephone inquiries.

In the case of evaluations, someone suggested that rather than a mobile or tablet creating distance, a device can actually create an opportunity for privacy. For example, if a sensitive question comes up in a survey, an enumerator can hand the mobile phone to the person being interviewed and look away while they enter their answer and hit enter, in the same way that waiters in some countries will swipe your ATM card and politely look away while you enter your PIN. The key is building people’s trust in these methods so they can be sure their answers are secure.

At a Salon on Feb 28, I heard about mobile polling being used to ask men in the Democratic Republic of Congo about sexual assault against men. There was a higher recorded affirmative rate when the question was answered via a mobile survey than when it had been asked in other settings or through other means. This of course makes sense: when a reporter or surveyor comes around asking whether men have been victims of rape, no one wants to say so publicly. In a situation of violence, it’s impossible to know whether a perpetrator might be standing in the crowd watching someone being interviewed, and clearly shame and stigma also prevent people from answering openly.

Another example at the AfrEA Tech Salon was a comparison study done by an organization working in a slum area of Accra. Five enumerators who spoke local languages conducted Water, Sanitation and Hygiene (WASH) surveys by mobile phone using Open Data Kit (an open source survey application), and the responses were compared with the same survey done on paper. When people were asked in person by enumerators if they defecated outdoors, affirmative answers were very low. When people were asked the same question via a voice-based mobile phone survey, 26% of respondents reported open defecation.

2) Risk of collecting GPS coordinates. We had a short discussion on the pluses and minuses of using GPS and collecting geolocation data in monitoring and evaluation. One issue that came up was safety for enumerators who carry GPS devices. Some people highlighted that GPS devices can put staff/enumerators at risk of abuse from organized criminal groups, military groups, or government authorities, especially in areas with high levels of conflict and violence. This makes me think that if geographic information is needed in these cases, it might be better to collect it with an application on an ordinary mobile phone rather than with a conspicuous smartphone or a dedicated GPS unit (for example, one could try out PoiMapper, which works on feature phones).

In addition, evaluators emphasized that we need to think through whether GPS data are really necessary at the household level. It is tempting to always collect all the information we possibly can, but we can never truly assure anyone that their information will not be de-anonymized somehow in the near or distant future, and in extremely high-risk areas this is no small danger. Many organizations do not have high-level security for their data, so it may be better to collect community- or district-level data than household locations. Some evaluators said they use ‘tricks’ to anonymize the geographical data, like pinpointing the location a few miles off, but others felt this was not nearly enough to guarantee anonymity.
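
To make the trade-off concrete, here is a minimal Python sketch of the two geo-masking approaches mentioned above: randomly perturbing a point a few kilometers, and reporting only a coarse grid cell rather than the household location. This is my own illustration with invented function names and parameters, not a tool anyone described at the conference, and as participants cautioned, jittering alone is no guarantee of anonymity.

```python
import math
import random

def jitter_point(lat, lon, min_km=2.0, max_km=5.0):
    """Offset a coordinate by a random distance and bearing.

    The 'pinpoint the location a few miles off' trick. Caveat from the
    discussion: this alone does NOT guarantee anonymity; sparse areas,
    repeated releases, or joins with other datasets can re-identify people.
    """
    distance_km = random.uniform(min_km, max_km)
    bearing = random.uniform(0, 2 * math.pi)
    # roughly 111.32 km per degree of latitude; a degree of longitude
    # shrinks with the cosine of the latitude
    dlat = distance_km * math.cos(bearing) / 111.32
    dlon = distance_km * math.sin(bearing) / (111.32 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

def to_grid_cell(lat, lon, cell_degrees=0.1):
    """Safer alternative: report only the ~11 km grid cell a point falls in,
    i.e., community- or district-level data instead of household points."""
    return (math.floor(lat / cell_degrees) * cell_degrees,
            math.floor(lon / cell_degrees) * cell_degrees)
```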

3) Devices can create unforeseen operational challenges at the micro-level. One organization collecting feedback about government performance in rural Ghana found that when it ran mobile phone surveys asking people to press a number to select a particular answer (“press 1 for….”), some phones were set to lock when a call was answered. Answers did not register because phones were locked, or registered incorrectly because the person was entering their PIN to unlock the phone. Others noted that when planning training for enumerators or community members who will use their own devices for data collection, we cannot forget that every model of phone is slightly different. This adds quite a lot of time to the training, as each model needs to be explained to trainees. (There are a huge number of other challenges related to devices, but these were two that I had not thought of before.)

4) Motivation in the case of poor capacity to respond. An organization interested in tracking violence in a highly volatile area wanted to collect reports of violence, but did not have a way to ensure that an INGO, humanitarian organization or government authority would respond if/when violence was reported. This is a known issue: it is difficult to encourage reporting when responsiveness is low. To keep people engaged, this organization thanks people immediately for reporting and then sends peace messages and encouragement two to three times per week. Participants in the program have appreciated these ongoing messages, and participation has remained steady, even though immediate help has not been provided as a result of reporting.

5) Mirroring physical processes with tech. One way to help digital tools gain more acceptance and to make them more user-friendly is to design them to mirror paper processes or other physical processes that people are already familiar with. For example, one organization shared their design process for a mobile application for village savings and loan (VSL) groups. Because security is a big concern among VSL members, the groups typically keep cash in a box with 3 padlocks. Three elected members must be present and agree to open and remove money from the box in order to conduct any transaction. To mimic this, the VSL mobile application requires 3 PINs to access mobile money or make transactions. What’s more, the app sends everyone in the VSL group an SMS notification when the 3 PIN-holders carry out a transaction, making the mobile app arguably more secure than the original physical lock-box: everyone knows what is happening with the money at all times.
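
As an illustration of how the three-padlock rule maps onto software, here is a toy Python sketch of the design idea. It is my own construction under stated assumptions (the organization’s actual implementation was not shared in detail), with an injected `send_sms` callback standing in for an SMS gateway.

```python
import hashlib

class GroupCashBox:
    """Toy model of the three-padlock rule: a withdrawal only executes
    when all three elected keyholders enter valid PINs, and every group
    member is notified by SMS. An illustration, not the real VSL app."""

    def __init__(self, keyholder_pins, members, send_sms, opening_balance=0):
        # Store salted PIN hashes rather than raw PINs
        self._pin_hashes = {name: self._hash(name, pin)
                            for name, pin in keyholder_pins.items()}
        self.members = members
        self.send_sms = send_sms  # callback: send_sms(member, text)
        self.balance = opening_balance

    @staticmethod
    def _hash(name, pin):
        return hashlib.sha256(f"{name}:{pin}".encode()).hexdigest()

    def withdraw(self, amount, entered_pins):
        """entered_pins: {keyholder_name: pin} collected from the officers."""
        approved = [name for name, pin in entered_pins.items()
                    if self._pin_hashes.get(name) == self._hash(name, pin)]
        if len(approved) < 3:
            raise PermissionError("All three keyholders must enter valid PINs.")
        if amount > self.balance:
            raise ValueError("Insufficient funds.")
        self.balance -= amount
        # The 'everyone knows' notification that mirrors (and strengthens)
        # the visibility of opening the physical box in front of the group
        for member in self.members:
            self.send_sms(member, f"Withdrawal of {amount} approved by "
                                  f"{', '.join(sorted(approved))}.")
        return self.balance
```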

****

As I mentioned in part 1 of this post, some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward.

Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that are missing!

Previous posts on ICTs and M&E on this blog:


I attended the African Evaluators’ Conference (AfrEA) in Cameroon last week as part of the Technology and Evaluation strand organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation. The strand was a fantastic opportunity for learning, sharing and understanding more about the context, possibilities and realities of using ICTs in monitoring and evaluation (M&E). We heard from a variety of evaluators, development practitioners, researchers, tool-developers, donors, and private sector and government folks. Judging by the well-attended sessions, there is a huge amount of interest in ICTs and M&E.

Rather than repeat what I’ve written in other posts (see links at the bottom), I’ll focus here on some of the more relevant, interesting, and/or new information from the AfrEA discussions. This first post will go into institutional issues and the ‘field’ of ICTs and M&E. A second post will talk about design and operational tips I learned or was reminded of at AfrEA.

1) We tend to get stuck on data collection – Like other areas (I’m looking at you, Open Data), conversations tend to revolve around collecting data. We need to get beyond that and think more about why we are collecting data and what we are going to do with it (and do we really need all this data?). The evaluation field also needs to explore all the other ways it could be using ICTs for M&E, going beyond mobile phones and surveys. Collecting data is clearly a necessary part of M&E, but those data still need to be analyzed. As a participant from a data visualization firm said, there are so many ways you can use ICTs – they help you make sense of things, you can tag sentiment, you can visualize data and make data-based decisions. Others mentioned that ICTs can help us to share data with various stakeholders, improve sampling in RCTs (Randomized Control Trials), conduct quality checks on massive data sets, and manage staff who are working on data collection. Using big data, we can do analyses we never could have imagined before. We can open and share our data, and stop collecting the same data from the same people multiple times. We can use ICTs to share back what we’ve learned with evaluation stakeholders, governments, the public, and donors. The range of uses of ICTs is huge, yet the discussion tends to get stuck on mobile surveys and data collection, and we need to start thinking beyond that.

2) ICTs are changing how programs are implemented and how M&E is done — When a program already uses ICTs, data collection can be built in through the digital device itself (e.g., tracking user behavior via cookies, tests and quizzes), as one evaluator working on tech and education programs noted. As more programs integrate digital tools, it may become easier to collect monitoring and evaluation data with less effort. Along those lines, an evaluator looking at a large-scale mobile-based agricultural information system asked about approaches to conducting M&E that do not rely on enumerators and traditional M&E approaches. In his program, because the farmers who signed up for the mobile information service do not live in the same geographical community, traditional M&E approaches do not seem feasible and ICT-based approaches look like a logical answer. There is little documentation within the international development evaluation community, however, on how an evaluator might design an evaluation in this type of situation. (I am guessing there may be some insights from market research and possibly from the transparency and accountability sectors, and among people working on “feedback loops”.)

3) Moving beyond one-off efforts — Some people noted that mobile data gathering is still done mostly at the project level. Efforts tend to be short-term and one-off. The data collected is not well-integrated into management information systems or national level processes. (Here we may reference the infamous map of mHealth pilots in Uganda, and note the possibility of ICT-enabled M&E in other sectors going this same route). Numerous small pilots may be problematic if the goal is to institutionalize mobile data gathering into M&E at the wider level and do a better job of supporting and strengthening large-scale systems.

4) Sometimes ICTs are not the answer, even if you want them to be – One presenter (who considered himself a tech enthusiast) went into careful detail about his organization’s process of deciding not to use tablets for a complex evaluation across 4 countries with multiple indicators. In the end, the evaluation itself was too complex, and the team was not able to find the right tool for the job. The organization looked at simple, mid-range and highly complex applications and tools and after testing them all, opted out. Each possible tool presented a set of challenges that meant the tool was not a vast improvement over paper-based data collection, and the up-front costs and training were too expensive and lengthy to make the switch to digital tools worthwhile. In addition, the team felt that face-to-face dynamics in the community and having access to notes and written observations in the margins of a paper survey would enable them to conduct a better evaluation. Some tablets are beginning to enable more interactivity and better design for surveys, but not yet in a way that made them a viable option for this evaluation. I liked how the organization went through a very thorough and in-depth process to make this decision.

Other colleagues also commented that the tech tools are still not quite ‘there’ yet for M&E. Even top-of-the-line business solutions are generally found to be somewhat clunky. Million-dollar models are not relevant for the environments that development evaluators are working in; in addition to their high cost, they often have too many features or require too much training. There are some excellent mid-range tools that are designed for the environment, but many lack vital features such as availability in multiple languages. Simple tools that are more easily accessible and understandable without a lot of training are not sophisticated enough to conduct a large-scale data collection exercise. One person I talked with suggested that the private sector will eventually develop appropriate tools, and the not-for-profit sector will then adopt them. She felt that those of us who are interested in ICTs in M&E are slightly ahead of the curve and need to wait a few years until the tools are more widespread and common. Many people attending the Tech and M&E sessions at AfrEA made the point that use of ICTs in M&E would get easier and cheaper as the field develops, tools get more advanced/appropriate/user-friendly and widely tested, and networks/platforms/infrastructure improve in less-connected rural areas.

5) Need for documentation, evaluation and training on use of ICTs in M&E – Some evaluators felt that ICTs are only suitable for routine data collection as part of an ongoing program, but not good for large-scale evaluations. Others pointed out that the notions of ‘ICT for M&E’ and ‘mobile data collection/mobile surveys’ are often used interchangeably, and evaluation practitioners need to look at the multiple ways that ICTs can be used in the wider field of M&E. ICTs are not just useful for moving from paper surveys to mobile data gathering. An evaluator working on a number of RCTs mentioned that his group relies on ICTs for improving samples, reducing bias, and automatically checking data quality.
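
The RCT group’s actual tooling was not described, but automated data-quality checks on incoming survey submissions often amount to a handful of simple rules run as data arrive. A hypothetical Python sketch, with invented field names and arbitrarily chosen thresholds:

```python
from collections import Counter

def quality_flags(submissions):
    """Flag submissions for follow-up before they enter analysis.

    Each submission is a dict such as:
      {"id": "r001", "enumerator": "A", "age": 34, "duration_min": 22}
    """
    flags = []
    id_counts = Counter(s["id"] for s in submissions)
    for s in submissions:
        if id_counts[s["id"]] > 1:
            flags.append((s["id"], "duplicate respondent ID"))
        if not 0 <= s.get("age", -1) <= 110:
            flags.append((s["id"], "age outside plausible range"))
        if s.get("duration_min", 0) < 10:
            flags.append((s["id"], "interview suspiciously short"))
    return flags

def rushed_enumerators(submissions, threshold_min=10, min_share=0.5):
    """Flag enumerators whose interviews are consistently too fast,
    a common sign of careless or fabricated data collection."""
    by_enum = {}
    for s in submissions:
        by_enum.setdefault(s["enumerator"], []).append(s["duration_min"])
    return [e for e, durations in by_enum.items()
            if sum(d < threshold_min for d in durations) / len(durations) >= min_share]
```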

There was general agreement that M&E practitioners need resources, opportunities for more discussion, and capacity strengthening on the multiple ways that ICTs may be able to support M&E. One evaluator noted that civil society organizations have a tendency to rush into things, hit a brick wall, and then cross their arms and say, “well, this doesn’t work” (in this case, ICTs for M&E). With training and capacity, and as more experience and documentation are gained, he considered that ICTs could have a huge role in making M&E more efficient and effective.

One evaluator, however, questioned whether having better, cheaper, higher quality data is actually leading to better decisions and outcomes. Another evaluator asked for more evidence of what works, when, with whom and under what circumstances so that evaluators could make better decisions around use of ICTs in M&E. Some felt that a decision tree or list of considerations or key questions to think through when integrating ICTs into M&E would be helpful for practitioners. In general, it was agreed that ICTs can help overcome some of our old challenges, but that they inevitably bring new challenges. Rather than shy away from using ICTs, we should try to understand these new challenges and find ways to overcome/work around them. Though the mHealth field has done quite a bit of useful research, and documentation on digital data collection is growing, use of ICTs is still relatively unexplored in the wider evaluation space.

6) There is no simple answer. One of my takeaways from all the sessions was that many M&E specialists are carefully considering options, and thinking quite a lot about which ICTs for what, whom, when and where rather than deciding from the start that ICTs are ‘good and beneficial’ or ‘bad and not worth considering.’ This is really encouraging, and to be expected of a thoughtful group like this. I hope to participate in more discussions of this nature that dig into the nuances of introducing ICTs into M&E.

Some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward. The “field” of ICTs in M&E is quite broad, however, and there are many ways to slice the cake. Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that you think are missing!

(Part 2 of this post)

Previous posts on ICTs and M&E:


This is a cross-post from Tom Murphy, editor of the aid blog A View From the Cave. The original article can be found on Humanosphere. The post summarizes discussions at our November 21st New York City Technology Salon: Are Mobile Money Cash Grants the Future of Development? If you’d like to join us for future Salons, sign up here.

by Tom Murphy

Decades ago, some of the biggest NGOs simply gave away money to individuals in communities. People lined up and were just given cash.

The once popular form of aid went out of fashion, but it is now making a comeback.

Over time, coordination became extremely difficult. Traveling from home to home cost time and money for the NGO, and the same problem existed for recipients when they had to go to a central location. More significant was the shift in development thinking that said giving handouts was causing long-term damage.

The backlash against ‘welfare queens’ in the US, UK and elsewhere during the 1980s was reflected in international development programming. The problem was that this shift was based on unproven theories of change and anecdotal evidence, rather than hard evidence.

Decades later, new research shows that just giving people money can be an effective way to build assets and even incomes. The findings were covered by major players like NPR and the Economist.

While exciting and promising, cash transfers are not a new tool in the development utility belt.

Various forms of transfers have emerged over the past decade. Food vouchers were used by the World Food Programme when responding to the 2011 famine in the Horn of Africa. As with food stamps in the US, people could buy food from local markets and get exactly what they needed while supporting the local economy.

The differences have sparked a sometimes heated debate within the development community as to what the findings about cash transfers mean going forward. A Technology Salon conversation hosted at ThoughtWorks in New York City last week featured some of the leading researchers and players in the cash transfer sector.

The salon-style conversation featured Chris Blattman of Columbia University, a popular aid blogger; GiveDirectly co-founder and UCSD researcher Paul Niehaus; and Plan USA CEO Tessie San Martin. The ensuing discussion, operating under the Chatham House Rule of no attribution, featured representatives from large NGOs, microfinance organizations and UN agencies.

Research from Kenya, Uganda and Liberia shows both the promise and shortcomings of cash transfers. For example, giving out cash in addition to training was successful in generating employment in Northern Uganda. Another program, with the backing of the Ugandan government, saw success with cash alone.

Cash transfers have been put forward as a new benchmark for development and aid programs. Advocates in the discussion made the case that programs should be evaluated in terms of impact and cost-effectiveness against simply giving people cash.

That idea saw some resistance. The research from Liberia, for example, showed that money given to street youth was not wasted, but it was not sufficient to generate long-lasting employment or income. There are capacity problems and much larger issues that probably cannot be addressed by cash alone.

An additional concern is the unintended negative consequences caused by cash transfers. One example given was that of refugees in Syria. Money distributed to families was labeled for rent. Despite warnings not to label the transfer, the program went ahead.

As a result, rents increased, and the money intended to help cover rent was rendered largely useless. One participant raised the concern that cash transfers in such a setting could be ‘taxed’ by rebels or government fighters. There is a potential that aid organizations could help fund fighting by giving unrestricted cash.

The discussion made it clear that the applications of cash transfers are far more nuanced than they might appear. Kenya saw success in part because of the ease of sending money to people through mobile phones. Newer programs in India, for example, rely on what are essentially ATM cards.

Impacts, admitted practitioners, can go beyond simple incomes. Care has been taken to make sure that cash transfer programs do not dramatically change social structures in ways that cause problems for the community and recipients. In one case, giving women cash allowed them to participate in the local markets, a benefit to everyone except the existing shop oligarchs.

Governments in low- and middle-income countries are seeing increasing pressure to establish social programs. The success of cash transfer programs in Brazil and Mexico indicates that they can be an effective way to lift people out of poverty. Testing is underway to bring about more efficient and context-appropriate cash transfer schemes.

An important component in the re-emergence of cash transfers is looking back at previous efforts, said one NGO official. The official’s organization is systematically revisiting communities where the NGO used to work to see what happened ten years later. The idea is to learn what impact there may or may not have been on those communities in order to inform future initiatives.

“Lots of people have concerns about cash, but we should have concerns about all the programs we are doing,” said a participant.

The lessons from the cash transfer research show that there is an increasing need for better evidence across development and aid programs. Researchers in the group argued that the ease of doing evaluations is improving.

Read the “Storified” version of the Technology Salon on Mobiles and Cash Transfers here.


At Catholic Relief Services’ annual ICT4D meeting in March 2013, I worked with Jill Hannon from Rockefeller Foundation’s Evaluation Office to organize 3 sessions on the use of ICT for Monitoring and Evaluation (ICTME). The sessions covered the benefits (known and perceived) of using ICTs for M&E, the challenges and barriers organizations face when doing so, and some lessons and advice on how to integrate ICTs into the M&E process.

Our lead discussants in the three sessions included: Stella Luk (Dimagi), Guy Sharrack (CRS), Mike Matarasso (CRS), David McAfee (HNI/Datawinners), Mark Boots (Votomobile), and Teressa Trusty (USAID’s IDEA/Mobile Solutions). In addition, we drew from the experiences and expertise of some 60 people who attended our two round table sessions.

Benefits of integrating ICTs into the M&E process

Some of the potential benefits of integrating ICTs mentioned by the various discussants and participants in the sessions included:

  • More rigorous, higher quality data collection and more complete data
  • Reduction in required resources (time, human, money) to collect, aggregate and analyze data
  • Reduced complexity if data systems are simplified; thus increased productivity and efficiency
  • Combined information sources and types and integration of free form, qualitative data with quantitative data
  • Broader general feedback from a wider public via ICT tools like SMS; inclusion of new voices in the feedback process, elimination of the middleman to empower communities
  • Better cross-sections of information, information comparisons; better coordination and cross-comparing if standard, open formats are used
  • Trend-spotting with visualization tools
  • Greater data transparency and data visibility, easier data auditing
  • Real-time or near real-time feedback “up the chain” that enables quicker decision-making, adaptive management, improved allocation of limited resources based on real-time data, quicker communication of decisions/changes back to field-level staff, faster response to donors and better learning
  • Real-time feedback “down the ladder” that allows for direct citizen/beneficiary feedback, and complementing of formal M&E with other social monitoring approaches
  • Scale, greater data security and archiving, and less environmental impact
  • Better user experience for staff as well as skill enhancement and job marketability and competitiveness of staff who use the system

Barriers and challenges of integrating ICTs into M&E processes

A number of challenges and barriers were also identified, including:

  • A lack of organizational capacity to decide when to use ICTs in M&E, for what, and why, and to decide on the right ICT (if any) for the situation. Organizations may find it difficult to get beyond collecting the data to better use of data for decision-making and coordination. There is often low staff capacity, low uptake of ICT tools and resistance to change.
  • A tendency to focus on surveys, with less attention to other types of M&E input, such as qualitative input. Scaling analysis of large-scale qualitative feedback is also a challenge: “How do you scale qualitative feedback to 10,000 people or more? People can give their feedback in a number of languages by voice. How do you mine that data?” (A minimal sketch of one first-pass approach follows this list.)
  • The temptation to offload excessive data collection to frontline staff without carefully selecting what data is actually going to be used and useful for them or for other decision-makers.
  • M&E is often tacked on at the end of a proposal design. The same is true for ICT. Both ICT and M&E need to be considered and “baked in” to a process from the very beginning.
  • ICT-based M&E systems have dropped the ball on sharing data back. “Clinics in Ghana collect a lot of information that gets aggregated and moved up the chain. What doesn’t happen is sharing that information back with the clinic staff so that they can see what is happening in their own clinic and why. We need to do a better job of giving information back to people and closing the loop.” This step is also important for accountability back to communities. On the whole, we need to be less extractive.
  • Available tools are not always exactly right, and no tool seems to provide everything an organization needs, making it difficult to choose the right tool. There are too many solutions, many of which are duplicative, and often the feature sets and the usability of these tools are both poor. There are issues with sustainability and ongoing maintenance and development of M&E platforms.
  • Common definitions for data types and standards for data formatting are needed. The lack of interoperability among ICT solutions also causes challenges. As a field, we don’t do enough linking of systems together to see a bigger picture of which programs are doing what, where and who they are impacting and how.
  • Security and privacy are not adequately addressed. Many organizations or technology providers are unaware of the ethical implications of collecting data via new tools and channels. Many organizations are unclear about the ethical standards for research versus information that is offered up by different constituents or “beneficiaries” (e.g., information provided by people participating in programs that use SMS or collect information through SMS-based surveys) versus monitoring and evaluation information. It is also unclear what the rules are for information collected by private companies, who this information can be shared with and what privacy laws mean for ICT-enabled M&E and other types of data collection. If there are too many barriers to collecting information, however, the amount of information collected will be reduced. A balance needs to be found. The information that telecommunications companies hold is something to think about when considering privacy and consent issues, especially in situations of higher vulnerability and risk. (UNOCHA has recently released a report that may be useful.)
  • Not enough is understood about motivation and incentive for staff or community members to participate or share data. “Where does my information go? Do I see the results? Why should I participate? Is anyone responding to my input?” In addition, the common issues of cost, access, capacity, language, literacy, and cultural barriers are very much present in attempts to collect information directly from community members. Another question is that of inclusion: Does ICT-enabled data collection or surveying leave certain groups out? (See this study on intrinsic vs extrinsic motivation for feedback.)
  • Donors often push or dictate the use of ICT when it’s perhaps not the most useful for the situation. In addition, there is normally not enough time during the proposal process for organizations to work on buy-in and good design of an ICT-enabled M&E system. There is often a demand from the top for excessive data collection without an understanding of the effort required to collect it, and time/resource trade-offs for excessive data collection when it leads to less time spent on program implementation. “People making decisions in the capital want to add all these new questions and information and that can be a challenge… What data are valuable to collect? Who will respond to them? Who will use them as the project goes forward?”
  • There seems to be a focus on top-down, externally created solutions rather than building on local systems and strengths or supporting local organizations or small businesses to strengthen their ICTME capacities. “Can strengthening local capacity be an objective in its own right? Are donors encouraging agencies to develop vertical ICTME solutions without strengthening local systems and partners?”
  • A results-based, data-driven focus can bias efforts toward what is countable and leave out complex development processes whose impacts are more difficult to count or measure.
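
On the question raised above of mining large volumes of qualitative feedback, a first pass often starts as simply as counting recurring terms across transcripts to surface themes for human review. The sketch below is deliberately naive and entirely illustrative: it assumes feedback has already been transcribed and translated, and uses a toy stopword list; real projects face transcription, language and context challenges far beyond this.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "is", "are", "to", "of", "in",
             "it", "we", "our", "for", "but"}

def top_themes(transcripts, n=10):
    """Return the most frequent non-trivial words across feedback texts.

    A crude way to point human analysts at recurring themes; it is not
    a substitute for qualitative analysis.
    """
    words = []
    for text in transcripts:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS and len(w) > 2]
    return Counter(words).most_common(n)

feedback = [
    "The water point is broken and we wait hours for water",
    "Clinic staff are helpful but the water pump is broken again",
]
print(top_themes(feedback, n=5))  # 'water' and 'broken' rise to the top
```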

Lessons and good practice for integrating ICTs into M&E processes

ICT is not a silver bullet – it presents its own set of challenges. But a number of good practices surfaced:

  • The use of ICTs for M&E is not just a technology issue, it’s a people and processes issue too, and it is important to manage the change carefully. It’s also important to keep an open mind that ICT4D to support M&E might not always be the best use of scarce resources – there may be more pressing priorities for a project. Getting influential people on your side to support the cause and help leverage funding and support is critical. It’s also important to communicate goals and objectives clearly, and provide incentives to make sure ICTs are successfully adopted. The trick is keeping up with technology advances to improve the system, but also keeping your eye on the ball.
  • When designing an ICTME effort, clarity of purpose and a holistic picture of the project M&E system are needed in order to review options for where ICT4D can best fit. Don’t start with the technology. Start with the M&E purpose and goals and focus on the business need, not the gadgets. Have a detailed understanding of M&E data requirements and data flows as a first step. Follow those with iterative discussions with ICT staff to specify the ICT4D solution requirements.
  • Select an important but modest project to start with and pilot in one location – get it right and work out the glitches before expanding to a second tier of pilots or expanding widely. Have a fully functional model to share for broad buy-in and collect some hard data during the pilot to convince people of adoption. The first ICT initiative will be the most important.  If it is successful, use of ICTs will likely spread throughout an organization.  If the first initiative fails, it can significantly push back the adoption of ICTs in general. For this reason, it’s important to use your best people for the first effort. Teamwork and/or new skill sets may be required to improve ICT-enabled M&E. The “ICT4D 2.0 Manifesto” talks about a tribrid set of skills needed for ICT-enabled programs.
  • Don’t underestimate the need for staff training and ongoing technical assistance to ensure a positive user experience, particularly when starting out. Agencies need to find the right balance between being able to provide support for a limited number of ICT solutions versus the need to support ongoing local innovation.  It’s also important to ask for help when needed.  The most successful M&E projects are led by competent managers who seek out resources both inside and outside their organizations.
  • Good ICT-enabled M&E comes from a partnership between program, M&E and ICT staff, along with technical support internal and external to the organization. Having a solid training curriculum and a good help desk is important. In addition, in-built capacity to design the original architecture and to maintain and adjust the system is a good idea. A lead business owner and manager for the system need to be in place, as well as global- and local-level pioneers and strong leadership (with budget!) to do testing and piloting. At the local level, it is important to have an energetic and savvy local M&E pioneer who has a high level of patience and understands technology.
  • At the community level, a key piece is understanding who you need to hear from for effective M&E and ensuring that ICT tools are accessible to all. It’s also critical to understand who you are ignoring or not reaching with any tool or process. Are women and children left out? What about income level? Those who are not literate?
  • Organizations should also take care that they are not replacing or obliterating existing human responsibilities for evaluation. For example, at community level in Ghana, Assembly Members currently have the responsibility for representing citizen concerns. An ICT-enabled feedback loop might undermine this responsibility if it seeks direct-from-citizen evaluation input. The issues of trust and the human-to-human link also need consideration. ICT cannot and should not be a replacement for everything. New ICT tools can increase the number of people and factors evaluated, not just increase the efficiency of existing evaluations.
  • Along the same lines, it’s important not to duplicate existing information systems, create parallel systems or fragment the government’s own systems. Organizations should be strengthening local government systems and working with government to use the information to inform policy and help with decision-making and implementation of programs.
  • Implementers need to think about the direction of information flow. “Is it valuable to share results ‘upward’ and ‘downward’? It is possible to integrate local decision-making into a system.” Systems can be created that allow for immediate local-level decision-making based on survey input. Key survey questions can be linked to indicators that allow for immediate discussion and solutions to improve service provision (a sketch of this idea follows this list).
  • Also, the potential political and social implications of greater openness in information flows needs to be considered. Will local, regional and national government embrace the openness and transparency that ICTs offer? Are donors and NGOs potentially putting people at risk?
  • For best results, pick a feasible and limited number of quality indicators and think through how frontline workers will be motivated to collect the data. Excessive data collection will interfere with or impede service delivery. Make sure managers are capable of handling and analyzing data that comes in and reacting to it, or there is no point in collecting it. It’s important to not only think about what data you want, but how this data will be used. Real-time data collected needs to be actionable. Be sure that those submitting data understand what data they have submitted and can verify its accuracy. Mobile data collection needs to be integrated into real processes and feedback loops. People will only submit information or reports if they see that someone cares about those reports and does something about them.
  • Collecting data through mobile technology may change the behavior being monitored or tracked. One participant commented that when his organization implemented an ICT-based system to track staff performance, people started doing unnecessary activities so that they could tick off the system boxes rather than doing what they knew should be done for better program impact.
  • At the practical level, tips include having robust options for connectivity and power solutions, testing the technology in the field in a real situation, securing reduced costs with vendors through bulk purchasing and master agreements, and using standard vendor tools instead of custom building. It’s good to keep the system as simple, efficient and effective as possible and to avoid redundancy or the addition of features that don’t truly offer more functionality.
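
As a sketch of the idea of linking key survey questions to indicators for immediate local-level action (raised in the list above), consider a small rules table that turns an individual response into prompts for discussion during the same community visit. This is a hypothetical illustration with invented question IDs and thresholds, not a description of any system mentioned in the sessions:

```python
# Each rule links a survey question to an indicator threshold and the
# local action it should trigger.
INDICATOR_RULES = [
    ("q_water_safe", lambda v: v == "no",
     "Unsafe water reported - raise with the water committee"),
    ("q_clinic_wait_hours", lambda v: v >= 4,
     "Clinic wait over 4 hours - review staffing with facility manager"),
]

def local_alerts(response):
    """Check one survey response (question_id -> answer) against the rules
    and return messages field staff can act on immediately."""
    return [message for qid, predicate, message in INDICATOR_RULES
            if qid in response and predicate(response[qid])]

print(local_alerts({"q_water_safe": "no", "q_clinic_wait_hours": 5}))
```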

Thanks to all our participants and lead discussants at the sessions!

Useful information and guides on ICTME:

Mobile-based technology for monitoring and evaluation: A reference guide for project managers, M&E specialists, researchers, donors

3 Reports on mobile data collection

Other posts on ICTs for M&E:

12 tips on using ICTs for social monitoring and accountability

11 points on strengthening local capacity to use new ICTs for M&E

10 tips on using new ICTs for qualitative M&E

Using participatory video for M&E

ICTs and M&E at the South Asia Evaluators’ Conclave


At the Community of Evaluators’ Evaluation Conclave last week, Jill Hannon from Rockefeller Foundation’s Evaluation Office and I organized a session on ICTs for Monitoring and Evaluation (M&E) as part of our efforts to learn what different organizations are doing in this area and better understand some of the challenges. We’ll do a couple of similar sessions at the Catholic Relief Services ICT4D Conference in Accra next week, and then we’ll consolidate what we’ve been learning.

Key points raised at this session covered experiences with ICTs in M&E and with ICT4D more generally, including:

ICTs have their advantages, including ease of data collection (especially as compared to carrying around paper forms); ability to collect and convey information from a large and diversely spread population through solutions like SMS; real-time or quick processing of information and ease of feedback; improved decision-making; and administration of large programs and funding flows from the central to the local level.

Capacity is lacking in the use of ICTs for M&E. In the past, the benefits of ICTs had to be sold. Now, the benefits seem to be clear, but there is not enough rigor in the process of selecting and using ICTs. Many organizations would like to use ICT but do not know how or whom to approach to learn. A key struggle is tailoring ICTs to suit M&E needs and goals and ensuring that the tools selected are the right ones for the job and the user. Organizations have a hard time deciding whether it is appropriate to use ICTs, and once they decide, they have trouble determining which solutions are right for their particular goals. People commonly start with the technology, rather than considering what problem they want the technology to help resolve. Often the person developing the M&E framework does not understand ICT, and the person developing the ICT does not understand M&E. There is need to further develop the capacities of M&E professionals who are using ICT systems. Many ICT solutions exist but organizations don’t know what questions to ask about them, and there is not enough information available in an easily understandable format to help them make decisions.

Mindsets can derail ICT-related efforts. Threats and fears around transparency can create resistance among employees to adopting new ICT tools for M&E. In some cases, lack of political will makes it difficult to bring about institutional change. Earlier experiences of failure when using ICTs (e.g., stolen or broken PCs or PDAs) can also ruin the appetite for trying ICTs again. One complaint was that some government employees nearing retirement age will participate in training as a perk or to collect per diem, yet be uninterested in actually learning any new ICT skills. This can take away opportunities from younger staff who may have a real interest in learning and implementing new approaches.

Privacy needs further study and care. It is not clear whether those who provide information through the Internet, SMS, etc., understand how it is going to be used, and organizations often do not do a good job of explaining. Lack of knowledge about and trust in the privacy of their responses can affect people’s willingness to respond and the accuracy of what they report. More effort needs to be made to guarantee privacy and build trust. Technological solutions to privacy such as data encryption can be implemented, but human behavior is likely the bigger challenge. Paper surveys with sensitive information often get piled up in a room where anyone can see them. In the same way, people do not take care to keep data collected via ICTs safe; for example, they often share passwords. Organizations and agencies need to take privacy more seriously.
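
On the technical side of that point, encrypting stored responses is the easy part; the human behavior around keys and passwords is the harder one. A minimal sketch using the Python cryptography library’s Fernet interface (the record fields are invented for the example):

```python
import json
from cryptography.fernet import Fernet

# Generate once and store separately from the data; sharing this key
# as casually as people share passwords defeats the whole exercise.
key = Fernet.generate_key()
fernet = Fernet(key)

record = {"respondent": "r-102", "answer": "sensitive response"}  # invented fields
token = fernet.encrypt(json.dumps(record).encode())  # safe to store or sync

# Only someone holding the key can recover the record
print(json.loads(fernet.decrypt(token).decode()))
```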

Institutional Review Boards (IRBs) are missing in smaller organizations. Normally an IRB allows a researcher to be sure that a survey is not overly personal or potentially traumatizing, that data encryption is in place, and that data are sanitized. But these systems are usually not established in small, local organizations — they only exist in large organizations — leaving room for ethics breaches.

Information flows need quite a lot of thought, as unintended consequences may derail a project. One participant told of a community health initiative that helped women track their menstrual cycles to determine when they were pregnant. The women were sent information and reminders through SMS on prenatal care. The program ran into problems because the designers did not take into account that some women would miscarry. Women who had miscarried got reminders after their miscarriage, which was traumatic for them. Another participant gave an example of a program that publicized the mobile number of a staff member at a local NGO that supported women victims of violence so that women who faced violence could call to report it. The owner of the mobile phone was overwhelmed with the number of calls, often at night, and would switch the mobile off, meaning no response was available to the women trying to report violence. The organization therefore moved to IVR (interactive voice response), which resolved the original problem; however, with IVR there was still no response for the women who reported violence.

Research needs to be done prior to embarking on use of ICTs. A participant working with women in rural areas mentioned that her organization had planned to use mobile games for an education and awareness campaign. They first conducted research on gender roles and parity and found that women actually had no control over phones: husbands or sons owned them, and women had access only when the men were around. So the organization did not proceed with the mobile games aspect of the project.

Literacy is an issue that can be overcome. Literacy is a real concern, but there are many creative solutions to overcome literacy challenges, such as the use of symbols. A programme in an urban slum used symbols on hand-held devices for a poverty and infrastructure mapping exercise. In Nepal, an organization tried using SMS weather reports, but most people did not have mobiles and could not read SMS. So the organization instead sent an SMS to a couple of farmers in the community who could read, and who would then draw weather symbols on a large billboard. IVR is another commonly used tool in South Asia.

Qualitative data collection using ICTs should not be forgotten. There is often a focus on surveys, and people forget about the power of collecting qualitative data through video, audio, photos, and drawings on mobiles and tablets, among other possibilities. A number of tools can be used for participatory monitoring and evaluation processes. For example, baseline data can be collected through video; tagging can be used to help sort content; video and audio files can be linked with text; and change and decision-making can be captured through video vignettes. People can take their own photos to indicate importance or value. Some participatory rural appraisal techniques can be done on a tablet with a big screen. Climate change and other visual data can be captured with tablets or phones or through digital maps. Photographs and GPS are powerful tools for validation and authentication; however, care needs to be taken when using maps with those who may not easily orient themselves to an aerial view. One caution is that some of these kinds of initiatives are “boutique” designs that can be quite expensive, making scale difficult. As Android devices and tablets become increasingly cheaper and more available, these kinds of solutions may become easier to implement.

Ubiquity and uptake are not the same thing. Even if mobile phones are “everywhere,” it does not mean people will use them to do what organizations or evaluators want them to do. This is true for citizen feedback programs, said one participant, especially when there is a lack of response to reports. “It’s not just an issue of literacy or illiteracy, it’s about culture. It’s about not complaining, about not holding authorities accountable due to community pressures. Some people may not feed back because they are aware of the consequences of complaining and this goes beyond simple access and use of technology.” In addition, returning collected data to the community in a format they can understand and use for their own purposes is important. A participant observed that when evaluators go to the community to collect data for baseline, outcome, impact, etc., from a moral standpoint it is exploitative if they do not report the findings back to the community. Communities are not sure of what they get back from the exercise and this undermines the credibility of the feedback mechanism. Unless people see value in participation, they will not be willing to give their information or feedback. However, it’s important to note that responding to citizen or beneficiary feedback can also skew that feedback. “When people imagine a response will get them something, their feedback will be based on what they expect to get.”

There has not been enough evaluation of ICT-enabled efforts. A participant noted that despite apparent success, there are huge challenges with the use of ICTs in development initiatives: How effective has branchless banking been? How effective is citizen feedback? How are we evaluating the effectiveness of these ICT tools? And what about how these programs affect different stakeholders? Some may be excited by these projects, whereas others are threatened.

Training and learning opportunities are needed. The session ended, yet the question of where evaluators can obtain additional guidance and support for using ICTs in M&E processes lingered. CLEAR South Asia has produced a guide on mobile data collection, and we’ll be on the lookout for additional resources and training opportunities to share, for example this series of reports on Mobile Data Collection in Africa from the World Wide Web Foundation or this online course Using ICT Tools for Effective Monitoring, Impact Evaluation and Research available through the Development Cafe.

Thanks to Mitesh Thakkar from Fieldata, Sanjay Saxena from Total Synergy Consulting, Syed Ali Asjad Naqvi from the Center for Economic Research in Pakistan (CERP) and Pankaj Chhetri from Equal Access Nepal for participating as lead discussants at the session; Siddhi Mankad from Catalyst Management Services Pvt. Ltd for serving as rapporteur; and Rockefeller Foundation’s Evaluation Office for supporting this effort.

We used the Technology Salon methodology for the session, including Chatham House Rule, therefore no attribution has been made in this summary post.

Other sessions in this series of Salons on ICTs and M&E:

12 tips on using ICTs for social monitoring and accountability

11 points on strengthening local capacity to use new ICTs for M&E

10 tips on using new ICTs for qualitative M&E

In addition, here’s a post on how War Child Uganda is using participatory video for M&E


The February 5 Technology Salon in New York City asked “What are the ethics in participatory digital mapping?” Judging by the packed Salon and long waiting list, many of us are struggling with these questions in our work.

Some of the key ethical points raised at the Salon related to the benefits of open data vs privacy and the desire to do no harm. Others were about whether digital maps are an effective tool in participatory community development or if they are mostly an innovation showcase for donors or a backdrop for individual egos to assert their ‘personal coolness’. The absence of research and ethics protocols for some of these new kinds of data gathering and sharing was also an issue of concern for participants.

During the Salon we were only able to scratch the surface, and we hope to get together soon for a more in-depth session (or maybe 2 or 3 sessions – stay tuned!) to further unpack the ethical issues around participatory digital community mapping.

The points raised by discussants and participants included:

1) Showcasing innovation

Is digital mapping really about communities, or are we really just using communities as a backdrop to showcase our own innovation and coolness or that of our donors?

2) Can you do justice to both process and product?

Maps should be less an “in-out tool” and more part of a broader program. External agents should be supporting communities to articulate what they want to do with maps and to be full partners in the process. Digital mapping may not be better than hand-drawn maps, if we consider that the process of mapping is just as or more important than the final product. Hand-drawn maps can allow for important discussions to happen while people draw. This seems to happen much less with the digital mapping process, which is more technical, and even less when outside agents are doing the mapping. A hand-drawn map can be imbued with meaning through the size, color or placement of objects or borders. Important meaning may be missed when hand-drawn maps are replaced with digital ones.

Digital maps, however, can be printed and further enhanced with comments and drawings and discussed in the community, as some noted. And digital maps can lend a sense of professionalism to community members and help them make a stronger case to authorities and decision makers. Some participants raised concerns about power relations during mapping processes, and worried that using digital tools could amplify those.

3) The ethics of wasting people’s time

Community mapping is difficult. The goal of external agents should be to train local people so that they can be owners of the process and sustain it in the long term. This takes time. Often, however, mapping experts are flown in for a week or two to train community members. They leave people with some knowledge, but not enough to fully manage the mapping process and tools. If people end up only half-trained and without local options to continue training, their time has essentially been wasted. In addition, if young people see the training as a pathway to a highly demanded skill set yet are left partially trained and without access to tools and equipment, they will also feel they have wasted their time.

4) Data extraction

When agencies, academics and mappers come in with their clipboards or their GPS units and conduct the same surveys and studies over and over with the same populations, people’s time is also wasted. Open digital community mapping comes from a viewpoint that an open map and open data are one way to make sure that data that is taken from or created by communities is made available to the communities for their own use and can be accessed by others so that the same data is not collected repeatedly. Though there are privacy concerns around opening data, there is a counterbalancing ethical dilemma related to how much time gets wasted by keeping data closed.

5) The (missing) link between data and action

Related to the issue of time wasting is the common issue of a missing link between data collected and/or mapped, action and results. Making a map identifying issues is certainly no guarantee that the government will come and take care of those issues. Maps are a means to an end, but often the end is not clear. What do we really hope the data leads to? What does the community hope for? Mapping can be a flashy technology that brings people to the table, but that is no guarantee that something will happen to resolve the issues the map is aimed at solving.

6) Intermediaries are important

One way to ensure that there is a link between data and action is to identify stakeholders that have the ability to use, understand and re-interpret the data. One case was mentioned where health workers collected data and then wanted to know “What do we do now? How does this affect the work that we do? How do we present this information to community health workers in a way that it is useful to our work?” It’s important to pare the data down and make them understandable to the base population, and to also present them in a way that is useful to people working at local institutions. Each audience will need the data to be visualized or shared in a different, contextually appropriate way if they are going to use the data for decision-making. It’s possible to provide the same data in different ways across different platforms from paper to high tech. The challenge of keeping all the data and the different sharing platforms updated, however, is one that can’t be overlooked.

7) What does informed consent actually mean in today’s world?

There is a viewpoint that data must be open and that locking up data is unethical. On the other hand, there are questions about research ethics and protocols when doing mapping projects and sharing or opening data. Are those who do mapping getting informed consent from people to use or open their data? This is the cornerstone of ethics when doing research with human beings. One must be able to explain and be clear about the risks of this data collection, or it is impossible to get truly informed consent. What consent do community mappers need from other community members if they are opening data or information? What about when people are volunteering their information and self-reporting? What does informed consent mean in those cases? And what needs to be done to ensure that consent is truly informed? How can open data and mapping be explained to those who have not used the Internet before? How can we have informed consent if we cannot promise anyone that their data are really secure? Do we have ethics review boards for these new technological ways of gathering data?

8) Not having community data also has ethical implications

It may seem like time-wasting, and there may be privacy and protection questions, but there are also ethical implications of not having community data. When tools like satellite remote sensing are used to do slum mapping, for example, the data are very dehumanized and can lead to sterile decision-making. The data that come from a community itself can make these maps more human and these decisions more humane. But there is a balance between the human/humanizing side and the need to protect. Standards are needed for bringing in community and/or human data in an anonymized way, because there are ethical implications on both ends.

9) The problem with donors….

Big donors are not asking the tough questions, according to some participants. There is a lack of understanding around the meaning, use and value of the data being collected and the utility of maps. “If the data is crap, you’ll have crap GIS and a crap map. If you are just doing a map to do a map, there’s an issue.” There is great incentive from the donor side to show maps and to demonstrate value, because maps are a great photo op, a great visual. But how to go a level down to make a map really useful? Are the M&E folks raising the bar and asking these hard questions? Often from the funder’s perspective, mapping is seen as something that can be done quickly. “Get the map up and the project is done. Voila! And if you can do it in 3 weeks, even better!”

Some participants felt the need for greater donor awareness of these ethical questions because many of them are directly related to funding issues. As one participant noted, whether you coordinate, whether it's participatory, whether you communicate and share back the information, whether you can do the right thing with the privacy issue — these all depend on what you can convince a donor to fund. Often it's faster to reinvent the wheel because doing it the right way — coordinating, learning from past efforts, involving the community — takes more time and money. That's often the hard constraint on these questions of ethics.

Check this link for some resources on the topic, and add yours to the list.

Many thanks to our lead discussants, Robert Banick from the American Red Cross and Erica Hagen from Ground Truth, and to Population Council for hosting us for this month’s Salon!

The next Technology Salon NYC will be coming up in March. Stay tuned for more information, and if you’d like to receive notifications about future salons, sign up for the mailing list!


At the October 17, 2012 Technology Salon NYC, we focused on ways that ICTs can be used for qualitative monitoring and evaluation (M&E) efforts that aim to listen better to those who are participating in development programs. Our lead discussants were: John Hecklinger, Global Giving; Ian Thorpe, UN DOCO and the World We Want 2015 Campaign; and Emily Jacobi, Digital Democracy. This Salon was the final in a series of three on using new technologies in M&E work.

Global Giving shared experiences from their story-telling project, which has collected tens of thousands of short narratives from community members about when an individual or organization tried to change something in their community. The collected stories are analyzed using Sensemaker to find patterns in the data, with the aim of improving NGO work. (For more on Global Giving's process, see this document.)

The United Nations' Beyond 2015 Campaign aims to spur a global conversation on the post-MDG development agenda. The campaign is conducting outreach to people and organizations to encourage them to participate in the discussion; offering a web platform (www.worldwewant2015.org) where the global conversation is taking place; and working to get offline voices into the conversation. A challenge will be synthesizing and making sense of all of the information coming in via all sorts of media channels, and being accountable, now and in the future, to those who participate in the process.

Digital Democracy works on digital literacy and human rights, and makes an effort to integrate qualitative monitoring and evaluation into their program work stream. They use photography, film and other media that transcend language and literacy barriers. Using these kinds of media helps participants express opinions on issues that need addressing and builds trust. Photos have helped in program development as well as in defining quantitative and qualitative indicators.

A rich conversation took place around the following aspects:

1) Perception may trump hard data

One discussant raised the question "Do opinions matter more than hard data on services?" noting that perceptions about aid and development may be more important than numbers of items delivered, money spent, and timelines met. Even if an organization is meeting all of its targets, what may matter more is what people think about the organization and its work. Does the assistance they get respond to their needs? Rather than asking "Is the school open?" or "Did you get health care?" it may be more important to ask "How do you feel about health?" Agencies may be delivering projects that are not what people want or that do not respond to their needs, cultures, and so on. It is important to encourage people to talk amongst themselves about their priorities and what they think, to invite viewpoints from people of different backgrounds, and to consider how to pull out information that can help inform programs and approaches.

2) It is a complex process

Salon participants noted that people are clearly willing to share stories and unstructured feedback. However, the process of collecting and sorting through stories is unwieldy and far from perfect. More work needs to be done to simplify story-collection processes and make them more tech-enabled. In addition, more needs to be done to determine exactly how to feed the information gleaned back in a structured and organized way that helps with decision-making. One idea was the creation of a "Yelp" for NGOs. Tagging and/or asking program participants to tag photos and stories can help make sense of the data. If videos are subtitled, this can also be of great use to begin making sense of the type of information held in videos. Dotsub, for example, is a video subtitling platform that uses a Wikipedia-style subtitling model, enabling crowdsourced video translations into any language.
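
To make the tagging idea concrete, here is a minimal sketch of how simple tag counts could surface recurring themes across a collection of story records. The stories, tags, and field names are all invented for illustration; a real story-collection platform would have its own schema and far messier data.

```python
from collections import Counter

# Hypothetical story records: free text plus participant-applied tags
stories = [
    {"text": "The clinic reopened after the community petitioned the council.",
     "tags": ["health", "advocacy"]},
    {"text": "A youth group cleaned the blocked drainage canal.",
     "tags": ["sanitation", "youth"]},
    {"text": "Mothers organized to demand a new water point.",
     "tags": ["water", "advocacy"]},
]

# Count how often each tag appears to surface recurring themes
tag_counts = Counter(tag for story in stories for tag in story["tags"])

for tag, count in tag_counts.most_common():
    print(f"{tag}: {count}")
```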

3) Stories and tags are not enough

We know that collecting and tagging stories to pull out qualitative feedback is possible. But so what? The important next step is looking at the effective use of these stories and data. Some ideas on how to better use the data include adding SMS feedback, deep dives with NGOs, and face-to-face meetings. It’s important to move from collecting the stories to thinking about what questions should be asked, how the information can help NGOs improve their performance, how this qualitative data translates into change or different practice at the local and global levels, how the information could be used by local organizers for community mobilization or action, and how all this is informing program design, frameworks and indicators.

4) Outreach is important

Building an online platform does not guarantee that anyone will visit it or participate. Local partners are an important channel for reaching out and collecting data about what people think and feel. Outreach needs to be done with many partners from all parts of a community or society in order to source different viewpoints. In addition, it is important to ask the right questions and establish trust, or people will not want to share their views. Any quality participation process, whether online or offline, needs good facilitation and encouragement; it needs to be a two-way process, a conversation.

5) Be aware of bias

Understanding where the process may be biased is important. Asking leading questions, defining the metadata in a certain way, creating processes that only include certain parts of the community or population, selecting certain partners, or asking questions designed to confirm what an organization already thinks it needs to know can all create biased answers. Language is important here for several reasons: it will affect who is included or excluded and who is talking with whom. Using development jargon will not resonate with people, and the way development agencies frame questions may lead people to particular answers.

6) Be aware of exclusion

Related to bias is the issue of exclusion. In large-scale consultations or online situations, it's difficult to know who is talking and participating. Yet the more log-in information solicited, the less likely people are to participate in discussions. However, by not asking, it's hard to know who is responding, especially when anonymity is allowed. In addition, results depend on who is willing and wants to participate. Participants agreed that there is no silver bullet for finding folks to participate and ensuring they represent a diversity of opinion. One suggestion was that libraries and telecenters could play a role in engaging more remote or isolated communities in these kinds of dialogues.

7) Raising expectations

Asking people for feedback raises expectations that their input will be heard and that they will see some type of concrete result. In these feedback processes, what happens if the decisions made by NGOs or heads of state don't reflect what people said or contributed? How can we ensure that we are actually listening to what people tell us? Often we ask for people's perceptions and then tell them why they are wrong. Follow-up is also critical. A campaign from several years ago was mentioned where 93,000 people signed onto a pledge; once the target was achieved, the campaign ended and there was no further engagement with the 93,000 people. Soliciting input and feedback needs to be an ongoing relationship with continual dialogue and response. The process itself needs to be transparent and accountable to those who participate in it.

8) Don't forget safety and protection

The issue of safety and protection for those who offer their opinions and feedback or raise issues and complaints was brought up. Participants noted that safety is very context-specific, and that participatory risk assessments conducted together with community members and partners can help mitigate risk and ensure that people are informed about potential risks. Avoiding a paternalistic stance is recommended, as sometimes human rights advocates know very well what their risk is and are willing to take it. NGOs should, however, be sure that those with whom they are working fully understand the risks and implications, especially when new media tools are involved that they may not have used before. Digital literacy is key.

9) Weave qualitative M&E into the whole process

Weaving consistent spaces for input and feedback into programs is important. As one discussant noted, “the very media tools we are training partners on are part of our monitoring and evaluation process.”  The initial consultation process itself can form part of the baseline. In addition to M&E, creating trust and a safe space to openly and honestly discuss failure and what did not go so well can help programs improve.  Qualitative information can also help provide a better understanding of the real and hard dynamics of the local context, for example the challenges faced during a complex emergency or protracted conflict. Qualitative monitoring can help people who are not on the ground have a greater appreciation for the circumstances, political framework, and the socio-economic dynamics.

10) Cheaper tools are needed

Some felt that the tools being shared (Sensemaker in particular) were too expensive and sophisticated for their needs, and too costly for smaller NGOs. Simpler tools would be useful in order to more easily digest the information and create visuals and other analyses that can be fed back to those who need to use the information to make changes. Other tools exist that might be helpful, such as Trimble's Municipal Reporter, Open Data Kit, Kobo, iFormBuilder, EpiSurveyor/Magpi and PoiMapper. One idea is to look at some of the tools being developed and used in the crisis mapping and response space to see if cost is dropping and capacity increasing as the field advances. (Note: several tools for parsing Twitter and other social media platforms were presented at the 2012 International Conference on Crisis Mapping, some of which could be examined and learned from.)

What next?

A final question at the Salon was around how the broader evaluation community can connect with the tools and people who are testing and experimenting with these new ways of conducting monitoring and evaluation. How can we create better momentum in the community to embrace these practices and help build this field?

Although this was the final Salon of our series on monitoring and evaluation, we’ll continue to work on what was learned and ways to take these ideas forward and keep the community talking and growing.

A huge thank you to our lead discussants and participants in this series of Salons, especially to the Community Systems Foundation and the Rockefeller Foundation’s monitoring and evaluation team for joining in the coordination with us. A special thanks to Rockefeller for all of the thoughtful discussion throughout the process and for hosting the Salons.

The next Technology Salon NYC will be November 14, 2012, hosted by the Women’s Refugee Commission and the International Rescue Committee. We’ll be shifting gears a little, and our topic will be around ways that new technologies can support children and youth who migrate, are forcibly displaced or are trafficked.

If you’d like to receive notifications about future salons, sign up for the mailing list!

Previous Salons in the ICTs and M&E Series:

12 lessons learned with ICTs for monitoring and accountability

11 points on strengthening local capacity to use new ICTs for monitoring and evaluation


New technologies are opening up all kinds of possibilities for improving monitoring and evaluation. From ongoing feedback and crowdsourced input, to more structured digital data collection, to access to large data sets and improved data visualization, the field is changing quickly.

On August 7, the Rockefeller Foundation and the Community Systems Foundation (CSF) joined up with the Technology Salon NYC for the first in a series of three Salons on the use of ICTs in monitoring and evaluating development outcomes. Our lead discussants were: Erica Kochi from UNICEF Innovations; Steven Davenport from Development Gateway; and John Toner from CSF.

This particular Salon focused on the use of ICTs for social monitoring (a.k.a. ‘beneficiary feedback loops’) and accountability. Below is a summary of the key points that emerged at the Salon.

1) Monitoring and evaluation is changing

M&E is not only about formal data collection and indicators anymore. As one discussant commented, "It's free form, it contains sentiment." New ICT tools can help donors and governments plan better. SMS and other social monitoring tools provide an additional element to more formal information sources and can help capture the pulse of the population. Combining official data sets with SMS data provides new ways of looking at cross-sections of information. Visualizations and trend analysis can offer combinations of information for decision-making. Social monitoring, however, can be a scary thing for large institutions. It can seem too uncontrolled or potentially conflictive. One way to ease into it is through "bounded" crowdsourcing (e.g., working with a defined and more 'trusted' subset of the public) until there is comfort with these kinds of feedback mechanisms.
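
As a rough illustration of the kind of cross-section described above, the sketch below joins a hypothetical official dataset with hypothetical SMS poll results on a shared district key. All names and figures are invented; real sources would need cleaning and careful matching before any such join.

```python
import pandas as pd

# Hypothetical official dataset: planned service delivery per district
official = pd.DataFrame({
    "district": ["North", "South", "East"],
    "clinics_planned": [4, 6, 3],
})

# Hypothetical SMS poll results: citizen-reported satisfaction per district
sms = pd.DataFrame({
    "district": ["North", "South", "East"],
    "pct_satisfied": [72, 41, 65],
})

# Join the two sources on district to see formal and citizen data side by side
combined = official.merge(sms, on="district")
print(combined.sort_values("pct_satisfied"))
```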

2) People need to be motivated to participate in social monitoring efforts

Building a platform or establishing an SMS response tool is not enough. One key to a successful social monitoring effort is working with existing networks, groups and organizations and doing well-planned and executed outreach, for example, in the newspaper, on the radio and on television. Social monitoring can and should go beyond producing information for a particular project or program. It should create an ongoing dialogue between and among people and institutions, expanding on traditional monitoring efforts and becoming a catalyst for organizations or government to better communicate and engage with the community. SMS feedback loops need to be thought of in terms of a dialogue or a series of questions rather than a one-question survey. “People get really engaged when they are involved in back and forth conversation.” Offering prizes or other kinds of external motivation can spike participation rates but also can create expectations that affect or skew programs in the long run. Sustainable approaches need to be identified early on. Rewards can also lead to false reports and re-registering, and need to be carefully managed.

3) Responsiveness to citizen/participant feedback is critical

One way to help motivate individuals to participate in social monitoring is for governments or institutions to show that citizen/participant feedback elicits a response (e.g., better delivery of public services). "Incentives are good," said one discussant, "but at the core, if you get interactive with users, you will start to see the responses. Then you'll have a targeted group that you can turn to." Responsiveness can be an issue, however, if there is limited government or institutional interest, resourcing or capacity, so it's important to work on both sides of the equation so that demand does not outstrip response capacity. Monitoring the responsiveness to citizen/participant feedback is also important: "Was there a response promised? Did it happen? Has it been verified? What was the quality of it?"
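
Those four follow-up questions suggest a simple tracking structure. Below is a minimal sketch of one way such cases might be recorded so that promised-but-undelivered responses can be flagged; the fields and example data are hypothetical, not drawn from any system discussed at the Salon.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical record for tracking whether citizen feedback gets a response
@dataclass
class FeedbackCase:
    issue: str
    reported_on: date
    response_promised: bool = False
    response_delivered: bool = False
    verified: bool = False
    quality_rating: Optional[int] = None  # e.g., 1-5 from a follow-up SMS

cases = [FeedbackCase(issue="Broken borehole in ward 3",
                      reported_on=date(2012, 8, 1),
                      response_promised=True)]

# Flag cases where a response was promised but has not yet been delivered
overdue = [c for c in cases if c.response_promised and not c.response_delivered]
print(len(overdue), "case(s) awaiting a promised response")
```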

4) Privacy and protection are always a concern

Salon participants brought up concerns about privacy and protection, especially for more sensitive issues that can put those who provide feedback at risk. There are a number of good practices in the IT world for keeping the data itself private, for example presenting it in aggregate form, only releasing certain data, and setting up controls over who can access different levels of data. However, with crowdsourcing or incident mapping, there can be serious concerns for those who report or provide feedback. Program managers need to have a very good handle on the potential risks involved or they can cause unintended harm to participants. Consulting with participants to better understand the context is a good idea.
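
One of the good practices mentioned, releasing data only in aggregate form, can be sketched roughly as follows: raw reports are rolled up into group counts, and any group small enough to risk identifying individuals is suppressed. The data, threshold, and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical raw feedback: one row per individual report
reports = pd.DataFrame({
    "village": ["A", "A", "A", "A", "B", "B", "B", "C"],
    "issue":   ["water", "water", "water", "roads",
                "water", "water", "roads", "water"],
})

MIN_CELL_SIZE = 3  # suppress groups small enough to risk identifying individuals

# Aggregate to counts per village and issue, then drop cells below the threshold
summary = reports.groupby(["village", "issue"]).size().reset_index(name="count")
safe_to_release = summary[summary["count"] >= MIN_CELL_SIZE]
print(safe_to_release)
```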

5) Inclusion needs to be purposeful

Getting a representative response via SMS-based feedback or other social monitoring tools is not always easy. Mandatory ratios of male and female respondents, age groups, or other characteristics can help ensure better representation, as sketched below. Different districts can be sampled in an effort to ensure the overall response is representative. "If not," commented one presenter, "you'll just get data from urban males." Barriers to participation also need consideration, such as language; however, working in multiple languages becomes very complicated very quickly. One participant noted that it is important to monitor whether people from different groups or geographic areas understand survey questions in the same way, and to be able to fine-tune the system as it goes along. A key concern is reaching and including the most vulnerable with these new technologies. "Donors want new technology as a default, but I cannot reach the most excluded with technology right now," commented a participant.
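
A crude way to operationalize such ratios is to check response shares against quotas as data come in and flag under-represented groups for additional outreach. The quotas and respondent records in the sketch below are invented.

```python
# Hypothetical quotas: minimum share of responses expected from each group
quotas = {"female": 0.5, "rural": 0.4}

# Hypothetical respondent records collected so far
respondents = [
    {"sex": "female", "location": "rural"},
    {"sex": "male",   "location": "urban"},
    {"sex": "male",   "location": "urban"},
    {"sex": "female", "location": "urban"},
]

total = len(respondents)
shares = {
    "female": sum(r["sex"] == "female" for r in respondents) / total,
    "rural":  sum(r["location"] == "rural" for r in respondents) / total,
}

# Flag under-represented groups so outreach can be adjusted mid-survey
for group, target in quotas.items():
    if shares[group] < target:
        print(f"{group} respondents under-represented: "
              f"{shares[group]:.0%} vs target {target:.0%}")
```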

6) Information should be useful to and used by the community

In addition to ensuring inclusion of individuals and groups, communities need to be involved in the entire process. “We need to be sure we are not just extracting information,” mentioned one participant. Organizations should be asking: What information does the community want? How can they get it themselves or from us? How can we help communities to collect the information they need on their own or provide them with local, sustainable support to do so?

7) Be sure to use the right tools for the job

Character limitation can be an issue with SMS. Decision tree models, where one question prompts another question that takes the user down a variety of paths, are one way around the character limit. SMS is not good for incredibly in-depth surveys, however; it is good for breadth, not depth. It's important to use SMS and other digital tools for what they are good for. Paper can often be a better tool, and there is no shame in using it. Discussants emphasized that one shouldn't underestimate the challenges of working with telco operators and setting up short codes; building the SMS network infrastructure takes months. Social media is on the rise, so how do you channel that into the M&E conversation?
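
To illustrate the decision-tree idea, here is a minimal sketch in which each reply routes the respondent to a different next question, keeping every individual message well under the 160-character SMS limit. The questions, answer codes, and routing are all hypothetical.

```python
# Hypothetical decision tree for an SMS survey: each reply routes to a
# different next question, so no single message exceeds the SMS length limit.
TREE = {
    "start":   {"text": "Did the water point work this week? 1=Yes 2=No",
                "1": "quality", "2": "reason"},
    "quality": {"text": "How was the water quality? 1=Good 2=Poor"},
    "reason":  {"text": "Why not? 1=Broken pump 2=No water 3=Other"},
}

def next_question(node_key: str, reply: str):
    """Return (next_node, next_text), or (None, thanks) when the path ends."""
    next_key = TREE[node_key].get(reply)
    if next_key is None:
        return None, "Thank you for your answers!"
    return next_key, TREE[next_key]["text"]

# Example: a respondent replies "2" to the opening question
node, text = next_question("start", "2")
print(text)  # -> Why not? 1=Broken pump 2=No water 3=Other
```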

8) Broader evaluative questions need to be established for these initiatives

The purpose of including ICT in different initiatives needs to be clear. Goals and evaluative questions need to be established. Teams need to work together because no one person is likely to have the programmatic, ICT and evaluation skills needed for a successfully implemented and well-documented project. Programs that include ICTs need better documentation and evaluation overall, including cost-benefit analyses and comparative analyses with other potential tools that could be used for these and similar processes.

9) Technology is not automatically cheaper and easier

These processes remain very iterative; they are not 'automated' processes. Initial surveys can only show patterns. What is more interesting is back-and-forth dialogue with participants. As one discussant noted, staff still spend a lot of time combing through data and responses to find patterns and nuances within the details. There is still a cost to these projects. In one instance, the bulk of the project budget went into launching a communication campaign and working with existing physical networks to get people to participate. Compared to traditional ways of doing things (face-to-face, for example), the cost of outreach is not so expensive, but integrating SMS and other technologies does not automatically mean that money will be saved. The cost of SMS itself is also large in these kinds of projects because, in order to ensure participation, representation and inclusion, SMS usually needs to be free for participants. Even with bulk rates, if the program is at massive scale, it's quite expensive. This is a real consideration if governments or local organizations are expected to take over these projects at some point.
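
A quick back-of-envelope calculation shows how these costs add up at scale. The figures below are invented; actual bulk SMS rates vary widely by country and operator.

```python
# Back-of-envelope cost for a free-to-participant SMS dialogue at scale.
# All figures are hypothetical illustrations, not real operator prices.
participants = 50_000
messages_per_dialogue = 6   # a short back-and-forth, not a one-question survey
cost_per_sms = 0.02         # USD per message, assumed bulk rate

total_cost = participants * messages_per_dialogue * cost_per_sms
print(f"Estimated SMS cost: ${total_cost:,.0f}")  # -> Estimated SMS cost: $6,000
```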

10) Solutions at huge scale are not feasible for most organizations 

Some participants commented that the UN, the Red Cross and similarly sized organizations are the only ones who can work at the level of scale discussed at the Salon. Not many agencies have the weight to influence governments or mobile service providers, and these negotiations are difficult even for large-scale organizations. It's important to look at solutions that respond to what development organizations and local NGOs can actually do. "And what about localized tools that can be used at the district or village level? For example, localized tools for participatory budgeting?" asked a participant. "There are ways to link high tech and SMS with low tech, radio outreach, working with journalists, working with other tools," commented others. "We need to talk more about these ways of reaching everyone. We need to think more about the role of intermediaries in building capacity for beneficiaries and development partners to do this better."

11) New technology is not M&E magic

Even if you include new technology, successful initiatives require a team of people and need to be managed. There is no magic to doing translations or understanding the data: people are needed to put all this together, to understand it, to make it work. In addition, the tools covered at the Salon only collect one piece of the necessary information. "We have to be careful how we say things," commented a discussant. "We call it M&E, but it's really 'M.' We get confused with ourselves sometimes. What we are talking about today is monitoring results. Evaluation is how you take all that information and make an informed decision. It involves specialists and more information on top of this…" Another participant emphasized that SMS feedback can get at the symptoms but doesn't seem to get at the root causes. Data need to be triangulated, efforts need to be made to address root causes, and end users need to be involved.

12) Donors need to support adaptive design

Participants emphasized that those developing these programs, tools and systems need to be given space to try and to iterate, to use a process of adaptive design. Donors shouldn’t lock implementers into unsuitable design processes. A focused ‘ICT and Evaluation Fail Faire’ was suggested as a space for improving sharing and learning around ICTs and M&E. There is also learning to be shared from people involved in ICT projects that have scaled up. “We need to know what evidence is needed to scale up. There is excitement and investment, but not enough evidence,” it was concluded.

Our next Salon

Our next Salon in the series will take place on August 30th. It will focus on the role of intermediaries in building capacity for communities and development partners to use new technologies for monitoring and evaluation. We’ll be looking to discover good practices for advancing the use of ICTs in M&E in sustainable ways. Sign up for the Technology Salon mailing list here. [Update: A summary of the August 30 Salon is here.]

Salons are run under the Chatham House Rule, thus no attribution has been made.

