(Reposting, original appears here)

Back in 2014, the humanitarian and development sectors were in the heyday of excitement over innovation and Information and Communication Technologies for Development (ICT4D). The role of ICTs specifically for monitoring, evaluation, research and learning (aka “MERL Tech”) had not been systematized (as far as I know), and it was unclear whether there actually was “a field.” I had the privilege of writing a discussion paper with Michael Bamberger to explore how and why new technologies were being tested and used in the different steps of a traditional planning, monitoring and evaluation cycle. (See graphic 1 below, from our paper).

The approaches highlighted in 2014 focused on mobile phones, for example: text messages (SMS), mobile data gathering, use of mobiles for photos and recording, and mapping with handheld global positioning system (GPS) devices or GPS installed in mobile phones. Promising technologies included tablets, which were only beginning to be used for M&E; “the cloud,” which enabled easier updating of software and applications; remote sensing and satellite imagery; dashboards; and online software that helped evaluators do their work more easily. Social media was also really taking off in 2014. It was seen as a potential way to monitor discussions among program participants and gather feedback from them, and it was considered an underutilized tool for wider dissemination of evaluation results and learning. Real-time data, big data, and feedback loops were emerging as ways to improve program monitoring and enable quicker adaptation.

In our paper, we outlined five main challenges for the use of ICTs for M&E: selectivity bias; technology- or tool-driven M&E processes; over-reliance on digital and remotely collected data; low institutional capacity and resistance to change; and privacy and protection. We also suggested key areas to consider when integrating ICTs into M&E: quality M&E planning; design validity; the value-add (or not) of ICTs; using the right combination of tools; adapting and testing new processes before roll-out; technology access and inclusion; motivation to use ICTs; privacy and protection; unintended consequences; local capacity; measuring what matters (not just what the tech allows you to measure); and effectively using and sharing M&E information and learning.

We concluded that:

  • The field of ICTs in M&E is emerging and activity is happening at multiple levels and with a wide range of tools and approaches and actors. 
  • The field needs more documentation on the utility and impact of ICTs for M&E. 
  • Pressure to show impact may open up space for testing new M&E approaches. 
  • A number of pitfalls need to be avoided when designing an evaluation plan that involves ICTs. 
  • Investment in the development, application and evaluation of new M&E methods could help evaluators and organizations adapt their approaches throughout the entire program cycle, making them more flexible and adjusted to the complex environments in which development initiatives and M&E take place.

Where are we now? MERL Tech in 2019

Much has happened globally over the past five years in the wider field of technology, communications, infrastructure, and society, and these changes have influenced the MERL Tech space. Our 2014 focus on basic mobile phones, SMS, mobile surveys, mapping, and crowdsourcing might now appear quaint, considering that worldwide access to smartphones and the Internet has expanded beyond the expectations of many. We know that access is not evenly distributed, but the fact that more and more people are getting online cannot be disputed. Some MERL practitioners are using advanced artificial intelligence, machine learning, biometrics, and sentiment analysis in their work. And as smartphone and Internet use continue to grow, more data will be produced by people around the world. The way that MERL practitioners access and use data will likely continue to shift, and the composition of MERL teams and their required skillsets will also change.

The excitement over innovation and new technologies seen in 2014 could also be seen as naive, however, considering some of the negative consequences that have emerged since then: social media-inspired violence (such as that in Myanmar), election and political interference through the Internet, misinformation and disinformation, and the race to the bottom through the online “gig economy.”

In this changing context, a team of MERL Tech practitioners (both enthusiasts and skeptics) embarked on a second round of research in order to try to provide an updated “State of the Field” for MERL Tech that looks at changes in the space between 2014 and 2019.

Based on MERL Tech conferences and wider conversations in the MERL Tech space, we identified three general waves of technology emergence in MERL:

  • First wave: Tech for Traditional MERL: Use of technology (including mobile phones, satellites, and increasingly sophisticated databases) to do ‘what we’ve always done,’ with a focus on digital data collection and management. For these uses of “MERL Tech” there is a growing evidence base. 
  • Second wave: Big Data: Exploration of big data and data science for MERL purposes. While plenty has been written about big data for other sectors, the literature on the use of big data and data science for MERL is somewhat limited, and it is more focused on potential than actual use. 
  • Third wave: Emerging approaches: Technologies and approaches that generate new sources and forms of data; offer different modalities of data collection; provide new ways to store and organize data; and provide new techniques for data processing and analysis. The potential of these has been explored, but there is little evidence base on their actual use for MERL. 

We’ll be doing a few sessions at the American Evaluation Association conference this week to share what we’ve been finding in our research. Please join us if you’ll be attending the conference!

Session Details:

Thursday, Nov 14, 2:45-3:30pm: Room CC101D

Friday, Nov 15, 3:30-4:15pm: Room CC101D

Saturday, Nov 16, 10:15-11:00am: Room CC200DE


I used to write blog posts two or three times a week, but things have been a little quiet here for the past couple of years. That’s partly because I’ve been ‘doing actual work’ (as we like to say) trying to implement the theoretical ‘good practices’ that I like soapboxing about. I’ve also been doing some writing in other places and in ways that I hope might be more rigorously critiqued and thus have a wider influence than just putting them up on a blog.

One of those bits of work that’s recently been released publicly is a first version of a monitoring and evaluation framework for SIMLab. We started discussing this at the first M&E Tech conference in 2014. Laura Walker McDonald (SIMLab CEO) outlines why in a blog post.

Evaluating the use of ICTs—which are used for a variety of projects, from legal services and coordinating responses to infectious diseases, to media reporting in repressive environments, to transferring money among the unbanked, to voting—can hardly be reduced to a check-list. At SIMLab, our past nine years with FrontlineSMS has taught us that isolating and understanding the impact of technology on an intervention, in any sector, is complicated. ICTs change organizational processes and interpersonal relations. They can put vulnerable populations at risk, even while improving the efficiency of services delivered to others. ICTs break. Innovations fail to take hold, or prove to be unsustainable.

For these and many other reasons, it’s critical that we know which tools do and don’t work, and why. As M4D edges into another decade, we need to know what to invest in, which approaches to pursue and improve, and which approaches should be consigned to history. Even for widely-used platforms, adoption doesn’t automatically mean evidence of impact….

FrontlineSMS is a case in point: although the software has clocked up 200,000 downloads in 199 territories since October 2005, there are few truly robust studies of the way that the platform has impacted the project or organization it was implemented in. Evaluations rely on anecdotal data, or focus on the impact of the intervention, without isolating how the technology has affected it. Many do not consider whether the rollout of the software was well-designed, training effectively delivered, or the project sustainably planned.

As an organization that provides technology strategy and support to other organizations — both large and small — it is important for SIMLab to better understand the quality of that support and how it may translate into improvements as well as how introduction or improvement of information and communication technology contributes to impact at the broader scale.

This is a difficult proposition, given that isolating a single factor like technology is extremely tough, if not impossible. The Framework thus aims to get at the breadth of considerations that go into successful tech-enabled project design and implementation. It does not aim to attribute impact to a particular technology, but to better understand that technology’s contribution to the wider impact at various levels. We know this is incredibly complex, but thought it was worth a try.

As Laura notes in another blogpost,

One of our toughest challenges while writing the thing was to recognize the breadth of factors that we see as contributing to success in a tech-enabled social change project, without accidentally trying to write a design manual for these types of projects. So we reoriented ourselves, and decided instead to put forward strong, values-based statements.* For this, we wanted to build on an existing frame that already had strong recognition among evaluators – the OECD-DAC criteria for the evaluation of development assistance. There was some precedent for this, as ALNAP adapted the criteria in 2008 to make them better suited to humanitarian aid. We wanted our offering to simply extend and consider the criteria for technology-enabled social change projects.

Here are the adapted criteria, which you can read more about in the Framework. They were designed for internal use, but we hope they might be useful to evaluators of technology-enabled programming, commissioners of evaluations of these programs, and those who want to do in-house examination of their own technology-enabled efforts. We welcome your thoughts and feedback. The Framework is published in draft format in the hope that others working on similar challenges can help make it better, and so that they can pick up and use any and all of it that is helpful to them. The document includes practical guidance on developing an M&E plan, a typical project cycle, and some methodologies that might be useful, as well as sample log frames and evaluator terms of reference.

Happy reading! We really look forward to any feedback and suggestions.

*****

The Criteria

Criterion 1: Relevance

The extent to which the technology choice is appropriately suited to the priorities, capacities and context of the target group or organization.

Consider: are the activities and outputs of the project consistent with the goal and objectives? Was there a good context analysis and needs assessment, or another way for needs to inform design – particularly through participation by end users? Did the implementer have the capacity, knowledge and experience to implement the project? Was the right technology tool and channel selected for the context and the users? Was content localized appropriately?

Criterion 2: Effectiveness

A measure of the extent to which an information and communication channel, technology tool, technology platform, or a combination of these attains its objectives.

Consider: In a technology-enabled effort, there may be one tool or platform, or a set of tools and platforms may be designed to work together as a suite. Additionally, the selection of a particular communication channel (SMS, voice, etc.) matters in terms of cost and effectiveness. Was the project monitored so that early snags and breakdowns were identified and fixed? Was there good user support? Did the tool and/or the channel meet the needs of the overall project? Note that this criterion should be examined at outcome level, not output level, and should examine how the objectives were formulated, by whom (did primary stakeholders participate?), and why.

Criterion 3: Efficiency

Efficiency measures the outputs – qualitative and quantitative – in relation to the inputs. It is an economic term which signifies that the project or program uses the least costly technology approach possible (including both the tech itself and what it takes to sustain and use it) in order to achieve the desired results. This generally requires comparing alternative approaches (technological or non-technological) to achieving the same outputs, to see whether the most efficient tools and processes have been adopted. SIMLab looks at the interplay of efficiency and effectiveness, and at the degree to which a new tool or platform can support a reduction in cost and time, along with an increase in the quality of data and/or services and in reach/scale.

Consider: Was the technology tool rollout carried out as planned and on time? If not, what were the deviations from the plan, and how were they handled? If a new channel or tool replaced an existing one, how do the communication, digitization, transportation and processing costs of the new system compare to the previous one? Would it have been cheaper to build features into an existing tool rather than create a whole new tool? To what extent were aspects such as cost of data, ease of working with mobile providers, total cost of ownership and upgrading of the tool or platform considered?
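The cost questions above can be made concrete with a simple side-by-side total-cost-of-ownership comparison of the old and new approaches over one survey round. This is only a sketch of the shape of the calculation; all figures and category names below are hypothetical placeholders, not real project costs.

```python
# Hypothetical total-cost-of-ownership comparison for one survey round.
# All figures are illustrative placeholders, not real project costs.

def total_cost(approach):
    """One-time costs plus recurring costs for a single survey round."""
    one_time = approach["devices"] + approach["training"] + approach["setup"]
    recurring = (approach["per_survey_cost"] * approach["n_surveys"]
                 + approach["data_processing"])
    return one_time + recurring

paper = {"devices": 0, "training": 500, "setup": 200,
         "per_survey_cost": 1.50, "n_surveys": 2000,  # printing + transport
         "data_processing": 4000}                     # manual data entry
digital = {"devices": 3000, "training": 1500, "setup": 800,
           "per_survey_cost": 0.20, "n_surveys": 2000,  # airtime/data bundles
           "data_processing": 500}                      # automated export

print(f"paper:   {total_cost(paper):,.0f}")    # 7,700
print(f"digital: {total_cost(digital):,.0f}")  # 6,200
```

With these made-up numbers the digital approach only wins because of its lower per-survey and data-entry costs; with a smaller sample the up-front device and training costs would tip the balance the other way, which is exactly the kind of trade-off this criterion asks evaluators to examine.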

Criterion 4: Impact

Impact relates to the consequences of achieving or not achieving the outcomes. Impacts may take months or years to become apparent, and often cannot be established in an end-of-project evaluation. Identifying, documenting and/or proving attribution (as opposed to contribution) may be an issue here. ALNAP’s complex emergencies evaluation criteria include ‘coverage’ as well as impact: ‘the need to reach major population groups wherever they are.’ They note: ‘In determining why certain groups were covered or not, a central question is: “What were the main reasons that the intervention provided or failed to provide major population groups with assistance and protection, proportionate to their need?”’ This is very relevant for us.

For SIMLab, a lack of coverage in an inclusive technology project means not only failing to reach some groups, but also widening the gap between those who do and do not have access to the systems and services leveraging technology. We believe that this has the potential to actively cause harm. Evaluation of inclusive tech has dual priorities: evaluating the role and contribution of technology, but also evaluating the inclusive function or contribution of the technology. A platform might perform well, have high usage rates, and save costs for an institution while not actually increasing inclusion. Evaluating both impact and coverage requires an assessment of risk, both to targeted populations and to others, as well as attention to unintended consequences of the introduction of a technology component.

Consider: To what extent does the choice of communications channels or tools enable wider and/or higher quality participation of stakeholders? Which stakeholders? Does it exclude certain groups, such as women, people with disabilities, or people with low incomes? If so, was this exclusion mitigated with other approaches, such as face-to-face communication or special focus groups? How has the project evaluated and mitigated risks, for example to women, LGBTQI people, or other vulnerable populations, relating to the use and management of their data? To what extent were ethical and responsible data protocols incorporated into the platform or tool design? Did all stakeholders understand and consent to the use of their data, where relevant? Were security and privacy protocols put into place during program design and implementation/rollout? How were protocols specifically integrated to ensure protection for more vulnerable populations or groups? What risk-mitigation steps were taken in case of any security holes found or suspected? Were there any breaches? How were they addressed?

Criterion 5: Sustainability

Sustainability is concerned with measuring whether the benefits of a technology tool or platform are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable. For SIMLab, sustainability includes both the ongoing benefits of the initiatives and the literal ongoing functioning of the digital tool or platform.

Consider: If the project required financial or time contributions from stakeholders, are they sustainable, and for how long? How likely is it that the business plan will enable the tool or platform to continue functioning, including background architecture work, essential updates, and user support? If the tool is open source, is there sufficient capacity to continue to maintain changes and updates to it? If it is proprietary, has the project implementer considered how to cover ongoing maintenance and support costs? If the project is designed to scale vertically (e.g., a centralized model of tool or platform management that rolls out in several countries) or be replicated horizontally (e.g., a model where a tool or platform can be adopted and managed locally in a number of places), has the concept shown this to be realistic?

Criterion 6: Coherence

The OECD-DAC does not have a sixth criterion. However, we’ve riffed on ALNAP’s additional criterion of Coherence, which is related to the broader policy context (development, market, communication networks, data standards and interoperability mandates, national and international law) within which a technology was developed and implemented. We propose that evaluations of inclusive technology projects aim to critically assess the extent to which the technologies fit within the broader local, national, and international market. This includes compliance with national and international regulation and law.

Consider: Has the project considered interoperability of platforms (for example, ensured that APIs are available) and standard data formats (so that data export is possible) to support sustainability and use of the tool in an ecosystem of other products? Is the project team confident that the project is in compliance with existing legal and regulatory frameworks? Is it working in harmony with, or against, the wider context of other actions in the area? For example, in an emergency situation, is it linking its information system with those that can feasibly provide support? Is it creating demand that cannot feasibly be met? Is it working with or against government or wider development policy shifts?


Earlier this month I attended the African Evaluators’ Conference (AfrEA) in Cameroon as part of the Technology and Evaluation stream organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation.

A first post about ICTs and M&E at the AfrEA conference went into some of the deliberations around using or not using ICTs and how we can learn and share more as institutions and evaluators. I’ve written previously about barriers and challenges with using ICTs in M&E of international development programs (see the list of posts at the bottom of this one). Many of these same conversations came up at AfrEA, so I won’t repeat them here. What I did want to capture and share were a few interesting design and implementation thoughts from the various ICT and M&E sessions. Here goes:

1) Asking questions via ICT may lead to more honest answers. Some populations are still not familiar with smartphones and tablets, and this makes some people shy and quiet, yet it makes others more curious and animated to participate. Some people worry that mobiles, laptops and tablets create distance between the enumerator and the person participating in a survey. On the other hand, I’m hearing more and more examples of cases where using ICTs for surveying actually allows for a greater sense of personal privacy and more honest answers. I first heard about this several years ago in relation to children and youth in the US and Canada seeking psychological or reproductive health counseling. They seemed to feel more comfortable asking questions about sensitive issues via online chats (as opposed to asking a counselor or doctor face-to-face) because they felt anonymous. The same is true for telephone inquiries.

In the case of evaluations, someone suggested that rather than a mobile or tablet creating distance, a device can actually create an opportunity for privacy. For example, if a sensitive question comes up in a survey, an enumerator can hand the person being interviewed the mobile phone and look away when they provide their answer and hit enter, in the same way that waiters in some countries will swipe your ATM card and politely look away while you enter your PIN. Key is building people’s trust in these methods so they can be sure they are secure.

At a Salon on Feb 28, I heard about mobile polling being used to ask men in the Democratic Republic of Congo about sexual assault against men. There was a higher recorded affirmative rate when the question was answered via a mobile survey than when it had been asked in other settings or through other means. This of course makes sense, considering that when a reporter or surveyor comes around asking whether men have been victims of rape, no one wants to say so publicly. In a situation of violence, it is impossible to know whether a perpetrator might be standing in the crowd watching someone being interviewed, and clearly shame and stigma also prevent people from answering openly.

Another example at the AfrEA Tech Salon was a comparison study done by an organization in a slum area in Accra. Five enumerators who spoke local languages conducted Water, Sanitation and Hygiene (WASH) surveys by mobile phone using Open Data Kit (an open source survey application), and the responses were compared with the same survey done on paper. When people were asked in person by enumerators if they defecated outdoors, affirmative answers were very low. When people were asked the same question via a voice-based mobile phone survey, 26% of respondents reported open defecation.

2) Risk of collecting GPS coordinates. We had a short discussion on the pluses and minuses of using GPS and collecting geolocation data in monitoring and evaluation. One issue that came up was safety for enumerators who carry GPS devices. Some people highlighted that GPS devices can put staff/enumerators at risk of abuse from organized crime bands, military groups, or government authorities, especially in areas with high levels of conflict and violence. This makes me think that if geographic information is needed in these cases, it might be better to collect GPS coordinates with an application on an ordinary phone rather than with a conspicuous smartphone or a dedicated GPS unit (for example, one could try out PoiMapper, which works on feature phones).

In addition, evaluators emphasized that we need to think through whether GPS data is really necessary at the household level. It is tempting to collect all the information that we possibly can, but we can never truly assure anyone that their information will not be de-anonymized somehow in the near or distant future, and in extremely high-risk areas this can be a huge risk. Many organizations do not have high-level security for their data, so it may be better to collect community- or district-level data than household locations. Some evaluators said they use ‘tricks’ to anonymize the geographical data, like pinpointing the location a few miles off, but others felt this was not nearly enough to guarantee anonymity.
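The offsetting ‘trick’ mentioned above amounts to adding a random jitter to each point, which can be sketched in a few lines. To be clear, this is a hypothetical illustration of why the skeptics are right: jittered points can often still be re-identified by combining datasets, so aggregating to community or district level is usually the safer choice for household data.

```python
import math
import random

def jitter_location(lat, lon, max_offset_km=3.0):
    """Offset a GPS point by a random distance up to max_offset_km.

    NOTE: naive jitter like this is NOT a privacy guarantee; points can
    still be re-identified, e.g. by cross-referencing other datasets.
    """
    bearing = random.uniform(0, 2 * math.pi)       # random direction
    distance_km = random.uniform(0, max_offset_km)  # random distance
    # ~111 km per degree of latitude; a degree of longitude shrinks
    # by cos(latitude) away from the equator
    dlat = (distance_km / 111.0) * math.cos(bearing)
    dlon = (distance_km / (111.0 * math.cos(math.radians(lat)))) * math.sin(bearing)
    return lat + dlat, lon + dlon

# hypothetical household point near Accra
print(jitter_location(5.6037, -0.1870))
```

A 3 km offset moves the point by at most about 0.03 degrees, which blurs a household within its neighborhood but rarely hides which community it belongs to; that is exactly why several evaluators felt this was not nearly enough.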

3) Devices can create unforeseen operational challenges at the micro-level. When doing a mobile survey by phone and asking people to press a number to select a particular answer to a question, one organization working in rural Ghana to collect feedback about government performance found that some phones were set to lock when a call was answered. People were pressing buttons to respond to phone surveys (press 1 for….), but their answers did not register because phones were locked, or answers registered incorrectly because the person was entering their PIN to unlock the phone. Others noted that when planning for training of enumerators or community members who will use their own devices for data collection, we cannot forget the fact that every model of phone is slightly different. This adds quite a lot of time to the training as each different model of phone needs to be explained to trainees. (There are a huge number of other challenges related to devices, but these were two that I had not thought of before.)

4) Motivation in the case of poor capacity to respond. An organization interested in tracking violence in a highly volatile area wanted to take reports of violence, but did not have a way to ensure that there would be a response from an INGO, humanitarian organization or government authority if/when violence was reported. This is a known issue — the difficulties of encouraging reporting if responsiveness is low. To keep people engaged this organization thanks people immediately for reporting and then sends peace messages and encouragement 2-3 times per week. Participants in the program have appreciated these ongoing messages and participation has continued to be steady, regardless of the fact that immediate help has not been provided as a result of reporting.

5) Mirroring physical processes with tech. One way to help digital tools gain more acceptance and to make them more user-friendly is to design them to mirror paper processes or other physical processes that people are already familiar with. For example, one organization shared their design process for a mobile application for village savings and loan (VSL) groups. Because security is a big concern among VSL members, the groups typically keep cash in a box with 3 padlocks. Three elected members must be present and agree to open and remove money from the box in order to conduct any transaction. To mimic this, the VSL mobile application requires 3 PINs to access mobile money or make transactions. What’s more, the app sends everyone in the VSL group an SMS notification when the 3 people with the PINs carry out a transaction. This makes the mobile app even more secure than the original physical lock-box, because everyone knows what is happening with the money at all times.
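The three-padlock logic described above boils down to a multi-approval check: a transaction proceeds only once three distinct keyholders have entered valid PINs, and every member is then notified. The sketch below is my own hypothetical illustration of that flow, not the actual VSL app’s code (which would store hashed PINs and call a real SMS gateway).

```python
# Hypothetical sketch of the three-PIN approval flow described above.
# PINs are plain strings here for brevity; a real app would hash them.

REQUIRED_APPROVALS = 3

def authorize_transaction(entered_pins, keyholder_pins):
    """Return True only if 3 distinct keyholders entered correct PINs."""
    approved = {name for name, pin in entered_pins
                if keyholder_pins.get(name) == pin}
    return len(approved) >= REQUIRED_APPROVALS

def notify_members(members, message):
    # stand-in for an SMS gateway call; returns the messages it would send
    return [f"SMS to {m}: {message}" for m in members]

# hypothetical group: three elected keyholders, five members in total
keyholders = {"Ama": "1111", "Kofi": "2222", "Esi": "3333"}
members = ["Ama", "Kofi", "Esi", "Yaw", "Abena"]

entries = [("Ama", "1111"), ("Kofi", "2222"), ("Esi", "3333")]
if authorize_transaction(entries, keyholders):
    notifications = notify_members(members, "Withdrawal of 200 GHS approved")
```

Using a set of names (rather than counting PIN entries) mirrors the physical requirement that three *different* members be present: one person entering the same PIN three times still counts as a single approval.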

****

As I mentioned in part 1 of this post, some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward.

Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that are missing!

Previous posts on ICTs and M&E on this blog:


I attended the African Evaluators’ Conference (AfrEA) in Cameroon last week as part of the Technology and Evaluation strand organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation. The strand was a fantastic opportunity for learning, sharing and understanding more about the context, possibilities and realities of using ICTs in monitoring and evaluation (M&E). We heard from a variety of evaluators, development practitioners, researchers, tool-developers, donors, and private sector and government folks. Judging by the well-attended sessions, there is a huge amount of interest in ICTs and M&E.

Rather than repeat what I’ve written in other posts (see links at the bottom), I’ll focus here on some of the more relevant, interesting, and/or new information from the AfrEA discussions. This first post will go into institutional issues and the ‘field’ of ICTs and M&E. A second post will cover design and operational tips I learned / was reminded of at AfrEA.

1) We tend to get stuck on data collection — Like other areas (I’m looking at you, Open Data), conversations tend to revolve around collecting data. We need to get beyond that and think more about why we are collecting data and what we are going to do with it (and do we really need all this data?). The evaluation field also needs to explore all the other ways it could be using ICTs for M&E, going beyond mobile phones and surveys. Collecting data is clearly a necessary part of M&E, but those data still need to be analyzed. As a participant from a data visualization firm said, there are so many ways you can use ICTs: they help you make sense of things, you can tag sentiment, you can visualize data and make data-based decisions. Others mentioned that ICTs can help us share data with various stakeholders, improve sampling in RCTs (Randomized Control Trials), conduct quality checks on massive data sets, and manage staff who are working on data collection. Using big data, we can do analyses we never could have imagined before. We can open and share our data, and stop collecting the same data from the same people multiple times. We can use ICTs to share back what we’ve learned with evaluation stakeholders, governments, the public, and donors. The range of uses of ICTs is huge, yet the discussion tends to get stuck on mobile surveys and data collection, and we need to start thinking beyond that.
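As a toy illustration of the “tag sentiment” idea mentioned above, here is a minimal keyword-based tagger for short feedback messages. Real sentiment analysis would use trained models and local-language resources, and the word lists and messages below are invented for illustration, so treat this purely as a sketch of the concept.

```python
import re

# Minimal keyword-based sentiment tagging for short feedback messages.
# A real pipeline would use trained models and handle local languages.
POSITIVE = {"good", "helpful", "thanks", "improved", "useful"}
NEGATIVE = {"bad", "broken", "slow", "useless", "problem"}

def tag_sentiment(message):
    """Tag a message positive/negative/neutral by keyword counts."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

feedback = [
    "The training was very helpful, thanks",
    "The water point is broken again, big problem",
    "We received the seeds last week",
]
tags = [tag_sentiment(m) for m in feedback]
print(tags)  # ['positive', 'negative', 'neutral']
```

Even something this crude shows the shift from collecting data to making sense of it: once messages carry tags, they can be aggregated over time, visualized, or used to flag complaints for follow-up.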

2) ICTs are changing how programs are implemented and how M&E is done — When a program already uses ICTs, data collection can be built in through the digital device itself (e.g., tracking user behavior, cookies, and via tests and quizzes), as one evaluator working on tech and education programs noted. As more programs integrate digital tools, it may become easier to collect monitoring and evaluation data with less effort. Along those lines, an evaluator looking at a large-scale mobile-based agricultural information system asked about approaches to conducting M&E that do not rely on enumerators and traditional M&E approaches. In his program, because the farmers who signed up for the mobile information service do not live in the same geographical community, traditional M&E approaches do not seem feasible and ICT-based approaches look like a logical answer. There is little documentation within the international development evaluation community, however, on how an evaluator might design an evaluation in this type of a situation. (I am guessing there may be some insights from market research and possibly from the transparency and accountability sectors, and among people working on “feedback loops”).

3) Moving beyond one-off efforts — Some people noted that mobile data gathering is still done mostly at the project level. Efforts tend to be short-term and one-off. The data collected is not well-integrated into management information systems or national level processes. (Here we may reference the infamous map of mHealth pilots in Uganda, and note the possibility of ICT-enabled M&E in other sectors going this same route). Numerous small pilots may be problematic if the goal is to institutionalize mobile data gathering into M&E at the wider level and do a better job of supporting and strengthening large-scale systems.

4) Sometimes ICTs are not the answer, even if you want them to be – One presenter (who considered himself a tech enthusiast) went into careful detail about his organization’s process of deciding not to use tablets for a complex evaluation across 4 countries with multiple indicators. In the end, the evaluation itself was too complex, and the team was not able to find the right tool for the job. The organization looked at simple, mid-range and highly complex applications and tools and after testing them all, opted out. Each possible tool presented a set of challenges that meant the tool was not a vast improvement over paper-based data collection, and the up-front costs and training were too expensive and lengthy to make the switch to digital tools worthwhile. In addition, the team felt that face-to-face dynamics in the community and having access to notes and written observations in the margins of a paper survey would enable them to conduct a better evaluation. Some tablets are beginning to enable more interactivity and better design for surveys, but not yet in a way that made them a viable option for this evaluation. I liked how the organization went through a very thorough and in-depth process to make this decision.

Other colleagues also commented that the tech tools are still not quite ‘there’ yet for M&E. Even top-of-the-line business solutions are generally found to be somewhat clunky. Million-dollar models are not relevant for the environments that development evaluators are working in; in addition to their high cost, they often have too many features or require too much training. There are some excellent mid-range tools that are designed for the environment, but many lack vital features such as availability in multiple languages. Simple tools that are more easily accessible and understandable without a lot of training are not sophisticated enough to conduct a large-scale data collection exercise. One person I talked with suggested that the private sector will eventually develop appropriate tools, and the not-for-profit sector will then adopt them. She felt that those of us who are interested in ICTs in M&E are slightly ahead of the curve and need to wait a few years until the tools are more widespread and common. Many people attending the Tech and M&E sessions at AfrEA made the point that use of ICTs in M&E would get easier and cheaper as the field develops, tools get more advanced/appropriate/user-friendly and widely tested, and networks, platforms and infrastructure improve in less-connected rural areas.

5) Need for documentation, evaluation and training on use of ICTs in M&E – Some evaluators felt that ICTs are only suitable for routine data collection as part of an ongoing program, but not good for large-scale evaluations. Others pointed out that the notions of ‘ICT for M&E’ and ‘mobile data collection/mobile surveys’ are often used interchangeably, and evaluation practitioners need to look at the multiple ways that ICTs can be used in the wider field of M&E. ICTs are not just useful for moving from paper surveys to mobile data gathering. An evaluator working on a number of RCTs mentioned that his group relies on ICTs for improving samples, reducing bias, and automatically checking data quality.
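To make the last point above concrete: automated data-quality checking is one of the clearest M&E uses of ICTs beyond mobile surveys. The sketch below is purely illustrative — the field names, plausibility ranges and flags are all invented, not drawn from any specific RCT group's tooling — but it shows the kind of duplicate and range checking a digital data collection pipeline can run the moment a record arrives:

```python
# Illustrative data-quality checks on incoming survey records.
# Field names and thresholds are hypothetical examples only.

def check_record(record, seen_ids):
    """Return a list of quality flags for one survey record."""
    flags = []
    if record["respondent_id"] in seen_ids:
        flags.append("duplicate respondent_id")
    seen_ids.add(record["respondent_id"])
    if not (0 <= record["age"] <= 110):
        flags.append("age out of plausible range")
    if record["interview_minutes"] < 5:
        flags.append("suspiciously short interview")
    return flags

records = [
    {"respondent_id": "R1", "age": 34, "interview_minutes": 22},
    {"respondent_id": "R1", "age": 29, "interview_minutes": 18},  # duplicate ID
    {"respondent_id": "R2", "age": 140, "interview_minutes": 3},  # two problems
]

seen = set()
for r in records:
    problems = check_record(r, seen)
    if problems:
        print(r["respondent_id"], problems)
```

Checks like these run in seconds on every record, which is exactly the kind of bias-reduction and quality control that is hard to do with paper forms.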

There was general agreement that M&E practitioners need resources, opportunities for more discussion, and capacity strengthening on the multiple ways that ICTs may be able to support M&E. One evaluator noted that civil society organizations have a tendency to rush into things, hit a brick wall, and then cross their arms and say, “well, this doesn’t work” (in this case, ICTs for M&E). With training and capacity, and as more experience and documentation is gained, he considered that ICTs could have a huge role in making M&E more efficient and effective.

One evaluator, however, questioned whether having better, cheaper, higher quality data is actually leading to better decisions and outcomes. Another evaluator asked for more evidence of what works, when, with whom and under what circumstances so that evaluators could make better decisions around use of ICTs in M&E. Some felt that a decision tree or list of considerations or key questions to think through when integrating ICTs into M&E would be helpful for practitioners. In general, it was agreed that ICTs can help overcome some of our old challenges, but that they inevitably bring new challenges. Rather than shy away from using ICTs, we should try to understand these new challenges and find ways to overcome/work around them. Though the mHealth field has done quite a bit of useful research, and documentation on digital data collection is growing, use of ICTs is still relatively unexplored in the wider evaluation space.

6) There is no simple answer. One of my takeaways from all the sessions was that many M&E specialists are carefully considering options, and thinking quite a lot about which ICTs for what, whom, when and where rather than deciding from the start that ICTs are ‘good and beneficial’ or ‘bad and not worth considering.’ This is really encouraging, and to be expected of a thoughtful group like this. I hope to participate in more discussions of this nature that dig into the nuances of introducing ICTs into M&E.

Some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward. The “field” of ICTs in M&E is quite broad, however, and there are many ways to slice the cake. Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that you think are missing!

(Part 2 of this post)

Previous posts on ICTs and M&E:

Read Full Post »

At Catholic Relief Services’ annual ICT4D meeting in March 2013, I worked with Jill Hannon from Rockefeller Foundation’s Evaluation Office to organize 3 sessions on the use of ICT for Monitoring and Evaluation (ICTME). The sessions covered the benefits (known and perceived) of using ICTs for M&E, the challenges and barriers organizations face when doing so, and some lessons and advice on how to integrate ICTs into the M&E process.

Our lead discussants in the three sessions included: Stella Luk (Dimagi), Guy Sharrack (CRS), Mike Matarasso (CRS), David McAfee (HNI/Datawinners), Mark Boots (Votomobile), and Teressa Trusty (USAID’s IDEA/Mobile Solutions). In addition, we drew from the experiences and expertise of some 60 people who attended our two round table sessions.

Benefits of integrating ICTs into the M&E process

Some of the potential benefits of integrating ICTs mentioned by the various discussants and participants in the sessions included:

  • More rigorous, higher quality data collection and more complete data
  • Reduction in required resources (time, human, money) to collect, aggregate and analyze data
  • Reduced complexity when data systems are simplified, and thus increased productivity and efficiency
  • Combined information sources and types and integration of free form, qualitative data with quantitative data
  • Broader general feedback from a wider public via ICT tools like SMS; inclusion of new voices in the feedback process, elimination of the middleman to empower communities
  • Better cross-sections of information, information comparisons; better coordination and cross-comparing if standard, open formats are used
  • Trend-spotting with visualization tools
  • Greater data transparency and data visibility, easier data auditing
  • Real-time or near real-time feedback “up the chain” that enables quicker decision-making, adaptive management, improved allocation of limited resources based on real-time data, quicker communication of decisions/changes back to field-level staff, faster response to donors and better learning
  • Real-time feedback “down the ladder” that allows for direct citizen/beneficiary feedback, and complementing of formal M&E with other social monitoring approaches
  • Scale, greater data security and archiving, and less environmental impact
  • Better user experience for staff as well as skill enhancement and job marketability and competitiveness of staff who use the system

Barriers and challenges of integrating ICTs into M&E processes

A number of challenges and barriers were also identified, including:

  • A lack of organizational capacity to decide when to use ICTs in M&E, for what, and why, and deciding on the right ICT (if any) for the situation. Organizations may find it difficult to get beyond collecting the data to better use of data for decision-making and coordination. There is often low staff capacity, low uptake of ICT tools and resistance to change.
  • A tendency to focus on surveys and less attention to other types of M&E input, such as qualitative input. Scaling analysis of large-scale qualitative feedback is also a challenge: “How do you scale qualitative feedback to 10,000 people or more? People can give their feedback in a number of languages by voice. How do you mine that data?”
  • The temptation to offload excessive data collection to frontline staff without carefully selecting what data is actually going to be used and useful for them or for other decision-makers.
  • M&E is often tacked on at the end of a proposal design. The same is true for ICT. Both ICT and M&E need to be considered and “baked in” to a process from the very beginning.
  • ICT-based M&E systems have missed the ball on sharing data back. “Clinics in Ghana collect a lot of information that gets aggregated and moved up the chain. What doesn’t happen is sharing that information back with the clinic staff so that they can see what is happening in their own clinic and why. We need to do a better job of giving information back to people and closing the loop.” This step is also important for accountability back to communities. On the whole, we need to be less extractive.
  • Available tools are not always exactly right, and no tool seems to provide everything an organization needs, making it difficult to choose the right tool. There are too many solutions, many of which are duplicative, and often the feature sets and the usability of these tools are both poor. There are issues with sustainability and ongoing maintenance and development of M&E platforms.
  • Common definitions for data types and standards for data formatting are needed. The lack of interoperability among ICT solutions also causes challenges. As a field, we don’t do enough linking of systems together to see a bigger picture of which programs are doing what, where and who they are impacting and how.
  • Security and privacy are not adequately addressed. Many organizations or technology providers are unaware of the ethical implications of collecting data via new tools and channels. Many organizations are unclear about the ethical standards for research versus information that is offered up by different constituents or “beneficiaries” (e.g., information provided by people participating in programs that use SMS or collect information through SMS-based surveys) versus monitoring and evaluation information. It is also unclear what the rules are for information collected by private companies, with whom this information can be shared, and what privacy laws mean for ICT-enabled M&E and other types of data collection. If there are too many barriers to collecting information, however, the amount of information collected will be reduced. A balance needs to be found. The information that telecommunications companies hold is something to think about when considering privacy and consent issues, especially in situations of higher vulnerability and risk. (UNOCHA has recently released a report that may be useful.)
  • Not enough is understood about motivation and incentive for staff or community members to participate or share data. “Where does my information go? Do I see the results? Why should I participate? Is anyone responding to my input?” In addition, the common issues of cost, access, capacity, language, literacy, cultural barriers are very much present in attempts to collect information directly from community members. Another question is that of inclusion: Does ICT-enabled data collection or surveying leave certain groups out? (See this study on intrinsic vs extrinsic motivation for feedback.)
  • Donors often push or dictate the use of ICT when it’s perhaps not the most useful for the situation. In addition, there is normally not enough time during the proposal process for organizations to work on buy-in and good design of an ICT-enabled M&E system. There is often a demand from the top for excessive data collection without an understanding of the effort required to collect it, or of the time/resource trade-offs when excessive data collection leads to less time spent on program implementation. “People making decisions in the capital want to add all these new questions and information and that can be a challenge… What data are valuable to collect? Who will respond to them? Who will use them as the project goes forward?”
  • There seems to be a focus on top-down, externally created solutions rather than building on local systems and strengths or supporting local organizations or small businesses to strengthen their ICTME capacities. “Can strengthening local capacity be an objective in its own right? Are donors encouraging agencies to develop vertical ICTME solutions without strengthening local systems and partners?”
  • Results-based, data-based focus can bias the countable, leave out complex development processes with more difficult to count/measure impacts.
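On the bullet above about scaling qualitative feedback ("How do you mine that data?"), even a crude first pass can help with triage. The sketch below uses invented categories and keywords, and of course real multilingual voice feedback would first need transcription, translation and proper language processing — this only illustrates the shape of the problem:

```python
# A minimal keyword-tagging pass over transcribed feedback messages.
# Categories and keywords are invented for illustration.

CATEGORIES = {
    "water": ["water", "borehole", "pump"],
    "health": ["clinic", "nurse", "medicine"],
    "complaint": ["broken", "closed", "no one came"],
}

def tag_feedback(text):
    """Return sorted category tags whose keywords appear in the text."""
    text = text.lower()
    return sorted({cat for cat, words in CATEGORIES.items()
                   if any(w in text for w in words)})

messages = [
    "The borehole pump is broken again",
    "Clinic was closed and no one came",
    "Thank you for the training",
]
for m in messages:
    print(m, "->", tag_feedback(m) or ["uncategorized"])
```

Anything tagged "complaint" could be routed to a human for follow-up, while untagged messages accumulate for periodic review — a rough but workable way to keep 10,000 voices from disappearing into a spreadsheet.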

Lessons and good practice for integrating ICTs into M&E processes

ICT is not a silver bullet – it presents its own set of challenges. But a number of good practices surfaced:

  • The use of ICTs for M&E is not just a technology issue, it’s a people and processes issue too, and it is important to manage the change carefully. It’s also important to keep an open mind that ICT4D to support M&E might not always be the best use of scarce resources – there may be more pressing priorities for a project. Getting influential people on your side to support the cause and help leverage funding and support is critical. It’s also important to communicate goals and objectives clearly, and provide incentives to make sure ICTs are successfully adopted. The trick is keeping up with technology advances to improve the system, but also keeping your eye on the ball.
  • When designing an ICTME effort, clarity of purpose and a holistic picture of the project M&E system are needed in order to review options for where ICT4D can best fit. Don’t start with the technology. Start with the M&E purpose and goals and focus on the business need, not the gadgets. Have a detailed understanding of M&E data requirements and data flows as a first step. Follow those with iterative discussions with ICT staff to specify the ICT4D solution requirements.
  • Select an important but modest project to start with and pilot in one location – get it right and work out the glitches before expanding to a second tier of pilots or expanding widely. Have a fully functional model to share for broad buy-in and collect some hard data during the pilot to convince people of adoption. The first ICT initiative will be the most important.  If it is successful, use of ICTs will likely spread throughout an organization.  If the first initiative fails, it can significantly push back the adoption of ICTs in general. For this reason, it’s important to use your best people for the first effort. Teamwork and/or new skill sets may be required to improve ICT-enabled M&E. The “ICT4D 2.0 Manifesto” talks about a tribrid set of skills needed for ICT-enabled programs.
  • Don’t underestimate the need for staff training and ongoing technical assistance to ensure a positive user experience, particularly when starting out. Agencies need to find the right balance between being able to provide support for a limited number of ICT solutions versus the need to support ongoing local innovation.  It’s also important to ask for help when needed.  The most successful M&E projects are led by competent managers who seek out resources both inside and outside their organizations.
  • Good ICT-enabled M&E comes from a partnership between program, M&E and ICT staff, technical support internal and external to the organization. Having a solid training curriculum and a good help desk are important. In addition, in-built capacity for original architecture design and to maintain and adjust the system is a good idea. A lead business owner and manager for the system need to be in place as well as global and local level pioneers and strong leadership (with budget!) to do testing and piloting. At the local level, it is important to have an energetic and savvy local M&E pioneer who has a high level of patience and understands technology.
  • At the community level, a key piece is understanding who you need to hear from for effective M&E and ensuring that ICT tools are accessible to all. It’s also critical to understand who you are ignoring or not reaching with any tool or process. Are women and children left out? What about income level? Those who are not literate?
  • Organizations should also take care that they are not replacing or obliterating existing human responsibilities for evaluation. For example, at community level in Ghana, Assembly Members have the current responsibility for representing citizen concerns. An ICT-enabled feedback loop might undermine this responsibility if it seeks direct-from-citizen evaluation input.  The issue of trust and the human-human link also need consideration. ICT cannot and should not be a replacement for everything. New ICT tools can increase the number of people and factors evaluated; not just increase efficiency of existing evaluations.
  • Along the same lines, it’s important not to duplicate existing information systems, create parallel systems or fragment the government’s own systems. Organizations should be strengthening local government systems and working with government to use the information to inform policy and help with decision-making and implementation of programs.
  • Implementors need to think about the direction of information flow. “Is it valuable to share results ‘upward’ and ‘downward’? It is possible to integrate local decision-making into a system.” Systems can be created that allow for immediate local-level decision-making based on survey input. Key survey questions can be linked to indicators that allow for immediate discussion and solutions to improve service provision.
  • Also, the potential political and social implications of greater openness in information flows needs to be considered. Will local, regional and national government embrace the openness and transparency that ICTs offer? Are donors and NGOs potentially putting people at risk?
  • For best results, pick a feasible and limited number of quality indicators and think through how frontline workers will be motivated to collect the data. Excessive data collection will interfere with or impede service delivery. Make sure managers are capable of handling and analyzing data that comes in and reacting to it, or there is no point in collecting it. It’s important to not only think about what data you want, but how this data will be used. Real-time data collected needs to be actionable. Be sure that those submitting data understand what data they have submitted and can verify its accuracy. Mobile data collection needs to be integrated into real processes and feedback loops. People will only submit information or reports if they see that someone cares about those reports and does something about them.
  • Collecting data through mobile technology may change the behavior being monitored or tracked. One participant commented that when his organization implemented an ICT-based system to track staff performance, people started doing unnecessary activities so that they could tick off the system boxes rather than doing what they knew should be done for better program impact.
  • At the practical level, tips include having robust options for connectivity and power solutions, testing the technology in the field with a real situation, securing reduced costs with vendors for bulk purchasing and master agreements, and using standard vendor tools instead of custom building. It’s good to keep the system as simple, efficient and effective as possible and to avoid redundancy or the addition of features that don’t truly offer more functionality.
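The earlier bullet on linking key survey questions to indicators for immediate local action can be sketched very simply: each indicator gets a threshold, and crossing it triggers a local alert rather than waiting for a quarterly report. The indicator names and limits below are hypothetical:

```python
# Hypothetical indicator thresholds that turn real-time survey data
# into immediate, local-level alerts.

THRESHOLDS = {
    "stockout_days": 3,      # flag if medicine stockouts exceed 3 days
    "wait_time_hours": 4,    # flag if average clinic wait exceeds 4 hours
}

def actions_needed(site, indicators):
    """Compare reported indicator values against thresholds; return alerts."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = indicators.get(name)
        if value is not None and value > limit:
            alerts.append(f"{site}: {name}={value} exceeds limit {limit}")
    return alerts

print(actions_needed("Clinic A", {"stockout_days": 5, "wait_time_hours": 2}))
```

The hard part, as the discussants stressed, is not the comparison itself but making sure someone at the local level owns each alert and is resourced to act on it.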

Thanks to all our participants and lead discussants at the sessions!

Useful information and guides on ICTME:

Mobile-based technology for monitoring and evaluation: A reference guide for project managers, M&E specialists, researchers, donors

3 Reports on mobile data collection

Other posts on ICTs for M&E:

12 tips on using ICTs for social monitoring and accountability

11 points on strengthening local capacity to use new ICTs for M&E

10 tips on using new ICTs for qualitative M&E

Using participatory video for M&E

ICTs and M&E at the South Asia Evaluators’ Conclave

Read Full Post »

New technologies are changing the nature of monitoring and evaluation, as discussed in our previous Salon on the use of ICTs in M&E. However, the use of new technologies in M&E efforts can seem daunting or irrelevant to those working in low resource settings, especially if there is little experience or low existing capacity with these new tools and approaches.

What is the role of donors and other intermediaries in strengthening local capacity in communities and development partners to use new technologies to enhance monitoring and evaluation efforts?

On August 30, the Rockefeller Foundation and the Community Systems Foundation (CSF) joined up with the Technology Salon NYC to host the second in a series of 3 Salons on the use of ICTs in monitoring and evaluating development outcomes and to discuss just this question. Our lead discussants were: Revati Prasad from Internews, Tom O’Connell from UNICEF and Jake Watson from the International Rescue Committee. (Thanks Jake for stepping in at the last minute!)

We started off with the comment that “Many of us are faced with the “I” word – in other words, having to demonstrate impact on the ground. But how can we do that if we are 4 levels removed from where change is happening?” How can organizations and donors or those sitting in offices in Washington DC or New York City support grantees and local offices to feed back more quickly and more accurately? From this question, the conversation flowed into a number of directions and suggestions.

1) Determine what works locally

Donors shouldn’t be coming in to say “here’s what works.” Instead, they should be creating local environments for innovation. Rather than pushing things down to people, we need to start thinking from the eyes of the community and incorporate that into how we think and what we do. One participant confirmed that idea with a concrete example. “We went in with ideas – wouldn’t SMS be great… but it became clear that SMS was not the right tool, it was voice. So we worked to establish a hotline. This has connected [the population] with services, it also connects with a database that came from [their] own needs and it tracks what they want to track.” As discussed in the last Salon, however, incentive and motivation are critical. “Early on, even though indicators were set by the community, there was no direct incentive to report.” Once the call center connected reporting to access to services, people were more motivated to report.

2) Produce local, not national-level information

If you want to leverage technology for local decision-making, you need local level information, not broad national level information. You also need to recognize that the data will be messy. As one participant said, we need to get away from the idea of imperfect data, and instead think: is the information good enough to enable us to reach that child who wasn’t reached before? We need to stop thinking of knowledge as discrete chunks that endure for 3-4 years. We are actually processing information all the time. We can help managers to think of information as something to filter and use constantly and we can help them with tools to filter information, create simpler dashboards, see bottlenecks, and combine different channels of information to make decisions.

3) Remember why you are using ICTs in M&E

We should be doing M&E in order to achieve better results and leveraging technologies to achieve better impact for communities. Often, however, we end up doing it for the donor. “Donors get really excited about this multicolored thing with 50,000 graphs, but the guy on the ground doesn’t use a bit of it. We need to let go.” commented one participant. “I don’t need to know what the district manager knows. I need to know that he or she has a system in place that works for him or her. My job is to support local staff to have that system working. We need to focus on helping people do their jobs.”

4) Excel might be your ‘killer app’

Worldwide, the range of capacities is huge. Sometimes ICT sounds very sexy, but the greatest success might be teaching people how to use Excel, how to use databases to track human rights violations and domestic violence or setting up a front-end and a data entry system in a local language.
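To underline the point: a surprising amount of "M&E technology" is just tabular data and simple counts. The sketch below does the kind of thing an Excel pivot table does, using nothing beyond the Python standard library; the columns and values are made up for illustration:

```python
# Counting open cases per district from a CSV — the "killer app" level
# of data work. Columns and values are invented examples.
import csv
import io

raw = io.StringIO(
    "case_id,district,status\n"
    "1,North,open\n"
    "2,South,closed\n"
    "3,North,open\n"
)

counts = {}
for row in csv.DictReader(raw):
    if row["status"] == "open":
        counts[row["district"]] = counts.get(row["district"], 0) + 1

print(counts)  # open cases per district
```

If local staff can get this far — a clean table, a meaningful count — the fancier dashboards become an optional layer on top rather than a prerequisite.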

5) Technology capacity doesn’t equal M&E capacity

One participant noted that her organization is working with a technology hub that has very good tech skills but lacks capacity in development and M&E. Their work over the past year has been less about using technology and more about working with the hub to develop these other capacities: how to conduct focus groups, surveys, network analysis, developing toolkits and guides. There’s often excitement on the ground – ‘We can get data in 48 hours! Wow! Let’s go!’ However, creating good M&E surveys to be used via technology tools is difficult. One participant expressed that finding local expertise in this area is not easy, especially considering staff turnover. “We don’t always have M&E experts on the ground.” In addition, “there is an art to polls and survey trees, especially when trying to take them from English into other languages. How do you write a primer for staff to create meaningful questions?”
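A "survey tree" of the kind mentioned is, at its core, a small data structure: each node carries a prompt per language and branches on the answer. The two-question example below is entirely hypothetical (questions, languages and branching invented), but it shows why translation is more than swapping strings — the whole tree has to stay coherent in every language:

```python
# A hypothetical two-question survey tree with per-language prompts.

TREE = {
    "q1": {
        "prompt": {"en": "Did you visit the clinic? (yes/no)",
                   "fr": "Avez-vous visité la clinique ? (oui/non)"},
        "next": {"yes": "q2", "no": None},   # "no" ends the survey
    },
    "q2": {
        "prompt": {"en": "Were you attended within an hour? (yes/no)",
                   "fr": "Avez-vous été reçu en moins d'une heure ? (oui/non)"},
        "next": {"yes": None, "no": None},
    },
}

def run_survey(answers, lang="en"):
    """Walk the tree with scripted answers; return the prompts asked."""
    node, asked = "q1", []
    for ans in answers:
        if node is None:
            break
        asked.append(TREE[node]["prompt"][lang])
        node = TREE[node]["next"].get(ans)
    return asked

print(run_survey(["yes", "no"], lang="en"))
```

A primer for staff, as the participant suggested, would cover exactly the things this structure makes visible: keeping branches consistent across languages and making sure every path leads somewhere sensible.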

6) Find the best level for ICTs to support the process

ICTs are not always the best tool at the community or district level, given issues of access, literacy, capacity, connection, electricity, etc., but participants mentioned working in blended ways, eg., doing traditional data collection and using ICTs to analyze the data, compile it, produce localized reports, and working with the community to interpret the information for better decision-making. Others use hand-drawn maps, examine issues from the community angle and then incorporate that into digital literacy work and expression work, using new technology tools to tell and document the communities’ stories.

7) Discover the shadow systems and edge of network

One participant noted that people will comply and they will move data through the system as requested from on high, but they simultaneously develop their own ways of tracking information that are actually useful to them. By discovering these ‘shadow systems’, you can see what is really useful. This ‘edge of network’ is where people with whom headquarters doesn’t have contact live and work. We rely on much of their information to build M&E systems yet we don’t consult and work with them often enough. Understanding this ‘edge of network’ is critical to designing and developing good M&E systems and supporting local level M&E for better information and decision-making.

8 ) The devil is in the details

There are many M&E tools to choose from and each has its pros and cons. Participants mentioned KoBo, RapidSMS, Nokia Data Gathering, FrontlineSMS and EpiSurveyor. While there is a benefit to getting more clean data and getting it in real-time, there will always be post-processing tasks. The data can, however, be thrown on a dashboard for better decision-making. Challenges remain, though. For example, in Haiti, as one participant commented, there is a 10% electrification rate, so solar is required. “It’s difficult to get a local number with Clickatell [an SMS gateway]; you can only get an international number. But getting a local number is very complicated. If you go that route, you need a project coordinator. And if you are using SMS, how do you top off the beneficiaries so that they can reply? The few pennies it costs for people to reply are a deterrent. Yet working with telecom providers is very time-consuming and expensive in any country. Training local staff is an issue – trying to train everyone on the ICT package that you are giving them. You can’t take anything for granted. People usually don’t have experience with these systems.” Literacy is another stumbling block, so some organizations are looking at Interactive Voice Response (IVR) and trying to build a way for it to be rapidly deployed.
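Behind the deployment headaches (gateways, airtime top-ups, training), the core of an SMS survey service is conceptually simple: a per-phone-number state machine. The sketch below shows only that core — the gateway integration (Clickatell or otherwise) is deliberately left out, and the questions are invented:

```python
# The core of an SMS survey service: track which question each phone
# number is on. Gateway integration omitted; questions are hypothetical.

QUESTIONS = ["How many children under 5 live here?",
             "Is the nearest water point working? (yes/no)"]

sessions = {}   # phone number -> index of the question just asked
responses = {}  # phone number -> list of answers received

def handle_incoming(phone, text):
    """Process one inbound SMS; return the next outgoing message."""
    i = sessions.get(phone)
    if i is None:                      # first contact starts the survey
        sessions[phone] = 0
        return QUESTIONS[0]
    responses.setdefault(phone, []).append(text)
    if i + 1 < len(QUESTIONS):
        sessions[phone] = i + 1
        return QUESTIONS[i + 1]
    del sessions[phone]                # survey finished for this number
    return "Thank you, survey complete."

print(handle_incoming("+233200000001", "START"))
print(handle_incoming("+233200000001", "3"))
print(handle_incoming("+233200000001", "yes"))
```

Everything the participants flagged — who pays for the reply, what happens when a session is abandoned, how to handle unexpected answers — lives in the messy edges around this loop, which is why "just use SMS" is rarely as simple as it sounds.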

9) Who is the M&E for?

Results are one thing, but as one participant noted, “part of results measuring means engaging communities in saying whether the results are good for them.” Another participant commented that Ushahidi maps are great and donors love them. But in CAR, for example, there is 1% internet penetration and maybe 9% of the people text. “If you are creating a crisis map about the incidence of violence, your humanitarian actors may access it, it may improve service delivery, but it is in no way useful for people on the ground. There is reliance on technology, but how to make it useful for local communities is still the big question…. It’s hard to talk about citizen engagement and citizen awareness if you are not reaching citizens because they don’t have access to technology.” And “what about the opportunity cost for the poor?” asked one participant. “Time is restricted. CSOs push things down to the people least able to use the time for participation. There is a cost to participation, yet we assume participation is a global good. The poorest are really scraping for time and resources. ‘Who is the data for?’ is still a huge question. Often it’s ‘here’s what we’re going to do for you’ rather than meeting with people first, asking what’s wrong, then listening and asking what they would like to do about it, and listening some more.”

10) Reaching the ‘unreachable’

Reaching and engaging the poorest is still difficult, and the truly unreached will require very different approaches. “We’re really very much spoke-to-hub,” said one participant. “This is not enough. How can we innovate and resolve this?” Another emphasized the need to find out who’s not part of the conversation, who is left out or not present when these community discussions take place. “You might find out that adolescent girls with mobility issues are not there. You can ask those with whom you are consulting if they know of someone who is not at the meeting. You need to figure out how to reach the invisible members of the community.” However, as noted, “we also have to protect them. Sometimes identifying people can expose them. There is no clear answer.”

11) Innovation or building on what’s already there?

So will INGOs and donors continue to try to adapt old survey ideas to new technology tools? And will this approach survive much longer? “Aren’t we mostly looking for information that we can act on? Are we going to keep sending teams out all the time or will we begin to work with information we can access differently? Can we release ourselves from that dependence on survey teams?” Some felt that ‘data exhaust’ might be one way of getting information differently; for example a mode like Google Flu Trends. But others noted the difficulty of getting information from non-online populations, who are the majority. In addition, with these new ICT-based methods, there is still a question about representativeness and coverage. Integrated approaches where ICTs are married with traditional methods seem to be the key. This begs the question: “Is innovation really better than building up what’s already there?” as one participant commented. “We need to ask – does it add value? Is it better than what is already there? If it does add perceived value locally, then how do we ensure that it comes to some kind of result. We need to keep our eye on the results we want to achieve. We need to be more results-oriented and do reality checks. We need to constantly ask ourselves:  Are we listening to folks?”

In conclusion

There is much to think about in this emerging area of ICTs and Monitoring and Evaluation.  Join us for the third Salon in the series on October 17 where we’ll continue discussions. If you are not yet on the Technology Salon mailing list, you can sign up here. A summary of the first Salon in the series is here. (A summary of the October 17th Salon is here.)

Salons are run under the Chatham House Rule, thus no attribution has been made.


New technologies are opening up all kinds of possibilities for improving monitoring and evaluation. From on-going feedback and crowd-sourced input to more structured digital data collection, to access to large data sets and improved data visualization, the field is changing quickly.

On August 7, the Rockefeller Foundation and the Community Systems Foundation (CSF) joined up with the Technology Salon NYC for the first in a series of three Salons on the use of ICTs in monitoring and evaluating development outcomes. Our lead discussants were: Erica Kochi from UNICEF Innovations, Steven Davenport from Development Gateway, and John Toner from CSF.

This particular Salon focused on the use of ICTs for social monitoring (a.k.a. ‘beneficiary feedback loops’) and accountability. Below is a summary of the key points that emerged at the Salon.

1) Monitoring and evaluation is changing

M&E is not only about formal data collection and indicators anymore. One discussant commented, "It's free form, it contains sentiment." New ICT tools can help donors and governments plan better. SMS and other social monitoring tools provide an additional element to more formal information sources and can help capture the pulse of the population. Combinations of official data sets with SMS data provide new ways of looking at cross-sections of information. Visualizations and trend analysis can offer combinations of information for decision making. Social monitoring, however, can be a scary thing for large institutions. It can seem too uncontrolled or potentially conflictive. One way to ease into it is through "bounded" crowd-sourcing (e.g., working with a defined and more 'trusted' subset of the public) until there is comfort with these kinds of feedback mechanisms.

2) People need to be motivated to participate in social monitoring efforts

Building a platform or establishing an SMS response tool is not enough. One key to a successful social monitoring effort is working with existing networks, groups and organizations and doing well-planned and executed outreach, for example, in the newspaper, on the radio and on television. Social monitoring can and should go beyond producing information for a particular project or program. It should create an ongoing dialogue between and among people and institutions, expanding on traditional monitoring efforts and becoming a catalyst for organizations or government to better communicate and engage with the community. SMS feedback loops need to be thought of in terms of a dialogue or a series of questions rather than a one-question survey. “People get really engaged when they are involved in back and forth conversation.” Offering prizes or other kinds of external motivation can spike participation rates but also can create expectations that affect or skew programs in the long run. Sustainable approaches need to be identified early on. Rewards can also lead to false reports and re-registering, and need to be carefully managed.

3) Responsiveness to citizen/participant feedback is critical

One way to help motivate individuals to participate in social monitoring is for governments or institutions to show that citizen/participant feedback elicits a response (e.g., better delivery of public services). "Incentives are good," said one discussant, "But at the core, if you get interactive with users, you will start to see the responses. Then you'll have a targeted group that you can turn to." Responsiveness can be an issue, however, if there is limited government or institutional interest, resourcing or capacity, so it's important to work on both sides of the equation so that demand does not outstrip response capacity. Monitoring the responsiveness to citizen/participant feedback is also important. "Was there a response promised? Did it happen? Has it been verified? What was the quality of it?"
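The questions above suggest a simple lifecycle that a feedback-tracking system could record. As a rough illustration (the field names here are hypothetical, not drawn from any specific platform), each piece of citizen feedback could carry its own response status:

```python
# A sketch of tracking responsiveness to citizen feedback, following the
# questions raised above: promised? delivered? verified? what quality?
# All field names are illustrative, not from any specific platform.

from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackCase:
    issue: str
    response_promised: bool = False
    response_delivered: bool = False
    verified: bool = False
    quality_rating: Optional[int] = None  # e.g., 1-5 from a follow-up survey

    def status(self) -> str:
        """Summarize where this case sits in the response lifecycle."""
        if not self.response_promised:
            return "no response promised"
        if not self.response_delivered:
            return "promised, pending"
        if not self.verified:
            return "delivered, unverified"
        return f"closed (quality: {self.quality_rating})"
```

Aggregating these statuses across cases would let a program report not just how much feedback came in, but how much of it actually led to a verified response.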

4) Privacy and protection are always a concern

Salon participants brought up concerns about privacy and protection, especially for more sensitive issues that can put those who provide feedback at risk. There are a number of good practices in the IT world for keeping data itself private, for example presenting it in aggregate form, only releasing certain data, and setting up controls over who can access different levels of data. However with crowd-sourcing or incident mapping there can be serious concerns for those who report or provide feedback. Program managers need to have a very good handle on the potential risks involved or they can cause unintended harm to participants. Consulting with participants to better understand the context is a good idea.

5) Inclusion needs to be purposeful

Getting a representative response via SMS-based feedback or other social monitoring tools is not always easy. Mandatory ratios of male and female respondents, age groups, or other characteristics can help ensure better representation. Different districts can be sampled in an effort to ensure overall response is representative. "If not," commented one presenter, "you'll just get data from urban males." Barriers to participation also need consideration, such as language; however, working in multiple languages becomes very complicated very quickly. One participant noted that it is important to monitor whether people from different groups or geographic areas understand survey questions in the same way, and to be able to fine-tune the system as it goes along. A key concern is reaching and including the most vulnerable with these new technologies. "Donors want new technology as a default, but I cannot reach the most excluded with technology right now," commented a participant.
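One way to operationalize the quota idea above is to compare the share of responses from each group against a target and flag groups that fall short, so outreach can be adjusted mid-stream. A minimal sketch, with illustrative group names and quotas:

```python
# A rough sketch of monitoring representativeness in SMS responses:
# compare each group's share of responses against a target quota and
# flag groups that fall short. Group names and quotas are illustrative.

from collections import Counter

def underrepresented(responses, quotas, tolerance=0.05):
    """Return groups whose response share is below quota minus tolerance."""
    counts = Counter(r["group"] for r in responses)
    total = sum(counts.values())
    flagged = []
    for group, target in quotas.items():
        share = counts.get(group, 0) / total if total else 0.0
        if share < target - tolerance:
            flagged.append(group)
    return flagged
```

Running a check like this periodically during data collection is what makes it possible to "fine-tune the system as it goes along" rather than discovering a skewed sample at the end.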

6) Information should be useful to and used by the community

In addition to ensuring inclusion of individuals and groups, communities need to be involved in the entire process. “We need to be sure we are not just extracting information,” mentioned one participant. Organizations should be asking: What information does the community want? How can they get it themselves or from us? How can we help communities to collect the information they need on their own or provide them with local, sustainable support to do so?

7) Be sure to use the right tools for the job

Character limitation can be an issue with SMS. Decision tree models, where one question prompts another question that takes the user down a variety of paths, are one way around the character limit. SMS is not good for in-depth surveys, however; it is good for breadth, not depth. It's important to use SMS and other digital tools for what they are good for. Paper can often be a better tool, and there is no shame in using it. Discussants emphasized that one shouldn't underestimate the challenges in working with telecom operators and setting up short codes. Building the SMS network infrastructure takes months. Social media is on the rise, so how do you channel that into the M&E conversation?

8) Broader evaluative questions need to be established for these initiatives

The purpose of including ICT in different initiatives needs to be clear. Goals and evaluative questions need to be established. Teams need to work together because no one person is likely to have the programmatic, ICT and evaluation skills needed for a successfully implemented and well-documented project. Programs that include ICTs need better documentation and evaluation overall, including cost-benefit analyses and comparative analyses with other potential tools that could be used for these and similar processes.

9) Technology is not automatically cheaper and easier

These processes remain very iterative; they are not 'automated' processes. Initial surveys can only show patterns. What is more interesting is back-and-forth dialogue with participants. As one discussant noted, staff still spend a lot of time combing through data and responses to find patterns and nuances within the details. There is still a cost to these projects. In one instance, the majority of the project budget went into launching a communication campaign and working with existing physical networks to get people to participate. Compared to traditional ways of doing things (face-to-face, for example) the cost of outreach is not so expensive, but integrating SMS and other technologies does not automatically mean that money will be saved. The cost of SMS is also large in these kinds of projects because in order to ensure participation, representation, and inclusion, SMS usually needs to be free for participants. Even with bulk rates, if the program is at massive scale, it's quite expensive. This is a real consideration when assuming that governments or local organizations will take over these projects at some point.

10) Solutions at huge scale are not feasible for most organizations 

Some participants commented that the UN and the Red Cross and similarly sized organizations are the only ones who can work at the level of scale discussed at the Salon. Not many agencies have the weight to influence governments or mobile service providers, and these negotiations are difficult even for large-scale organizations. It's important to look at solutions that react and respond to what development organizations and local NGOs can do. "And what about localized tools that can be used at district level or village level? For example, localized tools for participatory budgeting?" asked a participant. "There are ways to link high tech and SMS with low tech, radio outreach, working with journalists, working with other tools," commented others. "We need to talk more about these ways of reaching everyone. We need to think more about the role of intermediaries in building capacity for beneficiaries and development partners to do this better."

11) New technology is not M&E magic

Even if you include new technology, successful initiatives require a team of people and need to be managed. There is no magic to doing translations or understanding the data – people are needed to put all this together, to understand it, to make it work. In addition, the tools covered at the Salon only collect one piece of the necessary information. "We have to be careful how we say things," commented a discussant. "We call it M&E, but it's really 'M.' We get confused with ourselves sometimes. What we are talking about today is monitoring results. Evaluation is how to take all that information and make an informed decision. It involves specialists and more information on top of this…" Another participant emphasized that SMS feedback can get at the symptoms but does not seem to get at the root causes. Data needs to be triangulated, efforts need to be made to address root causes, and end users need to be involved.

12) Donors need to support adaptive design

Participants emphasized that those developing these programs, tools and systems need to be given space to try and to iterate, to use a process of adaptive design. Donors shouldn’t lock implementers into unsuitable design processes. A focused ‘ICT and Evaluation Fail Faire’ was suggested as a space for improving sharing and learning around ICTs and M&E. There is also learning to be shared from people involved in ICT projects that have scaled up. “We need to know what evidence is needed to scale up. There is excitement and investment, but not enough evidence,” it was concluded.

Our next Salon

Our next Salon in the series will take place on August 30th. It will focus on the role of intermediaries in building capacity for communities and development partners to use new technologies for monitoring and evaluation. We’ll be looking to discover good practices for advancing the use of ICTs in M&E in sustainable ways. Sign up for the Technology Salon mailing list here. [Update: A summary of the August 30 Salon is here.]

Salons are run under the Chatham House Rule, thus no attribution has been made.
