
Posts Tagged ‘monitoring’

Modified from the original, posted on the MERL Tech Blog, July 20, 2020

For the past six years, I’ve been organizing the MERL Tech conference and related activities. We cancelled this year’s conference (planned for Johannesburg in September) because of coronavirus, but plenty has been happening despite the fact that we can’t gather in person.

One project I’m happy to launch today is the State of the Field of MERL Tech research, which pulls together lessons from five years of convening hundreds of monitoring, evaluation, research, and learning (MERL) and technology practitioners who have joined us as part of the MERL Tech community.

These four new papers build on research that Michael Bamberger and I co-authored in 2014, which aimed to set the stage and begin framing this (then) emerging field. For this latest research, we started by examining the evolution of the field since 2014 and plotting three waves of MERL Tech (as described below) onto Gartner’s Hype Cycle. Each of the waves is explored further in its own paper.

Three waves of MERL Tech explored in the State of the Field series.

Now is a good time to take stock of the past, given that 2020 marks a turning point in many ways. The world is in the midst of the COVID-19 pandemic, and there is an urgent need to know what is happening, where, and to what extent. Data is a critical piece of the COVID-19 response — it can mean the difference between life and death — but data collection, use, and sharing can also invade privacy or cause harm now or in the future. As technology use grows due to stay-at-home orders and a push for “remote monitoring” and “remote program delivery” so, too, does the amount of data captured and shared.

At the same time, we’re witnessing (and I hope, also joining in with) a global call for justice — perhaps a tipping point — in the wake of decades of racist and colonialist systems that operate at the level of nations, institutions, organizations, global aid and development, and the tech sector. There is no denying that these power dynamics and systems have shaped the MERL space as a whole, including the MERL Tech space.

Moments of crisis test a field, and we live in extreme times. The coming decade will demand a nimble, adaptive, fair, and just use of data for managing complexity and for gaining longer-term understanding of change and impact. The sector, its relationships, and its power dynamics will need a fundamental re-shaping.

It is in this time of upheaval and change that we are releasing four papers covering the field from 2014-2019 as a launchpad for thinking about the future of MERL Tech. In September 2018, the papers’ authors began reviewing the past five years of MERL Tech events to identify lessons, trends, and issues in this rapidly changing field. They also reviewed the literature base in an effort to determine what we know about technology in MERL, what we still need to understand, and where the gaps in the formal literature lie. This is no longer a nascent field, yet it is one that is hard to keep up with, due to its fast pace and constant shifts. We have learned many lessons over the past five years, but complex political, technical, and ethical questions remain.

Can the wider MERL Tech community take action to make the next phase of MERL Tech development effective, responsible, ethical, just, and equitable? We share these papers as conversation pieces and hope they will generate more discussion in the MERL Tech space about where to go from here.

The State of the Field series includes four papers:

MERL Tech State of the Field: The Evolution of MERL Tech: Linda Raftree, independent consultant and MERL Tech Conference organizer.

What We Know About Traditional MERL Tech: Insights from a Scoping Review: Zach Tilton, Michael Harnar, and Michele Behr, Western Michigan University; Soham Banerji and Manon McGuigan, independent consultants; and Paul Perrin, Gretchen Bruening, John Gordley and Hannah Foster, University of Notre Dame; Linda Raftree, independent consultant and MERL Tech Conference organizer.

Big Data to Data Science: Moving from “What” to “How” in the MERL Tech Space: Kecia Bertermann, Luminate; Alexandra Robinson, Threshold.World; Michael Bamberger, independent consultant; Grace Lyn Higdon, Institute of Development Studies; Linda Raftree, independent consultant and MERL Tech Conference organizer.

Emerging Technologies and Approaches in Monitoring, Evaluation, Research, and Learning for International Development Programs: Kerry Bruce and Joris Vandelanotte, Clear Outcomes; and Valentine Gandhi, The Development CAFE and Social Impact.


(Reposting, original appears here)

Back in 2014, the humanitarian and development sectors were in the heyday of excitement over innovation and Information and Communication Technologies for Development (ICT4D). The role of ICTs specifically for monitoring, evaluation, research and learning (aka “MERL Tech“) had not been systematized (as far as I know), and it was unclear whether there actually was “a field.” I had the privilege of writing a discussion paper with Michael Bamberger to explore how and why new technologies were being tested and used in the different steps of a traditional planning, monitoring and evaluation cycle. (See graphic 1 below, from our paper).

Graphic 1: The steps of a traditional planning, monitoring and evaluation cycle (from the 2014 discussion paper).

The approaches highlighted in 2014 focused on mobile phones, for example: text messages (SMS), mobile data gathering, use of mobiles for photos and recording, and mapping with handheld global positioning system (GPS) devices or with GPS installed in mobile phones. Promising technologies included tablets, which were only beginning to be used for M&E; “the cloud,” which enabled easier updating of software and applications; remote sensing and satellite imagery; dashboards; and online software that helped evaluators do their work more easily. Social media was also really taking off in 2014. It was seen as a potential way to monitor discussions among program participants and gather their feedback, and it was considered an underutilized tool for greater dissemination of evaluation results and learning. Real-time data, big data, and feedback loops were emerging as ways that program monitoring could be improved and quicker adaptation could happen.

In our paper, we outlined five main challenges for the use of ICTs for M&E: selectivity bias; technology- or tool-driven M&E processes; over-reliance on digital data and remotely collected data; low institutional capacity and resistance to change; and privacy and protection. We also suggested key areas to consider when integrating ICTs into M&E: quality M&E planning; design validity; the value-add (or not) of ICTs; using the right combination of tools; adapting and testing new processes before roll-out; technology access and inclusion; motivation to use ICTs; privacy and protection; unintended consequences; local capacity; measuring what matters (not just what the tech allows you to measure); and effectively using and sharing M&E information and learning.

We concluded that:

  • The field of ICTs in M&E is emerging, and activity is happening at multiple levels, with a wide range of tools, approaches, and actors. 
  • The field needs more documentation on the utility and impact of ICTs for M&E. 
  • Pressure to show impact may open up space for testing new M&E approaches. 
  • A number of pitfalls need to be avoided when designing an evaluation plan that involves ICTs. 
  • Investment in the development, application and evaluation of new M&E methods could help evaluators and organizations adapt their approaches throughout the entire program cycle, making them more flexible and adjusted to the complex environments in which development initiatives and M&E take place.

Where are we now: MERL Tech in 2019

Much has happened globally over the past five years in the wider field of technology, communications, infrastructure, and society, and these changes have influenced the MERL Tech space. Our 2014 focus on basic mobile phones, SMS, mobile surveys, mapping, and crowdsourcing might now appear quaint, considering that worldwide access to smartphones and the Internet has expanded beyond the expectations of many. We know that access is not evenly distributed, but the fact that more and more people are getting online cannot be disputed. Some MERL practitioners are using advanced artificial intelligence, machine learning, biometrics, and sentiment analysis in their work. And as smartphone and Internet use continue to grow, more data will be produced by people around the world. The way that MERL practitioners access and use data will likely continue to shift, and the composition of MERL teams and their required skillsets will also change.

The excitement over innovation and new technologies in 2014 could also be seen as naive, however, considering some of the negative consequences that have emerged: for example, social media-inspired violence (such as that in Myanmar), election and political interference through the Internet, misinformation and disinformation, and the race to the bottom through the online “gig economy.”

In this changing context, a team of MERL Tech practitioners (both enthusiasts and skeptics) embarked on a second round of research in order to try to provide an updated “State of the Field” for MERL Tech that looks at changes in the space between 2014 and 2019.

Based on MERL Tech conferences and wider conversations in the MERL Tech space, we identified three general waves of technology emergence in MERL:

  • First wave: Tech for Traditional MERL: Use of technology (including mobile phones, satellites, and increasingly sophisticated databases) to do ‘what we’ve always done,’ with a focus on digital data collection and management. For these uses of “MERL Tech” there is a growing evidence base. 
  • Second wave: Big Data. Exploration of big data and data science for MERL purposes. While plenty has been written about big data for other sectors, the literature on the use of big data and data science for MERL is somewhat limited, and it is more focused on potential than actual use. 
  • Third wave: Emerging approaches. Technologies and approaches that generate new sources and forms of data; offer different modalities of data collection; provide ways to store and organize data; and provide new techniques for data processing and analysis. The potential of these has been explored, but there seems to be little evidence base to be found on their actual use for MERL. 

We’ll be doing a few sessions at the American Evaluation Association conference this week to share what we’ve been finding in our research. Please join us if you’ll be attending the conference!

Session Details:

Thursday, Nov 14, 2.45-3.30pm: Room CC101D

Friday, Nov 15, 3.30-4.15pm: Room CC101D

Saturday, Nov 16, 10.15-11am: Room CC200DE


(Joint post from Linda Raftree, MERL Tech and Megan Colnar, Open Society Foundations)

The American Evaluation Association Conference happens once a year and offers literally hundreds of sessions. It can take a while to sort through all of them, and with so much content and so many people it’s easy to feel a bit lost in the crowds.

So, Megan Colnar (Open Society Foundations) and I thought we’d share some of the sessions that caught our eye.

I’m on the look-out for innovative tech applications, responsible and gender-sensitive data collection practices, and virtual or online/social media-focused evaluation techniques and methods. Megan plans to tune into sessions on policy change, complexity-aware techniques, and better MEL practices for funders. 

We both can’t wait to learn about evaluation in the post-truth and fake news era. Full disclosure, our sessions are also featured below.

Hope we see you there!

Wednesday, November 8th

3.15-4.15

4.30-6.00

We also think a lot of the ignite talks during this session in the Thurgood Salon South look interesting, like:

6.15-7.15

7.00-8.30

Tour of a few poster sessions before dinner. Highlights might include:

  • M&E for Journalism (51)
  • Measuring Advocacy (3)
  • Survey measures of corruption (53)
  • Theory of change in practice (186)
  • Using social networks as a decision-making tool (225)

 

Thursday, Nov 9th

8.00-9.00 – early risers are rewarded with some interesting options

9.15-10.15

10.30-11.15

12.15-1.15

1.15-2.00

2.15-3.00

3.15-4.15

4.30-5.15

 

Friday, Nov 10th

8.00-9.30 – early risers rewarded again!

11.00-11.45

1.45-3.15

3.30-4.15

4.30-5.15

5.30-6.15 – if you can hold out for one more on a Friday evening

6.30-7.15

 

Saturday, Nov 11th – you’re on your own! Let us know what treasures you discover.


(I’ve been blogging a little bit over at MERLTech.org. Here’s a repost.)

It can be overwhelming to get your head around all the different kinds of data and the various approaches to collecting or finding data for development and humanitarian monitoring, evaluation, research and learning (MERL).

Though there are many ways of categorizing data, lately I find myself conceptually organizing data streams into four general buckets when thinking about MERL in the aid and development space:

  1. ‘Traditional’ data. How we’ve been doing things for(pretty much)ever. Researchers, evaluators and/or enumerators are in relative control of the process. They design a specific questionnaire or a data gathering process and go out and collect qualitative or quantitative data; they send out a survey and request feedback; they do focus group discussions or interviews; or they collect data on paper and eventually digitize the data for analysis and decision-making. Increasingly, we’re using digital tools for all of these processes, but they are still quite traditional approaches (and there is nothing wrong with traditional!).
  2. ‘Found’ data. The Internet, digital data and open data have made it lots easier to find, share, and re-use datasets collected by others, whether internally in our own organizations, with partners, or just in general. These tend to be datasets collected in traditional ways, such as government or agency data sets. In cases where the datasets are digitized, have proper descriptions and clear provenance, consent has been obtained for use/re-use, and care has been taken to de-identify them, they can eliminate the need to collect the same data over again. Data hubs are springing up that aim to collect and organize these data sets to make them easier to find and use.
  3. ‘Seamless’ data. Development and humanitarian agencies are increasingly using digital applications and platforms in their work — whether bespoke or commercially available ones. Data generated by users of these platforms can provide insights that help answer specific questions about their behaviors, and the data is not limited to quantitative data. This data is normally used to improve applications and platform experiences, interfaces, content, etc., but it can also provide clues into a host of other online and offline behaviors, including knowledge, attitudes, and practices. One cautionary note is that because this data is collected seamlessly, users of these tools and platforms may not realize that they are generating data or understand the degree to which their behaviors are being tracked and used for MERL purposes (even if they’ve checked “I agree” to the terms and conditions). This has big implications for privacy that organizations should think about, especially as new regulations are being developed, such as the EU’s General Data Protection Regulation (GDPR). The commercial sector is great at this type of data analysis, but the development set are only just starting to get more sophisticated at it.
  4. ‘Big’ data. In addition to data generated ‘seamlessly’ by platforms and applications, there are also ‘big data’ and data that exists on the Internet that can be ‘harvested’ if one only knows how. The term ‘Big data’ describes the application of analytical techniques to search, aggregate, and cross-reference large data sets in order to develop intelligence and insights. (See this post for a good overview of big data and some of the associated challenges and concerns). Data harvesting is a term used for the process of finding and turning ‘unstructured’ content (message boards, a webpage, a PDF file, Tweets, videos, comments) into ‘semi-structured’ data so that it can then be analyzed. (Estimates are that 90 percent of the data on the Internet exists as unstructured content). Currently, big data seems to be more apt for predictive modeling than for looking backward at how well a program performed or what impact it had. Development and humanitarian organizations (self included) are only just starting to better understand concepts around big data and how it might be used for MERL. (This is a useful primer). A rough sketch of the harvesting idea follows this list.
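
To make the ‘harvesting’ idea a bit more concrete, here is a minimal sketch, in Python, of turning a few unstructured text snippets into semi-structured rows that could sit alongside other M&E data. The posts, keywords and field names are all invented for illustration; a real pipeline would also have to deal with consent, de-identification, multiple languages and far messier content.

```python
import csv
import re
from datetime import datetime

# A few invented, unstructured posts standing in for harvested content
# (tweets, forum comments, etc.). Real harvested text is far messier.
raw_posts = [
    "2020-03-12 | Clinic in our village has no medicine again #health",
    "2020-03-13 | Market prices doubled this week, families skipping meals",
    "2020-03-14 | Thanks to the new borehole we finally have clean water",
]

# Simple keyword buckets used to add minimal structure to free text.
TOPICS = {
    "health": ["clinic", "medicine", "sick"],
    "food_security": ["meals", "hunger", "market prices"],
    "water": ["borehole", "clean water", "well"],
}

def to_record(post: str) -> dict:
    """Turn one unstructured post into a semi-structured record."""
    date_str, text = [part.strip() for part in post.split("|", 1)]
    topics = [
        topic for topic, words in TOPICS.items()
        if any(re.search(re.escape(w), text, re.IGNORECASE) for w in words)
    ]
    return {
        "date": datetime.strptime(date_str, "%Y-%m-%d").date().isoformat(),
        "text": text,
        "topics": ";".join(topics) or "other",
    }

records = [to_record(p) for p in raw_posts]

# Write the semi-structured output so it can be analyzed alongside other data.
with open("harvested_posts.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["date", "text", "topics"])
    writer.writeheader()
    writer.writerows(records)
```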

Thinking about these four buckets of data can help MERL practitioners to identify data sources and how they might complement one another in a MERL plan. Categorizing them as such can also help to map out how the different kinds of data will be responsibly collected/found/harvested, stored, shared, used, and maintained/retained/destroyed. Each type of data also has certain implications in terms of privacy, consent and use/re-use and how it is stored and protected. Planning for the use of different data sources and types can also help organizations choose the data management systems needed and identify the resources, capacities and skill sets required (or needing to be acquired) for modern MERL.

Organizations and evaluators are increasingly comfortable using mobile phones and/or tablets to do traditional data gathering, but they often are not using ‘found’ datasets. This may be because these datasets are not very ‘findable,’ because organizations are not creating them, because re-using data is not a common practice, because the data are of questionable quality/integrity or lack descriptors, or for a variety of other reasons.

The use of ‘seamless’ data is something that development and humanitarian agencies might want to get better at. Even though large swaths of the populations that we work with are not yet online, this is changing. And if we are using digital tools and applications in our work, we shouldn’t let that data go to waste if it can help us improve our services or better understand the impact and value of the programs we are implementing. (At the very least, we had better understand what seamless data the tools, applications and platforms we’re using are collecting so that we can manage data privacy and security of our users and ensure they are not being violated by third parties!)

Big data is also new to the development sector, and there may be good reason it is not yet widely used. Many of the populations we are working with are not producing much data — though this is also changing as digital financial services spread, mobile phone use becomes almost universal, and smartphone use rises. Normally organizations require new knowledge, skills, partnerships and tools to access and use existing big data sets or to do any data harvesting. Some say that big data, along with ‘seamless’ data, will one day replace our current form of MERL. As artificial intelligence and machine learning advance, who knows… (and it’s not only MERL practitioners who will be out of a job – but that’s a conversation for another time!)

Not every organization needs to be using all four of these kinds of data, but we should at least be aware that they are out there and consider whether they are of use to our MERL efforts, depending on what our programs look like, who we are working with, and what kind of MERL we are tasked with.

I’m curious how other people conceptualize their buckets of data, and whether I’ve missed something or defined these buckets erroneously…. Thoughts?

Read Full Post »

Over the past 4 years I’ve had the opportunity to look more closely at the role of ICTs in Monitoring and Evaluation practice (and the privilege of working with Michael Bamberger and Nancy MacPherson in this area). When we started out, we wanted to better understand how evaluators were using ICTs in general, how organizations were using ICTs internally for monitoring, and what was happening overall in the space. A few years into that work we published the Emerging Opportunities paper that aimed to be somewhat of a landscape document or base report upon which to build additional explorations.

As a result of this work, in late April I had the pleasure of talking with the OECD-DAC Evaluation Network about the use of ICTs in Evaluation. I drew from a new paper on The Role of New ICTs in Equity-Focused Evaluation: Opportunities and Challenges that Michael, Veronica Olazabal and I developed for the Evaluation Journal. The core points of the talk are below.

*****

In the past two decades there have been 3 main explosions that impact on M&E: a device explosion (mobiles, tablets, laptops, sensors, dashboards, satellite maps, Internet of Things, etc.); a social media explosion (digital photos, online ratings, blogs, Twitter, Facebook, discussion forums, WhatsApp groups, co-creation and collaboration platforms, and more); and a data explosion (big data, real-time data, data science and analytics moving into the field of development, capacity to process huge data sets, etc.). This new ecosystem is something that M&E practitioners should be tapping into and understanding.

In addition to these ‘explosions,’ there’s been a growing emphasis on documentation of the use of ICTs in Evaluation, alongside a greater thirst for understanding how, when, where and why to use ICTs for M&E. We’ve held and attended large gatherings on ICTs and Monitoring, Evaluation, Research and Learning (MERL Tech). And in the past year or two, it seems the development and humanitarian fields can’t stop talking about the potential of “data” – small data, big data, inclusive data, real-time data for the SDGs, etc. – and the possible roles for ICT in collecting, analyzing, visualizing, and sharing that data.

The field has advanced in many ways. But as the tools and approaches develop and shift, so do our understandings of the challenges. Concerns around more data, “open data,” and the inherent privacy risks have caught up with the enthusiasm about the possibilities of new technologies in this space. Likewise, there is more in-depth discussion about methodological challenges, bias and unintended consequences when new ICT tools are used in Evaluation.

Why should evaluators care about ICT?

There are 2 core reasons that evaluators should care about ICTs. Reason number one is practical. ICTs help address real world challenges in M&E: insufficient time, insufficient resources and poor quality data. And let’s be honest – ICTs are not going away, and evaluators need to accept that reality at a practical level as well.

Reason number two is both professional and personal. If evaluators want to stay abreast of their field, they need to be aware of ICTs. If they want to improve evaluation practice and influence better development, they need to know if, where, how and why ICTs may (or may not) be of use. Evaluation commissioners need to have the skills and capacities to know which new ICT-enabled approaches are appropriate for the type of evaluation they are soliciting and whether the methods being proposed are going to lead to quality evaluations and useful learnings. One trick to using ICTs in M&E is understanding who has access to what tools, devices and platforms already, and what kind of information or data is needed to answer what kinds of questions or to communicate which kinds of information. There is quite a science to this and one size does not fit all. Evaluators, because of their critical thinking skills and social science backgrounds, are very well placed to take a more critical view of the role of ICTs in Evaluation and in the worlds of aid and development overall and help temper expectations with reality.

Though ICTs are being used along all phases of the program cycle (research/diagnosis and consultation, design and planning, implementation and monitoring, evaluation, reporting/sharing/learning) there is plenty of hype in this space.


There is certainly a place for ICTs in M&E, if introduced with caution and clear analysis about where, when and why they are appropriate and useful, and evaluators are well-placed to take a lead in identifying and trialing what ICTs can offer to evaluation. If they don’t, others are going to do it for them!

Promising areas

There are four key areas (I’ll save the nuance for another time…) where I see a lot of promise for ICTs in Evaluation:

1. Data collection. Here I’d divide it into 3 kinds of data collection and note that the latter two normally also provide ‘real time’ data:

  • Structured data gathering – where enumerators or evaluators go out with mobile devices to collect specific types of data (whether quantitative or qualitative).
  • Decentralized data gathering – where the focus is on self-reporting or ‘feedback’ from program participants or research subjects.
  • Data ‘harvesting’ – where data is gathered from existing online sources like social media sites, WhatsApp groups, etc.
  • Real-time data – which aims to provide data in a much shorter time frame, normally as monitoring, but these data sets may be useful for evaluators as well. (A rough monitoring sketch follows this list.)
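
To give a feel for what ‘real-time’ monitoring data can look like in practice, below is a small, hypothetical Python sketch that watches a stream of daily report counts (say, negative feedback messages from an SMS channel) and flags when a rolling total crosses a threshold. The numbers, window and threshold are all invented for illustration; a real system would sit on top of whatever platform is actually collecting the reports.

```python
from collections import deque

def rolling_alerts(reports, window=7, threshold=20):
    """Yield (day, rolling_total, alert) tuples for a stream of daily counts.

    `reports` is an iterable of (day, count) pairs, e.g. daily numbers of
    'service not available' messages from a feedback channel.
    """
    recent = deque(maxlen=window)
    for day, count in reports:
        recent.append(count)
        total = sum(recent)
        yield day, total, total >= threshold

# Invented example data: daily counts of negative feedback messages.
daily_reports = [(f"2020-06-{d:02d}", c) for d, c in
                 enumerate([1, 2, 0, 3, 5, 8, 9, 4, 1], start=1)]

for day, total, alert in rolling_alerts(daily_reports):
    flag = "ALERT" if alert else "ok"
    print(f"{day}: 7-day total={total} [{flag}]")
```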

2. New and mixed methods. These are areas that Michael Bamberger has been looking at quite closely. New ICT tools and data sources can contribute to more traditional methods. But triangulation still matters.

  • Improving construct validity – enabling a greater number of data sources at various levels that can contribute to better understanding of multi-dimensional indicators (for example, looking at changes in the volume of withdrawals from ATMs, records of electronic purchases of agricultural inputs, satellite images showing lorries traveling to and from markets, and the frequency of Tweets that contain the words hunger or sickness). (A rough sketch of combining such sources appears after this list.)
  • Evaluating complex development programs – tracking complex and non-linear causal paths and implementation processes by combining multiple data sources and types (for example, participant feedback plus structured qualitative and quantitative data, big data sets/records, census data, social media trends and input from remote sensors).
  • Mixed methods approaches and triangulation – using traditional and new data sources (for example, using real-time data visualization to provide clues on where additional focus group discussions might need to be done to better understand the situation or improve data interpretation).
  • Capturing wide-scale behavior change – using social media data harvesting and sentiment analysis to better understand widespread, large-scale changes in perceptions, attitudes and stated behaviors, and to analyze how these shift over time.
  • Combining big data and real-time data – these emerging approaches may become valuable for identifying potential problems and emergencies that need further exploration using traditional M&E approaches.
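
As a very rough illustration of the construct-validity point above, the Python sketch below rescales a few invented monthly series (ATM withdrawal volumes, electronic input purchases, hunger-related tweet counts) and averages them into a simple composite index. The data, equal weights and min-max scaling are assumptions made purely for illustration; real triangulation requires much more care about what each source actually measures and how the sources relate.

```python
# Invented monthly series standing in for different data sources.
atm_withdrawals = [120, 115, 90, 70, 65, 80]        # withdrawal volume index
input_purchases = [300, 310, 250, 200, 190, 220]    # e-purchase records
hunger_tweets   = [15, 18, 40, 70, 75, 50]          # keyword counts

def min_max(series):
    """Scale a series to 0-1 so sources with different units are comparable."""
    lo, hi = min(series), max(series)
    return [(x - lo) / (hi - lo) for x in series]

# Higher tweet counts signal more distress, so invert that series.
scaled = [
    min_max(atm_withdrawals),
    min_max(input_purchases),
    [1 - v for v in min_max(hunger_tweets)],
]

# Equal-weight composite "economic wellbeing" proxy, month by month.
composite = [sum(vals) / len(vals) for vals in zip(*scaled)]

for month, value in enumerate(composite, start=1):
    print(f"month {month}: composite index = {value:.2f}")
```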

3. Data Analysis and Visualization. This is an area that is less advanced than the data collection area – often it seems we’re collecting more and more data but still not really using it! Some interesting things here include:

  • Big data and data science approaches – there’s a growing body of work exploring how to use predictive analytics to help define what programs might work best in which contexts and with which kinds of people — (how this connects to evaluation is still being worked out, and there are lots of ethical aspects to think about here too — most of us don’t like the idea of predictive policing, and in some ways you could end up in a situation that is not quite what was aimed at.) With big data, you’ll often have a hypothesis and you’ll go looking for patterns in huge data sets. Whereas with evaluation you normally have particular questions and you design a methodology to answer them — it’s interesting to think about how these two approaches are going to combine.
  • Data Dashboards – these are becoming very popular as people try to work out how to do a better job of using the data that is coming into their organizations for decision-making. There are some efforts at pulling data from the community level all the way up to UN representatives – for example, the global-level consultations that were done for the SDGs, or using “near real-time data” to share with board members. Other efforts are more focused on providing frontline managers with tools to better tweak their programs during implementation. (A minimal sketch of the aggregation behind a dashboard follows this list.)
  • Meta-evaluation – some organizations are working on ways to better draw conclusions from what we are learning from evaluation around the world and to better visualize these conclusions to inform investments and decision-making.
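
For the dashboard point above, here is a minimal, hypothetical Python sketch of the aggregation step that typically sits behind a dashboard: rolling raw monitoring records up into the per-region, per-indicator totals a front end would display. The records and field names are invented, and the actual visualization layer (whatever charting tool an organization uses) is out of scope.

```python
from collections import defaultdict

# Invented raw monitoring records as they might arrive from field teams.
records = [
    {"region": "North", "indicator": "children_vaccinated", "value": 120},
    {"region": "North", "indicator": "children_vaccinated", "value": 95},
    {"region": "South", "indicator": "children_vaccinated", "value": 60},
    {"region": "South", "indicator": "wells_functioning", "value": 14},
    {"region": "North", "indicator": "wells_functioning", "value": 9},
]

# Aggregate into the per-region, per-indicator totals a dashboard would show.
summary = defaultdict(int)
for rec in records:
    summary[(rec["region"], rec["indicator"])] += rec["value"]

# Print a crude text 'dashboard'; a real one would feed a charting layer.
for (region, indicator), total in sorted(summary.items()):
    print(f"{region:<6} {indicator:<22} {total:>6}")
```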

4. Equity-focused Evaluation. As digital devices and tools become more widespread, there is hope that they can enable greater inclusion and broader voice and participation in the development process. There are still huge gaps, however — in some parts of the world 23% fewer women than men have access to mobile phones — and when you talk about Internet access the gap is much, much bigger. But there are cases where greater participation in evaluation processes is being sought through mobile. When this is balanced with other methods to ensure that we’re not excluding the very poorest or those without access to a mobile phone, it can help to broaden out the pool of voices we are hearing from. Some examples are:

  • Equity-focused evaluation / participatory evaluation methods – some evaluators are seeking to incorporate more real-time (or near real-time) feedback loops where participants provide direct feedback via SMS or voice recordings.
  • Using mobile to directly access participants through mobile-based surveys.
  • Enhancing data visualization for returning results back to the community and supporting community participation in data interpretation and decision-making.

Challenges

Alongside all the potential, of course there are also challenges. I’d divide these into 3 main areas:

1. Operational/institutional

Some of the biggest challenges to improving the use of ICTs in evaluation are institutional or related to institutional change processes. In focus groups I’ve done with different evaluators in different regions, this was emphasized as a huge issue. Specifically:

  • Potentially heavy up-front investment costs, training efforts, and/or maintenance costs if adopting/designing a new system at wide scale.
  • Tech or tool-driven M&E processes – often these are also donor driven. This happens because tech is perceived as cheaper, easier, scalable, and objective. It also happens because people and management are under a lot of pressure to “be innovative.” Sometimes this ends up leading to an over-reliance on digital data and remote data collection, and to time spent developing tools and looking at data sets on a laptop rather than spending time ‘on the ground’ to observe and engage with local organizations and populations.
  • Little attention to institutional change processes, organizational readiness, and the capacity needed to incorporate new ICT tools, platforms, systems and processes.
  • Bureaucracy may mean that decisions happen far from the ground and that there is little capacity to make quick decisions, even if real-time data is available. Data and analysis may be provided frequently to decision-makers sitting at headquarters, or to local staff who do not have decision-making power in their own hands and must wait on orders from on high to adapt or change their program approaches and methods.
  • Swinging too far towards digital due to a lack of awareness that digital most often needs to be combined with human approaches. Digital technology always works better when combined with human interventions (such as visits to prepare people for using the technology and making sure that gatekeepers, e.g., a husband or mother-in-law in the case of women, are on board). A main message from the World Bank’s 2016 World Development Report, “Digital Dividends,” is that digital technology must always be combined with what the Bank calls “analog” (a.k.a. “human”) approaches.

2. Methodological

Some of the areas that Michael and I have been looking at relate to how the introduction of ICTs could address issues of bias, rigor, and validity — yet how, at the same time, ICT-heavy methods may actually just change the nature of those issues or create new issues, as noted below:

  • Selection and sample bias – you may be reaching more people, but you’re still going to be leaving some people out. Who is left out of mobile phone or ICT access/use? Typical respondents are male, educated, urban. How representative are these respondents of all ICT users and of the total target population? (A small illustration of checking and reweighting a sample follows this list.)
  • Data quality and rigor – you may have an over-reliance on self-reporting via mobile surveys; lack of quality control ‘on the ground’ because it’s all being done remotely; enumerators may game the system if there is no personal supervision; there may be errors and bias in algorithms and logic in big data sets or analysis because of non-representative data or hidden assumptions.
  • Validity challenges – if there is a push to use a specific ICT-enabled evaluation method or tool without it being the right one, the design of the evaluation may not pass the validity challenge.
  • Fallacy of large numbers (in cases of national level self-reporting/surveying) — you may think that because a lot of people said something it must be more valid, but you might just be reinforcing the viewpoints of a particular group. This has been shown clearly in research by the World Bank on public participation processes that use ICTs.
  • ICTs often favor extractive processes that do not involve local people and local organizations or provide benefit to participants/local agencies — data is gathered and sent ‘up the chain’ rather than shared or analyzed in a participatory way with local people or organizations. Not only is this disempowering, it may impact on data quality if people don’t see any point in providing it as it is not seen to be of any benefit.
  • There’s often a failure to identify unintended consequences or biases arising from use of ICTs in evaluation — What happens when you introduce tablets for data collection? What happens when you collect GPS information on your beneficiaries? What risks might you be introducing or how might people react to you when you are carrying around some kind of device?
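
To make the selection-bias point a bit more tangible, here is a hedged Python sketch that compares the make-up of a hypothetical mobile-survey sample against census-style population shares and derives simple post-stratification weights. All of the numbers are invented, and weighting only partially corrects for the deeper problem that some groups were never reachable in the first place.

```python
# Invented shares: who answered the mobile survey vs. who lives in the area.
sample_share = {"urban_male": 0.40, "urban_female": 0.25,
                "rural_male": 0.25, "rural_female": 0.10}
population_share = {"urban_male": 0.22, "urban_female": 0.23,
                    "rural_male": 0.27, "rural_female": 0.28}

# Post-stratification weight = population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

for group, w in weights.items():
    note = "over-represented" if w < 1 else "under-represented"
    print(f"{group:<13} weight={w:.2f} ({note} in the sample)")

# Example: reweighting a hypothetical satisfaction score by group.
group_scores = {"urban_male": 4.1, "urban_female": 3.8,
                "rural_male": 3.2, "rural_female": 2.9}
naive_mean = sum(group_scores[g] * sample_share[g] for g in group_scores)
weighted_mean = sum(group_scores[g] * population_share[g] for g in group_scores)
print(f"naive mean={naive_mean:.2f}, population-weighted mean={weighted_mean:.2f}")
```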

3. Ethical and Legal

This is an area that I’m very interested in — especially as some donors have started asking for the raw data sets from any research, studies or evaluations that they are funding, and when these kinds of data sets are ‘opened’ there are all sorts of ramifications. There is quite a lot of heated discussion happening here. I was happy to see that DFID has just conducted a review of ethics in evaluation. Some of the core issues include:

  • Changing nature of privacy risks – issues here include privacy and protection of data; changing informed consent needs for digital data/open data; new risks of data leaks; and lack of institutional policies with regard to digital data.
  • Data rights and ownership: Here there are issues with proprietary data sets; data ownership when there are public-private partnerships; the idea of ‘data philanthropy’ when it’s not clear whose data is being donated; personal data ‘for the public good’; open data/open evaluation/transparency; poor care taken when vulnerable people provide personally identifiable information; household data sets ending up in the hands of those who might abuse them; and the increasing impossibility of data anonymization, given that crossing data sets often means that re-identification is easier than imagined.
  • Moving decisions and interpretation of data away from ‘the ground’ and upwards to the head office/the donor.
  • Little funding for trialing/testing the validity of new approaches that use ICTs, or for documenting what is working or not working, where, why and how, in order to develop good practice for new ICT-enabled evaluation approaches.

Recommendations: 12 tips for better use of ICTs in M&E

Despite the rapid changes in the field in the 2 years since we first wrote our initial paper on ICTs in M&E, most of our tips for doing it better still hold true.

  1. Start with a high-quality M&E plan (not with the tech).
    • But also learn about the new tech-related possibilities that are out there so that you’re not missing out on something useful!
  2. Ensure design validity.
  3. Determine whether and how new ICTs can add value to your M&E plan.
    • It can be useful to bring in a trusted tech expert in this early phase so that you can find out if what you’re thinking is possible and affordable – but don’t let them talk you into something that’s not right for the evaluation purpose and design.
  4. Select or assemble the right combination of ICT and M&E tools.
    • You may find one off the shelf, or you may need to adapt or build one. This is a really tough decision, which can take a very long time if you’re not careful!
  5. Adapt and test the process with different audiences and stakeholders.
  6. Be aware of different levels of access and inclusion.
  7. Understand motivation to participate, incentivize in careful ways.
    • This includes motivation for both program participants and for organizations where a new tech-enabled tool/process might be resisted.
  8. Review/ensure privacy and protection measures, risk analysis.
  9. Try to identify unintended consequences of using ICTs in the evaluation.
  10. Build in ways for the ICT-enabled evaluation process to strengthen local capacity.
  11. Measure what matters – not what a cool ICT tool allows you to measure.
  12. Use and share the evaluation learnings effectively, including through social media.

 

 


I used to write blog posts two or three times a week, but things have been a little quiet here for the past couple of years. That’s partly because I’ve been ‘doing actual work’ (as we like to say) trying to implement the theoretical ‘good practices’ that I like soapboxing about. I’ve also been doing some writing in other places and in ways that I hope might be more rigorously critiqued and thus have a wider influence than just putting them up on a blog.

One of those bits of work that’s recently been released publicly is a first version of a monitoring and evaluation framework for SIMLab. We started discussing this at the first M&E Tech conference in 2014. Laura Walker McDonald (SIMLab CEO) outlines why in a blog post.

Evaluating the use of ICTs—which are used for a variety of projects, from legal services, coordinating responses to infectious diseases, media reporting in repressive environments, and transferring money among the unbanked or voting—can hardly be reduced to a check-list. At SIMLab, our past nine years with FrontlineSMS has taught us that isolating and understanding the impact of technology on an intervention, in any sector, is complicated. ICTs change organizational processes and interpersonal relations. They can put vulnerable populations at risk, even while improving the efficiency of services delivered to others. ICTs break. Innovations fail to take hold, or prove to be unsustainable.

For these and many other reasons, it’s critical that we know which tools do and don’t work, and why. As M4D edges into another decade, we need to know what to invest in, which approaches to pursue and improve, and which approaches should be consigned to history. Even for widely-used platforms, adoption doesn’t automatically mean evidence of impact….

FrontlineSMS is a case in point: although the software has clocked up 200,000 downloads in 199 territories since October 2005, there are few truly robust studies of the way that the platform has impacted the project or organization it was implemented in. Evaluations rely on anecdotal data, or focus on the impact of the intervention, without isolating how the technology has affected it. Many do not consider whether the rollout of the software was well-designed, training effectively delivered, or the project sustainably planned.

As an organization that provides technology strategy and support to other organizations — both large and small — it is important for SIMLab to better understand the quality of that support and how it may translate into improvements as well as how introduction or improvement of information and communication technology contributes to impact at the broader scale.

This is a difficult proposition, given that isolating a single factor like technology is extremely tough, if not impossible. The Framework thus aims to get at the breadth of considerations that go into successful tech-enabled project design and implementation. It does not aim to attribute impact to a particular technology, but to better understand that technology’s contribution to the wider impact at various levels. We know this is incredibly complex, but thought it was worth a try.

As Laura notes in another blogpost,

One of our toughest challenges while writing the thing was to try to recognize the breadth of success factors that we see as contributing to success in a tech-enabled social change project, without accidentally trying to write a design manual for these types of projects. So we reoriented ourselves, and decided instead to put forward strong, values-based statements.* For this, we wanted to build on an existing frame that already had strong recognition among evaluators – the OECD-DAC criteria for the evaluation of development assistance. There was some precedent for this, as ALNAP adapted them in 2008 to make them better suited to humanitarian aid. We wanted our offering to simply extend and consider the criteria for technology-enabled social change projects.

Here are the adapted criteria that you can read more about in the Framework. They were designed for internal use, but we hope they might be useful to evaluators of technology-enabled programming, commissioners of evaluations of these programs, and those who want to do in-house examination of their own technology-enabled efforts. We welcome your thoughts and feedback — The Framework is published in draft format in the hope that others working on similar challenges can help make it better, and so that they could pick up and use any and all of it that would be helpful to them. The document includes practical guidance on developing an M&E plan, a typical project cycle, and some methodologies that might be useful, as well as sample log frames and evaluator terms of reference.

Happy reading and we really look forward to any feedback and suggestions!!

*****

The Criteria

Criterion 1: Relevance

The extent to which the technology choice is appropriately suited to the priorities, capacities and context of the target group or organization.

Consider: Are the activities and outputs of the project consistent with the goal and objectives? Was there a good context analysis and needs assessment, or another way for needs to inform design – particularly through participation by end users? Did the implementer have the capacity, knowledge and experience to implement the project? Was the right technology tool and channel selected for the context and the users? Was content localized appropriately?

Criterion 2: Effectiveness

A measure of the extent to which an information and communication channel, technology tool, technology platform, or a combination of these attains its objectives.

Consider: In a technology-enabled effort, there may be one tool or platform, or a set of tools and platforms may be designed to work together as a suite. Additionally, the selection of a particular communication channel (SMS, voice, etc.) matters in terms of cost and effectiveness. Was the project monitored, were early snags and breakdowns identified and fixed, and was there good user support? Did the tool and/or the channel meet the needs of the overall project? Note that this criterion should be examined at outcome level, not output level, and should examine how the objectives were formulated, by whom (did primary stakeholders participate?) and why.

Criterion 3: Efficiency

Efficiency measures the outputs – qualitative and quantitative – in relation to the inputs. It is an economic term which signifies that the project or program uses the least costly technology approach (including both the tech itself, and what it takes to sustain and use it) possible in order to achieve the desired results. This generally requires comparing alternative approaches (technological or non-technological) to achieving the same outputs, to see whether the most efficient tools and processes have been adopted. SIMLab looks at the interplay of efficiency and effectiveness, and at the degree to which a new tool or platform can support a reduction in cost and time along with an increase in the quality of data and/or services and in reach/scale.

Consider: Was the technology tool rollout carried out as planned and on time? If not, what were the deviations from the plan, and how were they handled? If a new channel or tool replaced an existing one, how do the communication, digitization, transportation and processing costs of the new system compare to the previous one? Would it have been cheaper to build features into an existing tool rather than create a whole new tool? To what extent were aspects such as cost of data, ease of working with mobile providers, total cost of ownership and upgrading of the tool or platform considered?

Criterion 4: Impact

Impact relates to consequences of achieving or not achieving the outcomes. Impacts may take months or years to become apparent, and often cannot be established in an end-of-project evaluation. Identifying, documenting and/or proving attribution (as opposed to contribution) may be an issue here. ALNAP’s complex emergencies evaluation criteria include ‘coverage’ as well as impact: ‘the need to reach major population groups wherever they are.’ They note: ‘In determining why certain groups were covered or not, a central question is: “What were the main reasons that the intervention provided or failed to provide major population groups with assistance and protection, proportionate to their need?”’ This is very relevant for us.

For SIMLab, a lack of coverage in an inclusive technology project means not only failing to reach some groups, but also widening the gap between those who do and do not have access to the systems and services leveraging technology. We believe that this has the potential to actively cause harm. Evaluation of inclusive tech has dual priorities: evaluating the role and contribution of technology, but also evaluating the inclusive function or contribution of the technology. A platform might perform well, have high usage rates, and save costs for an institution while not actually increasing inclusion. Evaluating both impact and coverage requires an assessment of risk, both to targeted populations and to others, as well as attention to unintended consequences of the introduction of a technology component.

Consider: To what extent does the choice of communications channels or tools enable wider and/or higher quality participation of stakeholders? Which stakeholders? Does it exclude certain groups, such as women, people with disabilities, or people with low incomes? If so, was this exclusion mitigated with other approaches, such as face-to-face communication or special focus groups? How has the project evaluated and mitigated risks, for example to women, LGBTQI people, or other vulnerable populations, relating to the use and management of their data? To what extent were ethical and responsible data protocols incorporated into the platform or tool design? Did all stakeholders understand and consent to the use of their data, where relevant? Were security and privacy protocols put into place during program design and implementation/rollout? How were protocols specifically integrated to ensure protection for more vulnerable populations or groups? What risk-mitigation steps were taken in case of any security holes found or suspected? Were there any breaches? How were they addressed?

Criterion 5: Sustainability

Sustainability is concerned with measuring whether the benefits of a technology tool or platform are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable. For SIMLab, sustainability includes both the ongoing benefits of the initiatives and the literal ongoing functioning of the digital tool or platform.

Consider: If the project required financial or time contributions from stakeholders, are they sustainable, and for how long? How likely is it that the business plan will enable the tool or platform to continue functioning, including background architecture work, essential updates, and user support? If the tool is open source, is there sufficient capacity to continue to maintain changes and updates to it? If it is proprietary, has the project implementer considered how to cover ongoing maintenance and support costs? If the project is designed to scale vertically (e.g., a centralized model of tool or platform management that rolls out in several countries) or be replicated horizontally (e.g., a model where a tool or platform can be adopted and managed locally in a number of places), has the concept shown this to be realistic?

Criterion 6: Coherence

The OECD-DAC does not have a sixth criterion. However, we’ve riffed on ALNAP’s additional criterion of Coherence, which is related to the broader policy context (development, market, communication networks, data standards and interoperability mandates, national and international law) within which a technology was developed and implemented. We propose that evaluations of inclusive technology projects aim to critically assess the extent to which the technologies fit within the broader market, whether local, national or international. This includes compliance with national and international regulation and law.

Consider: Has the project considered interoperability of platforms (for example, ensured that APIs are available) and standard data formats (so that data export is possible) to support sustainability and use of the tool in an ecosystem of other products? Is the project team confident that the project is in compliance with existing legal and regulatory frameworks? Is it working in harmony with, or against, the wider context of other actions in the area? E.g., in an emergency situation, is it linking its information system with those that can feasibly provide support? Is it creating demand that cannot feasibly be met? Is it working with or against government or wider development policy shifts?


Since I started looking at the role of ICTs in monitoring and evaluation a few years back, one concern that has consistently come up is: “Are we getting too focused on quantitative M&E because ICTs are more suited to gather quantitative data? Are we forgetting the importance of qualitative data and information? How can we use ICTs for qualitative M&E?”

So it’s great to see that Insight Share (in collaboration with UNICEF) has just put out a new guide for facilitators on using Participatory Video (PV) and the Most Significant Change (MSC) methodologies together.

 

The Most Significant Change methodology is a qualitative method developed by Rick Davies and Jess Dart, and documented in a guide published in 2005.


Participatory Video methodologies have also been around for quite a while, and they are nicely laid out in Insight Share’s Participatory Video Handbook, which I’ve relied on in the past to guide youth participatory video work. With mobile video becoming more and more common, and editing tools getting increasingly simple, it’s now easier to integrate video into community processes than it has been in the past.


The new toolkit combines these two methods and provides guidance for evaluators, development workers, facilitators, participatory video practitioners, M&E staff and others who are interested in learning how to use participatory video as a tool for qualitative evaluation via MSC. The toolkit takes users through a nicely designed, step-by-step process for planning, implementing, interpreting and sharing results.

I highly recommend taking a quick look at the toolkit to see if it might be a useful method of qualitative M&E — enhanced and livened up a bit with video!


Back in 2010, I wrote a post called “Where’s the ICT4D distance learning?” which led to some interesting discussions, including with the folks over at TechChange, who were just getting started. We ended up co-hosting a Twitter chat (summarized here) and having some great discussions on the lack of opportunities for humanitarian and development practitioners to professionalize their understanding of ICTs in their work.

It’s pretty cool today, then, to see that in addition to having run a bunch of on-line short courses focused on technology and various aspects of development and social change work, TechChange is kicking off their first Diploma program focusing on using ICT for monitoring and evaluation — an area that has become increasingly critical over the past few years.

I’ve participated in a couple of these short courses, and what I like about them is that they are not boring one-way lectures. Though you are studying at a distance, you don’t feel like you’re alone. There are variations on the type and length of the educational materials including short and long readings, videos, live chats and discussions with fellow students and experts, and smaller working groups. The team and platform do a good job of providing varied pedagogical approaches for different learning styles.

The new Diploma in ICT and M&E program has tracks for working professionals (launching in September of 2015) and prospective Graduate Students (launching in January 2016). Both offer a combination of in-person workshops, weekly office hours, a library of interactive on-demand courses, access to an annual conference, and more. (Disclaimer – you might see some of my blog posts and publications there).

The graduate student track will also have a capstone project, portfolio development support, one-on-one mentorship, live simulations, and a job placement component. Both courses take 16 weeks of study, but these can be spread out over a whole year to provide maximum flexibility.

For many of us working in the humanitarian and development sectors, work schedules and frequent travel make it difficult to access formal higher-level schooling. Not to mention, few universities offer courses related to ICTs and development. The idea of incurring a huge debt is also off-putting for a lot of folks (including me!). I’m really happy to see good quality, flexible options for on-line learning that can improve how we do our work and that also provide the additional motivation of a diploma certificate.

You can find out more about the Diploma program on the TechChange website  (note: registration for the fall course ends September 11th).

 

 

 


The private sector has been using dashboards for quite some time, but international development organizations face challenges when it comes to identifying the right data dashboards and accompanying systems for decision-making.

Our May 29th, 2015, Technology Salon (sponsored by The Rockefeller Foundation) explored data dashboards and data visualization for improved decision making with lead discussants John DeRiggi, Senior Data Architect, DAI; Shawna Hoffman, Associate Manager, Evaluation and Learning at The MasterCard Foundation; and Stephanie Evergreen, Evergreen Data.

In short, we learned at the Salon that most organizations are struggling with the data dashboard process. There are a number of reasons that dashboards fail. They may never get off the ground, they may not deliver what was promised, they may deliver but no one uses them, or they may deliver but the data is poor and bad decisions are made. Using data for better decision-making is an ongoing process – not a task or product to complete and then relegate to automation. Just getting a dashboard up and running doesn’t guarantee that it’s a success – it’s critical to look deeper to see if the data and its visualization have actually improved decisions and how. Like with any ICT tool, user centered design and ongoing iteration are key. Successful dashboards are organized, useful, include targets, and have trends and predictions. Organizational culture and change management are critical in the process.

Points discussed in detail*:

1) Ask whether you actually need a dashboard

The first question to ask is whether a dashboard is needed or possible. One discussant, who specializes in data visualization, noted that she’s often brought in because someone wants to do data visualization, and she then needs to work backwards with the organization through a number of preparatory steps before getting to the visualization itself. It’s critical to have data dashboard discussions with different parts of the organization in order to understand real needs and expectations. Often people will say they need a dashboard because they want to make better decisions, noted another lead discussant. “But what kind of decisions, and what information is needed to make those decisions? Where does that information come from? Who will get it?”

2) Define the audience and type of dashboard

People often think that they can create one dashboard that will fulfill everyone’s needs. As one discussant put it, they will say the audience for the dashboard is “everyone – all decision makers at all levels!” In reality most organizations will need several dashboards for different levels of decision-making. It’s important to know who will own it, use it, keep it up, and collect the data. Will it be internally or externally facing? Discussing all of this is a key part of the process of thinking through the dashboard. As one discussant outlined, dashboards can be strategic, analytical or operational, but it’s difficult for them to be all three at once. So organizations need to come to a clear understanding of their data and decision-making needs. What information, if available, would help different teams at different levels with their decision-making? One dashboard can’t be everything to everyone. Creating a charter that outlines what the dashboard project is and what it aims to do is a way to help avoid mission creep, said one discussant.

3) Work with users to develop your dashboard

To start off the process, it’s important to clearly identify the audience and find out what they need – don’t assume you know, recommended one discussant. But also, as a Salon participant pointed out, don’t assume that they know either. Have a conversation where their and your expertise come together. “The higher up you go, the less people may understand about data. One idea is to just take the ‘data’ out of the conversation. Ask decision-makers what questions they are trying to answer, what problems they are trying to solve. Then find out how to collect and visualize the data that helps them answer their questions,” suggested another participant. Create ownership and accountability at all levels – with users, with staff who will input the data, with project managers, with grantees – you need cooperation from all levels, noted others. Clear buy-in will also help with data quality. If people see the results of their data coming out in a data visualization, they may be more inclined to provide quality data.

One way to involve users is to gather different teams to talk about their data and to create ‘entity relationship models’ together. “People can get into the weeds, and then you can build a vocabulary for the organization. Then you can use that model to build the system and create commonality across it,” said one discussant. Another idea is to create paper prototypes of dashboards with users so that they can envision them better.
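To make the idea of a shared vocabulary a bit more concrete, here is a minimal, purely illustrative sketch of what an agreed data model might look like once teams have worked through an entity relationship exercise together. The entity names and fields (Program, Activity, IndicatorValue) are my own assumptions for the sake of the example, not anything prescribed at the Salon.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical shared vocabulary: every team agrees on what a "program",
# an "activity", and an "indicator value" mean before any dashboard is built.

@dataclass
class IndicatorValue:
    indicator_code: str   # e.g. "PEOPLE_TRAINED", drawn from an agreed code list
    value: float
    period_end: date
    source: str           # who reported it, for accountability and quality checks

@dataclass
class Activity:
    activity_id: str
    activity_type: str    # drawn from a controlled list, e.g. "training"
    country: str
    indicator_values: List[IndicatorValue] = field(default_factory=list)

@dataclass
class Program:
    program_id: str
    name: str
    activities: List[Activity] = field(default_factory=list)
```

The value of an exercise like this is less the code than the conversation it forces: once the entities and their relationships are written down, the dashboard can be built on top of a model everyone recognizes.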

4) Dashboards help people engage with the data they’ve collected

A dashboard is a window into your data, said one participant. In some cases, seeing their data visualized can help staff to see that they have been providing poor quality data. “People didn’t realize how bad their data was until they saw their dashboard,” said one discussant. Another noted that people may disagree with what the data tells them in the dashboard and feel motivated to provide better data. On the other hand, they may realize that their data was actually good, and instead they need to improve ineffective programs. A danger is that putting a dashboard on top of bad data shines a light on the data, said one participant, and this might create an incentive for people to manipulate their data.

5) Don’t be over-ambitious

Align the dashboard with indicators that link to strategic goals and directions and stay focused, recommended one discussant. There is often a temptation to over-complicate with tons of data and visuals. But extraneous data leads to misinterpretation or distraction. Dashboards should make complex data available in an accessible way to users, she said. You can always make more visuals if needed, but you want a concise story told in the data and visuals that you’re depicting. Determine what is useful, productive and credible and leave out what is exciting but extraneous. “Don’t try to have 30 indicators.”

6) Be clear about your data categories and indicators

Rolling up data from a large number of different programs into a dashboard is a huge challenge, especially if different sites or programs are using different data models. For example, if one program is describing an activity as a ‘workshop’ and the other uses ‘training session,’ said one discussant, you have a problem. A Salon participant explained that her organization started with shallow but important common denominators across programs. Over time they aim to go deeper to begin looking at outcomes and impact.
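As an illustration of the ‘workshop’ versus ‘training session’ problem, the hypothetical sketch below maps each program’s local activity labels onto a shared category list before data is rolled up into a dashboard. The labels and mapping table are invented for the example.

```python
# Hypothetical mapping from each program's local labels to a shared category list.
CATEGORY_MAP = {
    "workshop": "training",
    "training session": "training",
    "community meeting": "outreach",
    "awareness session": "outreach",
}

def normalize_activity(label: str) -> str:
    """Map a program-specific activity label to the organization-wide category.

    Unknown labels are flagged rather than silently dropped, so data owners
    can extend the shared vocabulary instead of losing records.
    """
    key = label.strip().lower()
    return CATEGORY_MAP.get(key, "UNMAPPED:" + key)

if __name__ == "__main__":
    raw_labels = ["Workshop", "Training session", "Awareness Session", "Job fair"]
    print([normalize_activity(l) for l in raw_labels])
    # ['training', 'training', 'outreach', 'UNMAPPED:job fair']
```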

7) Think through how you’ll sustain the dashboard and related system(s)

One discussant said that her organization established three different teams to work on the dashboard process: a) Metrics: Where do we have credible, representative data? Where do we have indicators but no data? b) Plumbing: Where are the data sources? How do they feed into each other? Who is responsible, and can this be aggregated up? c) Visualization: What visuals would help different decision makers make their decisions? Depending on the organization’s stage of readiness and its existing staff capacities, different combinations of skill sets may be needed to supplement existing ones. Data experts can help teams understand what is possible, yet program or management teams and other dashboard users also need to be involved so that they can identify the questions they are trying to answer with the data and the dashboard.

8) Don’t underestimate the time/resources needed for a functional dashboard

People may not realize that you can’t make a dashboard without data to support it, noted one participant. “It’s like a PowerPoint presentation… a PowerPoint doesn’t just appear out of nowhere. It’s the result of conversations, research, data, design and more. But for some reason, people think a dashboard will just magically create itself out of thin air.” People also seem to think you can create and launch a dashboard and then put it on autopilot, but that is not the case. The dashboard will need constant changes and iteration, and there will be continual work to keep it up. The questions being asked will also likely change over time, so the dashboard may need to shift to take this into account. Time will be required to get buy-in for the dashboard and its use. One Salon participant said that in her former organization, they met quarterly to present, use and discuss the dashboard, and it took about two years for it to become useful and for people to become invested in it. It’s very important, said one participant, to ensure that management knows that the dashboard is not a static thing – it will need ongoing attention and management.

9) Be selective when it comes to the technology

People tend to think that dashboards are just visual, said a Salon participant. They think of them as really cool business solution platforms. Often senior leadership has been pitched something really expensive and complicated, with all kinds of bells and whistles, and they may think that is what they need. It’s important, however, to know where your organization is in terms of capacity before determining which technology would be the best fit, noted one discussant. She counseled organizations to use whatever they have on hand rather than bringing in new software that takes people six months to learn how to use. Simple Excel-based dashboards might be the best place to start, she said.
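In that spirit of starting with whatever is on hand, a first “dashboard” can be little more than an indicator spreadsheet plus a small script. The sketch below is a hypothetical example using pandas and matplotlib; the file name, sheet name, and column names are assumptions for illustration, not a recommendation of any particular tool.

```python
# A deliberately simple starting point: read an Excel indicator sheet and
# plot actuals against targets. File, sheet, and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("indicators.xlsx", sheet_name="quarterly")
# Expected (hypothetical) columns: quarter, indicator, actual, target

for indicator, group in df.groupby("indicator"):
    plt.figure()
    plt.plot(group["quarter"], group["actual"], marker="o", label="actual")
    plt.plot(group["quarter"], group["target"], linestyle="--", label="target")
    plt.title(indicator)
    plt.legend()
    plt.tight_layout()
    plt.savefig(f"{indicator}_trend.png")
```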

10) Legacy systems can be combined with new data viz capabilities

One discussant shared how his company’s information system, which was set up over 15 years ago, did not allow for the creation of APIs. This meant that the team could not build derivative software products from their massive existing database. It was too expensive to replace the entire system, and building modules to replace parts of it would have fragmented the user experience. So the team built a thin web service layer on top of the existing system, exposing the data in friendly web formats from which developers could build interactive products.
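The discussant described an architectural pattern rather than a specific tool, but to illustrate the general idea, here is a minimal, hypothetical sketch of a thin read-only web layer that exposes rows from an existing database as JSON. The table, columns, and the Flask/SQLite choices are assumptions for the example; the actual legacy system was not described in that detail.

```python
# Minimal, hypothetical read-only web layer over an existing (legacy) database.
# It does not modify the legacy system; it only exposes selected data as JSON
# so that newer visualization tools can consume it.
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)
LEGACY_DB = "legacy.db"  # placeholder for the existing database

@app.route("/api/indicators/<program_id>")
def indicators(program_id):
    conn = sqlite3.connect(LEGACY_DB)
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        "SELECT indicator, period, value FROM indicator_values WHERE program_id = ?",
        (program_id,),
    ).fetchall()
    conn.close()
    return jsonify({"data": [dict(r) for r in rows]})

if __name__ == "__main__":
    app.run(port=5000)
```

The point is not the particular framework but the thinness of the layer: the legacy database stays untouched, and newer visualization tools simply consume the JSON it serves.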

11) Be realistic about “real time” and “data quality”

One question that came up was around the level of evidence needed to make good decisions. Having perfect data served up in a perfect visualization is utopian, said one Salon participant. The idea is that we could have ‘real time’ data to inform our decisions, she explained, yet it’s hard to quality-check data so quickly. “So at what level can we say we’ll make decisions based on a level of certainty – is it when we feel 80% of the data is good quality? Do we need to lower that to 60% so that we have timely data? Is that too low?” Another question was around the kinds of decisions that require ‘real time’ data versus those that could be made with data that is 3 to 6 months old. Salon participants said this will depend on the kind of program and the type of decision. The sector in which one is working may also determine the level of comfort with real-time data and with data quality – the humanitarian sector, for example, may need more timely data and accept a lower level of verification, whereas the development sector may be the opposite.

Another point was that dashboards should include error bars and available metadata, as well as in some cases a link to raw data for those who want to dig into the data and understand what is behind the dashboard. Sometimes the dashboard process will highlight that there is simply not much quality data available for some programs in some countries. This can be an opportunity to work with staff on the ground to strengthen capacity to collect it.
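One way to make the “how good is good enough” question operational is to compute a simple validity score for incoming records and publish it alongside the visualization rather than hiding it. The sketch below is a hypothetical illustration; the 80% threshold and the validation rules are placeholders for whatever an organization actually agrees on.

```python
# Hypothetical data-quality gate: score incoming records and decide whether
# the dashboard should display them as "verified" or flag them as provisional.
from datetime import date

QUALITY_THRESHOLD = 0.8  # placeholder; the acceptable level is an organizational decision

def is_valid(record: dict) -> bool:
    """Very simple validation rules, stand-ins for real ones."""
    return (
        record.get("value") is not None
        and record.get("value") >= 0
        and isinstance(record.get("period_end"), date)
    )

def quality_report(records: list) -> dict:
    valid = sum(1 for r in records if is_valid(r))
    share = valid / len(records) if records else 0.0
    return {
        "records": len(records),
        "valid_share": round(share, 2),
        "status": "verified" if share >= QUALITY_THRESHOLD else "provisional",
    }

if __name__ == "__main__":
    sample = [
        {"value": 120, "period_end": date(2015, 3, 31)},
        {"value": None, "period_end": date(2015, 3, 31)},
        {"value": 45, "period_end": date(2015, 3, 31)},
    ]
    print(quality_report(sample))
    # {'records': 3, 'valid_share': 0.67, 'status': 'provisional'}
```

A dashboard could display the “provisional” status prominently, so decision makers see the caveat along with the numbers.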

12) Relax

As one discussant said, “much of the concern about data quality is related to our own hang-ups as data nerds and what we feel comfortable putting out there for people to use to make decisions. We always say ‘we need more research.’” But here the context is different. “Stakeholders and management want the answer. We need to just put the data out there with some caveats to help them.” One way to offer more context for a dashboard is to create a dashboard report that provides some narrative alongside the visualization. Dashboards should also show trends, not only what has happened already, she said. People need to see trends towards the future so that decisions can be made. It was also pointed out that a dashboard shouldn’t be the only basis for decisions. Like a car dashboard, these data dashboards signal that something is changing, but you still need to look under the hood to see what it is. The dashboard should trigger questions – it should be a launch pad for discussion.
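Showing a trend toward the future can be as simple as fitting a line through recent observations and projecting it one period ahead, clearly labeled as an extrapolation rather than a measurement. The sketch below is a hypothetical illustration with invented numbers.

```python
# Hypothetical trend projection for a dashboard: fit a straight line to
# recent quarterly values and project one quarter ahead, labeled clearly
# as an extrapolation rather than a measurement.
import numpy as np

quarters = np.array([1, 2, 3, 4, 5, 6])            # invented time index
values = np.array([200, 240, 230, 280, 310, 330])  # invented indicator values

slope, intercept = np.polyfit(quarters, values, 1)
next_quarter = quarters[-1] + 1
projection = slope * next_quarter + intercept

print(f"Trend: {slope:+.1f} per quarter; projected Q{next_quarter}: {projection:.0f} (extrapolation)")
```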

13) Organizational culture is a huge part of this process

The internal culture and people’s attitudes towards data are embedded in how an organization operates, noted one Salon participant. This varies depending on the type of organization – an evaluation-focused organization vs. a development organization vs. a contractor vs. a humanitarian organization, for example. Outside consultants can help you build a dashboard, but it is critical to have someone managing organizational change on the inside who knows the current culture and where the organization is aiming to go with the dashboard process. The process is getting easier, however. Many organizations are thirsty for data now, noted one lead discussant. “Often the research or evaluation team creates a dashboard and sends it to the management team, and then everyone loves it and wants one. People are ready for it now.”

More resources on data dashboards and visualization.

Special thanks to our lead discussants and to our hosts for this Salon! If you’d like to join our Salon discussions in the future, sign up at the Technology Salon site.

*Salons run under Chatham House Rule, so no attribution has been made in this post.

Read Full Post »

Today as we jump into the M&E Tech conference in DC (we’ll also have a Deep Dive on the same topic in NYC next week), I’m excited to share a report I’ve been working on for the past year or so with Michael Bamberger: Emerging Opportunities in a Tech-Enabled World.

The past few years have seen dramatic advances in the use of hand-held devices (phones and tablets) for program monitoring and for survey data collection. Progress has been slower with respect to the application of ICT-enabled devices for program evaluation, but this is clearly the next frontier.

In the paper, we review how ICT-enabled technologies are already being applied in program monitoring and in survey research. We also review areas where ICTs are starting to be applied in program evaluation and identify new areas in which new technologies can potentially be applied. The technologies discussed include hand-held devices for quantitative and qualitative data collection and analysis, data quality control, GPS and mapping devices, environmental monitoring, satellite imaging and big data.

While the technological advances and the rapidly falling costs of data collection and analysis are opening up exciting new opportunities for monitoring and evaluation, the paper also cautions that more attention should be paid to the basic quality control questions that evaluators normally ask about representativity of data, selection bias, data quality, and construct validity. The ability to use techniques such as crowdsourcing to generate information and feedback from tens of thousands of respondents has so fascinated researchers that concerns about the representativity or quality of the responses have received less attention than is the case with conventional instruments for data collection and analysis.

Some of the challenges include: selectivity bias and weak sample design; M&E processes being driven by the requirements of the technology; over-reliance on simple quantitative data; low institutional capacity to introduce ICTs and resistance to change; and issues of privacy.

None of this is intended to discourage the introduction of these technologies, as we fully recognize their huge potential. One of the most exciting areas concerns the promotion of a more equitable society through simple and cost-effective monitoring and evaluation systems that give voice to previously excluded sectors of the target populations, and that offer opportunities for promoting gender equality in access to information. The application of these technologies, however, needs to be on a sound methodological footing.

The last section of the paper offers some tips and ideas on how to integrate ICTs into M&E practice and potential pitfalls to avoid. Many of these were drawn from Salons and discussions with practitioners, given that there is little solid documentation or evidence related to the use of ICTs for M&E.

Download the full paper here! 

Read Full Post »

Older Posts »