Archive for the ‘ICT4D’ Category

For our Tuesday, July 27th Salon, we discussed partnerships and interoperability in global health systems. The room held a wide range of perspectives: small and large non-governmental organizations, donors and funders, software developers, designers, healthcare professionals, and students. Our lead discussants were Josh Nesbit, CEO at Medic Mobile; Jonathan McKay, Global Head of Partnerships and Director of the US Office of Praekelt.org; and Tiffany Lentz, Managing Director, Office of Social Change Initiatives at ThoughtWorks.

We started by hearing from our discussants on why they had decided to tackle issues in the area of health. The primary reason was that health systems were excluding people from care, and these organizations wanted to find ways to make healthcare inclusive. As one discussant put it, “utilitarianism has infected global health. A lack of moral imagination is the top problem we’re facing.”

Other challenges include requests for small-scale pilots and customization/bespoke applications, lack of funding and extensive requirements for grant applications, and a disconnect between what is needed on the ground and what donors want to fund. “The amount of documentation to get a grant is ridiculous, and then the system that is requested to be built is not even the system that needs to be made,” commented one person. Another challenge is that everyone is under constant pressure to demonstrate that they are being innovative. [Sidenote: I’m reminded of this post from 2010….] “They want things that are not necessarily in the best interest of the project, but that are seen to be innovations. Funders are often dragged along by that,” noted another person.

The conversation most often touched on the unfulfilled potential of having a working ecosystem and a common infrastructure for health data as well as the problems and challenges that will most probably arise when trying to develop these.

“There are so many uncoordinated pilot projects in different districts, all doing different things,” said one person. “Governments are doing what they can, but they don’t have the funds,” added another, “and that’s why there are so many small pilots happening everywhere.” One company noted that it had started developing a platform for SMS but abandoned it in favor of working with an existing platform instead. “Can we create standards and protocols to tie some of this work together? There isn’t a common infrastructure that we can build on,” was the complaint. “We seem to always start from scratch. I hope donors and organizations get smart about applying pressure in the right areas. We need an infrastructure that allows us to build on it and do the work!” On the other hand, someone warned of the risks of pushing everyone to “jump on a mediocre software or platform just because we are told to by a large agency or donor.”

The benefits of collaboration and partnership are apparent: increased access to important information, more cooperation, less duplication, the ability to build on existing knowledge, and so on. However, though desirable, partnerships and interoperability are not easy to establish. “Is it too early for meaningful partnerships in mobile health? I was wondering if I could say that…” said one person. “I’m not even sure I’m actually comfortable saying it…. But if you’re providing essential basic services, collecting sensitive medical data from patients, there should be some kind of infrastructure apart from private sector services, shouldn’t there?” The question is who should own this type of mediator platform: governments? MNOs?

Beyond this, there are several issues related to control and ownership. Who would own the data? Is there a way to get to a point where the data would be owned by the patients and demonetized? If the common system is run by the private sector, there should be protections surrounding the patients’ sensitive information. Perhaps this should be a government-run system. Should it be open source?

Open source has its own challenges. “Well… yes. We’ve practiced ‘hopensource’,” said one person (to widespread chuckles).

Another explained that the way we’ve designed information systems has held back shifts in health systems. “When we’re comparing notes and how we are designing products, we need to be out ahead of the health systems and financing shifts. We need to focus on people-centered care. We need to gather information about a person over time and place. About the teams who are caring for them. Many governments we’re working with are powerless and moneyless. But even small organizations can do something. When we show up and treat a government as a systems owner that is responsible to deliver health care to their citizens, then we start to think about them as a partner, and they begin to think about how they could support their health systems.”

One potential model is to design a platform or system such that it can eventually be handed off to a government. This, of course, isn’t a simple idea in execution. Governments can be limited by their internal expertise. The personnel that a government has at the time of the handoff won’t necessarily be there years or months later. So while the handoff itself may be successful in the short term, there’s no firm guarantee that the system will be continually operational in the future. Additionally, governments may not be equipped with the knowledge to make the best decisions about software systems they purchase. Governments’ negotiating capacity must be expanded if they are to successfully run an interoperable system. “But if we can bring in a snazzy system that’s already interoperable, it may be more successful,” said one person.

Having a common data infrastructure is crucial. However, we must also spend some time thinking about what the data itself should look like. Can it be standardized? How can we ensure that it is legible to anyone with access to it?

These are only some of the relevant political issues, and at a more material level, one cannot ignore the technical challenges of maintaining a national scale system. For example, “just getting a successful outbound dialing rate is hard!” said one person. “If you are running servers in Nigeria it just won’t always be up! I think human centered design is important. But there is also a huge problem simply with making these things work at scale. The hardcore technical challenges are real. We can help governments to filter through some of the potential options. Like, can a system demonstrate that it can really operate at massive scale?” Another person highlighted that “it’s often non-profits who are helping to strengthen the capacity of governments to make better decisions. They don’t have money for large-scale systems and often don’t know how to judge what’s good or to be a strong negotiator. They are really in a bind.”

This is not to mention that “the computers have plastic over them half the time. Electricity, computers, literacy, there are all these issues. And the TelCo infrastructure! We have layers of capacity gaps to address,” said one person.

There are also donors to consider. They may come into a project with unrealistic expectations of what is normal and what can be accomplished. There is a delicate balance to be struck between inspiring the donors to take up the project and managing expectations so that they are not disappointed. One strategy is to “start hopeful and steadily temper expectations.” This is true also with other kinds of partnerships. “Building trust with organizations so that when things do go bad, you can try to manage it is crucial. Often it seems like you don’t want to be too real in the first conversation. I think, ‘if I lay this on them at the start it can be too real and feel overwhelming…’” Others recommended setting expectations about how everyone together is performing. “It’s more like, ‘together we are going to be looking at this, and we’ll be seeing together how we are going to work and perform together.’”

Creating an interoperable data system is costly and time-consuming, oftentimes more so than donors and other stakeholders imagine, but there are real benefits. Any step in the direction of interoperability must deal with challenges like those considered in this discussion. Problems abound. Solutions will be harder to come by, but not impossible.

So, what would practitioners like to see? “I would like to see one country that provides an incredible case study showing what good partnership and collaboration looks like with different partners working at different levels and having a massive impact and improved outcomes. Maybe in Uganda,” said one person. “I hope we see more of us rally around supporting and helping governments to be the system owners. We could focus on a metric or shared cause – I hope in the near future we have a view into the equity measure and not just the vast numbers. I’d love to see us use health equity as the rallying point,” added another. From a different angle, one person felt that “from a for-profit, we could see it differently. We could take on a country, a clinic or something as our own project. What if we could sponsor a government’s health care system?”

A participant summed the Salon up nicely: “I’d like to make a flip-side comment. I want to express gratitude to all the folks here as discussants. This is one of the most unforgiving and difficult environments to work in. It’s SO difficult. You have to be an organization superhero. We’re among peers here, so it feels normal to talk about challenges, but you’re really all contributing so much!”

Salons are run under Chatham House Rule, so no attribution has been made in this post. If you’d like to attend a future Salon discussion, join the list at Technology Salon.

 


Our latest Technology Salon, at the African Evaluation Association (AfrEA) Conference in Uganda on March 29th, focused on how mobile and social media platforms are being used in monitoring and evaluation processes. Our lead discussants were Jamie Arkin from Human Network International (soon to be merging with VotoMobile), who spoke about interactive voice response (IVR); John Njovu, an independent consultant working with the Ministry of National Development Planning of the Zambian government, who shared experiences with technology tools for citizen feedback to monitor budgets and support transparency and accountability; and Noel Verrinder from Genesis, who talked about using WhatsApp in a youth financial education program.

Using IVR for surveys

Jamie shared how HNI deploys IVR surveys to obtain information about different initiatives or interventions from a wide public or to understand the public’s beliefs about a particular topic. These surveys come in three formats: random dialing of telephone numbers until someone picks up; asking people to call in, for example, on a radio show; or using an existing list of phone numbers. “If there is an 80% phone penetration or higher, it is equal to a normal household-level survey,” she said. The organization has a list of thousands of phone numbers and can segment these to create a sample. “IVR really amplifies people’s voices. We record in local language. We can ask whether the respondent is a man or a woman. People use their keypads to reply or we can record their voices providing an open response to the question.” The voice responses are later digitized into text for analysis. In order to avoid too many free voice responses, the HNI system can cut the recording off after 30 seconds or limit voice responses to the first 100 calls. Often keypad responses are most effective, as people are not used to leaving voice mails.
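
As a rough sketch of the mechanics described above (illustrative Python only — not HNI’s actual system, and every field name here is hypothetical), an IVR survey set-up might be captured like this:

    # Hypothetical IVR survey definition -- illustrative only, not HNI's API
    ivr_survey = {
        "language": "local",               # prompts recorded in the local language
        "sampling": "random_digit_dial",   # or "call_in" (e.g. via a radio show) or "existing_list"
        "voice_cutoff_seconds": 30,        # cut open-ended recordings off after 30 seconds
        "max_open_voice_responses": 100,   # only keep voice answers from the first 100 calls
        "questions": [
            {"id": "sex", "type": "keypad", "prompt": "Press 1 if you are a woman, 2 if you are a man."},
            {"id": "opinion", "type": "voice", "prompt": "After the beep, tell us what you think."},
        ],
    }

    def accept_voice_response(survey, voice_responses_so_far):
        """Stop collecting open voice responses once the configured limit is reached."""
        return voice_responses_so_far < survey["max_open_voice_responses"]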

IVR is useful in areas where there is low literacy. “In Rwanda, 80% of women cannot read a full sentence, so SMS is not a silver bullet,” Jamie noted. “Smartphones are coming, and people want them, but 95% of people in Uganda have a simple feature phone, so we cannot reach them by Facebook or WhatsApp. If you are going with those tools, you will only reach the wealthiest 5% of the population.”

In order to reduce response bias, the survey question order can be randomized. Response rates tend to be ten times higher on IVR than on SMS surveys, Jamie said, in part because IVR is cheaper for respondents. The HNI system can provide auto-analysis for certain categories such as most popular response. CSV files can also be exported for further analysis. Additionally, the system tracks length of session, language, time of day, and other metadata about the survey exercise.
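
A minimal sketch of what per-respondent question-order randomization and session metadata could look like (hypothetical field and question names; the real HNI system will differ):

    import csv
    import random
    from datetime import datetime, timezone

    QUESTION_IDS = ["q_access", "q_quality", "q_cost"]   # hypothetical question identifiers

    def start_session(respondent_id, language="local"):
        """Give each respondent a randomized question order and record basic metadata."""
        return {
            "respondent": respondent_id,
            "language": language,
            "started_at": datetime.now(timezone.utc).isoformat(),
            "question_order": random.sample(QUESTION_IDS, k=len(QUESTION_IDS)),
        }

    def export_sessions(sessions, path="ivr_sessions.csv"):
        """Write sessions and their metadata out to CSV for further analysis."""
        fields = ["respondent", "language", "started_at", "question_order"]
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fields)
            writer.writeheader()
            for s in sessions:
                writer.writerow({**s, "question_order": " ".join(s["question_order"])})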

The regulatory and privacy implications of IVR are unclear in most countries, and currently there are few legal restrictions against calling people for surveys. “There are opt-outs for SMS but not for IVR; if you don’t want to participate, you just hang up.” In some cases, however, like Rwanda, there are certain numbers that are on “do not disturb” lists, and these need to be avoided, she said.

Citizen-led budget monitoring through Facebook

John shared results of a program where citizens were encouraged to visit government infrastructure projects to track whether budget allocations had been properly done. Citizens would visit a health center or a school to inquire about these projects and then fill out a form on Facebook to share their findings. A first issue with the project was that voters were interested in availability and quality of service delivery, not in budget spending. “I might ask what money you got, did you buy what you said, was it delivered and is it here. Yes. Fine. But the bigger question is: Are you using it? The clinic is supposed to have 1 doctor, 3 nurses and 3 lab technicians. Are they all there? Yes. But are they doing their jobs? How are they treating patients?”

Quantity and budget spend were being captured but quality of service was not addressed, which was problematic. Another challenge with the program was that people did not have a good sense of what the money could buy, so it was difficult for them to assess whether the budget had been spent appropriately. Additionally, in Zambia, it is not customary for citizens to question elected officials. The idea that the government owes the people something, or that citizens can walk into a government office to ask questions about the budget, is not a traditional one. “So people were not confident in asking questions or pushing the government for a response.”

The addition of technology to the program did not resolve any of these underlying issues, and on top of this, there was an apparent mismatch with the idea of using mobile phones to conduct feedback. “In Zambia it was said that everyone has a phone, so that’s why we thought we’d put in mobiles. But the thing is that the number of SIMs doesn’t equal the number of phone owners. The modern woman may have a good phone or two, but as you go down to people in the compound they don’t have even basic types of phones. In rural areas it’s even worse,” said John, “so this assumption was incorrect.” When the program began running in Zambia, there was surprise that no one was reporting. It was then realized that the actual mobile ownership statistics were not so clear.

Additionally, in Zambia only 11% of women can read a full sentence, and so there are massive literacy issues. And language is also an issue. In this case, it was assumed that Zambians all speak English, but often English is quite limited among rural populations. “You have accountability language that is related to budget tracking and people don’t understand it. Unless you are really out there working directly with people you will miss all of this.”

As a result of the evaluation of the program, the Government of Zambia is rethinking ways to assess the quality of services rather than the quantity of items delivered according to budget.

Gathering qualitative input through WhatsApp 

Genesis’ approach to incorporating WhatsApp into their monitoring and evaluation was more emergent. “We didn’t plan for it, it just happened,” said Noel Verrinder. Genesis was running a program to support technical and vocational training colleges in peri-urban and rural areas in the Northwest part of South Africa. The young people in the program are “impoverished in our context, but they have smartphones, WhatsApp and Facebook.”

Genesis had set up a WhatsApp account to communicate about program logistics, but it morphed into a space for the trainers to provide other kinds of information and respond to questions. “We started to see patterns and we could track how engaged the different youth were based on how often they engaged on WhatsApp.” In addition to the content itself, it was possible to gain insight into which participants were more engaged based on the timing and frequency of their responses on WhatsApp.
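
A very small sketch of how message frequency per participant could be counted from an exported WhatsApp group chat (the export line format varies by phone and locale, so the pattern below is an assumption to adjust; this is not what Genesis actually used):

    import re
    from collections import Counter

    # Assumes export lines like "12/03/2017, 14:22 - Thandi: message text"
    LINE_PATTERN = re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}, \d{1,2}:\d{2} - ([^:]+): ")

    def engagement_counts(export_path):
        """Count messages per participant in a WhatsApp chat export -- a rough engagement proxy."""
        counts = Counter()
        with open(export_path, encoding="utf-8") as f:
            for line in f:
                match = LINE_PATTERN.match(line)
                if match:
                    counts[match.group(1).strip()] += 1
        return counts

    # Example: print the ten most active participants
    # print(engagement_counts("group_chat_export.txt").most_common(10))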

Genesis had asked the youth to create diaries about their experiences, and eventually asked them to photograph their diaries and submit them by WhatsApp, given that it made for much easier logistics as compared to driving around to various neighborhoods to track down the diaries. “We could just ask them to provide us with all of their feedback by WhatsApp, actually, and dispense with the diaries at some point,” noted Noel.

In future, Genesis plans to incorporate WhatsApp into its monitoring efforts in a more formal way and to consider some of the privacy and consent aspects of using the application for M&E. One challenge with using WhatsApp is that the type of language used in texting is short and less expressive, so the organization will have to figure out how to understand emoticons. Additionally, it will need to ask for consent from program participants so that WhatsApp engagement can be ethically used for M&E purposes.


Development, humanitarian and human rights organizations increasingly collect and use digital data at the various stages of their programming. This type of data has the potential to yield great benefit, but it can also increase individual and community exposure to harm and privacy risks. How can we as a sector better balance data collection and open data sharing with privacy and security, especially when it involves the most vulnerable?

A number of donors, humanitarian and development organizations (including Oxfam, CRS, UN bodies and others) have developed or are in the process of developing guidelines to help them to be more responsible about collection, use, sharing and retention of data from those who participate in their programs.

I’m part of a team (including mStar, Sonjara, Georgetown University, the USAID Global Development Lab, and an advisory committee that includes several shining stars from the ‘responsible data’ movement) that is conducting research on existing practices, policies, systems, and legal frameworks through which international development data is collected, used, shared, and released. Based on this research, we’ll develop ‘responsible data’ practice guidelines for USAID that aim to help:

  • Mitigate privacy and security risks for beneficiaries and others
  • Improve performance and development outcomes through use of data
  • Promote transparency, accountability and public good through open data

The plan is to develop draft guidelines and then to test their application on real programs.

We are looking for digital development projects to assess how our draft guidelines would work in real-world settings. Once the projects are selected, members of the research team will visit them to better understand “on-the-ground” contexts and project needs. We’ll apply draft practice guidelines to each case with the goal of identifying what parts of the guidelines are useful/applicable, and where the gaps are in the guidelines. We’ll also capture feedback from the project management team and partners on implications for project costs and timelines, and we’ll document existing digital data-related good practices and lessons. These findings will further refine USAID’s Responsible Data Practice guidelines.

What types of projects are we looking for?

  • Ongoing or recently concluded projects that are using digital technologies to collect, store, analyze, manage, use and share individuals’ data.
  • Cases where data collected is sensitive or may put project participants at risk.
  • The project should have informal or formal processes for privacy/security risk assessment and mitigation especially with respect to field implementation of digital technologies (listed above) as part of their program. These may be implicit or explicit (i.e. documented or written). They potentially include formal review processes conducted by ethics review boards or institutional review boards (IRBs) for projects.
  • All sectors of international development and all geographies are welcome to submit case studies. We are looking for diversity in context and programming.
  • We prefer case studies from USAID-funded projects but are open to receiving case studies from other donor-supported projects.

If you have a project or an activity that falls into the above criteria, please let us know here. We welcome multiple submissions from one organization; just reuse the form for each proposed case study.

Please submit your projects by February 15, 2017.

And please share this call with others who may be interested in contributing case studies.

Click here to submit your case study.

Also feel free to get in touch with me if you have questions about the project or the call!

 


At the 2016 American Evaluation Association conference, I chaired a session on benefits and challenges with ICTs in Equity-Focused Evaluation. The session frame came from a 2016 paper on the same topic. Panelists Kecia Bertermann from Girl Effect and Herschel Sanders from RTI added fascinating insights on the methodological challenges to consider when using ICTs for evaluation purposes, and discussant Michael Bamberger closed out with critical points based on his 50+ years doing evaluations.

ICTs include a host of technology-based tools, applications, services, and platforms that are overtaking the world. We can think of them in three key areas: technological devices, social media/internet platforms and digital data.

An equity-focused evaluation implies ensuring space for the voices of excluded groups and avoiding the traditional top-down approach. It requires:

  • Identifying vulnerable groups
  • Opening up space for them to make their voices heard through channels that are culturally responsive, accessible and safe
  • Ensuring their views are communicated to decision makers

It is believed that ICTs, especially mobile phones, can help with inclusion in the implementation of development and humanitarian programming. Mobile phones are also held up as devices that can allow evaluators to reach isolated or marginalized groups and individuals who are not usually engaged in research and evaluation. Often, however, mobiles only overcome geographic exclusion. Evaluators need to think harder when it comes to other types of exclusion – such as that related to disability, gender, age, political status or views, ethnicity, literacy, or economic status – and we need to consider how these various types of exclusions can combine to exacerbate marginalization (e.g., “intersectionality”).

We are seeing increasing use of ICTs in evaluation of programs aimed at improving equity. Yet these tools also create new challenges. The way we design evaluations and how we apply ICT tools can make all the difference between including new voices and feedback loops or reinforcing existing exclusions or even creating new gaps and exclusions.

Some of the concerns with the use of ICTs in equity-focused evaluation include:

Methodological aspects:

  • Are we falling victim to ‘elite capture’ — only hearing from higher educated, comparatively wealthy men, for example? How does that bias our information? How can we offset that bias or triangulate with other data and multi-methods rather than depending only on one tool-based method?
  • Are we relying too heavily on things that we can count or multiple-choice responses because that’s what most of these new ICT tools allow?
  • Are we spending all of our time on a device rather than in communities engaging with people and seeking to understand what’s happening there in person?
  • Is reliance on mobile devices or self-reporting through mobile surveys causing us to miss contextual clues that might help us better interpret the data?
  • Are we falling into the trap of fallacy in numbers – in other words, imagining that because lots of people are saying something, that it’s true for everyone, everywhere?

Organizational aspects:

  • Do digital tools require a costly, up-front investment that some organizations are not able to make?
  • How do fear and resistance to using digital tools impact on data gathering?
  • What kinds of organizational change processes are needed amongst staff or community members to address this?
  • What new skills and capacities are needed?

Ethical aspects:

  • How are researchers and evaluators managing informed consent considering the new challenges to privacy that come with digital data? (Also see: Rethinking Consent in the Digital Age)
  • Are evaluators and non-profit organizations equipped to keep data safe?
  • Is it possible to anonymize data in the era of big data given the capacity to cross data sets and re-identify people?
  • What new risks might we be creating for community members? To local enumerators? To ourselves as evaluators? (See: Developing and Operationalizing Responsible Data Policies)

Evaluation of Girl Effect’s online platform for girls

Kecia walked us through how Girl Effect has designed an evaluation of an online platform and applications for girls. She spoke of how the online platform itself brings constraints because it only works on feature phones and smartphones. For this reason, it was decided to work with 14- to 16-year-old urban girls in megacities who have access to these types of devices yet still experience multiple vulnerabilities such as gender-based violence and sexual violence, early pregnancy, low levels of school completion, poor health services and lack of reliable health information, and/or low self-esteem and self-confidence.

The big questions for this program include:

  • Is the content reaching the girls that Girl Effect set out to reach?
  • Is the content on the platform contributing to change?

Because the girl users are on the platform, Girl Effect can use features such as polls and surveys for self-reported change. However, because the girls are under 18, there are privacy and security concerns that sometimes limit the extent to which the organization feels comfortable tracking user behavior. In addition, the type of phones that the girls are using and the fact that they may be borrowing others’ phones to access the site adds another level of challenges. This means that Girl Effect must think very carefully about the kind of data that can be gleaned from the site itself, and how valid it is.

The organization is using a knowledge, attitudes and practices (KAP) framework and exploring ways that KAP can be measured through some of the exciting data capture options that come with an online platform. However, it’s hard to know if offline behavior is actually shifting, making it important to also gather information that helps interpret the self-reported behavior data.

Girl Effect is complementing traditional KAP indicators with web analytics (unique users, repeat visitors, dwell times, bounce rates, ways that users arrive at the site), with push-surveys that go out to users, and with polls that appear after an article (“Was this information helpful? Was it new to you? Did it change your perceptions? Are you planning to do something different based on this information?”). Proxy indicators are also being developed to help interpret the data. For example, does an increase in frequency of commenting on the site by a particular user have a link with greater self-esteem or self-efficacy?
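
To make the proxy-indicator idea concrete, here is a toy sketch (hypothetical data shapes, not Girl Effect’s actual analytics pipeline) that turns per-user commenting activity into a simple trend figure that could later be triangulated with poll or survey responses:

    from collections import defaultdict

    def monthly_comment_counts(comment_log):
        """comment_log: iterable of (user_id, 'YYYY-MM') pairs pulled from platform logs (hypothetical)."""
        counts = defaultdict(lambda: defaultdict(int))
        for user_id, month in comment_log:
            counts[user_id][month] += 1
        return counts

    def commenting_trend(user_months):
        """Crude proxy: ratio of later-period to earlier-period comments for one user (> 1 = increasing)."""
        months = sorted(user_months)
        half = len(months) // 2
        early = sum(user_months[m] for m in months[:half]) or 1   # avoid division by zero
        late = sum(user_months[m] for m in months[half:])
        return late / early

    # A rising trend would still need to be checked against self-reported measures
    # before reading anything about self-esteem or self-efficacy into it.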

However, there is only so much that can be gleaned from an online platform when it comes to behavior change, so the organization is complementing the online information with traditional, in-person, qualitative data gathering. The site is helpful there, however, for recruiting users for focus groups and in-depth interviews. Girl Effect wants to explore KAP and online platforms, yet also wants to be careful about making assumptions and using proxy indicators, so the traditional methods are incorporated into the evaluation as a way of triangulating the data. The evaluation approach is a careful balance of security considerations, attention to proxy indicators, digital data and traditional offline methods.

Using SMS surveys for evaluation: Who do they reach?

Herschel took us through a study conducted by RTI (Sanders, Lau, Lombaard, Baker, Eyerman, Thalji) in partnership with TNS about the use of SMS surveys for evaluation. She noted that the rapid growth of mobile phones, particularly in African countries, opens up new possibilities for data collection. There has been an explosion in the use of SMS for national, population-based surveys.

Like most ICT-enabled MERL methods, use of SMS for general population surveys brings both promise:

  • High mobile penetration in many African countries means we can theoretically reach a large segment of the population.
  • These surveys are much faster and less expensive than traditional face-to-face surveys.
  • SMS surveys work on virtually any GSM phone.
  • SMS offers the promise of reach. We can reach a large and geographically dispersed population, including some areas that are excluded from FTF surveys because of security concerns.

And challenges:

  • Coverage: We cannot include illiterate people or those without access to a mobile phone. Also, some sample frames may not include the entire population with mobile phones.
  • Non-response: Response rates are expected to be low for a variety of reasons, including limited network connectivity or electricity; if two or more people share a phone, we may not reach all people associated with that phone; people may feel a lack of confidence with technology. These factors might affect certain sub-groups differently, so we might underrepresent the poor, rural areas, or women.
  • Quality of measurement: We only have 160 characters for both the question and the response options. Further, an interviewer is not present to clarify any questions. (A quick length check for this constraint is sketched after this list.)
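
A trivial way to check the 160-character constraint during questionnaire design (plain illustrative Python, not tied to any SMS gateway):

    def fits_in_one_sms(question, options, limit=160):
        """Compose a question with numbered reply options and check it fits a single SMS."""
        body = question + " " + " ".join(f"{i}) {opt}" for i, opt in enumerate(options, start=1))
        return len(body) <= limit, len(body)

    ok, length = fits_in_one_sms(
        "How satisfied are you with your nearest clinic?",
        ["Very satisfied", "Somewhat satisfied", "Not satisfied"],
    )
    print(ok, length)   # flags questions that would spill over into a second message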

RTI’s research aimed to answer the question: How representative are general population SMS surveys and are there ways to improve representativeness?

Three core questions were explored via SMS invitations sent in Kenya, Ghana, Nigeria and Uganda:

  • Does the sample frame match the target population?
  • Does non-response have an impact on representativeness?
  • Can we improve quality of data by optimizing SMS designs?

One striking finding was the extent to which response rates may vary by country, Herschel said. In some cases this was affected by agreements in place in each country. Some required a stronger opt-in process. In Kenya and Uganda, where a higher percentage of users had already gone through an opt-in process and had already participated in SMS-based surveys, there was a higher rate of response.

[Figure: SMS survey response rates by country]

These response rates, especially in Ghana and Nigeria, are noticeably low, and the impact of the low response rates in Nigeria and Ghana is evident in the data. In Nigeria, where researchers compared the SMS survey results against the face-to-face data, there was a clear skew away from older females, towards those with a higher level of education and who are full-time employed.

Additionally, 14% of the face-to-face sample, filtered on mobile users, had a post-secondary education, whereas in the SMS data this figure is 60%.

Compared to face-to-face data, SMS respondents were also:

  • More likely to have more than 1 SIM card
  • Less likely to share a SIM card
  • More likely to be aware of and use the Internet.

This sketches a portrait of a more technologically savvy respondent in the SMS surveys, said Herschel.


The team also explored incentives and found that a higher incentive had no meaningful impact, but adding reminders to the design of the SMS survey process helped achieve a wider slice of the sample and a more diverse profile.

Response order effects were explored along with issues related to questionnaire designers trying to pack as much as possible onto the screen rather than asking yes/no questions. Herschel highlighted that when multiple-choice options were given, 76% of SMS survey respondents gave only 1 response, compared to 12% for the face-to-face data.
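
One common way to dampen response order effects is to randomize the order of substantive options per respondent while keeping options like “Don’t know” anchored at the end. A minimal sketch (not RTI’s or TNS’s implementation):

    import random

    def randomized_options(options, anchored_last=("Don't know",)):
        """Shuffle substantive options per respondent; keep 'Don't know'-style options at the end."""
        substantive = [o for o in options if o not in anchored_last]
        random.shuffle(substantive)
        return substantive + [o for o in options if o in anchored_last]

    # Each respondent sees the substantive choices in a different order:
    print(randomized_options(["Water", "Roads", "Health care", "Schools", "Don't know"]))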

Lastly, the research found no meaningful difference in response rate between a survey with 8 questions and one with 16 questions, she said. This may go against common convention, which dictates that “the shorter, the better” for an SMS survey. There was no observable break-off rate based on survey length, giving confidence that longer surveys than initially thought may be possible via SMS.

Herschel noted that some conclusions can be drawn:

  • SMS excels for rapid response (e.g., Ebola)
  • SMS surveys have substantial non-response errors
  • SMS surveys overrepresent respondents who are more educated, employed and technologically savvy

These errors mean SMS cannot replace face-to-face surveys … yet. However, we can optimize SMS survey design now by:

  • Using reminders during data collection
  • Being aware of response order effects and randomizing substantive response options to avoid bias
  • Avoiding “select all that apply” questions (it’s OK to have longer surveys)

However, she also noted that the landscape is rapidly changing and so future research may shed light on changing reactions as familiarity with SMS and greater access grow.

Summarizing the opportunities and challenges with ICTs in Equity-Focused Evaluation

Finally we heard some considerations from Michael, who said that people often get so excited about possibilities for ICT in monitoring, evaluation, research and learning that they neglect to address the challenges. He applauded Girl Effect and RTI for their careful thinking about the strengths and weaknesses in the methods they are using. “It’s very unusual to see the type of rigor shown in these two examples,” he said.

Michael commented that a clear message from both presenters and from other literature and experiences is the need for mixed methods. Some things can be done on a phone, but not all things. “When the data collection is remote, you can’t observe the context. For example, if it’s a teenage girl answering the voice or SMS survey, is the mother-in-law sitting there listening or watching? What are the contextual clues you are missing out on? In a face-to-face context an evaluator can see if someone is telling the girl how to respond.”

Additionally, “no survey framework will cover everyone,” he said. “There may be children who are not registered on the school attendance list that is being used to identify survey respondents. What about immigrants who are hiding from sight out of fear and not registered by the government?” He cautioned evaluators to not forget about folks in the community who are totally missed out and skipped over, and how the use of new technology could make that problem even greater.

Another point Michael raised is that communicating through technology channels creates a different behavior dynamic. One is not better than the other, but evaluators need to be aware that they are different. “Everyone with teenagers knows that the kind of things we communicate online are very different than what we communicate in a face-to-face situation,” he said. “There is a style of how we communicate. You might be more frank and honest on an online platform. Or you may see other differences in just your own behavior dynamics on how you communicate via different kinds of tools,” he said.

He noted that a range of issues has been raised in connection to ICTs in evaluation, but that it’s been rare to see priority given to evaluation rigor. The study Herschel presented was one example of a focus on rigor and issues of bias, but people often get so excited that they forget to think about this. “Who has access? Are people sharing phones? What are the gender dynamics? Is a husband restricting what a woman is doing on the phone? There’s a range of selection bias issues that are ignored,” he said.

Quantitative bias and mono-methods are another issue in ICT-focused evaluation. The tool choice will determine what an evaluator can ask, and that in turn affects the quality of responses. This leads to issues with construct validity. If you are trying to measure complex ideas like girls’ empowerment and you reduce this to a proxy, there can often be a large jump in interpretation. This doesn’t happen only when using mobile phones for evaluation data collection, but certain issues may be exacerbated when the phone is the tool. So evaluators need to better understand behavior dynamics and how they relate to the technical constraints of a particular digital or mobile platform.

The aspect of information dissemination is another one worth raising, said Michael. “What are the dynamics? When we incorporate new tools, we tend to assume there is just one step between the information sharer and receiver, yet there is plenty of literature that shows this is normally at least 2 steps. Often people don’t get information directly, but rather they share and talk with someone else who helps them verify and interpret the information they get on a mobile phone. There are gatekeepers who control or interpret, and evaluators need to better understand those dynamics. Social network analysis can help with that sometimes – looking at who communicates with whom? Who is part of the main influencer hub? Who is marginalized? This could be exciting to explore more.”

Lastly, Michael reiterated the importance of mixed methods and needing to combine online information and communications with face-to-face methods and to be very aware of invisible groups. “Before you do an SMS survey, you may need to go out to the community to explain that this survey will be coming,” he said. “This might be necessary to encourage people to even receive the survey, to pay attention or to answer it.” The case studies in the paper “The Role of New ICTs in Equity-Focused Evaluation: Opportunities and Challenges” explore some of these aspects in good detail.


This post was written with input from Maliha Khan, Independent Consultant; Emily Tomkys, Oxfam GB; Siobhan Green, Sonjara and Zara Rahman, The Engine Room.

A friend reminded me earlier this month at the MERL Tech Conference that a few years ago when we brought up the need for greater attention to privacy, security and ethics when using ICTs and digital data in humanitarian and development contexts, people pointed us to Tor, encryption and specialized apps. “No, no, that’s not what we mean!” we kept saying. “This is bigger. It needs to be holistic. It’s not just more tools and tech.”

So, even if as a sector we are still struggling to understand and address all the different elements of what’s now referred to as “Responsible Data” (thanks to the great work of the Engine Room and key partners), at least we’ve come a long way towards framing and defining the areas we need to tackle. We understand the increasing urgency of the issue: the volume of data in the world is growing exponentially, and the data in our sector is becoming more and more digitized.

This year’s MERL Tech included several sessions on Responsible Data, including Responsible Data Policies, the Human Element of the Data Cycle, The Changing Nature of Informed Consent, Remote Monitoring in Fragile Environments and plenary talks that mentioned ethics, privacy and consent as integral pieces of any MERL Tech effort.

The session on Responsible Data Policies was a space to share with participants why, how, and what policies some organizations have put in place in an attempt to be more responsible. The presenters spoke about the different elements and processes their organizations have followed, and the reasoning behind the creation of these policies. They spoke about early results from the policies, though it is still early days when it comes to implementing them.

What do we mean by Responsible Data?

Responsible data is about more than just privacy or encryption. It’s a wider concept that includes attention to the data cycle at every step, and puts the rights of people reflected in the data first:

  • Clear planning and purposeful collection and use of data with the aim of improving humanitarian and development approaches and results for those we work with and for
  • Responsible treatment of the data and respectful and ethical engagement with people we collect data from, including privacy and security of data and careful attention to consent processes and/or duty of care
  • Clarity on data sharing – what data, from whom and with whom and under what circumstances and conditions
  • Attention to transparency and accountability efforts in all directions (upwards, downwards and horizontally)
  • Responsible maintenance, retention or destruction of data.

Existing documentation and areas to explore

There is a huge bucket of concepts, frameworks, laws and policies that already exist in various other sectors and that can be used, adapted and built on to develop responsible approaches to data in development and humanitarian work. Some of these are in conflict with one another, however, and those conflicts need to be worked out or at least recognized if we are to move forward as a sector and/or in our own organizations.

Some areas to explore when developing a Responsible Data policy include:

  • An organization’s existing policies and practices (IT and equipment; downloading; storing of official information; confidentiality; monitoring, evaluation and research; data collection and storage for program administration, finance and audit purposes; consent and storage for digital images and communications; social media policies).
  • Local and global laws that relate to collection, storage, use and destruction of data, such as: Freedom of information acts (FOIA); consumer protection laws; data storage and transfer regulations; laws related to data collection from minors; privacy regulations such as the latest from the EU.
  • Donor grant requirements related to data privacy and open data, such as USAID’s Chapter 579 or International Aid Transparency Initiative (IATI) stipulations.

Experiences with Responsible Data Policies

At the MERL Tech Responsible Data Policy session, organizers and participants shared their experiences. The first step for everyone developing a policy was establishing wide agreement and buy-in for why their organizations should care about Responsible Data. This was done by developing Values and Principles that form the foundation for policies and guidance.

Oxfam’s Responsible Data policy has a focus on rights, since Oxfam is a rights-based organization. The organization’s existing values made it clear that ethical use and treatment of data was something the organization must consider to hold true to its ethos. It took around six months to get all of the global affiliates to agree on the Responsible Program Data policy, a quick turnaround compared to other globally agreed documents because all the global executive directors recognized that this policy was critical. A core point for Oxfam was the belief that digital identities and access will become increasingly important for inclusion in the future, and so the organization did not want to stand in the way of people being counted and heard. However, it wanted to be sure that this was done in a way that balanced and took privacy and security into consideration.

The policy is a short document that is now in the process of operationalization in all the countries where Oxfam works. Because many of Oxfam’s affiliate headquarters reside in the European Union, it needs to consider the new EU regulations on data, which are extremely strict, for example, providing everyone with an option for withdrawing consent. This poses a challenge for development agencies who normally do not have the type of detailed databases on ‘beneficiaries’ as they do on private donors. Shifting thinking about ‘beneficiaries’ and treating them more as clients may be in order as one result of these new regulations. As Oxfam moves into implementation, challenges continue to arise. For example, data protection in Yemen is different than data protection in Haiti. Knowing all the national level laws and frameworks and mapping these out alongside donor requirements and internal policies is extremely complicated, and providing guidance to country staff is difficult given that each country has different laws.

Girl Effect’s policy has a focus on privacy, security and safety of adolescent girls, who are the core constituency of the organization. The policy became clearly necessary because although the organization had a strong girl safeguarding policy and practice, the effect of digital data had not previously been considered, and the number of programs that involve digital tools and data is increasing. The Girl Effect policy currently has four core chapters: privacy and security during design of a tool, service or platform; content considerations; partner vetting; and MEAL considerations. Girl Effect looks at not only the privacy and security elements, but also aims to spur thinking about potential risks and unintended consequences for girls who access and use digital tools, platforms and content. One core goal is to stimulate implementers to think through a series of questions that help them to identify risks. Another is to establish accountability for decisions around digital data.

The policy has been in process of implementation with one team for a year and will be updated and adapted as the organization learns. It has proven to have good uptake so far from team members and partners, and has become core to how the teams and the wider organization think about digital programming. Cost and time for implementation increase with the incorporation of stricter policies, however, and it is challenging to find a good balance between privacy and security, the ability to safely collect and use data to adapt and improve tools and platforms, and user friendliness/ease of use.

Catholic Relief Services has an existing set of eight organizational principles: Sacredness and Dignity of the human person; Rights and responsibilities; Social Nature of Humanity; The Common Good; Subsidiarity; Solidarity; Option for the Poor; Stewardship. It was a natural fit to see how these values that are already embedded in the organization could extend to the idea of Responsible Data. Data is an extension of the human person, therefore it should be afforded the same respect as the individual. The principle of ‘common good’ easily extends to responsible data sharing. The notion of subsidiarity says that decision-making should happen as close as possible to the place where the impact of the decision will be the strongest, and this is nicely linked with the idea of sharing data back with communities where CRS works and engaging them in decision-making. The option for the poor urges CRS to place a preferential value on privacy, security and safety of the data of the poor over the data demands of other entities.

The organization is at the initial phase of creating its Responsible Data Policy. The process includes the development of the values and principles, two country learning visits to understand the practices of country programs and their concerns about data, development of the policy, and a set of guidelines to support staff in following the policy.

USAID recently embarked on its process of developing practical Responsible Data guidance to pair with its efforts in the area of open data. (See ADS 579). More information will be available soon on this initiative.

Where are we now?

Though several organizations are moving towards the development of policies and guidelines, it was clear from the session that uncertainties are the order of the day, as Responsible Data is an ethical question, often relying on tradeoffs and decisions that are not hard and fast. Policies and guidelines generally aim to help implementers ask the right questions, sort through a range of possibilities and weigh potential risks and benefits.

Another critical aspect that was raised at the MERL Tech session was the financial and staff resources that can be required to be responsible about data. On the other hand, for those organizations receiving funds from the European Union or residing in the EU or the UK (where despite Brexit, organizations will likely need to comply with EU Privacy Regulations), the new regulations mean that NOT being responsible about data may result in hefty fines and potential legal action.

Going from policy to implementation is a challenge that involves both capacity strengthening in this new area as well as behavior change and a better understanding of emerging concepts and multiple legal frameworks. The nuances by country, organization and donor make the process difficult to get a handle on.

Because staff and management are already overburdened, the trick to developing and implementing Responsible Data Policies and Practice will be finding ways to strengthen staff capacity and to provide guidance in ways that do not feel overwhelmingly complex. Though each situation will be different, finding ongoing ways to share resources and experiences so that we can advance as a sector will be one key step for moving forward.


Over the past 4 years I’ve had the opportunity to look more closely at the role of ICTs in Monitoring and Evaluation practice (and the privilege of working with Michael Bamberger and Nancy MacPherson in this area). When we started out, we wanted to better understand how evaluators were using ICTs in general, how organizations were using ICTs internally for monitoring, and what was happening overall in the space. A few years into that work we published the Emerging Opportunities paper that aimed to be somewhat of a landscape document or base report upon which to build additional explorations.

As a result of this work, in late April I had the pleasure of talking with the OECD-DAC Evaluation Network about the use of ICTs in Evaluation. I drew from a new paper on The Role of New ICTs in Equity-Focused Evaluation: Opportunities and Challenges that Michael, Veronica Olazabal and I developed for the Evaluation Journal. The core points of the talk are below.

*****

In the past two decades there have been 3 main explosions that impact on M&E: a device explosion (mobiles, tablets, laptops, sensors, dashboards, satellite maps, Internet of Things, etc.); a social media explosion (digital photos, online ratings, blogs, Twitter, Facebook, discussion forums, WhatsApp groups, co-creation and collaboration platforms, and more); and a data explosion (big data, real-time data, data science and analytics moving into the field of development, capacity to process huge data sets, etc.). This new ecosystem is something that M&E practitioners should be tapping into and understanding.

In addition to these ‘explosions,’ there’s been a growing emphasis on documentation of the use of ICTs in Evaluation alongside a greater thirst for understanding how, when, where and why to use ICTs for M&E. We’ve held and attended large gatherings on ICTs and Monitoring, Evaluation, Research and Learning (MERL Tech). And in the past year or two, it seems the development and humanitarian fields can’t stop talking about the potential of “data” – small data, big data, inclusive data, real-time data for the SDGs, etc. – and the possible roles for ICT in collecting, analyzing, visualizing, and sharing that data.

The field has advanced in many ways. But as the tools and approaches develop and shift, so do our understandings of the challenges. Concerns around more data and “open data” and the inherent privacy risks have caught up with the enthusiasm about the possibilities of new technologies in this space. Likewise, there is more in-depth discussion about methodological challenges, bias and unintended consequences when new ICT tools are used in Evaluation.

Why should evaluators care about ICT?

There are 2 core reasons that evaluators should care about ICTs. Reason number one is practical. ICTs help address real world challenges in M&E: insufficient time, insufficient resources and poor quality data. And let’s be honest – ICTs are not going away, and evaluators need to accept that reality at a practical level as well.

Reason number two is both professional and personal. If evaluators want to stay abreast of their field, they need to be aware of ICTs. If they want to improve evaluation practice and influence better development, they need to know if, where, how and why ICTs may (or may not) be of use. Evaluation commissioners need to have the skills and capacities to know which new ICT-enabled approaches are appropriate for the type of evaluation they are soliciting and whether the methods being proposed are going to lead to quality evaluations and useful learnings. One trick to using ICTs in M&E is understanding who has access to what tools, devices and platforms already, and what kind of information or data is needed to answer what kinds of questions or to communicate which kinds of information. There is quite a science to this and one size does not fit all. Evaluators, because of their critical thinking skills and social science backgrounds, are very well placed to take a more critical view of the role of ICTs in Evaluation and in the worlds of aid and development overall and help temper expectations with reality.

Though ICTs are being used along all phases of the program cycle (research/diagnosis and consultation, design and planning, implementation and monitoring, evaluation, reporting/sharing/learning) there is plenty of hype in this space.


There is certainly a place for ICTs in M&E, if introduced with caution and clear analysis about where, when and why they are appropriate and useful, and evaluators are well-placed to take a lead in identifying and trialing what ICTs can offer to evaluation. If they don’t, others are going to do it for them!

Promising areas

There are four key areas (I’ll save the nuance for another time…) where I see a lot of promise for ICTs in Evaluation:

1. Data collection. Here I’d divide it into three kinds of data collection, plus a note on ‘real-time’ data, which the latter two kinds normally also provide:

  • Structured data gathering – where enumerators or evaluators go out with mobile devices to collect specific types of data (whether quantitative or qualitative).
  • Decentralized data gathering – where the focus is on self-reporting or ‘feedback’ from program participants or research subjects.
  • Data ‘harvesting’ – where data is gathered from existing online sources like social media sites, WhatsApp groups, etc.
  • Real-time data – which aims to provide data in a much shorter time frame, normally as monitoring, but these data sets may be useful for evaluators as well.

2. New and mixed methods. These are areas that Michael Bamberger has been looking at quite closely. New ICT tools and data sources can contribute to more traditional methods. But triangulation still matters.

  • Improving construct validity – enabling a greater number of data sources at various levels that can contribute to better understanding of multi-dimensional indicators (for example, looking at changes in the volume of withdrawals from ATMs, records of electronic purchases of agricultural inputs, satellite images showing lorries traveling to and from markets, and the frequency of Tweets that contain the words hunger or sickness).
  • Evaluating complex development programs – tracking complex and non-linear causal paths and implementation processes by combining multiple data sources and types (for example, participant feedback plus structured qualitative and quantitative data, big data sets/records, census data, social media trends and input from remote sensors).
  • Mixed methods approaches and triangulation – using traditional and new data sources (for example, using real-time data visualization to provide clues on where additional focus group discussions might need to be done to better understand the situation or improve data interpretation).
  • Capturing wide-scale behavior change – using social media data harvesting and sentiment analysis to better understand widespread changes in perceptions, attitudes and stated behaviors, and how these shift over time.
  • Combining big data and real-time data – these emerging approaches may become valuable for identifying potential problems and emergencies that need further exploration using traditional M&E approaches.
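
As a toy example of that triangulation point, the sketch below compares a self-reported survey indicator with a harvested social-media signal. All the numbers are invented; the idea is simply that agreement or divergence between independent sources is a prompt for more traditional follow-up (such as focus groups), not a finding in itself.

```python
# Minimal sketch: triangulating two data sources -- monthly self-reported food
# insecurity from a mobile survey and the monthly count of posts mentioning
# 'hunger' in the same area. All numbers are invented for illustration.
from statistics import mean

survey_pct_food_insecure = [12, 15, 21, 30, 28, 18]   # % of respondents, by month
hunger_post_counts       = [340, 410, 620, 900, 810, 500]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vary = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (varx * vary)

r = pearson(survey_pct_food_insecure, hunger_post_counts)
print(f"Correlation between survey and social-media signal: {r:.2f}")
# A strong correlation proves nothing on its own, but divergence between the
# two sources is a useful prompt for follow-up focus groups or field visits.
```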

3. Data Analysis and Visualization. This area is less advanced than data collection – often it seems we're collecting more and more data but still not really using it! Some interesting things here include (a minimal dashboard-style aggregation sketch follows the list):

  • Big data and data science approaches – there's a growing body of work exploring how to use predictive analytics to help define which programs might work best in which contexts and with which kinds of people. How this connects to evaluation is still being worked out, and there are lots of ethical aspects to think about here too: most of us don't like the idea of predictive policing, and in some ways you could end up in a situation that is not quite what was aimed at. With big data, you'll often have a hypothesis and go looking for patterns in huge data sets, whereas with evaluation you normally have particular questions and design a methodology to answer them. It's interesting to think about how these two approaches are going to combine.
  • Data Dashboards – these are becoming very popular as people try to do a better job of using the data coming into their organizations for decision-making. Some efforts pull data from the community level all the way up to UN representatives (for example, the global consultations that were done for the SDGs, or the sharing of 'near real-time data' with board members). Other efforts focus on providing frontline managers with tools to tweak their programs during implementation.
  • Meta-evaluation – some organizations are working on ways to better draw conclusions from what we are learning from evaluation around the world and to better visualize these conclusions to inform investments and decision-making.
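
To make the dashboard point slightly more concrete, here is a minimal Python sketch of the aggregation step that would sit underneath a simple frontline-manager dashboard: roll up incoming reports by district and flag the ones that need attention. The data, field names and threshold are all hypothetical; a real dashboard would add a visualization layer on top of something like this.

```python
# Minimal sketch: the aggregation step behind a simple monitoring dashboard --
# rolling up incoming (hypothetical) clinic-visit reports by district and
# flagging districts that fall below an expected reporting level.
from collections import defaultdict

reports = [
    {"district": "North", "week": 22, "clinic_visits": 120},
    {"district": "North", "week": 23, "clinic_visits": 95},
    {"district": "South", "week": 22, "clinic_visits": 40},
    {"district": "South", "week": 23, "clinic_visits": 35},
]

EXPECTED_WEEKLY_VISITS = 80  # hypothetical benchmark

totals = defaultdict(list)
for r in reports:
    totals[r["district"]].append(r["clinic_visits"])

for district, visits in sorted(totals.items()):
    avg = sum(visits) / len(visits)
    flag = "REVIEW" if avg < EXPECTED_WEEKLY_VISITS else "ok"
    print(f"{district}: average {avg:.0f} visits/week [{flag}]")
```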

4. Equity-focused Evaluation. As digital devices and tools become more widespread, there is hope that they can enable greater inclusion and broader voice and participation in the development process. There are still huge gaps, however: in some parts of the world, women are 23% less likely than men to have access to a mobile phone, and when it comes to Internet access the gap is much bigger. But there are cases where greater participation in evaluation processes is being sought through mobile. When this is balanced with other methods to ensure that we're not excluding the very poorest or those without access to a mobile phone, it can help broaden the pool of voices we are hearing from (a minimal inclusion check is sketched after this list). Some examples are:

  • Equity-focused evaluation / participatory evaluation methods – some evaluators are seeking to incorporate more real-time (or near real-time) feedback loops where participants provide direct feedback via SMS or voice recordings.
  • Using mobile to directly access participants through mobile-based surveys.
  • Enhancing data visualization for returning results back to the community and supporting community participation in data interpretation and decision-making.
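
Here is one very small Python sketch of the kind of inclusion check that belongs alongside any mobile-based participation effort: compare who actually responded against population shares (the shares below are invented) to see which groups the channel is leaving out, and therefore where other methods are needed.

```python
# Minimal sketch: a basic inclusion check for a mobile-based survey --
# comparing who actually responded against (hypothetical) census shares
# to see which groups the SMS channel is leaving out.
population_share = {"women": 0.51, "men": 0.49, "rural": 0.65, "urban": 0.35}
respondent_share = {"women": 0.34, "men": 0.66, "rural": 0.41, "urban": 0.59}

print("group     population  respondents  gap")
for group in population_share:
    gap = respondent_share[group] - population_share[group]
    warn = "  <-- under-represented, use other methods" if gap < -0.10 else ""
    print(f"{group:<9} {population_share[group]:>10.0%} "
          f"{respondent_share[group]:>12.0%} {gap:>+5.0%}{warn}")
```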

Challenges

Alongside all the potential, there are of course also challenges. I'd divide these into three main areas:

1. Operational/institutional

Some of the biggest challenges to improving the use of ICTs in evaluation are institutional or related to institutional change processes. In focus groups I’ve done with different evaluators in different regions, this was emphasized as a huge issue. Specifically:

  • Potentially heavy up-front investment costs, training efforts, and/or maintenance costs if adopting/designing a new system at wide scale.
  • Tech- or tool-driven M&E processes – these are often also donor-driven. This happens because tech is perceived as cheaper, easier, more scalable and more objective, and because people and management are under a lot of pressure to "be innovative." Sometimes this leads to an over-reliance on digital data and remote data collection, and to time spent developing tools and looking at data sets on a laptop rather than spending time 'on the ground' observing and engaging with local organizations and populations.
  • Little attention to institutional change processes, organizational readiness, and the capacity needed to incorporate new ICT tools, platforms, systems and processes.
  • Bureaucracy may mean that decisions happen far from the ground and that there is little capacity to make quick decisions, even if real-time data is available. Data and analysis may be provided frequently to decision-makers sitting at headquarters, or to local staff who do not have decision-making power in their own hands and must wait on orders from on high before adapting or changing their program approaches and methods.
  • Swinging too far towards digital due to a lack of awareness that digital most often needs to be combined with the human element. Digital technology tends to work better when combined with human interventions, such as visits to prepare people to use the technology and to make sure that gatekeepers (e.g., a husband or mother-in-law, in the case of women) are on board. A main message of the World Bank's 2016 World Development Report, "Digital Dividends," is that digital technology must always be combined with what the Bank calls "analog" (a.k.a. "human") approaches.

2. Methodological

Some of the areas that Michael and I have been looking at relate to how the introduction of ICTs could address issues of bias, rigor, and validity — yet how, at the same time, ICT-heavy methods may actually just change the nature of those issues or create new issues, as noted below:

  • Selection and sample bias – you may be reaching more people, but you're still leaving some people out. Who is left out of mobile phone or ICT access and use? Typical respondents are male, educated and urban. How representative are these respondents of all ICT users, and of the total target population? (A simple weighting sketch follows this list.)
  • Data quality and rigor – you may have an over-reliance on self-reporting via mobile surveys; lack of quality control ‘on the ground’ because it’s all being done remotely; enumerators may game the system if there is no personal supervision; there may be errors and bias in algorithms and logic in big data sets or analysis because of non-representative data or hidden assumptions.
  • Validity challenges – if there is a push to use a specific ICT-enabled evaluation method or tool without it being the right one, the design of the evaluation may not pass the validity challenge.
  • Fallacy of large numbers (in cases of national level self-reporting/surveying) — you may think that because a lot of people said something that it’s more valid, but you might just be reinforcing the viewpoints of a particular group. This has been shown clearly in research by the World Bank on public participation processes that use ICTs.
  • ICTs often favor extractive processes that do not involve local people and local organizations or provide benefit to participants or local agencies – data is gathered and sent 'up the chain' rather than shared or analyzed in a participatory way with local people or organizations. Not only is this disempowering, it may also undermine data quality if people see no point in providing information that brings them no benefit.
  • There’s often a failure to identify unintended consequences or biases arising from use of ICTs in evaluation — What happens when you introduce tablets for data collection? What happens when you collect GPS information on your beneficiaries? What risks might you be introducing or how might people react to you when you are carrying around some kind of device?
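
On the sample bias point, the sketch below shows one common partial fix: post-stratification weighting of mobile survey responses so that over-represented groups count for less. The shares and responses are invented, and the important caveat is in the comments: weighting cannot bring back the people the channel never reached.

```python
# Minimal sketch: simple post-stratification weighting to partially correct
# the urban/educated/male skew of a mobile survey sample. Shares are invented;
# weighting reduces, but does not remove, selection bias -- people with no
# phone at all remain invisible to this survey.
population_share = {"urban": 0.35, "rural": 0.65}
sample_share     = {"urban": 0.70, "rural": 0.30}

# Weight each respondent by how under- or over-represented their group is.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

responses = [
    {"group": "urban", "satisfied": 1},
    {"group": "urban", "satisfied": 1},
    {"group": "urban", "satisfied": 0},
    {"group": "rural", "satisfied": 0},
]

raw = sum(r["satisfied"] for r in responses) / len(responses)
weighted = (sum(r["satisfied"] * weights[r["group"]] for r in responses)
            / sum(weights[r["group"]] for r in responses))

print(f"Raw satisfaction:      {raw:.0%}")
print(f"Weighted satisfaction: {weighted:.0%}")
```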

3. Ethical and Legal

This is an area that I'm very interested in, especially as some donors have started asking for the raw data sets from any research, studies or evaluations that they fund, and when these kinds of data sets are 'opened' there are all sorts of ramifications. There is quite a lot of heated discussion happening here. I was happy to see that DFID has just conducted a review of ethics in evaluation. Some of the core issues include:

  • Changing nature of privacy risks – issues here include privacy and protection of data; changing informed consent needs for digital data/open data; new risks of data leaks; and lack of institutional policies with regard to digital data.
  • Data rights and ownership – issues here include proprietary data sets; data ownership in public-private partnerships; the idea of 'data philanthropy' when it's not clear whose data is being donated; personal data used 'for the public good'; open data, open evaluation and transparency; poor care taken when vulnerable people provide personally identifiable information; household data sets ending up in the hands of those who might abuse them; and the increasing impossibility of true anonymization, given that crossing data sets often makes re-identification easier than imagined (see the small sketch after this list).
  • Moving decisions and interpretation of data away from ‘the ground’ and upwards to the head office/the donor.
  • Little funding for trialling and testing the validity of new ICT-enabled approaches, and for documenting what is working or not working, where, why and how, in order to develop good practice for the use of new ICTs in evaluation.
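
To illustrate the re-identification point, here is a crude k-anonymity check in Python on a fabricated 'anonymized' data set. Even with names removed, combinations of quasi-identifiers (district, age band and occupation in this made-up example) can single people out once data sets are crossed.

```python
# Minimal sketch: a crude k-anonymity check on a fabricated 'anonymized'
# dataset. Even without names, combinations of quasi-identifiers like
# district + age band + occupation can single people out -- which is why
# crossing data sets makes re-identification easier than expected.
from collections import Counter

records = [
    ("North", "30-39", "teacher"),
    ("North", "30-39", "teacher"),
    ("North", "30-39", "nurse"),
    ("South", "20-29", "farmer"),
    ("South", "20-29", "farmer"),
    ("South", "60-69", "midwife"),   # unique combination -> identifiable
]

K = 2  # every combination should be shared by at least K people
counts = Counter(records)
risky = [combo for combo, n in counts.items() if n < K]

print(f"{len(risky)} quasi-identifier combination(s) below k={K}:")
for combo in risky:
    print("  ", combo)
```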

Recommendations: 12 tips for better use of ICTs in M&E

Despite the rapid changes in the field in the 2 years since we first wrote our initial paper on ICTs in M&E, most of our tips for doing it better still hold true.

  1. Start with a high-quality M&E plan (not with the tech).
    • But also learn about the new tech-related possibilities that are out there so that you’re not missing out on something useful!
  2. Ensure design validity.
  3. Determine whether and how new ICTs can add value to your M&E plan.
    • It can be useful to bring in a trusted tech expert in this early phase so that you can find out if what you’re thinking is possible and affordable – but don’t let them talk you into something that’s not right for the evaluation purpose and design.
  4. Select or assemble the right combination of ICT and M&E tools.
    • You may find one off the shelf, or you may need to adapt or build one. This is a really tough decision, which can take a very long time if you’re not careful!
  5. Adapt and test the process with different audiences and stakeholders.
  6. Be aware of different levels of access and inclusion.
  7. Understand motivation to participate, incentivize in careful ways.
    • This includes motivation for both program participants and for organizations where a new tech-enabled tool/process might be resisted.
  8. Review/ensure privacy and protection measures, risk analysis.
  9. Try to identify unintended consequences of using ICTs in the evaluation.
  10. Build in ways for the ICT-enabled evaluation process to strengthen local capacity.
  11. Measure what matters – not what a cool ICT tool allows you to measure.
  12. Use and share the evaluation learnings effectively, including through social media.


I used to write blog posts two or three times a week, but things have been a little quiet here for the past couple of years. That’s partly because I’ve been ‘doing actual work’ (as we like to say) trying to implement the theoretical ‘good practices’ that I like soapboxing about. I’ve also been doing some writing in other places and in ways that I hope might be more rigorously critiqued and thus have a wider influence than just putting them up on a blog.

One of those bits of work that’s recently been released publicly is a first version of a monitoring and evaluation framework for SIMLab. We started discussing this at the first M&E Tech conference in 2014. Laura Walker McDonald (SIMLab CEO) outlines why in a blog post.

Evaluating the use of ICTs—which are used for a wide variety of projects, from legal services, to coordinating responses to infectious diseases, to media reporting in repressive environments, to transferring money among the unbanked, to voting—can hardly be reduced to a check-list. At SIMLab, our past nine years with FrontlineSMS has taught us that isolating and understanding the impact of technology on an intervention, in any sector, is complicated. ICTs change organizational processes and interpersonal relations. They can put vulnerable populations at risk, even while improving the efficiency of services delivered to others. ICTs break. Innovations fail to take hold, or prove to be unsustainable.

For these and many other reasons, it’s critical that we know which tools do and don’t work, and why. As M4D edges into another decade, we need to know what to invest in, which approaches to pursue and improve, and which approaches should be consigned to history. Even for widely-used platforms, adoption doesn’t automatically mean evidence of impact….

FrontlineSMS is a case in point: although the software has clocked up 200,000 downloads in 199 territories since October 2005, there are few truly robust studies of the way that the platform has impacted the project or organization it was implemented in. Evaluations rely on anecdotal data, or focus on the impact of the intervention, without isolating how the technology has affected it. Many do not consider whether the rollout of the software was well-designed, training effectively delivered, or the project sustainably planned.

As an organization that provides technology strategy and support to other organizations, both large and small, it is important for SIMLab to better understand the quality of that support and how it translates into improvements, as well as how the introduction or improvement of information and communication technology contributes to impact at a broader scale.

This is a difficult proposition, given that isolating a single factor like technology is extremely tough, if not impossible. The Framework thus aims to get at the breadth of considerations that go into successful tech-enabled project design and implementation. It does not aim to attribute impact to a particular technology, but to better understand that technology’s contribution to the wider impact at various levels. We know this is incredibly complex, but thought it was worth a try.

As Laura notes in another blogpost,

One of our toughest challenges while writing the thing was to try to recognize the breadth of success factors that we see as contributing to success in a tech-enabled social change project, without accidentally trying to write a design manual for these types of projects. So we reoriented ourselves, and decided instead to put forward strong, values-based statements.* For this, we wanted to build on an existing frame that already had strong recognition among evaluators – the OECD-DAC criteria for the evaluation of development assistance. There was some precedent for this, as ALNAP adapted them in 2008 to make them better suited to humanitarian aid. We wanted our offering to simply extend and consider the criteria for technology-enabled social change projects.

Here are the adapted criteria, which you can read more about in the Framework. They were designed for internal use, but we hope they might be useful to evaluators of technology-enabled programming, commissioners of evaluations of these programs, and those who want to examine their own technology-enabled efforts in-house. We welcome your thoughts and feedback: the Framework is published in draft format in the hope that others working on similar challenges can help make it better, and can pick up and use any or all of it that is helpful to them. The document includes practical guidance on developing an M&E plan, a typical project cycle, and some methodologies that might be useful, as well as sample log frames and evaluator terms of reference.

Happy reading and we really look forward to any feedback and suggestions!!

*****

The Criteria

Criterion 1: Relevance

The extent to which the technology choice is appropriately suited to the priorities, capacities and context of the target group or organization.

Consider: are the activities and outputs of the project consistent with the goal and objectives? Was there a good context analysis and needs assessment, or another way for needs to inform design – particularly through participation by end users? Did the implementer have the capacity, knowledge and experience to implement the project? Was the right technology tool and channel selected for the context and the users? Was content localized appropriately?

Criterion 2: Effectiveness

A measure of the extent to which an information and communication channel, technology tool, technology platform, or a combination of these attains its objectives.

Consider: In a technology-enabled effort, there may be one tool or platform, or a set of tools and platforms designed to work together as a suite. Additionally, the selection of a particular communication channel (SMS, voice, etc.) matters in terms of cost and effectiveness. Was the project monitored so that early snags and breakdowns were identified and fixed? Was there good user support? Did the tool and/or the channel meet the needs of the overall project? Note that this criterion should be examined at outcome level, not output level, and should examine how the objectives were formulated, by whom (did primary stakeholders participate?) and why.

Criterion 3: Efficiency

Efficiency measures the outputs – qualitative and quantitative – in relation to the inputs. It is an economic term which signifies that the project or program uses the least costly technology approach possible (including both the tech itself and what it takes to sustain and use it) in order to achieve the desired results. This generally requires comparing alternative approaches (technological or non-technological) to achieving the same outputs, to see whether the most efficient tools and processes have been adopted. SIMLab looks at the interplay of efficiency and effectiveness, and at the degree to which a new tool or platform can reduce cost and time while increasing the quality of data and/or services and the reach or scale achieved.

Consider: Was the technology tool rollout carried out as planned and on time? If not, what were the deviations from the plan, and how were they handled? If a new channel or tool replaced an existing one, how do the communication, digitization, transportation and processing costs of the new system compare to the previous one? Would it have been cheaper to build features into an existing tool rather than create a whole new tool? To what extent were aspects such as cost of data, ease of working with mobile providers, total cost of ownership and upgrading of the tool or platform considered?

Criterion 4: Impact

Impact relates to the consequences of achieving or not achieving the outcomes. Impacts may take months or years to become apparent, and often cannot be established in an end-of-project evaluation. Identifying, documenting and/or proving attribution (as opposed to contribution) may be an issue here. ALNAP's complex emergencies evaluation criteria include 'coverage' as well as impact: 'the need to reach major population groups wherever they are.' They note: 'in determining why certain groups were covered or not, a central question is: What were the main reasons that the intervention provided or failed to provide major population groups with assistance and protection, proportionate to their need?' This is very relevant for us.

For SIMLab, a lack of coverage in an inclusive technology project means not only failing to reach some groups, but also widening the gap between those who do and do not have access to the systems and services leveraging technology. We believe that this has the potential to actively cause harm. Evaluation of inclusive tech has dual priorities: evaluating the role and contribution of technology, but also evaluating the inclusive function or contribution of the technology. A platform might perform well, have high usage rates, and save costs for an institution while not actually increasing inclusion. Evaluating both impact and coverage requires an assessment of risk, both to targeted populations and to others, as well as attention to unintended consequences of the introduction of a technology component.

Consider: To what extent does the choice of communications channels or tools enable wider and/or higher quality participation of stakeholders? Which stakeholders? Does it exclude certain groups, such as women, people with disabilities, or people with low incomes? If so, was this exclusion mitigated with other approaches, such as face-to-face communication or special focus groups? How has the project evaluated and mitigated risks, for example to women, LGBTQI people, or other vulnerable populations, relating to the use and management of their data? To what extent were ethical and responsible data protocols incorporated into the platform or tool design? Did all stakeholders understand and consent to the use of their data, where relevant? Were security and privacy protocols put into place during program design and implementation/rollout? How were protocols specifically integrated to ensure protection for more vulnerable populations or groups? What risk-mitigation steps were taken in case of any security holes found or suspected? Were there any breaches? How were they addressed?

Criterion 5: Sustainability

Sustainability is concerned with measuring whether the benefits of a technology tool or platform are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable. For SIMLab, sustainability includes both the ongoing benefits of the initiatives and the literal ongoing functioning of the digital tool or platform.

Consider: If the project required financial or time contributions from stakeholders, are they sustainable, and for how long? How likely is it that the business plan will enable the tool or platform to continue functioning, including background architecture work, essential updates, and user support? If the tool is open source, is there sufficient capacity to continue to maintain changes and updates to it? If it is proprietary, has the project implementer considered how to cover ongoing maintenance and support costs? If the project is designed to scale vertically (e.g., a centralized model of tool or platform management that rolls out in several countries) or be replicated horizontally (e.g., a model where a tool or platform can be adopted and managed locally in a number of places), has the concept shown this to be realistic?

Criterion 6: Coherence

The OECD-DAC does not have a sixth criterion. However, we've riffed on ALNAP's additional criterion of Coherence, which relates to the broader policy context (development, market, communication networks, data standards and interoperability mandates, national and international law) within which a technology was developed and implemented. We propose that evaluations of inclusive technology projects critically assess the extent to which the technologies fit within the broader market: local, national and international. This includes compliance with national and international regulation and law.

Consider: Has the project considered interoperability of platforms (for example, ensuring that APIs are available) and standard data formats (so that data export is possible) to support sustainability and use of the tool in an ecosystem of other products? Is the project team confident that the project is in compliance with existing legal and regulatory frameworks? Is it working in harmony with, or against, the wider context of other actions in the area? For example, in an emergency situation, is it linking its information system with those that can feasibly provide support? Is it creating demand that cannot feasibly be met? Is it working with or against government policy or wider development policy shifts?
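
As one very small illustration of the data-export point above (my sketch, not part of SIMLab's Framework), here is a Python snippet that writes the same hypothetical monitoring records out as CSV and JSON so another system could pick them up. Real interoperability obviously goes well beyond this, but plain, documented export formats are the floor.

```python
# Minimal sketch: exporting the same (hypothetical) monitoring records to CSV
# and JSON so another system can consume them -- the kind of plain, standard
# data export the coherence criterion asks about.
import csv
import json

records = [
    {"facility_id": "F-01", "district": "North", "week": 22, "clinic_visits": 120},
    {"facility_id": "F-02", "district": "South", "week": 22, "clinic_visits": 40},
]

with open("monitoring_export.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)

with open("monitoring_export.json", "w") as f:
    json.dump(records, f, indent=2)

print("Wrote monitoring_export.csv and monitoring_export.json")
```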

