
Our Technology Salon on Digital ID (“Will Digital Identities Support or Control Us?”) took place at the OSF offices on June 3 with lead discussants Savita Bailur and Emrys Schoemaker from Caribou Digital and Aiden Slavin from ID2020.

In general, Salon participants noted the potential positives of digital ID, such as improved access to services, better service delivery, accountability, and improved tracking of beneficiaries. However, they raised concerns about potential negative impacts: surveillance and discrimination; disregard for human rights and privacy; lack of trust in governments and others running digital ID systems; harm to marginalized communities; the absence of policy and ethical frameworks; the complexity of digital ID systems and their associated technological requirements; and low capacity within NGOs to protect data and deal with unintended consequences.

What do we mean by digital identity (digital ID)?

Arriving at a basic definition of digital ID is difficult due to its interrelated aspects. To begin with: What is identity? A social identity arises from a deep sense of who we are and where we come from. A person’s social identity is a critical part of how they experience an ID system. Analog ID systems have been around for a very long time and digitized versions build on them.

The three categories below (developed by Omidyar) are used by some to differentiate among types of ID systems:

  • Issued ID includes state- or nationally issued identification like birth certificates, driver’s licenses, and systems such as India’s biometric ID system (Aadhaar), built on existing analog models of ID systems and controlled by institutions.
  • De facto ID is an emerging category of ID that is formed through the data trails people leave behind when using digital devices, including credit scoring based on mobile phone or social media use. De facto ID is largely outside an individual’s control, as it is often based on analysis of passive data that individuals never consented to have collected or used in this way. De facto ID also includes situations where refugees are tracked via call detail records (CDRs). It is a new and complex way of being identified and categorized.
  • Self-asserted ID is linked to the decentralization of ID systems. It is based on the possession of forms of ID that prove who we are that we manage ourselves. A related term is self-managed ID, which recognizes that there is no ID that is “self-asserted” because our identity is relational and always relies on others recognizing us as who we are and who we believe ourselves to be.

(Also see this glossary of Digital ID definitions.)

As re-identification technologies become more and more sophisticated, the line between de facto and official, issued IDs is blurring, noted one participant. Others said they prefer the broad umbrella term “Identity in the Digital Age” to cover the various angles.

Who is digital ID benefiting?

Salon participants tended to think that digital ID is mainly of interest to institutions. Most IDs are developed, designed, managed, and issued by institutions, so the interests baked into the design of an ID system are theirs. Institutions tend to be excited about digital ID systems because they are interoperable, which helps with beneficiary management, financial records, entry and exit across borders, and the like.

This very interoperability, however, is what raises privacy, vulnerability, and data protection issues. Some of the most cutting-edge digital ID systems are being tested on some of the most vulnerable populations in the world: refugees in Jordan, Uganda, Lebanon, and Myanmar. These digital ID systems have created massive databases for analysis; the UNHCR’s proGres database, for example, holds 80 million records.

This brings with it a huge responsibility to protect. It also raises questions about the “one ID system to rule them all” idea. On the one hand, a single system can offer managerial control, reduce fraud, and improve tracking. Yet, as one person said, “what a horrifying prospect that an institution can have this much control! Should we instead be supporting one thousand ID systems to bloom?”

Can we trust institutions and governments to manage digital ID systems?

One of the institutions positioning itself as a leader in digital ID is the World Food Programme (WFP). As one participant highlighted, this is an agency that has come under strong criticism for its partnership with Palantir and for a lack of transparency around where data goes and who can access it. These kinds of partnerships can generate seismic downstream effects that affect trust in the entire sector. “This has caused a lot of angst in the sector. The WFP wants to have the single system to rule them all, whereas many of us would rather see an interoperable ecosystem.” Some organizations consider their large-scale systems to have more rigorous privacy, security, and informed consent measures than the WFP’s SCOPE system.

Trust is a critical component of a Digital ID system. The Estonian model, for example, offers visibility into which state departments are accessing a person’s data and when, which builds citizens’ trust in the system. Some Salon participants expressed concern over their own country governments running a Digital ID system. “In my country, we don’t trust institutions because we have a failed state,” said one person, “so people would never want the government to have their information in that way.” Another person said that in his country, the government is known for its corruption, and the idea that the government could manage an ID system with any kind of data integrity was laughable. “If these systems are not monitored or governed properly, they can be used to target certain segments of the population for outright repression. People do want greater financial inclusion, for example, but these ID systems can be easily weaponized and used against us.”

Fear and mistrust in digital ID systems is not universal, however. One Salon participant said that their research in Indonesia found that a digital ID was seen to be part of being a “good citizen,” even if local government was not entirely trusted. A Salon participant from China reported that in her experience, the digital ID system there has not been questioned much by citizens. Rather, it is seen as a convenient way for people to learn about new government policies and to carry out essential transactions more quickly.

What about data integrity and redress?

One big challenge with digital ID systems as they are currently managed is that there is very little attention to redress. “How do you fix errors in information? Where are the complaints mechanisms?” asked one participant. “We think of digital systems as being really flexible, but they are really hard to clean out,” said another. “You get all these faulty data crumbs that stick around. And they seem so far removed from the user. How do people get data errors fixed? No one cares about the integrity of the system. No one cares but you if your ID information is not correct. There is really very little incentive to address discrepancies and provide redress mechanisms.”

Another challenge is the integrity of the data that goes into the system. In some countries, people go back to their villages to get a birth certificate, a point at which data integrity can suffer due to faulty information or bribes, among other things. In one case, researchers spoke to a woman who changed her religion on her birth certificate thinking it would save her from discrimination when she moved to a new town. In another case, the village chief made a woman change her name to a Muslim name on her birth certificate because the village was majority Muslim. There are power dynamics at the local level that can challenge the integrity of the ID system.

Do digital ID systems improve the lives of women and children?

There is a long-standing issue in many parts of the world with children not having a birth certificate, said one Salon discussant. “If you don’t have a legal ID, technically you don’t exist, so that first credential is really important.” As could probably be expected, however, fewer females than males have legal ID.

In a three-country research project, the men interviewed thought that women do not need ID as much as men did. However, when talking with women it was clear that they are the ones who are dealing with hospitals and schools and other institutions who require ID. The study found that in Bangladesh, when women did have ID, it was commonly held and controlled by their husbands. In one case study, a woman wanted to sign up as a cook for an online cooking service, but she needed an ID to do so. She had to ask her husband for the ID, explain what she needed it for, and get his permission in order to join the cooking service. In another, a woman wanted to provide beauty care services through an online app. She needed to produce her national ID and two photos to join up with the app and to create a bKash mobile money account. Her husband did not want her to have a bKash account, so she had to provide his account details, meaning that all of her earnings went to her husband (see more here on how ID helps women access work). In India, a woman wanted to escape her husband, so she moved from the countryside to Bangalore to work as a maid. Her in-laws retained all of her ID, and so she had to rely on her brother to set up everything for her in Bangalore.

Another Salon participant explained that in India also, micro-finance institutions had imposed a regulation that when a woman registered to be part of a project, she had to provide the name of a male member to qualify her identity. When it was time to repay the loan or if a woman missed a payment, her brother or husband would then receive a text about it. The question is how to create trust-based systems that do not reinforce patriarchal values, and in which individuals are clear about and have control over how information is shared.

“ID is embedded in your relationships and networks,” it was explained. “It creates a new set of dependencies and problems that we need to consider.” In order to understand the nuances in how ID and digital ID are impacting people, we need more of these micro-level stories. “What is actually happening? What does it mean when you become more identifiable?”

Is it OK to use digital ID systems for social control and social accountability? 

The Chinese social credit system, according to one Salon participant, includes a social control function. “If you have not repaid a loan, you are banned from purchasing a first-class air ticket or from checking into expensive hotels.” An application used in Nairobi called Tala also includes a social accountability function, explained another participant. “Tala is a social credit scoring app that gives small loans. You download an app with all your contacts, and it works out via algorithms if you are credit-worthy. If you are, you can get a small loan. If you stop paying your loans, however, Tala alerts everyone in your contact list. In this way, the app has digitized a social accountability function.”

The initial reaction from Salon participants was shock, but it was pointed out that traditional Village Savings and Loans Associations (VSLAs) function the same way – through social sanction. “The difference here is transparency and consent,” it was noted. “In a community you might not have choice about whether everyone knows you defaulted on your small loan. But you are aware that this is what will happen. With Tala, people didn’t realize that the app had access to their contacts and that it would alert those contacts, so consent and transparency are the issues.”
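As described, the Tala-style flow has three steps: harvest the contact list at sign-up, score creditworthiness from behavioral signals, and message contacts on default. A minimal sketch of that logic follows; the scoring rule, names, and threshold are invented for illustration and are not Tala’s actual algorithm:

```python
from dataclasses import dataclass, field

@dataclass
class Applicant:
    name: str
    contacts: list  # harvested from the phone at sign-up
    repayment_history: list = field(default_factory=list)  # True = paid on time

def credit_score(a: Applicant) -> float:
    """Toy score: smoothed share of on-time repayments (not Tala's real model)."""
    paid = sum(a.repayment_history)
    total = len(a.repayment_history)
    return (paid + 1) / (total + 2)

def on_default(a: Applicant) -> list:
    """The social-accountability step: a message to every harvested contact."""
    return [f"{c}: {a.name} has defaulted on a loan" for c in a.contacts]

borrower = Applicant("Amina", contacts=["Joseph", "Grace"],
                     repayment_history=[True, False, False])
if credit_score(borrower) < 0.5:
    alerts = on_default(borrower)
```

The design point the Salon flagged sits in `on_default`: the borrower consented (knowingly or not) to contact harvesting, but the contacts never consented to receiving default notices at all.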

The principle of informed consent in the humanitarian space poses a constant challenge. “Does a refugee who registers with UNHCR really have any choice? If they need food and have to provide minimal information to get it, is that consent? What if they have zero digital literacy?” Researcher Helen Nissenbaum, it was noted, has written that consent is problematic and that we should not pursue it. “It’s not really about individual consent. It’s about how we set standards and ensure transparency and accountability for how an individual’s information is used,” explained one Salon participant.

These challenges with data use and consent need to be considered beyond just individual privacy, however, as another participant noted. “There is all manner of vector-based data in the WFP’s system. Other agencies don’t have this kind of disaggregated data at the village level or lower. What happens if Palantir, via the WFP, is the first company in the world to have that low level disaggregation? And what happens with the digital ID of particularly vulnerable groups of people such as refugee communities or LGBTQI communities? How could these Digital IDs be used to discriminate or harm entire groups of people? What does it mean if a particular category or tag like ‘refugee’ or ‘low income’ follows you around forever?”

One Salon participant said that in Jordanian camps, refugees would register for one thing and be surprised at how their data then automatically popped up on the screen of a different partner organization. Other participants expressed concerns about how Digital ID systems and their implications could be explained to people with less digital experience or digital literacy. “Since the GDPR came into force, people have the right to an explanation if they are subject to an automated decision,” noted one person. “But what does compliance look like? How would anyone ever understand what is going on?” This will become increasingly complex as technology advances and we begin to see things like digital phenotyping being used to serve up digital content or determine our benefits.

Can we please have better standards, regulations and incentives?

A final question raised about Digital ID systems was who should be implementing and managing them: UN agencies? Governments? Private Sector? Start-ups? At the moment the ecosystem includes all sorts of actors and feels a bit “Wild Wild West” due to insufficient control and regulation. At the same time, there are fears (as noted above) about a “one system to rule them all approach.” “So,” asked one person, “what should we be doing then? Should UN agencies be building in-house expertise? Should we be partnering better with the private sector? We debate this all the time internally and we can never agree.” Questions also remain about what happens with the biometric and other data that failed start-ups or discontinued digital ID systems hold. And is it a good idea to support government-controlled ID systems in countries with corrupt or failed governments, or those who will use these systems to persecute or exercise undue control over their populations?

As one person asked, “Why are we doing this? Why are we even creating these digital ID systems?”

Although there are huge concerns about Digital ID, the flip side is that a Digital ID system could potentially offer better security for sensitive information, at least in the case of humanitarian organizations. “Most organizations currently handle massive amounts of data in Excel sheets and Google docs with zero security,” said one person. “There is PII [personally identifiable information] flowing left, right, and center.” Where donors have required better data management standards, there has been improvement, but it requires massive investment, and who will pay for it? Sadly, donors are currently not covering these costs. As a representative from one large INGO explained, “we want to avoid the use of Excel to track this stuff. We are hoping that our digital ID system will be more secure. We see this as a very good idea if you can nail down the security aspects.”

The EU’s General Data Protection Regulation (GDPR) is often quoted as the “gold standard,” yet implementation is complex and the GDPR is not specific enough, according to some Salon participants. Not to mention, “if you are UN, you don’t have to follow GDPR.” Many at the Salon felt that the GDPR has had very positive effects but called out the lack of incentive structures that would encourage full adoption. “No one does anything unless there is an enforcing function.” Others felt that the GDPR was too prescriptive about what to do, rather than setting limits on what not to do.

One effort to watch is the Pan-Canadian Trust Framework, mentioned as a good example of creating a functioning and decentralized ecosystem that could potentially address some of the above challenges.

The Salon ended with more questions than answers; however, there is plenty of research and conversation happening about digital ID and a wide range of actors engaging with the topic. If you’d like to read more, check out this list of resources that we put together for the Salon and add any missing documents, articles, links and resources!

Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂


Over the past 4 years I’ve had the opportunity to look more closely at the role of ICTs in Monitoring and Evaluation practice (and the privilege of working with Michael Bamberger and Nancy MacPherson in this area). When we started out, we wanted to better understand how evaluators were using ICTs in general, how organizations were using ICTs internally for monitoring, and what was happening overall in the space. A few years into that work we published the Emerging Opportunities paper that aimed to be somewhat of a landscape document or base report upon which to build additional explorations.

As a result of this work, in late April I had the pleasure of talking with the OECD-DAC Evaluation Network about the use of ICTs in Evaluation. I drew from a new paper on The Role of New ICTs in Equity-Focused Evaluation: Opportunities and Challenges that Michael, Veronica Olazabal and I developed for the Evaluation Journal. The core points of the talk are below.

*****

In the past two decades there have been three main explosions that impact M&E: a device explosion (mobiles, tablets, laptops, sensors, dashboards, satellite maps, the Internet of Things, etc.); a social media explosion (digital photos, online ratings, blogs, Twitter, Facebook, discussion forums, WhatsApp groups, co-creation and collaboration platforms, and more); and a data explosion (big data, real-time data, data science and analytics moving into the field of development, the capacity to process huge data sets, etc.). This new ecosystem is something that M&E practitioners should be tapping into and understanding.

In addition to these ‘explosions,’ there’s been a growing emphasis on documentation of the use of ICTs in Evaluation alongside a greater thirst for understanding how, when, where and why to use ICTs for M&E. We’ve held / attended large gatherings on ICTs and Monitoring, Evaluation, Research and Learning (MERL Tech). And in the past year or two, it seems the development and humanitarian fields can’t stop talking about the potential of “data” – small data, big data, inclusive data, real-time data for the SDGs, etc. and the possible roles for ICT in collecting, analyzing, visualizing, and sharing that data.

The field has advanced in many ways. But as the tools and approaches develop and shift, so do our understandings of the challenges. Concerns about more data and “open data” and the inherent privacy risks have caught up with the enthusiasm about the possibilities of new technologies in this space. Likewise, there is more in-depth discussion about methodological challenges, bias and unintended consequences when new ICT tools are used in Evaluation.

Why should evaluators care about ICT?

There are two core reasons that evaluators should care about ICTs. Reason number one is practical. ICTs help address real-world challenges in M&E: insufficient time, insufficient resources and poor-quality data. And let’s be honest – ICTs are not going away, and evaluators need to accept that reality at a practical level as well.

Reason number two is both professional and personal. If evaluators want to stay abreast of their field, they need to be aware of ICTs. If they want to improve evaluation practice and influence better development, they need to know if, where, how and why ICTs may (or may not) be of use. Evaluation commissioners need to have the skills and capacities to know which new ICT-enabled approaches are appropriate for the type of evaluation they are soliciting and whether the methods being proposed are going to lead to quality evaluations and useful learnings. One trick to using ICTs in M&E is understanding who has access to what tools, devices and platforms already, and what kind of information or data is needed to answer what kinds of questions or to communicate which kinds of information. There is quite a science to this and one size does not fit all. Evaluators, because of their critical thinking skills and social science backgrounds, are very well placed to take a more critical view of the role of ICTs in Evaluation and in the worlds of aid and development overall and help temper expectations with reality.

Though ICTs are being used along all phases of the program cycle (research/diagnosis and consultation, design and planning, implementation and monitoring, evaluation, reporting/sharing/learning) there is plenty of hype in this space.


There is certainly a place for ICTs in M&E, if introduced with caution and clear analysis about where, when and why they are appropriate and useful, and evaluators are well placed to take a lead in identifying and trialing what ICTs can offer to evaluation. If they don’t, others are going to do it for them!

Promising areas

There are four key areas (I’ll save the nuance for another time…) where I see a lot of promise for ICTs in Evaluation:

1. Data collection. Here I’d divide it into three kinds of data collection and note that the latter two normally also provide ‘real-time’ data:

  • Structured data gathering – where enumerators or evaluators go out with mobile devices to collect specific types of data (whether quantitative or qualitative).
  • Decentralized data gathering – where the focus is on self-reporting or ‘feedback’ from program participants or research subjects.
  • Data ‘harvesting’ – where data is gathered from existing online sources like social media sites, WhatsApp groups, etc.
  • Real-time data – which aims to provide data in a much shorter time frame, normally as monitoring, but these data sets may be useful for evaluators as well.

2. New and mixed methods. These are areas that Michael Bamberger has been looking at quite closely. New ICT tools and data sources can contribute to more traditional methods. But triangulation still matters.

  • Improving construct validity – enabling a greater number of data sources at various levels that can contribute to better understanding of multi-dimensional indicators (for example, looking at changes in the volume of withdrawals from ATMs, records of electronic purchases of agricultural inputs, satellite images showing lorries traveling to and from markets, and the frequency of Tweets that contain the words hunger or sickness).
  • Evaluating complex development programs – tracking complex and non-linear causal paths and implementation processes by combining multiple data sources and types (for example, participant feedback plus structured qualitative and quantitative data, big data sets/records, census data, social media trends and input from remote sensors).
  • Mixed methods approaches and triangulation – using traditional and new data sources (for example, using real-time data visualization to provide clues on where additional focus group discussions might need to be done to better understand the situation or improve data interpretation).
  • Capturing wide-scale behavior change – using social media data harvesting and sentiment analysis to better understand wide-spread, wide-scale changes in perceptions, attitudes, stated behaviors and analyzing changes in these.
  • Combining big data and real-time data – these emerging approaches may become valuable for identifying potential problems and emergencies that need further exploration using traditional M&E approaches.
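The sentiment-analysis idea above can be made concrete with a toy lexicon-based scorer. Real work would use harvested social media data and a trained model; the word lists and posts below are invented:

```python
POSITIVE = {"good", "plenty", "recovering", "improved"}
NEGATIVE = {"hunger", "sick", "sickness", "drought", "worse"}

def sentiment(post: str) -> int:
    """+1 per positive word, -1 per negative word in the post."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# A tiny invented 'harvest' of posts before and after an intervention
before = ["hunger is worse this month", "many sick children"]
after = ["harvest is good and markets improved", "children recovering"]

# Average sentiment shift: a crude proxy for wide-scale perception change
shift = (sum(map(sentiment, after)) / len(after)
         - sum(map(sentiment, before)) / len(before))
```

Even this sketch shows the methodological caveat discussed later in this post: the result only reflects people who post online, in the languages the lexicon covers.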

3. Data Analysis and Visualization. This is an area that is less advanced than the data collection area – often it seems we’re collecting more and more data but still not really using it! Some interesting things here include:

  • Big data and data science approaches – there’s a growing body of work exploring how to use predictive analytics to help define which programs might work best in which contexts and with which kinds of people. How this connects to evaluation is still being worked out, and there are plenty of ethical aspects to think about here too (most of us don’t like the idea of predictive policing, and in some ways you could end up in a situation that is not quite what was aimed at). With big data, you often start without a particular hypothesis and go looking for patterns in huge data sets, whereas with evaluation you normally have particular questions and design a methodology to answer them. It’s interesting to think about how these two approaches are going to combine.
  • Data Dashboards – these are becoming very popular as people try to work out how to do a better job of using the data that is coming into their organizations for decision making. There are some efforts at pulling data from community level all the way up to UN representatives, for example, the global level consultations that were done for the SDGs or using “near real-time data” to share with board members. Other efforts are more focused on providing frontline managers with tools to better tweak their programs during implementation.
  • Meta-evaluation – some organizations are working on ways to better draw conclusions from what we are learning from evaluation around the world and to better visualize these conclusions to inform investments and decision-making.
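The pattern-finding workflow mentioned under the big data bullet can be sketched as a scan of every pair of indicators for strong correlations, with the survivors handed to a human to interpret. Every variable name and figure in this toy version is invented:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented village-level indicators over five periods
data = {
    "mobile_topups":   [5, 9, 4, 12, 7],
    "atm_withdrawals": [6, 10, 5, 13, 8],
    "clinic_visits":   [9, 3, 10, 2, 6],
}

# Exploratory scan: no hypothesis, just surface strongly correlated pairs
pairs = [(a, b, pearson(data[a], data[b]))
         for a in data for b in data if a < b]
strong = [(a, b, r) for a, b, r in pairs if abs(r) > 0.8]
```

An evaluation would instead start from a question ("did incomes rise?") and pick the one comparison that answers it; the scan above is where spurious correlations and hidden confounders creep in, which is why the human interpretation step matters.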

4. Equity-focused Evaluation. As digital devices and tools become more widespread, there is hope that they can enable greater inclusion and broader voice and participation in the development process. There are still huge gaps however — in some parts of the world 23% fewer women than men have access to mobile phones — and when you talk about Internet access the gap is much bigger. But there are cases where greater participation in evaluation processes is being sought through mobile. When this is balanced with other methods to ensure that we’re not excluding the very poorest or those without access to a mobile phone, it can help to broaden out the pool of voices we are hearing from. Some examples are:

  • Equity-focused evaluation / participatory evaluation methods – some evaluators are seeking to incorporate more real-time (or near real-time) feedback loops where participants provide direct feedback via SMS or voice recordings.
  • Using mobile to directly access participants through mobile-based surveys.
  • Enhancing data visualization for returning results back to the community and supporting community participation in data interpretation and decision-making.

Challenges

Alongside all the potential, of course there are also challenges. I’d divide these into three main areas:

1. Operational/institutional

Some of the biggest challenges to improving the use of ICTs in evaluation are institutional or related to institutional change processes. In focus groups I’ve done with different evaluators in different regions, this was emphasized as a huge issue. Specifically:

  • Potentially heavy up-front investment costs, training efforts, and/or maintenance costs if adopting/designing a new system at wide scale.
  • Tech- or tool-driven M&E processes – often these are also donor-driven. This happens because tech is perceived as cheaper, easier, scalable, and objective. It also happens because people and management are under a lot of pressure to “be innovative.” Sometimes this ends up leading to an over-reliance on digital data and remote data collection, and to time spent developing tools and looking at data sets on a laptop rather than time ‘on the ground’ observing and engaging with local organizations and populations.
  • Little attention to institutional change processes, organizational readiness, and the capacity needed to incorporate new ICT tools, platforms, systems and processes.
  • Bureaucracy levels may mean that decisions happen far from the ground and that there is little capacity to make quick decisions, even if real-time data is available. Data and analysis may be provided frequently to decision-makers sitting at a headquarters, or to local staff who do not have decision-making power in their own hands and must wait on orders from on high to adapt or change their program approaches and methods.
  • Swinging too far towards digital due to a lack of awareness that digital most often needs to be combined with human approaches. Digital technology always works better when combined with human interventions, such as visits to prepare folks for using the technology and to make sure that gatekeepers (e.g., a husband or mother-in-law, in the case of women) are on board. A main message from the World Bank’s 2016 World Development Report “Digital Dividends” is that digital technology must always be combined with what the Bank calls “analog” (a.k.a. “human”) approaches.

2. Methodological

Some of the areas that Michael and I have been looking at relate to how the introduction of ICTs could address issues of bias, rigor, and validity — yet how, at the same time, ICT-heavy methods may actually just change the nature of those issues or create new issues, as noted below:

  • Selection and sample bias – you may be reaching more people, but you’re still going to be leaving some people out. Who is left out of mobile phone or ICT access/use? Typical respondents are male, educated, urban. How representative are these respondents of all ICT users and of the total target population?
  • Data quality and rigor – you may have an over-reliance on self-reporting via mobile surveys; lack of quality control ‘on the ground’ because it’s all being done remotely; enumerators may game the system if there is no personal supervision; there may be errors and bias in algorithms and logic in big data sets or analysis because of non-representative data or hidden assumptions.
  • Validity challenges – if there is a push to use a specific ICT-enabled evaluation method or tool without it being the right one, the design of the evaluation may not pass the validity challenge.
  • Fallacy of large numbers (in cases of national-level self-reporting/surveying) — you may think that because a lot of people said something, it’s more valid, but you might just be reinforcing the viewpoints of a particular group. This has been shown clearly in research by the World Bank on public participation processes that use ICTs.
  • ICTs often favor extractive processes that do not involve local people and local organizations or provide benefit to participants/local agencies — data is gathered and sent ‘up the chain’ rather than shared or analyzed in a participatory way with local people or organizations. Not only is this disempowering, it may impact on data quality if people don’t see any point in providing it as it is not seen to be of any benefit.
  • There’s often a failure to identify unintended consequences or biases arising from use of ICTs in evaluation — What happens when you introduce tablets for data collection? What happens when you collect GPS information on your beneficiaries? What risks might you be introducing or how might people react to you when you are carrying around some kind of device?
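The selection-bias point in the first bullet is easy to demonstrate with simulated data: if phone ownership rises with income, a phone-based survey sample skews richer than the population it claims to represent. This sketch uses an invented population model, not figures from any real survey:

```python
import random

random.seed(42)

# Invented population: income in dollars/day, uniform between 1 and 10.
# Richer people are more likely to own a phone, so a phone-based
# sample is not a random sample of the population.
population = [random.uniform(1, 10) for _ in range(10_000)]
has_phone = [random.random() < inc / 10 for inc in population]

true_mean = sum(population) / len(population)
surveyed = [inc for inc, p in zip(population, has_phone) if p]
survey_mean = sum(surveyed) / len(surveyed)

bias = survey_mean - true_mean  # positive: the mobile survey skews richer
```

No amount of extra mobile respondents fixes this; the bias comes from who can be reached at all, which is why the paper’s tips stress combining mobile surveys with other methods.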

3. Ethical and Legal

This is an area that I’m very interested in — especially as some donors have started asking for the raw data sets from any research, studies or evaluations that they are funding, and when these kinds of data sets are ‘opened’ there are all sorts of ramifications. There is quite a lot of heated discussion happening here. I was happy to see that DFID has just conducted a review of ethics in evaluation. Some of the core issues include:

  • Changing nature of privacy risks – issues here include privacy and protection of data; changing informed consent needs for digital data/open data; new risks of data leaks; and lack of institutional policies with regard to digital data.
  • Data rights and ownership – issues here include proprietary data sets; data ownership in public-private partnerships; the idea of ‘data philanthropy’ when it’s not clear whose data is being donated; personal data ‘for the public good’; open data, open evaluation, and transparency; poor care taken when vulnerable people provide personally identifiable information; household data sets ending up in the hands of those who might abuse them; and the increasing impossibility of data anonymization, given that crossing data sets often means re-identification is easier than imagined.
  • Moving decisions and interpretation of data away from ‘the ground’ and upwards to the head office/the donor.
  • Little funding for trialing/testing the validity of new approaches that use ICTs and documenting what is working/not working/where/why/how to develop good practice for new ICTs in evaluation approaches.

Recommendations: 12 tips for better use of ICTs in M&E

Despite the rapid changes in the field in the two years since we first wrote our initial paper on ICTs in M&E, most of our tips for doing it better still hold true.

  1. Start with a high-quality M&E plan (not with the tech).
    • But also learn about the new tech-related possibilities that are out there so that you’re not missing out on something useful!
  2. Ensure design validity.
  3. Determine whether and how new ICTs can add value to your M&E plan.
    • It can be useful to bring in a trusted tech expert in this early phase so that you can find out if what you’re thinking is possible and affordable – but don’t let them talk you into something that’s not right for the evaluation purpose and design.
  4. Select or assemble the right combination of ICT and M&E tools.
    • You may find one off the shelf, or you may need to adapt or build one. This is a really tough decision, which can take a very long time if you’re not careful!
  5. Adapt and test the process with different audiences and stakeholders.
  6. Be aware of different levels of access and inclusion.
  7. Understand motivation to participate, incentivize in careful ways.
    • This includes motivation for both program participants and for organizations where a new tech-enabled tool/process might be resisted.
  8. Review/ensure privacy and protection measures, risk analysis.
  9. Try to identify unintended consequences of using ICTs in the evaluation.
  10. Build in ways for the ICT-enabled evaluation process to strengthen local capacity.
  11. Measure what matters – not what a cool ICT tool allows you to measure.
  12. Use and share the evaluation learnings effectively, including through social media.
