Western perspectives on technology tend to dominate the media, despite the fact that technology's impacts on people's lives are nuanced, diverse, and contextually specific. At our March 8 Technology Salon NYC (hosted by Thoughtworks), we discussed how structural issues in journalism and technology lead to narrowed perspectives and reduced nuance in technology reporting.

Joining the discussion were folks from for-profit and non-profit US-based media houses with global reporting remits, including: Nabiha Syed, CEO, The Markup; Tekendra Parmar, Tech Features Editor, Business Insider; Andrew Deck, Reporter, Rest of World; and Vittoria Elliot, Reporter, WIRED. Salon participants working for other media outlets and in adjacent fields contributed to our discussion as well.

Power dynamics are at the center. English-language technology media establishments tend to report as if tech stories begin and end in Silicon Valley. This affects who the media talks and listens to, what stories are found and who is doing the finding, which angles and perspectives are centered, and who decides what is published. As one Salon participant said, "we came to the Salon for a conversation about tech journalism, but bigger issues are coming up. This is telling, because, no matter what type of journalism you're doing, you're reckoning with wider systemic issues in journalism… [like] how we pay for it, who the audiences are, how we shift the sense of who we're reporting for, and all the existential questions in journalism."

Some media outlets are making an intentional effort to better ground stories in place, cultural context, political context, and non-Western markets in order to challenge certain assumptions and biases in Silicon Valley. Their work aims to bring non-US-centric stories to a wider general audience in the US and abroad and to enter the media diet of Silicon Valley itself to change perspectives and expand world views using narrative, character, and storytelling that is not laced with US biases.

Challenges remain with building global audiences, however. Most publications have only a handful of people focusing on stories outside of their headquarters country. Yet "in addition to getting the stories – you also have to build global and local networks so that the stories get distributed," as one person said. US media outlets don't often invest in building relationships with local influencers and policy makers who could help to spread a story, react, or act on it. This can mean there is little impact and low readership, leading to decision makers at media outlets saying, "See, we didn't have good metrics; those kinds of stories don't perform well." This is not only the case for journalism in the US. An Indian reader may not be interested in reading about the Philippines and vice versa. So, almost every story needs a different conceptualization of audience, which is difficult for publications to afford and achieve.

Ad-revenue business models are part of the problem. While the vision of a global audience with wide perspectives and nuance is lofty, the practicalities of implementation make it difficult. Business models based on ad revenue (clicks, likes, time spent on a page) tend to reinforce status quo content at the cost of excluding non-Western voices and other marginalized users of technology. Moving to alternative ways to measure impact can be hard for editors who have been working in the for-profit industry for several years. Even in non-profit media, "there is a shadow cast from these old metrics…. Donors will say, 'okay, great, wonderful story, super glad that there was a regulatory change… but how many people saw it?' And so there's a lot of education that needs to happen."

Identifying new approaches and metrics. Some Salon participants are looking at how to get beyond clicks to measure impact and journalism’s contribution to change without committing the sin of centering the story on the journalist. Some teams are testing “impact meetings,” with the reporting team looking at “who has power – Consumers? Regulators? Legislators? Civil society? Mapping that out, and figuring out what form the information needs to be in to get into audiences’ hands and heads… Cartoons? Instagram? An academic conversation? We identify who in the room has some power, get something into their hands, and then they do all the work.”

Another person talked about creating Listening Circles to develop participatory and grounded narratives that will have greater impact. In this case, journalists convene groups of experts and people with lived experiences on a particular topic to learn who are the power brokers, what key topics need to be raised, what is the media covering too much or too little of, and what stories or perspectives are missing from this coverage. This is similar to how a journalist normally works — talking with sources — except that the sources are in a group together and can sharpen each other’s ideas. In this sense, media works as a convener to better understand the issue and themes. It makes space for smaller more grounded organizations to join the conversations. It also helps media outlets identify key influencers and involve them from the start so that they are more interested in sharing the story when it’s ready to go. This can help catalyze ongoing movement on the theme or topic among these organizations.

These approaches resemble the advocacy, community development, communication for development, and social and behavior change communication approaches used in the development sector, since they include an entryway, a plan for inclusion from the start, an off-ramp and handover, and an understanding that the media agency is not the center of the story but can feed extra energy into a topic to help it move forward.

The difference between journalism and advocacy has emerged as a concern as traditional approaches to reporting change. Participatory work is often viewed as being less “objective” and more like advocacy. “Should journalists be advocates or not?” is a key question. Yet, as noted during the Salon discussion, journalists have always interrogated the actions of powerful people – e.g., the Elon Musks of the world. “If we’re going to interrogate power, then it’s not a huge jump to say we want to inform people about the power they already have, and all we’re doing is being intentional about getting this information to where it needs to go,” one person commented.

Another Salon participant agreed: "If you break a story about a corrupt politician, you expect that corrupt politician to be hauled before whatever institutions exist or for them to lose their job. No one is hand-wringing there about whether we've done our jobs well, right? It is when we start to take an active interest in areas that are considered outside of traditional media, when you move from politics and the economy to technology or gender or any of these other areas considered 'softer,' that there is a sense that you have shifted into activism and are less focused on hard-hitting journalism." Another participant said, "There's a real discomfort when activist organizations like our work… Even though the idea is that you're supposed to be creating impact, you're not supposed to want that activist label."

Identity and objectivity came up in the discussion as well. “The people who are most precious about whether we are objective tend to be a cohort at the intersection of gender, race, and class. Upper middle class white guys are the ones who can go anywhere in the world and report any story and are still ‘objective’. But if you try and think about other communities reporting on themselves or working in different ways, the question is always, ‘wait, how can that be done objectively?’”

A 2022 Pew Research Center poll found that, overall, 76% of journalists in the US are white and 51% are male. Looking at specific beats, 60% of political reporters and 58% of tech journalists are men, and 77% of science and tech reporters are white, 7% Asian, 3% Black, and 3% Hispanic. Some Salon participants pointed out that this is a human resource and hiring problem that derives from structural issues both in journalism and the wider world. In tech reporting and the media space in general, those who tend to be hired are English-speaking, highly educated, upper or upper-middle-class people from a major metropolitan area in their country. There are very few media outlets that bring in other perspectives.

Salon participants pointed to these statistics and noted that white, US-born journalists are considered able to "objectively" report on any story in any part of the world. They can "parachute in and cover anything they want." Yet non-white, non-US-born, and/or queer journalists are either shoehorned into being experts on their own race, gender, sexual orientation, ethnicity, or national identity, or seen as unable to be objective because of their identities. "If you're an English speaking, educated person from the motherland, [it's assumed that] your responsibility is to tell the story of your people."

In addition, US discourse flattens nuance in racism, classism, and other equity issues. Because the US is in an era of diversity, said one Salon participant, media outlets think it's enough to find a Brown person and put them in leadership. They don't often look at other issues like race, class, caste, or colorism, or how those play out within communities of color. "You also have to ask the question of, okay, which people from this place have the resources, the access to get the kind of education that makes them the people that institutions rely on to tell the stories of an entire country or region. How does the system reinforce, again, that internal class dynamic or that broader class and racial dynamic, even as it's counting for 'diversity' on the internal side."

Waiting for harm to happen. Another challenge raised with tech reporting is the tendency to wait until something terrible happens before a story or issue is covered. News outlets wait until a problem is acute and then write an article and say "look over here, this is happening, isn't that awful, someone should do something," as one Salon participant said. The mandate tends to be to "wait until harm is bad enough to be visible before reporting" rather than reducing or mitigating harm. "With technology, the speed of change is so rapid – there needs to be something beyond the horse-race journalism of 'here's some investment, here's a new technology, here's a hot take and here's why that matters.' There needs to be something more meaningful than that."

Newsworthiness is sometimes weaponized to kill reporting on marginalized communities, said one person. Pitches are informed by the subjectivity and lived experiences of senior editors who may not have a nuanced understanding of how technologies and related issues affect queer communities and/or people of color. Reporters often have to find an additional "hook" to get approval to run a story about these groups or populations because the story itself is not considered newsworthy enough. The hook will often be something that ties it back to Silicon Valley — for example, a story deemed "not newsworthy" might suddenly become important when it can be linked to something that a powerful person in tech does. Reporters have to be creative to get buy-in for international stories whose importance is not fully grasped by editors; for example, by pitching how a story will bring in subscriptions, traffic, or an award, or by running a US-focused story that does well, and then pitching the international version of the story.

Reporting on structural challenges in tech. Media absolutely helps bring issues to the forefront, said one Salon participant, and there are lots of great examples recently of dynamic investigative reporting and layered, nuanced storytelling. It remains difficult, however, to report on structural issues or infrastructure. Many of the harms that happen due to technology need to be resolved at the policy, regulatory, or structural level. “This is the ‘boring’ part of the story, but it’s where everything is getting cemented in terms of what technology can do and what harms will result.”

One media outlet tackled this by conducting research to show structural barriers to equity in technology access. A project measured broadband speeds in different parts of cities across the US during COVID to show how inequalities in bandwidth affected people’s access to jobs, income and services. The team joined up with other media groups and shared the data so that it could reach different audiences through a variety of story lines, some national and some local.

The field is shifting, as one Salon participant concluded, and it's all about owning the moment. "You must own the choices that you're making…. I do not care if this thing called journalism and these people called journalists continue to exist in the way that they do now… We must rediscover the role of the storyteller who keeps us alive and gives meaning to our societies. This model [of journalism] was not built for someone like me to engage in it fully, to see myself reflected in it fully. Institutional journalism was not made for many of the people in this room. It was not made for us to imagine that we are leaders in it, bearers of it, creators of it, or anything other than just its subjects in some sort of 'National Geographic' way. And that means owning the moment that we're in and the opportunities it's bringing us."

Technology Salons run under the Chatham House Rule, so no attribution has been made in this post. If you'd like to join us for a Salon, sign up here. If you'd like to suggest a topic or provide funding support to Salons in NYC, please get in touch!

Last year I wrote a report for UNHCR that explores the potential of digital mental health and psycho-social support (MHPSS) for displaced and stateless adolescents. The key question was whether digital could help to safely expand MHPSS services to a population that is often at high risk due to life circumstances and contexts, yet remains largely under-served.

While it is possible that digital could provide some support (and in fact many young people already go online to find mental health support, especially from their peers), there is also a debate raging around whether social media and the online environment are key contributors to the adolescent mental health crisis. As usual, when you dig into a complex area like this one, nuance is important.

To unpack the topic, we started with the World Health Organization’s traditional MHPSS pyramid, which is used by most humanitarian organizations to frame their MHPSS work. We adapted the pyramid to consider how digital interventions might be safely and feasibly incorporated at the different layers. This presupposes that adolescents can safely access the digital environment so that they could take advantage of digital MHPSS services.

Figure 1. A revised mental health and psycho-social support (MHPSS) pyramid showing ways that digital interventions might enhance adolescent MHPSS. An underlying MHPSS approach that supports safe internet access and safe digital environments for adolescents is needed to enable these interventions.

The report summarizes existing evidence and insights from UNHCR staff working in several country operations to lay out the case and the caveats for digital MHPSS for forcibly displaced and stateless adolescents. We offer ideas on if, when, where, and how digital MHPSS might be explored as an option for reaching these adolescents. We also look at the risks of digital interventions, and explore contextual challenges with digital interventions for this population. This leads to a set of core insights into the key benefits of digital MHPSS at the different levels of the MHPSS Pyramid alongside the barriers, limitations, and risks.

We highlight good practices for designing and implementing digital MHPSS programming with forcibly displaced and stateless adolescents and make recommendations for further action by UNHCR at strategic, advocacy, policy, monitoring, evaluation, research, operational, and guidance levels. Rounding off the report is a checklist for practitioners to follow when designing and implementing digital MHPSS approaches and interventions.

Read the full report here or take a glance at the executive summary and let us know what you think!

As the world became more digital in the wake of COVID-19, the number of mobile applications and online services and support increased exponentially. Many of these apps offer important support to people who live and move in contexts where they are at risk. Digital apps for sensitive services (such as mental health, reproductive health, shelter and support for gender-based violence, and safe spaces for LGBTQI+ communities) can expose people to harm at the family, peer, and wider societal level if not designed carefully. This harm can be severe – for example, detention or death. Though people who habitually face risk have their own coping mechanisms, those designing digital apps and services also have a responsibility to mitigate harm.

At our March 8 Technology Salon NYC (hosted at Thoughtworks), we discussed how to create safe, private digital solutions for sensitive services. Joining were Gerda Binder, UNICEF's Oky Period Tracker App for Girls; Jonathan McKay, SameSame Collective; Stephanie Mikkelson, United Nations Population Fund; Tania Lee, Trestle; Jane Piercy, Reproductive Equity Now Foundation; and 25 others, making for a rich discussion on this critical topic!

Key Takeaways from the conversation

1. Do constant threat modeling. Threat modeling needs to include a wide range of potential challenges, including mis- and disinformation, hostile family and community members, shifting legal landscapes, and law enforcement tactics. The latter are especially important if you are working in environments where people are being persecuted by the government. Roughly 70 countries criminalize consensual same-sex activities and some forms of gender expression, most in Sub-Saharan Africa, for example. The US is placing ever greater legal restrictions on gender expression and identity and on reproductive rights, and laws differ from state to state, making the legal landscape highly complex. Hate groups are organizing online to perpetrate violence against women, girls, and LGBTQI+ people in many other parts of the world as well. In Egypt, police have used the dating app Grindr to entrap, arrest, and prosecute gay men. Similar tactics were used in the US to identify and 'out' gay priests. Since political and social contexts and the tactics of those who want to do harm change rapidly, ongoing threat modeling is critical. Your threat models will look different in each context and for each digital app.
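
For illustration only, a threat model can be kept as a simple register that the team revisits at every planning cycle. The actors, vectors, and mitigations in the sketch below are hypothetical examples rather than recommendations from the Salon; a real register would be drawn up with local experts and the communities involved.

    # Minimal, illustrative threat-modeling sketch. The actors, vectors, and
    # mitigations are invented examples; a real model is context-specific and
    # should be revisited as laws, politics, and tactics change.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Threat:
        actor: str                      # who could cause harm
        vector: str                     # how the harm could happen
        impact: str                     # what happens if it does
        mitigations: List[str] = field(default_factory=list)

    threat_register = [
        Threat("law enforcement", "device seizure and chat review", "arrest",
               ["store no message history", "offer a quick-delete option"]),
        Threat("family member", "shared phone, visible app icon", "outing or violence",
               ["disguised app icon", "PIN lock on the app"]),
        Threat("hostile online groups", "doxxing via scraped profiles", "harassment",
               ["no public profiles", "pseudonymous accounts"]),
    ]

    def review(register):
        """Print the register so the team can walk through it at each planning cycle."""
        for t in register:
            print(f"{t.actor}: {t.vector} -> {t.impact} | mitigations: {', '.join(t.mitigations)}")

    review(threat_register)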

2. Involve communities and other stakeholders and experts. Co-creation processes are vital for identifying what to design as well as how to design for safety and privacy. By working together with communities, you will have a much better idea of what they need and want, the various challenges they face to access and use a digital tool, and the kinds of risks and harms that need to be reduced through design and during implementation. For example, a lot of apps have emergency buttons designed to protect women, one Salon participant explained. These often alert the police; however, that might be exactly the wrong choice. "Women will tell you about their experiences with police as perpetrators of gender-based violence" (GBV). It's important to hire tech designers who identify with the groups you are designing for/with. Subject matter experts are key stakeholders, too. There are decades of experience working with groups who are at risk, so don't reinvent the wheel. Standards exist for how to work on themes like GBV, data protection, and other aspects of safe design of apps and digital services – use them!

3. Collect as little data as possible. Despite the value of data in measuring impact and use and helping to adapt interventions to meet the needs of the target population, collection of personal and sensitive data is extremely dangerous for people using these apps and for organizations providing the services. Data collected from individuals who explicitly or implicitly admit to same-sex activities or gender non-conforming behavior could, in theory, be used by their family and community as evidence in their persecution. Similarly, sexual activity and fertility data tracked in a period tracker could be used to ‘prove’ that a girl or woman is/was fertile or infertile, had sex, miscarried, or aborted — all of which can be a risk depending on the family, social, or legal context. Communication on sensitive topics increases the risk of prosecution because email, web searches, social media posts, text messages, voice messages, call logs, and anything that can be found on a phone or computer can be used as evidence. If a digital app or service can function without collecting data, then it should! For example, it’s not necessary to collect a person’s data to provide them with legal advice or to allow them to track their period.
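
As a minimal sketch of this principle (the file name and structure are illustrative assumptions, not drawn from any specific app), a period tracker can keep nothing but cycle dates, on the device, with no identity fields at all:

    # Illustrative data-minimization sketch: store only cycle start dates,
    # only on the user's device, with no name, phone number, or account ID.
    import json
    from datetime import date
    from pathlib import Path

    LOCAL_FILE = Path("cycle_log.json")   # hypothetical on-device location

    def log_cycle_start(start: date) -> None:
        entries = json.loads(LOCAL_FILE.read_text()) if LOCAL_FILE.exists() else []
        entries.append(start.isoformat())              # the date, and nothing else
        LOCAL_FILE.write_text(json.dumps(entries))

    def delete_all_data() -> None:
        """One call that erases everything the app knows (see also point 6 below)."""
        LOCAL_FILE.unlink(missing_ok=True)

    log_cycle_start(date.today())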

4. Be thoughtful about where data is stored. When using third party apps to help manage a digital solution, it’s important to know exactly what data is stored, whether the data can be deleted, and whether it can be subpoenaed. Also consider that if an app or third-party data processor is sold to another company, the data they store will likely be sold along with the app, and the policies related to data might change.

While sometimes it is safer to store data on an individual's device, in other cases it might be safer for data to live in the cloud and/or in a different country. This will depend on the threat landscape and actors. You'll also want to review data privacy regulations for the countries where you are based, where the data is stored, and where your target end users live. All of these regulations may need to be complied with depending on where data is collected, processed, and stored. Some countries have "data sovereignty laws" that dictate that data must reside in the country where it was collected. Some governments have even drafted laws that require that the government have access to this data. Others have so-called "hostage" laws that require digital platforms to maintain at least one employee in the country. These employees have been harassed by governments who push them to comply with certain types of censorship or surrender data from their digital platforms. If government is your main threat actor, you might need to decide whether non-compliance with data laws is a risk that you are willing to take.

5. Improve consent processes and transparency. Consent cannot be conceived of as a one-time, one-off process, because circumstances change and so does consent. Generally, digital platforms do a terrible job of telling people what happens to their data and informing them of the possible risks to their privacy and safety. It's complicated to explain where data goes and what happens to it, but we all need to do better with consent and transparency. Engaging the people who will use your app in designing a good process is one way to help develop easy-to-understand language and explanations.

6. Help people protect themselves. Add content to your website, app, or bot that helps people learn how to adjust their privacy settings, understand the risks of using your service, and protect themselves while doing so. Features mentioned by Salon participants include: letting people disguise the apps they are using; letting them quickly delete their data and/or the app itself; masking or 'forgetting' phone numbers so that the number won't appear in the contact list and so that text message content won't repopulate if the number is used again to send a text; and using different phone numbers for the organization's website and for outreach so that the numbers are harder to trace back to the organization or a service.
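
One way to 'forget' a number while still recognizing a returning user is to keep only a salted pseudonym derived from it. The sketch below is a hypothetical illustration (the environment variable and names are assumptions), and phone numbers are short enough that hashing reduces, rather than eliminates, the risk of recovery.

    # Illustrative sketch: keep a pseudonym derived from a phone number so the
    # raw number never sits in contact lists or message logs. Phone numbers are
    # short, so a salted hash only reduces (not eliminates) the risk of recovery.
    import hashlib
    import os

    SALT = os.environ.get("NUMBER_SALT", "change-me")   # hypothetical per-deployment secret

    def masked_number(phone: str) -> str:
        """Return a stable pseudonym for a phone number without storing the number itself."""
        return hashlib.sha256((SALT + phone).encode()).hexdigest()[:16]

    # The service can match a returning user by pseudonym, but the stored value
    # is not a dialable number and does not repopulate message threads.
    print(masked_number("+15551234567"))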

7. Plan for the end of your project and/or funding. It’s important to plan for how you will safely delete all your data and any data held by third parties at the end of your funding cycle if the app or service is discontinued. In addition, you’ll need to think about what happens to the people who relied on your service. Will you leave them high and dry? Some organizations think of this as an “off ramp” and recommend that you plan for the end of the effort from the very beginning.

8. Take care of your staff. Ensure that you have enough staff capacity to respond to any incoming requests or needs of the people your service targets. Additionally, keep staff safe from harm. Countries like Hungary, Russia, and Indonesia have laws that make the provision of educational material related to LGBTQI+ identities challenging, especially to minors. Similarly, some countries and some US states prohibit any type of counseling related to abortion or gender-affirming care. This poses a risk to organizations that establish legal entities and employ people in these countries and states, and to their staff. It's critical to ensure that you have enough resources to keep staff safe. You will also want to be sure to provide support for them to avoid high levels of burnout and to deal with any vicarious trauma. Keeping staff safe and healthy is not only good for them, but also for your service, because better morale will mean higher quality support services.

9. Accept that there will be trade-offs. Password protected apps are more secure, but they can pose higher barriers to use because they introduce friction. If your app doesn’t collect personal data it will be safer, but it will be more difficult to offer a password reset or recovery options, which is a usability challenge, especially in places where people have lower literacy and less experience using apps and passwords. When data is stored locally, it’s less susceptible to large scale data mining, however it might be more at risk of a family member or law enforcement forcing the data to be shared, and if a device is lost or broken, the data will be lost.

Large platforms may be more prone to commercial privacy risks, yet in some ways they provide greater data security. As one person said, “We decided to just go with WhatsApp because we could never develop a platform as secure as theirs – we simply don’t have the engineering power that they do.” Another person mentioned that they offer a Signal option (which is encrypted) for private messaging but that many people do not use Signal and prefer to communicate through platforms they already use. These more popular platforms are less secure, so the organization had to find other ways to set protective parameters for people who use them. Some organizations have decided that, despite the legal challenges it might bring, they simply will not hand over data to law enforcement. To prevent this situation from happening, they have only set up legal entities in countries where human rights protections for the populations they serve are strong. You’ll want to carefully discuss all these different privacy and usability choices, including with potential end users, to come to the best decision for each app or service.

Additional resources on this topic include:

Technology Salons run under the Chatham House Rule, so no attribution has been made in this post. If you'd like to join us for a Salon, sign up here. If you'd like to suggest a topic or provide funding support to Salons in NYC, please get in touch!

Modified from the original, posted on the MERL Tech Blog, July 20, 2020

For the past six years, I’ve been organizing the MERL Tech conference and related activities. We cancelled this year’s conference (planned for Johannesburg in September) because of coronavirus, but plenty has been happening despite the fact that we can’t gather in person.

One project I’m happy to launch today is the State of the Field of MERL Tech research, which pulls together lessons from five years of convening hundreds of monitoring, evaluation, research, and learning (MERL) and technology practitioners who have joined us as part of the MERL Tech community.

These four new papers build on research that Michael Bamberger and I co-authored in 2014, which aimed to set the stage and begin framing this (then) emerging field. For this latest research, we started by examining the evolution of the field since 2014 and plotting three waves of MERL Tech (as described below) onto Gartner’s Hype Cycle. Each of the waves is explored further in its own paper.

Three waves of MERL Tech explored in the State of the Field series.

Now is a good time to take stock of the past, given that 2020 marks a turning point in many ways. The world is in the midst of the COVID-19 pandemic, and there is an urgent need to know what is happening, where, and to what extent. Data is a critical piece of the COVID-19 response — it can mean the difference between life and death — but data collection, use, and sharing can also invade privacy or cause harm now or in the future. As technology use grows due to stay-at-home orders and a push for “remote monitoring” and “remote program delivery” so, too, does the amount of data captured and shared.

At the same time, we’re witnessing (and I hope, also joining in with) a global call for justice — perhaps a tipping point — in the wake of decades of racist and colonialist systems that operate at the level of nations, institutions, organizations, global aid and development, and the tech sector. There is no denying that these power dynamics and systems have shaped the MERL space as a whole, including the MERL Tech space.

Moments of crisis test a field, and we live in extreme times. The coming decade will demand a nimble, adaptive, fair, and just use of data for managing complexity and for gaining longer-term understanding of change and impact. The sector, its relationships, and its power dynamics will need a fundamental re-shaping.

It is in this time of upheaval and change that we are releasing four papers covering the field from 2014-2019 as a launchpad for thinking about the future of MERL Tech. In September 2018, the papers' authors began reviewing the past five years of MERL Tech events to identify lessons, trends, and issues in this rapidly changing field. They also reviewed the literature base in an effort to determine what we know about technology in MERL, what we still need to understand, and where the gaps in the formal literature are. No longer is this a nascent field, yet it is one that is hard to keep up with, due to its fast pace and constant shifts. We have learned many lessons over the past five years, but complex political, technical, and ethical questions remain.

Can the wider MERL Tech community take action to make the next phase of MERL Tech development effective, responsible, ethical, just, and equitable? We share these papers as conversation pieces and hope they will generate more discussion in the MERL Tech space about where to go from here.

The State of the Field series includes four papers:

MERL Tech State of the Field: The Evolution of MERL Tech: Linda Raftree, independent consultant and MERL Tech Conference organizer.

What We Know About Traditional MERL Tech: Insights from a Scoping Review: Zach Tilton, Michael Harnar, and Michele Behr, Western Michigan University; Soham Banerji and Manon McGuigan, independent consultants; and Paul Perrin, Gretchen Bruening, John Gordley and Hannah Foster, University of Notre Dame; Linda Raftree, independent consultant and MERL Tech Conference organizer.

Big Data to Data Science: Moving from "What" to "How" in the MERL Tech Space: Kecia Bertermann, Luminate; Alexandra Robinson, Threshold.World; Michael Bamberger, independent consultant; Grace Lyn Higdon, Institute of Development Studies; Linda Raftree, independent consultant and MERL Tech Conference organizer.

Emerging Technologies and Approaches in Monitoring, Evaluation, Research, and Learning for International Development Programs: Kerry Bruce and Joris Vandelanotte, Clear Outcomes; and Valentine Gandhi, The Development CAFE and Social Impact.

 

(Reposting, original appears here)

Back in 2014, the humanitarian and development sectors were in the heyday of excitement over innovation and Information and Communication Technologies for Development (ICT4D). The role of ICTs specifically for monitoring, evaluation, research and learning (aka “MERL Tech“) had not been systematized (as far as I know), and it was unclear whether there actually was “a field.” I had the privilege of writing a discussion paper with Michael Bamberger to explore how and why new technologies were being tested and used in the different steps of a traditional planning, monitoring and evaluation cycle. (See graphic 1 below, from our paper).

The approaches highlighted in 2014 focused on mobile phones, for example: text messages (SMS), mobile data gathering, use of mobiles for photos and recording, and mapping with handheld global positioning system (GPS) devices or GPS installed in mobile phones. Promising technologies included tablets, which were only beginning to be used for M&E; "the cloud," which enabled easier updating of software and applications; remote sensing and satellite imagery; dashboards; and online software that helped evaluators do their work more easily. Social media was also really taking off in 2014. It was seen as a potential way to monitor discussions among program participants and gather feedback from them, and was considered an underutilized tool for greater dissemination of evaluation results and learning. Real-time data, big data, and feedback loops were emerging as ways to improve program monitoring and enable quicker adaptation.

In our paper, we outlined five main challenges for the use of ICTs for M&E: selectivity bias; technology- or tool-driven M&E processes; over-reliance on digital data and remotely collected data; low institutional capacity and resistance to change; and privacy and protection. We also suggested key areas to consider when integrating ICTs into M&E: quality M&E planning; design validity; value-add (or not) of ICTs; using the right combination of tools; adapting and testing new processes before roll-out; technology access and inclusion; motivation to use ICTs; privacy and protection; unintended consequences; local capacity; measuring what matters (not just what the tech allows you to measure); and effectively using and sharing M&E information and learning.

We concluded that:

  • The field of ICTs in M&E is emerging and activity is happening at multiple levels and with a wide range of tools and approaches and actors. 
  • The field needs more documentation on the utility and impact of ICTs for M&E. 
  • Pressure to show impact may open up space for testing new M&E approaches. 
  • A number of pitfalls need to be avoided when designing an evaluation plan that involves ICTs. 
  • Investment in the development, application and evaluation of new M&E methods could help evaluators and organizations adapt their approaches throughout the entire program cycle, making them more flexible and adjusted to the complex environments in which development initiatives and M&E take place.

Where are we now: MERL Tech in 2019

Much has happened globally over the past five years in the wider field of technology, communications, infrastructure, and society, and these changes have influenced the MERL Tech space. Our 2014 focus on basic mobile phones, SMS, mobile surveys, mapping, and crowdsourcing might now appear quaint, considering that worldwide access to smartphones and the Internet has expanded beyond the expectations of many. We know that access is not evenly distributed, but the fact that more and more people are getting online cannot be disputed. Some MERL practitioners are using advanced artificial intelligence, machine learning, biometrics, and sentiment analysis in their work. And as smartphone and Internet use continue to grow, more data will be produced by people around the world. The way that MERL practitioners access and use data will likely continue to shift, and the composition of MERL teams and their required skillsets will also change.

The excitement over innovation and new technologies seen in 2014 could also be seen as naive, however, considering some of the negative consequences that have emerged, for example, social media-inspired violence (such as that in Myanmar), election and political interference through the Internet, misinformation and disinformation, and the race to the bottom through the online "gig economy."

In this changing context, a team of MERL Tech practitioners (both enthusiasts and skeptics) embarked on a second round of research in order to try to provide an updated “State of the Field” for MERL Tech that looks at changes in the space between 2014 and 2019.

Based on MERL Tech conferences and wider conversations in the MERL Tech space, we identified three general waves of technology emergence in MERL:

  • First wave: Tech for Traditional MERL: Use of technology (including mobile phones, satellites, and increasingly sophisticated databases) to do 'what we've always done,' with a focus on digital data collection and management. For these uses of "MERL Tech" there is a growing evidence base.
  • Second wave: Big Data: Exploration of big data and data science for MERL purposes. While plenty has been written about big data for other sectors, the literature on the use of big data and data science for MERL is somewhat limited, and it is more focused on potential than actual use.
  • Third wave: Emerging approaches: Technologies and approaches that generate new sources and forms of data; offer different modalities of data collection; provide ways to store and organize data; and provide new techniques for data processing and analysis. The potential of these has been explored, but there seems to be little evidence base on their actual use for MERL.

We’ll be doing a few sessions at the American Evaluation Association conference this week to share what we’ve been finding in our research. Please join us if you’ll be attending the conference!

Session Details:

Thursday, Nov 14, 2.45-3.30pm: Room CC101D

Friday, Nov 15, 3.30-4.15pm: Room CC101D

Saturday, Nov 16, 10.15-11am: Room CC200DE

Over the past few months, I’ve been working with CARE to develop a Responsible Data Maturity Model. This “RDMM” joins a growing set of tools (created by a wide variety of organizations) aimed at supporting organizations to move towards more responsible data management.

Responsible Data is a concept developed by the Responsible Data Forum. It outlines the collective duty to prioritize and respond to the ethical, legal, social and privacy-related challenges that come from using data. Responsible Data encompasses a variety of issues which are sometimes thought about separately, like data privacy and data protection, or ethical challenges. For any of these to be truly addressed, they need to be considered together.

CARE’s model identifies five levels of Responsible Data maturity:

  • Unaware: when an organization has not thought about Responsible Data much at all.
  • Ad-Hoc: when some staff or teams are raising the issue or doing something on their own, but there is no institutionalization of Responsible Data.
  • Developing: when there is some awareness, but the organization is only beginning to put policy, guidelines, procedures and governance in place.
  • Mastering: when the organization has its own house in order and is supporting its partners to do the same.
  • Leading: when the organization is looked to as a Responsible Data leader amongst its peers, setting an example of good practice, and influencing the wider field. Ideally an organization would be close to ‘mastering’ before placing itself in the ‘leading’ stage.

The main audience for the RDMM is the point person who is tasked with moving an organization or team forward to improve data practices and data ethics. The model can be adapted and used in ways that are appropriate for other team members who do not have Responsible Data as their main, day-to-day focus.

There are multiple other uses for the RDMM, however, for example:

  • As a diagnostic or baseline and planning tool for organizations to see where they are now, where they would like to be in 3 or 5 years, and where they need to put more support/resources.
  • As an audit framework for Responsible Data.
  • As a retroactive, after-action assessment or case study tool for looking at a particular program, seeing which Responsible Data elements were in place and contributed to good data practices, and then developing a case study to highlight good practices and gaps.
  • As a tool for evaluation when looking at a baseline/end-line for organizational approaches to Responsible Data.
  • In workshops as a participatory self-assessment tool to 1) help people see that moving towards a more responsible data approach is incremental and 2) to identify what a possible ideal state might look like. The tool can be adapted to what an organization sees as its ideal future state.
  • To help management understand and budget for a more responsible data approach.
  • With an adapted context, “persona,” or work stream approach that helps identify what Responsible Data maturity might look like for a particular project or program or for a particular role within a team or organization. For example, for headquarters versus for a country office, for the board versus for frontline implementers. It could also help organizations identify what parts of Responsible Data different positions or teams should be concerned with and accountable for.
  • As an investment roadmap for headquarters, leadership or donors to get a sense of what is the necessary investment to reach Responsible Data maturity.
  • As an iterative pathway to action, and a way to establish indicators or markers to mainstream Responsible Data throughout an organization.
  • In any other way you might think of! The RDMM is published with a Creative Commons License that allows you to modify and adapt it to suit your needs.

Over the past few months, we've tested the model with teams at headquarters, country offices, in mixed teams of people from different offices in one organization, and with groups from different organizations. We asked them to go through the different areas of the model and self-assess at which level they place themselves currently and which level they would like to achieve within a set time frame, for example 3 or 5 years. Then we worked with them to develop action points that would allow them to arrive at the desired level.
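
As a rough illustration of that exercise (the area names, current levels, and targets below are invented for the example, not CARE's actual assessment categories), the gap between current and desired maturity can be laid out per area so that action points can be assigned:

    # Hypothetical self-assessment sketch for a maturity-model exercise:
    # each area gets a current and a target level, and the gap shows where
    # action points are needed (area names and scores are invented).
    LEVELS = ["Unaware", "Ad-Hoc", "Developing", "Mastering", "Leading"]

    assessment = {
        # area: (current level, target level in 3-5 years)
        "Policies and governance": ("Ad-Hoc", "Developing"),
        "Staff capacity and training": ("Unaware", "Developing"),
        "Working with partners": ("Developing", "Mastering"),
    }

    for area, (current, target) in assessment.items():
        gap = LEVELS.index(target) - LEVELS.index(current)
        print(f"{area}: {current} -> {target} ({gap} level(s) to close)")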

Teams found the exercise useful because:

  • It allowed them to break Responsible Data into discrete pieces that could be assigned to different parts of an organization or different members of a team.
  • It helped to lay out indicators or “markers” related to Responsible Data that could be integrated throughout an organization.
  • It allowed both teams and management to see that Responsible Data is a marathon, not a sprint, and will require that multiple work streams are addressed over time with the involvement of different skill sets and different parts of the organization (strategy, operations and IT, legal, programs, M&E, innovations, HR, fundraising and partnerships, etc.).
  • It helped teams with limited resources to see how to make incremental steps forward without feeling pressured to make Responsible Data their only focus.

We hope others will find the RDMM useful as well! It's published under a Creative Commons license, so feel free to use it and adapt it in ways that will suit your needs.

We’re in the process of translating it into French and Spanish. We’d love to know if you use it, how, and if it is helpful to you! Please get in touch with me for more information.

Download the Responsible Data Maturity Model as a Word file.

Download the Responsible Data Maturity Model as a PDF.

On Thursday September 19, we gathered at the OSF offices for the Technology Salon on "Automated Decision Making in Aid: What could possibly go wrong?" with lead discussants Jon Truong and Elyse Voegeli, two of the creators of Automating NYC; and Genevieve Fried and Varoon Mathur, Fellows at the AI Now Institute at NYU.

To start off, we asked participants whether they were optimistic or skeptical about the role of Automated Decision-making Systems (ADS) in the aid space. The response was mixed: about half skeptics and half optimists, most of whom qualified their optimism as “cautious optimism” or “it depends on who I’m talking to” or “it depends on the day and the headlines” or “if we can get the data, governance, and device standards in place.”

What are ADS?

Our next task was to define ADS. (One reason that the New York City ADS task force was unable to advance is that its members were unable to agree on the definition of an ADS).

One discussant explained that NYC’s provisional definition was something akin to:

  • Any system that uses data, algorithms, or computer programs to replace or assist a human decision-making process.

This may seem straightforward, yet, as she explained, “if you go too broad you might include something like ‘spellcheck’ which feels like overkill. On the other hand, spellcheck is a good case for considering how complex things can get. What if spellcheck only recognized Western names? That would be an example of encoding bias into the ADS. However, the degree of harm that could come from spellcheck as compared to using ADS for predictive policing is very different. Defining ADS is complex.”

Another element of the definition is that an ADS involves the computational implementation of an algorithm. Algorithms are basically clear instructions or criteria followed in order to make a decision, and they can be manual; an ADS adds the power of computation, noted another discussant. Perhaps the definition should also include a computer, a complex system, and a decision-making point or cut-off; for example, an algorithm that determines who gets a loan. It is also important to consider statistical modeling and forecasting, which allow for prediction.
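
To make the "criteria plus computation plus cut-off" idea concrete, here is a deliberately toy sketch; the inputs, weights, and threshold are invented for illustration and are not from the Salon or any real system.

    # Toy ADS sketch: explicit criteria, a computational implementation, and a
    # decision cut-off. Every input, weight, and threshold here is invented, and
    # each one is a value judgment of the kind discussed below.
    def loan_decision(income: float, existing_debt: float, years_employed: int) -> str:
        debt_ratio = existing_debt / max(income, 1)
        score = (income / 1000) - (debt_ratio * 50) + (years_employed * 2)
        return "approve" if score >= 40 else "deny"   # the cut-off point

    print(loan_decision(income=45_000, existing_debt=12_000, years_employed=3))

Even in this toy form, the choice of inputs and the placement of the threshold encode assumptions about who should be approved, which is exactly where the biases discussed below enter.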

Using data and criteria for making decisions is nothing new, and it’s often done without specific systems or computers. People make plenty of very bad decisions without computers, and the addition of computers and algorithms is sometimes considered a more objective approach, because instructions can be set and run by a computer.

Why are there issues with ADS?

In practice things are not as clear cut as they might seem, explained one of our discussants. We live in a world where people are treated differently because of their demographic identity, and curation of data can represent some populations over others or misrepresent certain populations because of how they have been treated historically. These current and historic biases make their way into the algorithms, which are created by humans, and this encodes human biases into an ADS. When feeding existing data into a computer so that it can learn, we bring our historical biases into decision-making. The data we feed into an ADS may not reflect changing demographics or shifts in the data, and algorithms may not reflect ongoing institutional policy changes.

As another person said, “systems are touted as being neutral, but they are subject to human fallacies. We live in a world that is full of injustice, and that is reflected in a data set or in an algorithm. The speed of the system, once it’s computerized, replicates injustices more quickly and at greater scale.” When people or institutions believe that the involvement of a computer means the system is neutral, we have a problem. “We need to take ADS with a grain of salt, similar to how we tell children not to believe everything they see on the Internet.”

Many people are unaware of how an algorithm works. Yet over time, we tend to rely on algorithms and believe in them as unbiased truth. When ADS are not monitored, tested, and updated, this becomes problematic. ADS can begin to make decisions for people rather than supporting people in making decisions, and this can go very wrong, for example when decisions are unquestioningly made based on statistical forecasting models.

Are there ways to curb these issues with ADS?

Consistent monitoring. ADS should be monitored constantly over time by humans. One Salon participant suggested setting up checkpoints in the decision-making process to alert humans that something is amiss. Another suggested that research and proof of concept are critical. For example, running the existing human-only system alongside the ADS and comparing the decisions over time helps to flag differences that can then be examined to see which of the processes is working better, and to adjust or discontinue the ADS if it is incorrect. (In some cases, this process may actually flag biases in the human system.) Random checks can be set up, as can control situations where some decisions are made without using an ADS, so that results can be compared between the two.
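
A minimal sketch of that parallel-run idea (the case IDs and decisions are made up): log both sets of decisions and surface every disagreement for human review.

    # Illustrative parallel-run sketch: compare human and ADS decisions on the
    # same cases and flag disagreements for review (all data here is invented).
    human_decisions = {"case-001": "approve", "case-002": "deny", "case-003": "approve"}
    ads_decisions = {"case-001": "approve", "case-002": "approve", "case-003": "deny"}

    disagreements = [case for case, decision in human_decisions.items()
                     if decision != ads_decisions.get(case)]

    print(f"{len(disagreements)} of {len(human_decisions)} cases need review: {disagreements}")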

Recourse and redress. There should be simple and accessible ways for people affected by ADS to raise issues and make complaints. All ADS can make mistakes – there can be false positives (where an error points falsely to a match or the presence of a condition) and false negatives (where an error points to the absence of a match or a condition when indeed it is present). So there needs to be recourse for people affected by errors or in cases where biased data is leading to further discrimination or harm. Anyone creating an ADS needs to build in a way for mistakes to be managed and corrected.

Education and awareness. A person may not be aware that an ADS has affected them, and they likely won’t understand how an ADS works. Even people using ADS for decisions about others often forget that it’s an ADS deciding. This is similar to how people forget that their newsfeed on Facebook is based on their historical choices in content and their ‘likes’ and is not a neutral serving of objective content.

Improving the underlying data. Algorithms will only get better when there are constant feedback loops and new data that help the computer learn, said one Salon participant. Currently most algorithms are trained on highly biased samples that do not reflect marginalized groups and communities. For example, there is very little data about many of the people participating in or eligible for aid and development programs.

So we need proper data sets that are continually updated if we are to use ADS in aid work. This is a problem, however, if the data that is continually fed into the ADS remains biased. One person shared this example: if some communities are policed more because of race, economic status, etc., there will continually be more data showing that people in those communities are committing crimes. In whiter or wealthier communities, where there is less policing, fewer people are arrested. If we update our data continually without changing the fact that some communities are policed more than others (and thus will appear to have higher crime rates), we are simply creating a feedback loop that confirms our existing biases.
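
A toy simulation makes the loop visible (all numbers are invented): two communities have the same underlying rate of offending, but one starts out patrolled five times more, so it generates more arrest records, which are then used to justify keeping the patrols concentrated there.

    # Toy feedback-loop simulation: identical true rates, unequal policing.
    # Arrest counts drive next year's patrol allocation, so the initial
    # imbalance reproduces itself in the "data" year after year.
    import random
    random.seed(0)

    TRUE_RATE = 0.05                                  # same underlying rate everywhere
    patrols = {"community_a": 100, "community_b": 20}

    for year in range(5):
        arrests = {c: sum(random.random() < TRUE_RATE for _ in range(n))
                   for c, n in patrols.items()}
        total_arrests = sum(arrests.values()) or 1
        # Allocate next year's 120 patrols in proportion to recorded arrests.
        patrols = {c: max(10, round(120 * arrests[c] / total_arrests)) for c in patrols}
        print(year, arrests, patrols)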

Privacy concerns also enter the picture. We may want to avoid collecting data on race, gender, ethnicity, or economic status so that we don't expose people to discrimination, stigma, or harm. For example, in the case of humanitarian work or conflict zones, sensitive data can make people or groups a target for governments or unfriendly actors. However, it's hard to make decisions that benefit people if their data is missing. It ends up being a catch-22.

Transparency is another way to improve ADS. “In the aid sector, we never tell people how decisions are made, regardless of whether those are human or machine-made decisions,” said one Salon participant. When the underlying algorithm is obscured, it cannot be reviewed for value judgments. Some compared this to some of the current non-algorithmic decision-making processes in the aid system (which are also not transparent) and suggested that aid systems could get more intelligent if they began to surface their own specific biases.

The objectives of the ADS can be reviewed. Is the system used to further marginalize or discriminate against certain populations, or can this be turned on its head? asked one discussant. ADS could be used to try to determine which police officers might commit violence against civilians rather than to predict which people might commit a crime. (See the Algorithmic Justice League’s work). 

ADS in the aid system – limited to the powerful few?

Because of the underlying challenges with data in the aid sector (quality, standards, and sheer availability), using ADS there is still difficult. One area where data is available and where ADS are being built and used is supply chain management, for example at massive UN agencies like the World Food Program.

Some questioned whether this exacerbates concentration of power in these large agencies, running counter to agreed-upon sector goals to decentralize power and control to smaller, local organizations who are ‘on the ground’ and working directly in communities. Does ADS then bring even more hierarchy, bias, and exclusion into an already problematic system of power and privilege? Could there be ways of using ADS differently in the aid system that would not replicate existing power structures? Could ADS itself be used to help people see their own biases? “Could we build that into an ADS? Could we have a read out of decisions we came to and then see what possible biases were?” asked one person.

How can we improve trust in ADS?

Most aid workers, national organizations, and affected communities have a limited understanding of ADS, leading to lower levels of trust in ADS and the decisions they produce. Part of the issue is the lack of participation and involvement in the design, implementation, validation, and vetting of ADS. On the other hand, one Salon participant pointed out that given all the issues with bias and exclusion, “maybe they would trust an ADS even less if they understood how an ADS works.”

Involving both users of an ADS and the people affected by ADS decisions is crucial. This needs to happen early in the process, said one person. It shouldn’t be limited to having people complain or report once the ADS has wronged them. They need to be at the table when the system is being developed and trialed.

If trust is to be built, the explainability of an algorithm needs consideration. “How can you explain the algorithm to people who are affected by it? Humanitarian workers cannot describe an ADS if they don’t understand it. We need to find ways to explain ADS to a non-technical audience so that they can be involved,” said one person. “We’ve shown sophisticated models to leaders, and they defaulted to spreadsheets.”

This brought up the need for change management if ADS are introduced. Involving and engaging decision-makers in the design and creation of ADS systems is a critical step for their adoption. This means understanding how decisions are made currently and based on what factors. Technology and data teams need to be in the room to understand the open and hidden nature of decision-making.

Isn’t decision making without ADS also highly biased and obscured?

People are often resistant to talking about or sharing how decisions have been made in the past, however, because those decisions may have been biased or inconsistent, based on faulty data, or made for political or other reasons.

As one person pointed out, both government and the aid system are deeply politicized and suffer from local biases, corruption and elite capture. A spatial analysis of food distribution in two countries, for example, showed extreme biases along local political leader lines. A related analysis of the road network and aid distribution allowed a clear view into the unfairness of food distribution and efficiency losses.

Aid agencies themselves make highly-biased decisions all the time, it was noted. Decisions are often political, situational, or made to enhance the reputation of an individual or agency. These decisions are usually not fully documented. Is this any less transparent than the ‘black box’ of an algorithm? Not to mention that agencies have countless dashboards that are aimed at helping them make efficient, unbiased decisions, yet recommendations based on the data may run counter to what is needed politically or for other reasons in a given moment.

Could (should) the humanitarian sector assume greater leadership on ADS?

Most ADS are built by private sector partners. When they are sold to the public or INGO sector, these companies indemnify themselves against liability and keep their trade secrets. It becomes impossible to hold them to account for any harm produced. One person asked whether the humanitarian sector could lead by bringing in different incentives – transparency, multi-stakeholder design, participation, and a focus on wellbeing. Could we try this, learn from it, and develop and document processes whereby this could be done at scale? Could the aid sector open source how ADS are designed and created so that data scientists and others could improve them?

Some were skeptical about whether the aid sector would be capable of this. “Theoretically we could do this,” said one person, “but it would then likely be concentrated in the hands of these few large agencies. In order to have economies of scale, it will have to be them because automation requires large scale. If that is to happen, then the smaller organizations will have to trust the big ones, but currently the small organizations don’t trust the big ones to manage or protect data.” And what about the involvement of governments, said another person, we would need to consider the role of the public sector.

“I like the idea of the humanitarian sector leading,” added one person, “but aid agencies don’t have the greatest track record for putting their constituencies in the driving seat. That’s not how it works. A lot of people are trying to correct that, but aid sector employees are not the people who will be affected by these systems in the end. We could think about working with organizations who have the outreach capacity to do work with these groups, but again, these organizations are not made up of the affected people. We have to remember that.”

How can we address governance and accountability?

When you bring in government, private sector, aid agencies, software developers, data, and the like, said another person, you will have issues of intellectual property, ownership, and governance. What are the local laws related to data transmission and storage? Is it enough to open source just the code or ADS framework without any data in it? If you work with local developers and force them to open source the algorithm, what does that mean for them and their own sustainability as local businesses?

Legal agreements? Another person suggested that we focus on open sourcing legal agreements rather than algorithms. “There are always risks, duties, and liabilities listed in contracts and legal agreements. The private sector in particular will always play the indemnity card. And that means there is no commercial incentive to fix the tools that are being used. What if we pivoted this conversation to commercial liability? If a model is developed in Manhattan, it won’t work in Malawi — a company has a commercial duty to flag and recognize that. This type of issue is hidden if we focus the conversation on open software or open models. It’s rare that all the technology will be open and transparent. What we should push for is open contracting, and that could help a lot with governance.”

Certification? Others suggested that we adapt existing audit systems like the LEED certification (which allows engineers and architects to audit whether buildings are actually environmentally sustainable) or the IRB process (external boards that review research to flag ethical issues). “What if there were a team of data scientists and others who could audit ADS and determine the flaws and biases?” suggested one person. “That way the entire thing wouldn’t need to be open, but it could still be audited independently”. This was questioned, however, in that a stamp of approval on a single system could lead people to believe that every system designed by a particular group would pass the test.

Ethical frameworks could be a tool, yet which framework? A recent article cited 84 different ethical frameworks for Artificial Intelligence.

Regulation? Self-regulation has failed, said one person. Why aren’t we talking about actual regulation? The General Data Protection Regulation (GDPR) in Europe has a specific article (Article 22) about ADS that states that people have a right to know when ADS are used to make decisions that affect them, the right to contest decisions made by ADS, and the right to request that humans review ADS decisions.

SPHERE Standards / Core Humanitarian Standard? Because of the legal complexities of working across multiple countries and with different entities in different jurisdictions (including some like the UN who are exempt from the law), an add-on to the SPHERE standards might be considered, said one person. Or something linked to the Core Humanitarian Standard (CHS), which includes a certification process. Donors will often ask whether an agency is CHS certified.

So, is there any good to come from ADS?

We tend to judge ADS by higher standards than we judge humans, said one Salon participant. Loan officers have been making biased decisions for years. How can we apply the standards of impartiality and transparency to both ADS and human decision making? ADS may be able to fix some of our current faulty and biased decisions. This may be useful for large systems, where we can’t afford to deploy humans at scale. Let’s find some potential bright spots for ADS.

Some positive examples shared by participants included:

  • Human rights organizations are using satellite imagery to identify areas that have been burned or otherwise destroyed during conflict. This application of automated decision making doesn’t deal directly with people or the allocation of resources; rather, it supports human rights research.
  • In California, ADS has been used to expunge the records of people convicted for marijuana-related violations now that marijuana has been legalized. This example supports justice and fairness.
  • During Hurricane Irma, an organization in the Virgin Islands used an Excel spreadsheet to track whether people met the criteria for assistance. Aid workers would interview people and the sheet would calculate automatically whether they were eligible. This was not high tech or sexy, but it was automated and fast. The government created the criteria, and these were openly and transparently communicated to people ahead of time so that if they didn’t receive benefits, they were clear about why. (A minimal sketch of this kind of rule-based eligibility check appears after this list.)
  • Flood management is an area where there is a lot of data and forecasting. Governments have been using ADS to evacuate people before it’s too late. This sector can gain in efficiency with ADS, which could be expanded to other weather-based hazards. Because it is a straightforward use case that involves satellites and less personal data it may be a less political space, making deployment easier.
  • Drones also use ADS to stitch together hundreds of thousands of photos to create large images of geographical areas. Though drone data still needs to be ground truthed, it is less of an ethical minefield than when personal or household level data is collected, said one participant. Other participants, however, had issues with the portrayal of drones as less of an ethical minefield, citing surveillance, privacy, and challenges with the ownership and governance of the final knowledge product, the data for which was likely collected without people’s consent.
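The Hurricane Irma spreadsheet mentioned above was essentially a small, transparent rule engine. The sketch below is a hypothetical reconstruction of that kind of eligibility check; the criteria and thresholds are invented for illustration and are not the actual rules used in the Virgin Islands.

```python
def eligible(household):
    """Apply published eligibility criteria to one interview record.

    The criteria below are invented placeholders. The point is that the
    rules are explicit, simple, and can be communicated ahead of time,
    so people understand why they did or did not receive assistance.
    """
    criteria = [
        household["home_damaged"],               # storm damage reported
        household["monthly_income_usd"] < 1500,  # below an assumed income cutoff
        household["residents"] >= 1,             # at least one person living there
    ]
    return all(criteria)

record = {"home_damaged": True, "monthly_income_usd": 900, "residents": 4}
print(eligible(record))  # True -> assistance approved under the published rules
```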

How can the humanitarian sector prepare for ADS?

In conclusion, one participant summed up that decision making has always been with us. As ADS are explored more in depth by groups like the one at this Salon, and as we delve into the ethics and improve on ADS, there is great potential. ADS will probably never totally replace humans, but they can supplement humans to make better decisions.

How are we in the humanitarian sector preparing people at all levels of the system to engage with these systems, design them ethically, reduce harm, and make them more transparent? How are we working to build capacities at the local level to understand and use ADS? How are we figuring out ways to ensure that the populations who will be affected by ADS are aware of what is happening? How are we ensuring recourse and redress in the case of bad decisions or bias? What jobs might be created (rather than eliminated) with the introduction of more ADS?

ADS are not going to go away, and the humanitarian sector doesn’t have to wait until they are perfected to get involved in shaping and improving them so that they support our work in ethical and useful ways rather than in harmful or unethical ways.

Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

Our Technology Salon on Digital ID (“Will Digital Identities Support or Control Us”) took place at the OSF offices on June 3 with lead discussants Savita Bailur and Emrys Schoemaker from Caribou Digital and Aiden Slavin from ID2020.

In general, Salon Participants noted the potential positives of digital ID, such as improved access to services, better service delivery, accountability, and better tracking of beneficiaries. However, they shared concerns about potential negative impacts, such as surveillance and discrimination, disregard for human rights and privacy, lack of trust in government and others running digital ID systems, harm to marginalized communities, lack of policy and ethical frameworks, complexities of digital ID systems and their associated technological requirements, and low capacity within NGOs to protect data and to deal with unintended consequences.

What do we mean by digital identity (digital ID)?

Arriving at a basic definition of digital ID is difficult due to its interrelated aspects. To begin with: What is identity? A social identity arises from a deep sense of who we are and where we come from. A person’s social identity is a critical part of how they experience an ID system. Analog ID systems have been around for a very long time and digitized versions build on them.

The three categories below (developed by Omidyar) are used by some to differentiate among types of ID systems:

  • Issued ID includes state- or nationally-issued identification like birth certificates, driver’s licenses, and systems such as India’s biometric ID system (Aadhaar). These are built on existing analog models of ID systems and controlled by institutions.
  • De facto ID is an emerging category of ID that is formed through the data trails people leave behind when using digital devices, including credit scoring based on mobile phone or social media use. De facto ID is somewhat outside of an individual’s control, as it is often based on analysis of passive data that individuals have not consented to having collected or used in this way. De facto ID also includes situations where refugees are tracked via call detail records (CDRs). De facto ID is a new and complex way of being identified and categorized.
  • Self-asserted ID is linked to the decentralization of ID systems. It is based on possession of forms of ID that prove who we are and that we manage ourselves. A related term is self-managed ID, which recognizes that no ID is truly “self-asserted,” because our identity is relational and always relies on others recognizing us as who we are and who we believe ourselves to be.

(Also see this glossary of Digital ID definitions.)

As re-identification technologies are becoming more and more sophisticated, the line between de-facto and official, issued IDs is blurring, noted one participant. Others said they prefer using a broad umbrella term “Identity in the Digital Age” to cover the various angles.

Who is digital ID benefiting?

Salon participants tended to think that digital ID is mainly of interest to institutions. Most IDs are developed, designed, managed, and issued by institutions, so the interests baked into the design of an ID system are theirs. Institutions tend to be excited about digital ID systems because they are interoperable and help them with beneficiary management, financial records, entry and exit across borders, and the like.

This very interoperability, however, is what raises privacy, vulnerability, and data protection issues. Some of the most cutting-edge digital ID systems are being tested on some of the most vulnerable populations in the world: refugees in Jordan, Uganda, Lebanon, and Myanmar. These digital ID systems have created massive databases for analysis; the UNHCR’s proGres database, for example, has 80 million records.

This brings with it a huge responsibility to protect. It also raises questions about the “one ID system to rule them all” idea. On the one hand, a single system can offer managerial control, reduce fraud, and improve tracking. Yet, as one person said, “what a horrifying prospect that an institution can have this much control! Should we instead be supporting one thousand ID systems to bloom?”

Can we trust institutions and governments to manage digital ID Systems?

One of the institutions positioning itself as the leader in digital ID is the World Food Program (WFP). As one participant highlighted, this is an agency that has come under strong criticism for its partnership with Palantir and for a lack of transparency around where data goes and who can access it. These kinds of partnerships can generate seismic downstream effects that affect trust in the entire sector. “This has caused a lot of angst in the sector. The WFP wants to have the single system to rule them all, whereas many of us would rather see an interoperable ecosystem.” Some organizations consider their own large-scale systems to have more rigorous privacy, security, and informed consent measures than the WFP’s SCOPE system.

Trust is a critical component of a digital ID system. The Estonian model, for example, offers visibility into which state departments are accessing a person’s data and when, which builds citizens’ trust in the system. Some Salon participants expressed concern over their own country’s government running a digital ID system. “In my country, we don’t trust institutions because we have a failed state,” said one person, “so people would never want the government to have their information in that way.” Another person said that in his country, the government is known for its corruption, and the idea that the government could manage an ID system with any kind of data integrity was laughable. “If these systems are not monitored or governed properly, they can be used to target certain segments of the population for outright repression. People do want greater financial inclusion, for example, but these ID systems can be easily weaponized and used against us.”

Fear and mistrust in digital ID systems is not universal, however. One Salon participant said that their research in Indonesia found that a digital ID was seen to be part of being a “good citizen,” even if local government was not entirely trusted. A Salon participant from China reported that in her experience, the digital ID system there has not been questioned much by citizens. Rather, it is seen as a convenient way for people to learn about new government policies and to carry out essential transactions more quickly.

What about data integrity and redress?

One big challenge with digital ID systems as they are currently managed is that there is very little attention to redress. “How do you fix errors in information? Where are the complaints mechanisms?” asked one participant. “We think of digital systems as being really flexible, but they are really hard to clean out,” said another. “You get all these faulty data crumbs that stick around. And they seem so far removed from the user. How do people get data errors fixed? No one cares about the integrity of the system. No one cares but you if your ID information is not correct. There is really very little incentive to address discrepancies and provide redress mechanisms.”

Another challenge is the integrity of the data that goes into the system. In some countries, people go back to their villages to get a birth certificate, a point at which data integrity can suffer due to faulty information or bribes, among other things. In one case, researchers spoke to a woman who changed her religion on her birth certificate, thinking it would save her from discrimination when she moved to a new town. In another case, the village chief made a woman change her name to a Muslim name on her birth certificate because the village was majority Muslim. There are power dynamics at the local level that can challenge the integrity of the ID system.

Do digital ID systems improve the lives of women and children?

There is a long-standing issue in many parts of the world with children not having a birth certificate, said one Salon discussant. “If you don’t have a legal ID, technically you don’t exist, so that first credential is really important.” As could probably be expected, however, fewer females than males have legal ID.

In a three-country research project, the men interviewed thought that women do not need ID as much as men do. However, when talking with women it was clear that they are the ones dealing with hospitals, schools, and other institutions that require ID. The study found that in Bangladesh, when women did have ID, it was commonly held and controlled by their husbands. In one case study, a woman wanted to sign up as a cook for an online cooking service, but she needed an ID to do so. She had to ask her husband for the ID, explain what she needed it for, and get his permission in order to join the cooking service. In another, a woman wanted to provide beauty care services through an online app. She needed to produce her national ID and two photos to join the app and to create a bKash mobile money account. Her husband did not want her to have a bKash account, so she had to provide his account details, meaning that all of her earnings went to her husband (see more here on how ID helps women access work). In India, a woman wanted to escape her husband, so she moved from the countryside to Bangalore to work as a maid. Her in-laws retained all of her ID, and so she had to rely on her brother to set up everything for her in Bangalore.

Another Salon participant explained that in India as well, micro-finance institutions had imposed a regulation that when a woman registered to be part of a project, she had to provide the name of a male family member to qualify her identity. When it was time to repay the loan, or if a woman missed a payment, her brother or husband would then receive a text about it. The question is how to create trust-based systems that do not reinforce patriarchal values, and where individuals are clear about and have control over how information is shared.

“ID is embedded in your relationships and networks,” it was explained. “It creates a new set of dependencies and problems that we need to consider.” In order to understand the nuances in how ID and digital ID are impacting people, we need more of these micro-level stories. “What is actually happening? What does it mean when you become more identifiable?”

Is it OK to use digital ID systems for social control and social accountability? 

The Chinese social credit system, according to one Salon participant, includes a social control function. “If you have not repaid a loan, you are banned from purchasing a first-class air ticket or from checking into expensive hotels.” An application used in Nairobi called Tala also includes a social accountability function, explained another participant. “Tala is a social credit scoring app that gives small loans. You download an app with all your contacts, and it works out via algorithms if you are credit-worthy. If you are, you can get a small loan. If you stop paying your loans, however, Tala alerts everyone in your contact list. In this way, the app has digitized a social accountability function.”

The initial reaction from Salon Participants was shock, but it was pointed out that traditional Village Savings and Loans Associations (VSLAs) function the same way – through social sanction. “The difference here is transparency and consent,” it was noted. “In a community you might not have choice about whether everyone knows you defaulted on your small loan. But you are aware that this is what will happen. With Tala, people didn’t realize that the app had access to their contacts and that it would alert those contacts, so consent and transparency are the issues.”

The principle of informed consent in the humanitarian space poses a constant challenge. “Does a refugee who registers with UNHCR really have any choice? If they need food and have to provide minimal information to get it, is that consent? What if they have zero digital literacy?” Researcher Helen Nissenbaum, it was noted, has written that consent is problematic and that we should not pursue it. “It’s not really about individual consent. It’s about how we set standards and ensure transparency and accountability for how an individual’s information is used,” explained one Salon participant.

These challenges with data use and consent need to be considered beyond just individual privacy, however, as another participant noted. “There is all manner of vector-based data in the WFP’s system. Other agencies don’t have this kind of disaggregated data at the village level or lower. What happens if Palantir, via the WFP, is the first company in the world to have that low level disaggregation? And what happens with the digital ID of particularly vulnerable groups of people such as refugee communities or LGBTQI communities? How could these Digital IDs be used to discriminate or harm entire groups of people? What does it mean if a particular category or tag like ‘refugee’ or ‘low income’ follows you around forever?”

One Salon participant said that in Jordanian camps, refugees would register for one thing and be surprised at how their data then automatically popped up on the screen of a different partner organization. Other participants expressed concerns about how digital ID systems and their implications could be explained to people with less digital experience or digital literacy. “Since the GDPR came into force, people have the right to an explanation if they are subject to an automated decision,” noted one person. “But what does compliance look like? How would anyone ever understand what is going on?” This will become increasingly complex as technology advances and we begin to see things like digital phenotyping being used to serve up digital content or determine our benefits.

Can we please have better standards, regulations and incentives?

A final question raised about digital ID systems was who should be implementing and managing them: UN agencies? Governments? The private sector? Start-ups? At the moment the ecosystem includes all sorts of actors and feels a bit “Wild Wild West” due to insufficient control and regulation. At the same time, there are fears (as noted above) about a “one system to rule them all” approach. “So,” asked one person, “what should we be doing then? Should UN agencies be building in-house expertise? Should we be partnering better with the private sector? We debate this all the time internally and we can never agree.” Questions also remain about what happens to the biometric and other data held by failed start-ups or discontinued digital ID systems. And is it a good idea to support government-controlled ID systems in countries with corrupt or failed governments, or those that will use these systems to persecute or exercise undue control over their populations?

As one person asked, “Why are we doing this? Why are we even creating these digital ID systems?”

Although there are huge concerns about digital ID, the flip side is that a digital ID system could potentially offer better security for sensitive information, at least in the case of humanitarian organizations. “Most organizations currently handle massive amounts of data in Excel sheets and Google docs with zero security,” said one person. “There is PII [personally identifiable information] flowing left, right, and center.” Where donors have required better data management standards, there has been improvement, but it requires massive investment, and who will pay for it? Sadly, donors are currently not covering these costs. As a representative from one large INGO explained, “we want to avoid the use of Excel to track this stuff. We are hoping that our digital ID system will be more secure. We see this as a very good idea if you can nail down the security aspects.”

The EU’s General Data Protection Regulation (GDPR) is often quoted as the “gold standard,” yet implementation is complex and the GDPR is not specific enough, according to some Salon participants. Not to mention, “if you are UN, you don’t have to follow GDPR.” Many at the Salon felt that the GDPR has had very positive effects but called out the lack of incentive structures that would encourage full adoption. “No one does anything unless there is an enforcing function.” Others felt that the GDPR was too prescriptive about what to do, rather than setting limits on what not to do.

One effort to watch is the Pan-Canadian Trust Framework, mentioned as a good example of creating a functioning and decentralized ecosystem that could potentially address some of the above challenges.

The Salon ended with more questions than answers, however there is plenty of research and conversation happening about digital ID and a wide range of actors engaging with the topic. If you’d like to read more, check out this list of resources that we put together for the Salon and add any missing documents, articles, links and resources!

Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

At our April Technology Salon we discussed the evidence and good practice base for blockchain and Distributed Ledger Technologies (DLTs) in the humanitarian sector. Our discussants were Larissa Fast (co-author with Giulio Coppi of the Global Alliance for Humanitarian Innovation/GAHI’s report on Humanitarian Blockchain, Senior Lecturer at HCRI, University of Manchester and Research Associate at the Humanitarian Policy Group) and Ariana Fowler (UNICEF Blockchain Strategist).

Though blockchain fans suggest DLTs can address common problems of humanitarian organizations, the extreme hype cycle has many skeptics who believe that blockchain and DLTs are simply overblown and for the most part useless for the sector. Until recently, evidence on the utility of blockchain/DLTs in the humanitarian sector has been slim to none, with some calling for the sector to step back and establish a measured approach and a learning agenda in order to determine if blockchain is worth spending time on. Others argue that evaluators misunderstand what to evaluate and how.

The GAHI report provides an excellent overview of blockchain and DLTs in the sector along with recommendations at the project, policy and system levels to address the challenges that would need to be overcome before DLTs can be ethically, safely, appropriately and effectively scaled in humanitarian contexts.

What’s blockchain? What’s a DLT?

We started with a basic explanation of DLTs and Blockchain and how they work. (See page 5 of the GAHI report for more detail).

The GAHI report aimed to get beyond the potential of Blockchain and DLTs to actual use cases — however, in the humanitarian sector there is still more potential than evidence. Although there were multiple use cases to choose from, the report authors chose to go in-depth on five, selected to provide a sense of the different ways that blockchain is specifically being used in the sector.

These use cases all currently have a limited number of “nodes” (i.e., places where the data is stored) and only a few “controlling entities” (which determine what information is stored or put on the chain). They are all “private” (as opposed to public) blockchains, meaning they are not taking advantage of the DLT potential for dispersed information, and they end up being more like “a very expensive database.”

What’s the deal with private vs public blockchains?

Private versus public blockchains are an ideological sticking point in “deep blockchain culture,” noted one Salon participant. “’Cryptobros’ and blockchain fundamentalists think private blockchains are the Antichrist.” Private blockchains are considered an oxymoron and completely antithetical to the idea of blockchain.

So why are humanitarian organizations creating private blockchains? “They are being cautious about protecting data as they test out blockchain and DLTs. It’s a conscious choice to proceed in a controlled way, because once information is on the blockchain, it’s immutable — it cannot be removed.” When first trying out a DLT or blockchain, “Humanitarians tend to be cautious. They don’t want to play with the permanency of a public blockchain since they are working with vulnerable populations.”
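A toy sketch (not any of the systems discussed at the Salon) helps illustrate why that permanency is taken so seriously: each block’s hash commits to the previous block’s hash, so altering any earlier record invalidates every block that follows it.

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a block whose hash commits to its own data and to the previous block's hash."""
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    """The chain is valid only if every block still hashes to its stored value
    and points at the hash of the block before it."""
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev_hash": block["prev_hash"]}
        if block["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Hypothetical records; a real system would not put personal data on-chain like this.
chain = [make_block("voucher issued", prev_hash="0")]
chain.append(make_block("voucher redeemed", prev_hash=chain[-1]["hash"]))
print(chain_is_valid(chain))   # True
chain[0]["data"] = "edited"    # tampering with an earlier record...
print(chain_is_valid(chain))   # False -- the change is immediately detectable
```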

Because of the blockchain hype cycle, however, there is some skepticism about organizations using private blockchains. “Are they setting up a private blockchain with one node so that they can say that they’re using blockchain just to get funding?”

An issue with private blockchains is that they are not open and transparent. The code is developed behind closed doors, meaning that it’s difficult to make it interoperable, whereas “with a public chain, you can check the code and interact with it.”

Does the humanitarian sector have the capacity to use blockchain?

As one person pointed out, knowledge and capacity around blockchain in the humanitarian sector is very low. There are currently very few people who understand both humanitarian work and the private sector/technology side of blockchain. “We desperately need intermediaries because people in the two sectors talk past each other. They use the same words to mean very different things, and this leads to misunderstandings.” This is a perpetual issue in the “humanitarian tech” space, and it often leads to applications that are not in the best interest of those on the receiving end of humanitarian work.

Capacity challenges also come up with regard to managing partnerships that involve intellectual property. When cooperating with the private sector, organizations are normally required to sign an MOU that gives rights to the company. Often humanitarian agencies do not fully understand what they are signing up for. This can mean that the company uses the humanitarian collaboration to develop technologies that are later used in ways that the humanitarian agency considers unethical or disturbing. Having technology or blockchain expertise within an organization makes it possible to better negotiate those types of situations, but often only the larger INGOs can afford that type of expertise. Similarly, organizations lack expertise in the legal and regulatory space with regard to blockchain.

How will blockchain become locally owned? Should we wait for a user-friendly version?

Technology moves extremely fast, and organizations need a certain level of capacity to create it and maintain it. “I’m an engineer working in the humanitarian space,” said one Salon participant. “Blockchain is such a complex software solution that I’m very skeptical it will ever be at a stage where it could be locally owned and managed. Even with super basic SMS-based services we have maintenance issues and challenges handing off the tech. If in this room we are struggling to understand blockchain, how will this ever work in lower tech and lower resource areas?” Another participant asked a similar question with regard to handing off a blockchain solution to a local government.

Does the sector need to wait for a simplified and “user-friendly” version of blockchain before humanitarians get into the space? Some said yes, but other participants said that the technology is moving quickly, and that it is critical for humanitarians to “get in there” to try to slow it down. “Sometimes blockchain is not the solution. Sometimes a database is just fine. We need people to pump the brakes before things get out of control.”

“How can people learn about blockchain? How could a grassroots organization begin to set one up?” asked one person. There is currently no “Squarespace for blockchain,” and the technology remains complicated, but those with a strong drive could learn, according to one person. Still, although coders might be able to teach themselves “light blockchain,” there is definitely a barrier to entry. This is a challenge with the whole area of blockchain. “It skipped the education step. We need a ‘learning revolution’ if we want people to actually use it.”

Enabling environments for learning to use blockchain don’t exist in conflict zones. The knowledge is held by a few individuals, and this makes long-term support and maintenance of DLT and blockchain systems very difficult. How to localize and own the knowledge? How to ensure sustainability? The sector needs to think about what the “Blockchain 101” is. There needs to be more accompaniment, investment and support for the enabling environment if blockchain is to be useful and sustainable in the sector.

Are there any examples of humanitarian blockchain that are working?

The GAHI report talks about five cases in particular. Disberse was highlighted by one Salon participant as an example that seems to be working. Disberse is a private fin-tech company that uses blockchain, but it was started by former humanitarians. “This example works in part because there is a sense of commitment to the humanitarian sector alongside the technical expertise.”

In general, in the humanitarian space, the place where blockchain/ DLTs appear to be the most effective is in back-end use cases. In other words, blockchain is helpful for making behind-the-scenes transactions in humanitarian assistance more efficient. It can eliminate bank transaction fees, and this leads to savings. Agencies can also use blockchain to create efficiencies and benefits for record keeping and auditability. This situation is not unique to blockchain. A recent DIAL baseline study of the global ICT4D ecosystem also found that in the social sector, the main benefits of ICTs were going to organizations, not to vulnerable populations.

“This is all fine,” according to one Salon participant, “but one must be clear that the benefits accrue to the agencies, not the ‘beneficiaries,’ who may not even know that DLTs are being used.” On the one hand, having a seamless backend built on blockchain where users don’t even know that blockchain is involved sounds ideal. However, this can be somewhat problematic. “Are agencies getting meaningful and responsible consent for using blockchain? If executives don’t even understand what the blockchain is, how do you explain that to people more generally?”

Because there is not a simple, accessible way of developing blockchain solutions, and there are not a lot of user-friendly interfaces for the general population, humanitarian applications of blockchain will likely only be useful for back-office operations for at least the next few years. This means that it is up to humanitarian organizations to re-invest any money saved by blockchain into program funding, so that “beneficiaries” are the ones accruing the benefits.

What other “social” use cases are there for blockchain?

In the wider social and development sectors, there are plenty of potential use cases, but again, very little documented evidence of their short- and long-term impacts. (Author’s note: I am not talking about financial and private sector use cases; I’m referring very specifically to the social sectors and the international development and humanitarian sector.) For example, Oxfam is tracing supply chains of rice; however, this is a one-off pilot and it’s unclear whether it can scale. IBM has a variety of supply chain examples. Land registries and sustainable fishing are also being explored, as are digital ID, birth registration, and civil registries.

According to one Salon participant, “supply chain is the low-hanging fruit of blockchain – just recording something, tracking it, and referencing it. It’s all basically a ledger, a spreadsheet. Even digital ID – it’s a supply chain of movement. Provenance is a good way to use a blockchain solution.” Other areas where blockchain is said to have potential are situations where election transparency is needed, and “smart contracts” where complex agreements must be executed among parties who do not trust one another. In general, where there is a recurring need for anonymized, disaggregated data, blockchain could be a solution.
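To make the “it’s basically a ledger” point concrete, here is a minimal, purely illustrative sketch (the item IDs and actors are invented, and no specific pilot works this way) of an append-only custody log and the kind of provenance query a supply-chain pilot relies on. The hard questions are about where such a log lives, who controls it, and whether the data going in is accurate, not about the data structure itself.

```python
from dataclasses import dataclass

@dataclass
class CustodyRecord:
    item_id: str   # e.g. a batch of rice; identifiers here are invented
    holder: str    # who holds the item at this step
    step: str      # what happened (harvested, milled, shipped, ...)

ledger: list[CustodyRecord] = []   # append-only: records are never edited or deleted

def record(item_id, holder, step):
    ledger.append(CustodyRecord(item_id, holder, step))

def provenance(item_id):
    """Return the full custody history of one item, oldest entry first."""
    return [r for r in ledger if r.item_id == item_id]

record("rice-batch-001", "farmer-coop", "harvested")
record("rice-batch-001", "mill-A", "milled")
record("rice-batch-001", "exporter-B", "shipped")

for r in provenance("rice-batch-001"):
    print(r.holder, "->", r.step)
```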

The important thing, however, is having a very clear definition of the problem before deciding that blockchain is the solution. “A lot of times people don’t know what their problem is, and the problem is not one that can be fixed with blockchain.” Additionally, accuracy (“garbage in, garbage out”) remains a problem that blockchain on its own cannot solve. “If the off-chain process isn’t accurate, if you’re looking at human rights abuses of migrant workers but everything is being fudged, if your supply chain is blurry, or if the information being put on the blockchain is not verified, then you have a separate problem to figure out before thinking about blockchain.”

What about ethics and consent and the Digital Principles?

One Salon participant asked whether the Digital Principles are being used as a way to guide ethical, responsible and sustainable blockchain use in the humanitarian space. The general impression in the room was that they are not. “Deep crypto in the private sector is a black hole in the blockchain space,” according to one person, and the gap between the world of blockchain in the private sector and the world of blockchain in the humanitarian sector is huge. (See this write up for a taste of one segment of the crypto-world.) “The majority of private sector blockchain enthusiasts who are working on humanitarian issues have not heard of any principles. They are operating with no principles, and sometimes it’s largely for PR, because the blockchain hype cycle means they will get a lot of good press from it. You get someone who read an article in Vice about a problem in a place they’ve never heard of, and they decide that blockchain is the solution…. They are often re-inventing the wheel, and fire, and also electricity; they think that no one has ever thought about this problem before.”

Most in the room considered that this type of uninformed application of blockchain is irresponsible, and that these parallel worlds and conversations need to come together. “The humanitarian space has decades of experience with things that have been tried and haven’t worked – but people on the tech side think no one has ever tried solving these problems. We need to improve the dialogue and communication. There is a wealth of knowledge to share, and a huge learning curve on both sides.”

Additionally, one Salon participant pointed out the importance of bringing ethics into the discussion. “It’s not about just using a blockchain. It’s about what the problem is that you’re trying to solve, and does blockchain help address that problem? There are a lot of problems that blockchain is not appropriate for. Do you have the technical capacity or an accessible online environment? That’s important.”

On top of that, “it’s important for people to know that their information is being used in a particular way by a particular technology. We need to grapple with that, or we end up experimenting on people who are already marginalized or vulnerable to begin with. How do we do that? It’s like the Facebook moment. That same thing for blockchain – if you don’t know what’s going on and how your information is being used, it’s problematic.”

A third point is the massive environmental disadvantage in a public blockchain. Currently, the computing power used to verify and validate transactions that happen on public chains is immense. That is part of the ethical challenge related to blockchain. “You can’t get around the massive environmental aspect. And that makes it ironic for blockchain to be used to track carbon offsets.” (Note: there are blockchain companies who say they are working on reducing the environmental impact of blockchain with “pilots coming very soon” but it remains to be seen whether this is true or whether it’s another part of the hype cycle.)

What should donors be doing?

In addition to taking into consideration the ethical, intellectual property, environmental, sustainability, ownership, and consent aspects mentioned above and being guided by the Digital Principles, it was suggested that donors make sure they do their homework and conduct thorough due diligence on potential partners and grantees. “The vetting process needs to be heightened with blockchain because of all the hype around it. Companies come and go. They are here one day and disappear the next.” There was deep suspicion in the room because of the many blockchain outfits that are hyped up and do not actually have the staff to truly do blockchain for humanitarian purposes and use this angle just to get investments.

“Before investing, it would be important to talk with someone like Larissa [our lead discussant] who has done vetting,” said one Salon participant. “Don’t fall for the marketing. Do a lot of due diligence and demand evidence. Show us the evidence or we’re not funding you. If you’re saying you want to work with a vulnerable or marginalized population, do you have contact with them right now? Do you know them right now? Or did you just read about them in Vice?”

Recommendations outlined in the GAHI report include providing multi-year financing to humanitarian organizations to allow for the possibility of scaling, and asking for interoperability requirements and guidelines around transparency to be met so that there are not multiple silos governing the sector.

So, are we there yet?

Nope. But at least we’re starting to talk about evidence and learning!

Resources

In addition to the GAHI report, the following resources may be useful:

Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂