

On Thursday, September 19, we gathered at the OSF offices for the Technology Salon on “Automated Decision Making in Aid: What could possibly go wrong?” with lead discussants Jon Truong and Elyse Voegeli, two of the creators of Automating NYC; and Genevieve Fried and Varoon Mathur, Fellows at the AI Now Institute at NYU.

To start off, we asked participants whether they were optimistic or skeptical about the role of Automated Decision-making Systems (ADS) in the aid space. The response was mixed: about half skeptics and half optimists, most of whom qualified their optimism as “cautious optimism” or “it depends on who I’m talking to” or “it depends on the day and the headlines” or “if we can get the data, governance, and device standards in place.”

What are ADS?

Our next task was to define ADS. (One reason that the New York City ADS task force was unable to advance is that its members could not agree on the definition of an ADS.)

One discussant explained that NYC’s provisional definition was something akin to:

  • Any system that uses data, algorithms, or computer programs to replace or assist a human decision-making process.

This may seem straightforward, yet, as she explained, “if you go too broad you might include something like ‘spellcheck’ which feels like overkill. On the other hand, spellcheck is a good case for considering how complex things can get. What if spellcheck only recognized Western names? That would be an example of encoding bias into the ADS. However, the degree of harm that could come from spellcheck as compared to using ADS for predictive policing is very different. Defining ADS is complex.”

Other elements of the definition are that an ADS involves the computational implementation of an algorithm. An algorithm is basically a clear set of instructions or criteria followed in order to reach a decision, and algorithms can be applied manually; what distinguishes an ADS, noted another discussant, is the power of computation. Perhaps a computer or complex system should be part of the definition as well, along with a decision-making point or cut-off, for example an algorithm that determines who gets a loan. It is also important to consider statistical modeling and forecasting, which allow for prediction.
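
To make this concrete, here is a minimal sketch of a rule-based decision point like the loan example above. The criteria, threshold values, and function name are purely hypothetical, chosen only to illustrate the idea of explicit instructions plus a cut-off:

```python
def loan_decision(income: float, credit_score: int, existing_debt: float) -> bool:
    """Return True if the applicant is approved under the stated criteria."""
    debt_to_income = existing_debt / income if income > 0 else float("inf")
    # The "algorithm": clear criteria followed in order to reach a decision
    return credit_score >= 650 and debt_to_income <= 0.4

# The cut-off values (650 and 0.4) encode human judgments; change them and the
# population of approved applicants changes with them.
print(loan_decision(income=40000, credit_score=700, existing_debt=10000))  # True
print(loan_decision(income=40000, credit_score=600, existing_debt=10000))  # False
```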

Using data and criteria for making decisions is nothing new, and it’s often done without specific systems or computers. People make plenty of very bad decisions without computers, and the addition of computers and algorithms is sometimes considered a more objective approach, because instructions can be set and run by a computer.

Why are there issues with ADS?

In practice, things are not as clear cut as they might seem, explained one of our discussants. We live in a world where people are treated differently because of their demographic identity, and curation of data can over-represent some populations and misrepresent others because of how they have been treated historically. These current and historic biases make their way into algorithms, which are created by humans, and so human biases get encoded into an ADS. When we feed existing data into a computer so that it can learn, we bring our historical biases into decision-making. The data we feed into an ADS may not reflect changing demographics or shifts in the data, and algorithms may not reflect ongoing institutional policy changes.

As another person said, “systems are touted as being neutral, but they are subject to human fallacies. We live in a world that is full of injustice, and that is reflected in a data set or in an algorithm. The speed of the system, once it’s computerized, replicates injustices more quickly and at greater scale.” When people or institutions believe that the involvement of a computer means the system is neutral, we have a problem. “We need to take ADS with a grain of salt, similar to how we tell children not to believe everything they see on the Internet.”

Many people are unaware of how an algorithm works. Yet over time, we tend to rely on algorithms and believe in them as unbiased truth. When ADS are not monitored, tested, and updated, this becomes problematic. ADS can begin to make decisions for people rather than supporting people in making decisions, and this can go very wrong, for example when decisions are unquestioningly made based on statistical forecasting models.

Are there ways to curb these issues with ADS?

Consistent monitoring. ADS should be monitored constantly over time by humans. One Salon participant suggested setting up checkpoints in the decision-making process to alert humans when something is amiss. Another suggested that research and proof of concept are critical: for example, running the existing human-only system alongside the ADS and comparing the decisions over time helps to flag differences, which can then be examined to see which of the processes is working better and to adjust or discontinue the ADS if it is getting things wrong. (In some cases, this process may actually flag biases in the human system.) Random checks can be set up, as can control situations where some decisions are made without using an ADS, so that results can be compared between the two.
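
As a rough illustration of that side-by-side approach, the sketch below (with invented case records and field names) simply flags the cases where the ADS and the human process disagree so that they can be examined:

```python
from collections import Counter

def compare_decisions(cases):
    """Return the cases where the ADS and the human reviewer disagree."""
    disagreements = [c for c in cases if c["ads_decision"] != c["human_decision"]]
    summary = Counter((c["ads_decision"], c["human_decision"]) for c in disagreements)
    return disagreements, summary

# Hypothetical records from running both processes in parallel
cases = [
    {"case_id": 1, "ads_decision": "approve", "human_decision": "approve"},
    {"case_id": 2, "ads_decision": "deny", "human_decision": "approve"},
    {"case_id": 3, "ads_decision": "deny", "human_decision": "deny"},
]

flagged, summary = compare_decisions(cases)
print(f"{len(flagged)} of {len(cases)} cases diverge: {dict(summary)}")
# Divergent cases can then be reviewed by humans to judge which process is
# working better, or whether both are biased.
```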

Recourse and redress. There should be simple and accessible ways for people affected by ADS to raise issues and make complaints. All ADS can make mistakes – there can be false positives (where an error points falsely to a match or the presence of a condition) and false negatives (where an error points to the absence of a match or a condition when indeed it is present). So there needs to be recourse for people affected by errors or in cases where biased data is leading to further discrimination or harm. Anyone creating an ADS needs to build in a way for mistakes to be managed and corrected.

Education and awareness. A person may not be aware that an ADS has affected them, and they likely won’t understand how an ADS works. Even people using ADS for decisions about others often forget that it’s an ADS deciding. This is similar to how people forget that their newsfeed on Facebook is based on their historical choices in content and their ‘likes’ and is not a neutral serving of objective content.

Improving the underlying data. Algorithms will only get better when there are constant feedback loops and new data that help the computer learn, said one Salon participant. Currently most algorithms are trained on highly biased samples that do not reflect marginalized groups and communities. For example, there is very little data about many of the people participating in or eligible for aid and development programs.

So we need proper data sets that are continually updated if we are to use ADS in aid work. This is a problem, however, if the data that is continually fed into the ADS remains biased. One person shared this example: if some communities are policed more because of race, economic status, etc., there will continually be more data showing that people in those communities are committing crimes. In whiter or wealthier communities, where there is less policing, fewer people are arrested. If we update our data continually without changing the fact that some communities are policed more than others (and thus will appear to have higher crime rates), we are simply creating a feedback loop that confirms our existing biases.
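
A toy simulation can make that loop visible. In the sketch below (all numbers invented), two areas have identical underlying crime rates but one is patrolled far more heavily; because next year's patrol allocation follows this year's recorded arrests, the disparity in the data never corrects itself:

```python
true_crime_rate = {"area_a": 0.05, "area_b": 0.05}  # identical underlying rates
patrol_share = {"area_a": 0.8, "area_b": 0.2}       # but unequal policing

for year in range(1, 4):
    # Recorded arrests reflect how heavily each area is watched, not just crime
    arrests = {a: true_crime_rate[a] * patrol_share[a] * 1000 for a in patrol_share}
    total = sum(arrests.values())
    # Patrols are re-allocated according to recorded arrests
    patrol_share = {a: arrests[a] / total for a in arrests}
    print(f"year {year}: arrests={arrests}, next patrol share={patrol_share}")

# Area A keeps "showing" four times the arrests of area B even though the
# underlying crime rates are equal: the data confirms the existing bias.
```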

Privacy concerns also enter the picture. We may want to avoid collecting data on race, gender, ethnicity, or economic status so that we don’t expose people to discrimination, stigma, or harm. For example, in the case of humanitarian work or conflict zones, sensitive data can make people or groups a target for governments or unfriendly actors. However, it’s hard to make decisions that benefit people if their data is missing. It ends up being a catch-22.

Transparency is another way to improve ADS. “In the aid sector, we never tell people how decisions are made, regardless of whether those are human or machine-made decisions,” said one Salon participant. When the underlying algorithm is obscured, it cannot be reviewed for value judgments. Some compared this to some of the current non-algorithmic decision-making processes in the aid system (which are also not transparent) and suggested that aid systems could get more intelligent if they began to surface their own specific biases.

The objectives of the ADS can be reviewed. Is the system used to further marginalize or discriminate against certain populations, or can this be turned on its head? asked one discussant. ADS could be used to try to determine which police officers might commit violence against civilians rather than to predict which people might commit a crime. (See the Algorithmic Justice League’s work). 

ADS in the aid system – limited to the powerful few?

Because of the underlying challenges with data in the aid sector (quality, standards, or the outright lack of it), ADS remain difficult to build and deploy there. One area where data is available and where ADS are being built and used is supply chain management, for example at massive UN agencies like the World Food Program.

Some questioned whether this exacerbates concentration of power in these large agencies, running counter to agreed-upon sector goals to decentralize power and control to smaller, local organizations who are ‘on the ground’ and working directly in communities. Does ADS then bring even more hierarchy, bias, and exclusion into an already problematic system of power and privilege? Could there be ways of using ADS differently in the aid system that would not replicate existing power structures? Could ADS itself be used to help people see their own biases? “Could we build that into an ADS? Could we have a read out of decisions we came to and then see what possible biases were?” asked one person.

How can we improve trust in ADS?

Most aid workers, national organizations, and affected communities have a limited understanding of ADS, leading to lower levels of trust in ADS and the decisions they produce. Part of the issue is the lack of participation and involvement in the design, implementation, validation, and vetting of ADS. On the other hand, one Salon participant pointed out that given all the issues with bias and exclusion, “maybe they would trust an ADS even less if they understood how an ADS works.”

Involving both users of an ADS and the people affected by ADS decisions is crucial. This needs to happen early in the process, said one person. It shouldn’t be limited to having people complain or report once the ADS has wronged them. They need to be at the table when the system is being developed and trialed.

If trust is to be built, the explainability of an algorithm needs consideration. “How can you explain the algorithm to people who are affected by it? Humanitarian workers cannot describe an ADS if they don’t understand it. We need to find ways to explain ADS to a non-technical audience so that they can be involved,” said one person. “We’ve shown sophisticated models to leaders, and they defaulted to spreadsheets.”

This brought up the need for change management if ADS are introduced. Involving and engaging decision-makers in the design and creation of ADS systems is a critical step for their adoption. This means understanding how decisions are made currently and based on what factors. Technology and data teams need to be in the room to understand the open and hidden nature of decision-making.

Isn’t decision making without ADS also highly biased and obscured?

People are often resistant to talking about or sharing how decisions have been made in the past, however, because those decisions may have been biased or inconsistent, based on faulty data, or made for political or other reasons.

As one person pointed out, both government and the aid system are deeply politicized and suffer from local biases, corruption and elite capture. A spatial analysis of food distribution in two countries, for example, showed extreme biases along local political leader lines. A related analysis of the road network and aid distribution allowed a clear view into the unfairness of food distribution and efficiency losses.

Aid agencies themselves make highly-biased decisions all the time, it was noted. Decisions are often political, situational, or made to enhance the reputation of an individual or agency. These decisions are usually not fully documented. Is this any less transparent than the ‘black box’ of an algorithm? Not to mention that agencies have countless dashboards that are aimed at helping them make efficient, unbiased decisions, yet recommendations based on the data may run counter to what is needed politically or for other reasons in a given moment.

Could (should) the humanitarian sector assume greater leadership on ADS?

Most ADS are built by private sector partners. When they are sold to the public or INGO sector, these companies indemnify themselves against liability and keep their trade secrets, making it impossible to hold them to account for any harm produced. One person asked whether the humanitarian sector could lead by bringing in different incentives: transparency, multi-stakeholder design, participation, and a focus on wellbeing. Could we try this, learn from it, and develop and document processes whereby it could be done at scale? Could the aid sector open source how ADS are designed and created so that data scientists and others could improve them?

Some were skeptical about whether the aid sector would be capable of this. “Theoretically we could do this,” said one person, “but it would then likely be concentrated in the hands of these few large agencies. In order to have economies of scale, it will have to be them, because automation requires large scale. If that is to happen, then the smaller organizations will have to trust the big ones, but currently the small organizations don’t trust the big ones to manage or protect data.” And what about the involvement of governments, asked another person; we would also need to consider the role of the public sector.

“I like the idea of the humanitarian sector leading,” added one person, “but aid agencies don’t have the greatest track record for putting their constituencies in the driving seat. That’s not how it works. A lot of people are trying to correct that, but aid sector employees are not the people who will be affected by these systems in the end. We could think about working with organizations who have the outreach capacity to do work with these groups, but again, these organizations are not made up of the affected people. We have to remember that.”

How can we address governance and accountability?

When you bring in government, private sector, aid agencies, software developers, data, and the like, said another person, you will have issues of intellectual property, ownership, and governance. What are the local laws related to data transmission and storage? Is it enough to open source just the code or ADS framework without any data in it? If you work with local developers and force them to open source the algorithm, what does that mean for them and their own sustainability as local businesses?

Legal agreements? Another person suggested that we focus on open sourcing legal agreements rather than algorithms. “There are always risks, duties, and liabilities listed in contracts and legal agreements. The private sector in particular will always play the indemnity card. And that means there is no commercial incentive to fix the tools that are being used. What if we pivoted this conversation to commercial liability? If a model is developed in Manhattan, it won’t work in Malawi — a company has a commercial duty to flag and recognize that. This type of issue is hidden if we focus the conversation on open software or open models. It’s rare that all the technology will be open and transparent. What we should push for is open contracting, and that could help a lot with governance.”

Certification? Others suggested that we adapt existing audit systems like the LEED certification (which allows engineers and architects to audit whether buildings are actually environmentally sustainable) or the IRB process (external boards that review research to flag ethical issues). “What if there were a team of data scientists and others who could audit ADS and determine the flaws and biases?” suggested one person. “That way the entire thing wouldn’t need to be open, but it could still be audited independently”. This was questioned, however, in that a stamp of approval on a single system could lead people to believe that every system designed by a particular group would pass the test.

Ethical frameworks could be a tool, yet which framework? A recent article cited 84 different ethical frameworks for Artificial Intelligence.

Regulation? Self-regulation has failed, said one person. Why aren’t we talking about actual regulation? The General Data Protection Regulation (GDPR) in Europe has a specific article (Article 22) on automated decision-making, which gives people the right to know when ADS are used to make decisions that affect them, the right to contest decisions made by ADS, and the right to request that humans review ADS decisions.

SPHERE Standards / Core Humanitarian Standard? Because of the legal complexities of working across multiple countries and with different entities in different jurisdictions (including some like the UN who are exempt from the law), an add-on to the SPHERE standards might be considered, said one person. Or something linked to the Core Humanitarian Standard (CHS), which includes a certification process. Donors will often ask whether an agency is CHS certified.

So, is there any good to come from ADS?

We tend to judge ADS with higher standards than we judge humans, said one Salon participant. Loan officers have been making biased decisions for years. How can we apply the standards of impartiality and transparency to both ADS and human decision making? ADS may be able to fix some of our current faulty and biased decisions. This may be useful for large systems, where we can’t afford to deploy humans at scale. Let’s find some potential bright spots for ADS.

Some positive examples shared by participants included:

  • Human rights organizations are using satellite imagery to identify areas that have been burned or otherwise destroyed during conflict. This application of automated decision making doesn’t deal directly with people or the allocation of resources; rather, it supports human rights research.
  • In California, ADS has been used to expunge the records of people convicted for marijuana-related violations now that marijuana has been legalized. This example supports justice and fairness.
  • During Hurricane Irma, an organization in the Virgin Islands used an Excel spreadsheet to track whether people met the criteria for assistance. Aid workers would interview people, and the sheet would calculate automatically whether they were eligible. This was not high tech or sexy, but it was automated and fast. The government created the criteria, and these were openly and transparently communicated to people ahead of time, so that if they didn’t receive benefits, they were clear about why.
  • Flood management is an area where there is a lot of data and forecasting. Governments have been using ADS to evacuate people before it’s too late. This sector can gain in efficiency with ADS, which could be expanded to other weather-based hazards. Because it is a straightforward use case that involves satellites and less personal data it may be a less political space, making deployment easier.
  • Drones also use ADS to stitch together hundreds of thousands of photos to create large images of geographical areas. Though drone data still needs to be ground truthed, it is less of an ethical minefield than when personal or household level data is collected, said one participant. Other participants, however, had issues with the portrayal of drones as less of an ethical minefield, citing surveillance, privacy, and challenges with the ownership and governance of the final knowledge product, the data for which was likely collected without people’s consent.

How can the humanitarian sector prepare for ADS?

In conclusion, one participant summed up that decision making has always been around. As ADS is explored more in-depth with groups like the one at this Salon and as we delve into the ethics and improve on ADS, there is great potential. ADS will probably never totally replace humans but can supplement humans to make better decisions.

How are we in the humanitarian sector preparing people at all levels of the system to engage with these systems, design them ethically, reduce harm, and make them more transparent? How are we working to build capacities at the local level to understand and use ADS? How are we figuring out ways to ensure that the populations who will be affected by ADS are aware of what is happening? How are we ensuring recourse and redress in the case of bad decisions or bias? What jobs might be created (rather than eliminated) with the introduction of more ADS?

ADS are not going to go away, and the humanitarian sector doesn’t have to wait until they are perfected to get involved in shaping and improving them so that they support our work in ethical and useful ways rather than in harmful or unethical ways.

Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

 


Our Technology Salon on Digital ID (“Will Digital Identities Support or Control Us”) took place at the OSF offices on June 3 with lead discussants Savita Bailur and Emrys Schoemaker from Caribou Digital and Aiden Slavin from ID2020.

In general, Salon Participants noted the potential positives of digital ID, such as improved access to services, better service delivery, accountability, and better tracking of beneficiaries. However, they shared concerns about potential negative impacts, such as surveillance and discrimination, disregard for human rights and privacy, lack of trust in government and others running digital ID systems, harm to marginalized communities, lack of policy and ethical frameworks, complexities of digital ID systems and their associated technological requirements, and low capacity within NGOs to protect data and to deal with unintended consequences.

What do we mean by digital identity (digital ID)?

Arriving at a basic definition of digital ID is difficult due to its interrelated aspects. To begin with: What is identity? A social identity arises from a deep sense of who we are and where we come from. A person’s social identity is a critical part of how they experience an ID system. Analog ID systems have been around for a very long time and digitized versions build on them.

The three categories below (developed by Omidyar) are used by some to differentiate among types of ID systems:

  • Issued ID includes state- or nationally issued identification like birth certificates, driver’s licenses, and systems such as India’s biometric ID system (Aadhaar), built on existing analog models of ID systems and controlled by institutions.
  • De facto ID is an emerging category of ID that is formed through data trails that people leave behind when using digital devices, including credit scoring based on mobile phone use or social media use. De facto ID is somewhat outside of an individual’s control, as it is often based on analysis of passive data for which individuals have not given consent to collect or use in this way. De facto ID also includes situations where refugees are tracked via call detail records (CDRs). De facto ID is a new and complex way of being identified and categorized.
  • Self-asserted ID is linked to the decentralization of ID systems. It is based on possessing forms of ID that prove who we are and that we manage ourselves. A related term is self-managed ID, which recognizes that no ID is truly “self-asserted,” because our identity is relational and always relies on others recognizing us as who we are and who we believe ourselves to be.

(Also see this glossary of Digital ID definitions.)

As re-identification technologies become more and more sophisticated, the line between de facto and officially issued IDs is blurring, noted one participant. Others said they prefer the broader umbrella term “Identity in the Digital Age” to cover the various angles.

Who is digital ID benefiting?

Salon participants tended to think that digital ID is mainly of interest to institutions. Most IDs are developed, designed, managed, and issued by institutions, so the interests baked into the design of an ID system are theirs. Institutions tend to be excited about digital ID systems because they are interoperable and help with beneficiary management, financial records, entry/exit across borders, and the like.

This very interoperability, however, is what raises privacy, vulnerability, and data protection issues. Some of the most cutting-edge Digital ID systems are being tested on some of the most vulnerable populations in the world: refugees in Jordan, Uganda, Lebanon, and Myanmar. These digital ID systems have created massive databases for analysis; the UNHCR’s proGres database, for example, has 80 million records.

This brings with it a huge responsibility to protect. It also raises questions about the “one ID system to rule them all” idea. On the one hand, a single system can offer managerial control, reduce fraud, and improve tracking. Yet, as one person said, “what a horrifying prospect that an institution can have this much control! Should we instead be supporting one thousand ID systems to bloom?”

Can we trust institutions and governments to manage digital ID Systems?

One of the institutions positioning itself as the leader in Digital ID is the World Food Program (WFP). As one participant highlighted, this is an agency that has come under strong criticism for its partnership with Palantir and a lack of transparency around where data goes and who can access it. These kinds of partnerships can generate seismic downstream effects that affect trust in the entire sector. “This has caused a lot of angst in the sector. The WFP wants to have the single system to rule them all, whereas many of us would rather see an interoperable ecosystem.” Some organizations consider their own large-scale systems to have more rigorous privacy, security, and informed consent measures than the WFP’s SCOPE system.

Trust is a critical component of a Digital ID system. The Estonian model, for example, offers visibility into which state departments are accessing a person’s data and when, which builds citizens’ trust in the system. Some Salon participants expressed concern over their own country governments running a Digital ID system. “In my country, we don’t trust institutions because we have a failed state,” said one person, “so people would never want the government to have their information in that way.” Another person said that in his country, the government is known for its corruption, and the idea that the government could manage an ID system with any kind of data integrity was laughable. “If these systems are not monitored or governed properly, they can be used to target certain segments of the population for outright repression. People do want greater financial inclusion, for example, but these ID systems can be easily weaponized and used against us.”

Fear and mistrust in digital ID systems is not universal, however. One Salon participant said that their research in Indonesia found that a digital ID was seen to be part of being a “good citizen,” even if local government was not entirely trusted. A Salon participant from China reported that in her experience, the digital ID system there has not been questioned much by citizens. Rather, it is seen as a convenient way for people to learn about new government policies and to carry out essential transactions more quickly.

What about data integrity and redress?

One big challenge with digital ID systems as they are currently managed is that there is very little attention to redress. “How do you fix errors in information? Where are the complaints mechanisms?” asked one participant. “We think of digital systems as being really flexible, but they are really hard to clean out,” said another. “You get all these faulty data crumbs that stick around. And they seem so far removed from the user. How do people get data errors fixed? No one cares about the integrity of the system. No one cares but you if your ID information is not correct. There is really very little incentive to address discrepancies and provide redress mechanisms.”

Another challenge is the integrity of the data that goes into the system. In some countries, people go back to their villages to get a birth certificate, a point at which data integrity can suffer due to faulty information or bribes, among other things. In one case, researchers spoke to a woman who changed her religion on her birth certificate, thinking it would save her from discrimination when she moved to a new town. In another case, the village chief made a woman change her name to a Muslim name on her birth certificate because the village was majority Muslim. There are power dynamics at the local level that can challenge the integrity of the ID system.

Do digital ID systems improve the lives of women and children?

There is a long-standing issue in many parts of the world with children not having a birth certificate, said one Salon discussant. “If you don’t have a legal ID, technically you don’t exist, so that first credential is really important.” As could probably be expected, however, fewer females than males have legal ID.

In a three-country research project, the men interviewed thought that women did not need ID as much as men did. However, when talking with women it was clear that they are the ones dealing with hospitals, schools, and other institutions that require ID. The study found that in Bangladesh, when women did have ID, it was commonly held and controlled by their husbands. In one case study, a woman wanted to sign up as a cook for an online cooking service, but she needed an ID to do so. She had to ask her husband for the ID, explain what she needed it for, and get his permission in order to join the cooking service. In another, a woman wanted to provide beauty care services through an online app. She needed to produce her national ID and two photos to join the app and to create a bKash mobile money account. Her husband did not want her to have a bKash account, so she had to provide his account details, meaning that all of her earnings went to her husband (see more here on how ID helps women access work). In India, a woman wanted to escape her husband, so she moved from the countryside to Bangalore to work as a maid. Her in-laws retained all of her ID, and so she had to rely on her brother to set up everything for her in Bangalore.

Another Salon participant explained that in India, too, micro-finance institutions had imposed a regulation that when a woman registered to be part of a project, she had to provide the name of a male family member to qualify her identity. When it was time to repay the loan, or if a woman missed a payment, her brother or husband would then receive a text about it. The question is how to create trust-based systems that do not reinforce patriarchal values, and in which individuals are clear about and have control over how their information is shared.

“ID is embedded in your relationships and networks,” it was explained. “It creates a new set of dependencies and problems that we need to consider.” In order to understand the nuances in how ID and digital ID are impacting people, we need more of these micro-level stories. “What is actually happening? What does it mean when you become more identifiable?”

Is it OK to use digital ID systems for social control and social accountability? 

The Chinese social credit system, according to one Salon participant, includes a social control function. “If you have not repaid a loan, you are banned from purchasing a first-class air ticket or from checking into expensive hotels.” An application used in Nairobi called Tala also includes a social accountability function, explained another participant. “Tala is a social credit scoring app that gives small loans. You download an app with all your contacts, and it works out via algorithms if you are credit-worthy. If you are, you can get a small loan. If you stop paying your loans, however, Tala alerts everyone in your contact list. In this way, the app has digitized a social accountability function.”

The initial reaction from Salon Participants was shock, but it was pointed out that traditional Village Savings and Loans Associations (VSLAs) function the same way – through social sanction. “The difference here is transparency and consent,” it was noted. “In a community you might not have choice about whether everyone knows you defaulted on your small loan. But you are aware that this is what will happen. With Tala, people didn’t realize that the app had access to their contacts and that it would alert those contacts, so consent and transparency are the issues.”

The principle of informed consent in the humanitarian space poses a constant challenge. “Does a refugee who registers with UNHCR really have any choice? If they need food and have to provide minimal information to get it, is that consent? What if they have zero digital literacy?” Researcher Helen Nissenbaum, it was noted, has written that consent is problematic and that we should not pursue it. “It’s not really about individual consent. It’s about how we set standards and ensure transparency and accountability for how an individual’s information is used,” explained one Salon participant.

These challenges with data use and consent need to be considered beyond just individual privacy, however, as another participant noted. “There is all manner of vector-based data in the WFP’s system. Other agencies don’t have this kind of disaggregated data at the village level or lower. What happens if Palantir, via the WFP, is the first company in the world to have that low level disaggregation? And what happens with the digital ID of particularly vulnerable groups of people such as refugee communities or LGBTQI communities? How could these Digital IDs be used to discriminate or harm entire groups of people? What does it mean if a particular category or tag like ‘refugee’ or ‘low income’ follows you around forever?”

One Salon participant said that in Jordanian camps, refugees would register for one thing and be surprised at how their data then automatically popped up on the screen of a different partner organization. Other participants expressed concerns about how Digital ID systems and their implications could be explained to people with less digital experience or digital literacy. “Since the GDPR came into force, people have the right to an explanation if they are subject to an automated decision,” noted one person. “But what does compliance look like? How would anyone ever understand what is going on?” This will become increasingly complex as technology advances and we begin to see things like digital phenotyping being used to serve up digital content or determine our benefits.

Can we please have better standards, regulations and incentives?

A final question raised about Digital ID systems was who should be implementing and managing them: UN agencies? Governments? Private Sector? Start-ups? At the moment the ecosystem includes all sorts of actors and feels a bit “Wild Wild West” due to insufficient control and regulation. At the same time, there are fears (as noted above) about a “one system to rule them all approach.” “So,” asked one person, “what should we be doing then? Should UN agencies be building in-house expertise? Should we be partnering better with the private sector? We debate this all the time internally and we can never agree.” Questions also remain about what happens with the biometric and other data that failed start-ups or discontinued digital ID systems hold. And is it a good idea to support government-controlled ID systems in countries with corrupt or failed governments, or those who will use these systems to persecute or exercise undue control over their populations?

As one person asked, “Why are we doing this? Why are we even creating these digital ID systems?”

Although there are huge concerns about Digital ID, the flip side is that a Digital ID system could potentially offer better security for sensitive information, at least in the case of humanitarian organizations. “Most organizations currently handle massive amounts of data in Excel sheets and Google docs with zero security,” said one person. “There is PII [personally identifiable information] flowing left, right, and center.” Where donors have required better data management standards, there has been improvement, but it requires massive investment, and who will pay for it? Sadly, donors are currently not covering these costs. As a representative from one large INGO explained, “we want to avoid the use of Excel to track this stuff. We are hoping that our digital ID system will be more secure. We see this as a very good idea if you can nail down the security aspects.”

The EU’s General Data Protection Regulation (GDPR) is often quoted as the “gold standard,” yet implementation is complex and the GDPR is not specific enough, according to some Salon participants. Not to mention, “if you are UN, you don’t have to follow GDPR.” Many at the Salon felt that the GDPR has had very positive effects but called out the lack of incentive structures that would encourage full adoption. “No one does anything unless there is an enforcing function.” Others felt that the GDPR was too prescriptive about what to do, rather than setting limits on what not to do.

One effort to watch is the Pan-Canadian Trust Framework, mentioned as a good example of creating a functioning and decentralized ecosystem that could potentially address some of the above challenges.

The Salon ended with more questions than answers, however there is plenty of research and conversation happening about digital ID and a wide range of actors engaging with the topic. If you’d like to read more, check out this list of resources that we put together for the Salon and add any missing documents, articles, links and resources!

Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

At our April Technology Salon we discussed the evidence and good practice base for blockchain and Distributed Ledger Technologies (DLTs) in the humanitarian sector. Our discussants were Larissa Fast (co-author with Giulio Coppi of the Global Alliance for Humanitarian Innovation/GAHI’s report on Humanitarian Blockchain, Senior Lecturer at HCRI, University of Manchester and Research Associate at the Humanitarian Policy Group) and Ariana Fowler (UNICEF Blockchain Strategist).

Though blockchain fans suggest DLTs can address common problems of humanitarian organizations, the extreme hype cycle has many skeptics who believe that blockchain and DLTs are simply overblown and for the most part useless for the sector. Until recently, evidence on the utility of blockchain/DLTs in the humanitarian sector has been slim to none, with some calling for the sector to step back and establish a measured approach and a learning agenda in order to determine if blockchain is worth spending time on. Others argue that evaluators misunderstand what to evaluate and how.

The GAHI report provides an excellent overview of blockchain and DLTs in the sector along with recommendations at the project, policy and system levels to address the challenges that would need to be overcome before DLTs can be ethically, safely, appropriately and effectively scaled in humanitarian contexts.

What’s blockchain? What’s a DLT?

We started with a basic explanation of DLTs and Blockchain and how they work. (See page 5 of the GAHI report for more detail).
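
For readers who want a concrete picture, the sketch below is a generic illustration (not drawn from the GAHI report or modeled on any specific humanitarian system) of the append-only, hash-linked ledger idea at the core of blockchains and DLTs, and of why a record is hard to alter once added:

```python
import hashlib
import json

def make_block(data: dict, prev_hash: str) -> dict:
    """Create a block that commits to its data and to the previous block's hash."""
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return {"data": data, "prev_hash": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

chain = [make_block({"tx": "genesis"}, prev_hash="0" * 64)]
chain.append(make_block({"tx": "transfer 100 units to partner A"}, chain[-1]["hash"]))
chain.append(make_block({"tx": "transfer 50 units to partner B"}, chain[-1]["hash"]))

def verify(chain) -> bool:
    """Re-compute each block's hash and check the links between blocks."""
    for prev, block in zip(chain, chain[1:]):
        recomputed = make_block(block["data"], block["prev_hash"])["hash"]
        if block["prev_hash"] != prev["hash"] or block["hash"] != recomputed:
            return False
    return True

print(verify(chain))                                  # True
chain[1]["data"]["tx"] = "transfer 1000 units to A"   # tamper with an old record
print(verify(chain))                                  # False: tampering is detectable
```

In a public chain, many independent nodes hold and verify copies of a ledger like this; in the private chains discussed below, only a handful of controlling entities do.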

The GAHI report aimed to get beyond the potential of Blockchain and DLTs to actual use cases — however, in the humanitarian sector there is still more potential than evidence. Although there were multiple use cases to choose from, the report authors chose to go in-depth on five, selected to provide a sense of the different ways that blockchain is specifically being used in the sector.

These use cases all currently have limited “nodes” (i.e., places where the data is stored) and only a few “controlling entities” (which determine what information is stored or put on the chain). They are all “private” (as opposed to public) blockchains, meaning they are not taking advantage of the DLT potential for dispersed information, and they end up being more like “a very expensive database.”

What’s the deal with private vs public blockchains?

Private versus public blockchains are an ideological sticking point in “deep blockchain culture,” noted one Salon participant. “’Cryptobros’ and blockchain fundamentalists think private blockchains are the Antichrist.” Private blockchains are considered an oxymoron and completely antithetical to the idea of blockchain.

So why are humanitarian organizations creating private blockchains? “They are being cautious about protecting data as they test out blockchain and DLTs. It’s a conscious choice to proceed in a controlled way, because once information is on the blockchain, it’s immutable — it cannot be removed.” When first trying out a DLT or blockchain, “Humanitarians tend to be cautious. They don’t want to play with the permanency of a public blockchain since they are working with vulnerable populations.”

Because of the blockchain hype cycle, however, there is some skepticism about organizations using private blockchains. “Are they setting up a private blockchain with one node so that they can say that they’re using blockchain just to get funding?”

An issue with private blockchains is that they are not open and transparent. The code is developed behind closed doors, meaning that it’s difficult to make it interoperable, whereas “with a public chain, you can check the code and interact with it.”

Does the humanitarian sector have the capacity to use blockchain?

As one person pointed out, knowledge and capacity around blockchain in the humanitarian sector is very low. There are currently very few people who understand both humanitarian work and the private sector/technology side of blockchain. “We desperately need intermediaries because people in the two sectors talk past each other. They use the same words to mean very different things, and this leads to misunderstandings.” This is a perpetual issue in the “humanitarian tech” space, and it often leads to applications that are not in the best interest of those on the receiving end of humanitarian work.

Capacity challenges also come up with regard to managing partnerships that involve intellectual property. When cooperating with the private sector, organizations are normally required to sign an MOU that gives rights to the company, and often humanitarian agencies do not fully understand what they are signing up for. This can mean that the company uses the humanitarian collaboration to develop technologies that are later used in ways the humanitarian agency considers unethical or disturbing. Having technology or blockchain expertise within an organization makes it possible to better negotiate these situations, but often only the larger INGOs can afford that type of expertise. Similarly, organizations lack expertise in the legal and regulatory space with regard to blockchain.

How will blockchain become locally owned? Should we wait for a user-friendly version?

Technology moves extremely fast, and organizations need a certain level of capacity to create it and maintain it. “I’m an engineer working in the humanitarian space,” said one Salon participant. “Blockchain is such a complex software solution that I’m very skeptical it will ever be at a stage where it could be locally owned and managed. Even with super basic SMS-based services we have maintenance issues and challenges handing off the tech. If in this room we are struggling to understand blockchain, how will this ever work in lower tech and lower resource areas?” Another participant asked a similar question with regard to handing off a blockchain solution to a local government.

Does the sector need to wait for a simplified and “user friendly” version of blockchain before humanitarians get into the space? Some said yes, but other participants said that the technology is moving quickly and that it is critical for humanitarians to “get in there” to try to slow it down. “Sometimes blockchain is not the solution. Sometimes a database is just fine. We need people to pump the brakes before things get out of control.”

“How can people learn about blockchain? How could a grassroots organization begin to set one up?” asked one person. There is currently no “Squarespace for blockchain,” and the technology remains complicated, but those with a strong drive could learn, according to one person. But although “coders might be able to teach themselves ‘light blockchain,’ there is definitely a barrier to entry.” This is a challenge with the whole area of blockchain: “It skipped the education step. We need a ‘learning revolution’ if we want people to actually use it.”

Enabling environments for learning to use blockchain don’t exist in conflict zones. The knowledge is held by a few individuals, and this makes long-term support and maintenance of DLT and blockchain systems very difficult. How to localize and own the knowledge? How to ensure sustainability? The sector needs to think about what the “Blockchain 101” is. There needs to be more accompaniment, investment and support for the enabling environment if blockchain is to be useful and sustainable in the sector.

Are there any examples of humanitarian blockchain that are working?

The GAHI report talks about five cases in particular. Disberse was highlighted by one Salon participant as an example that seems to be working. Disberse is a private fin-tech company that uses blockchain, but it was started by former humanitarians. “This example works in part because there is a sense of commitment to the humanitarian sector alongside the technical expertise.”

In general, in the humanitarian space, the place where blockchain/ DLTs appear to be the most effective is in back-end use cases. In other words, blockchain is helpful for making behind-the-scenes transactions in humanitarian assistance more efficient. It can eliminate bank transaction fees, and this leads to savings. Agencies can also use blockchain to create efficiencies and benefits for record keeping and auditability. This situation is not unique to blockchain. A recent DIAL baseline study of the global ICT4D ecosystem also found that in the social sector, the main benefits of ICTs were going to organizations, not to vulnerable populations.

“This is all fine,” according to one Salon participant, “but one must be clear that the benefits accrue to the agencies, not the ‘beneficiaries,’ who may not even know that DLTs are being used.” On the one hand, having a seamless back end built on blockchain, where users don’t even know that blockchain is involved, sounds ideal. However, this can be somewhat problematic. “Are agencies getting meaningful and responsible consent for using blockchain? If executives don’t even understand what the blockchain is, how do you explain that to people more generally?”

Because there is no simple, accessible way of developing blockchain solutions and there are not many user-friendly interfaces for the general population, for at least the next few years humanitarian applications of blockchain will likely only be useful for back-office operations. This means that it is up to humanitarian organizations to re-invest any money saved through blockchain into program funding, so that “beneficiaries” are the ones accruing the benefits.

What other “social” use cases are there for blockchain?

In the wider social and development sectors, there are plenty of potential use cases, but again, very little documented evidence of their short- and long-term impacts. (Author’s note: I am not talking about financial and private sector use cases; I’m referring very specifically to the social sectors and the international development and humanitarian sector.) For example, Oxfam is tracing supply chains of rice; however, this is a one-off pilot and it’s unclear whether it can scale. IBM has a variety of supply chain examples. Land registries and sustainable fishing are also being explored, as are digital ID, birth registration, and civil registries.

According to one Salon participant, “supply chain is the low-hanging fruit of blockchain – just recording something, tracking it, and referencing it. It’s all basically a ledger, a spreadsheet. Even digital ID – it’s a supply chain of movement. Provenance is a good way to use a blockchain solution.” Other areas where blockchain is said to have potential include situations where election transparency is needed and “smart contracts,” where contracts are complex and there is a lack of trust amongst the parties. In general, where there is a recurring need for anonymized, disaggregated data, blockchain could be a solution.

The important thing, however, is having a very clear definition of the problem before deciding that blockchain is the solution. “A lot of times people don’t know what their problem is, and the problem is not one that can be fixed with blockchain.” Additionally, accuracy (“garbage in, garbage out”) remains a problem that blockchain on its own cannot solve. “If the off-chain process isn’t accurate, you have a separate problem to figure out before thinking about blockchain: if you’re looking at human rights abuses of migrant workers but everything is being fudged, if your supply chain is blurry, or if the information being put on the blockchain is not verified, blockchain won’t fix that.”

What about ethics and consent and the Digital Principles?

Are the Digital Principles being used as a way to guide ethical, responsible, and sustainable blockchain use in the humanitarian space? asked one Salon participant. The general impression in the room was no. “Deep crypto in the private sector is a black hole in the blockchain space,” according to one person, and the gap between the world of blockchain in the private sector and the world of blockchain in the humanitarian sector is huge. (See this write-up for a taste of one segment of the crypto-world.) “The majority of private sector blockchain enthusiasts who are working on humanitarian issues have not heard of any principles. They are operating with no principles, and sometimes it’s largely for PR, because the blockchain hype cycle means they will get a lot of good press from it. You get someone who read an article in Vice about a problem in a place they’ve never heard of, and they decide that blockchain is the solution…. They are often re-inventing the wheel, and fire, and also electricity — they think that no one has ever thought about this problem before.”

Most in the room considered that this type of uninformed application of blockchain is irresponsible, and that these parallel worlds and conversations need to come together. “The humanitarian space has decades of experience with things that have been tried and haven’t worked – but people on the tech side think no one has ever tried solving these problems. We need to improve the dialogue and communication. There is a wealth of knowledge to share, and a huge learning curve on both sides.”

Additionally, one Salon participant pointed out the importance of bringing ethics into the discussion. “It’s not about just using a blockchain. It’s about what the problem is that you’re trying to solve, and does blockchain help address that problem? There are a lot of problems that blockchain is not appropriate for. Do you have the technical capacity or an accessible online environment? That’s important.”

On top of that, “it’s important for people to know that their information is being used in a particular way by a particular technology. We need to grapple with that, or we end up experimenting on people who are already marginalized or vulnerable to begin with. How do we do that? It’s like the Facebook moment. That same thing for blockchain – if you don’t know what’s going on and how your information is being used, it’s problematic.”

A third point is the massive environmental cost of a public blockchain. Currently, the computing power used to verify and validate transactions on public chains is immense, and that is part of the ethical challenge related to blockchain. “You can’t get around the massive environmental aspect. And that makes it ironic for blockchain to be used to track carbon offsets.” (Note: there are blockchain companies that say they are working on reducing the environmental impact of blockchain, with “pilots coming very soon,” but it remains to be seen whether this is true or whether it’s another part of the hype cycle.)

What should donors be doing?

In addition to taking into consideration the ethical, intellectual property, environmental, sustainability, ownership, and consent aspects mentioned above and being guided by the Digital Principles, it was suggested that donors make sure they do their homework and conduct thorough due diligence on potential partners and grantees. “The vetting process needs to be heightened with blockchain because of all the hype around it. Companies come and go. They are here one day and disappear the next.” There was deep suspicion in the room because of the many blockchain outfits that are hyped up and do not actually have the staff to truly do blockchain for humanitarian purposes and use this angle just to get investments.

“Before investing, it would be important to talk with someone like Larissa [our lead discussant] who has done vetting,” said one Salon participant. “Don’t fall for the marketing. Do a lot of due diligence and demand evidence. Show us the evidence or we’re not funding you. If you’re saying you want to work with a vulnerable or marginalized population, do you have contact with them right now? Do you know them right now? Or did you just read about them in Vice?”

Recommendations outlined in the GAHI report include providing multi-year financing to humanitarian organizations to allow for the possibility of scaling, and asking for interoperability requirements and guidelines around transparency to be met so that there are not multiple silos governing the sector.

So, are we there yet?

Nope. But at least we’re starting to talk about evidence and learning!

Resources

In addition to the GAHI report, the following resources may be useful:

Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

In the search for evidence of impact, donors and investors are asking that more and more data be generated by grantees and those they serve. Some of those driving this conversation talk about the “opportunity cost” of not collecting, opening and sharing as much data as possible. Yet we need to also talk about the real and tangible risks of data collecting and sharing and the long-term impacts of reduced data privacy and security rights, especially for the vulnerable individuals and groups with whom we work.

This week I’m at the Global Philanthropy Forum Conference in the heart of Silicon Valley speaking on a panel titled “Civil Liberties and Data Philanthropy: When NOT to Ask for More.” It’s often donor requests for innovation or for proof of impact that push implementors to collect more and more data. So donors and investors have a critical role to play in encouraging greater respect and protection of the data of vulnerable individuals and groups. Philanthropists, grantees, and investees can all help to reduce these risks by bringing a values-based responsible data approach to their work.

Here are three suggestions for philanthropists on how to contribute to more responsible data management:

1) Enhance your own awareness and expertise on the potential benefits and harms associated with data. 

  • Adopt processes that take a closer look at the possible risks and harms of collecting and holding data and how to mitigate them. Ensure those aspects are reviewed and considered during investments and grant making.
  • Conduct risk-benefits-harms assessments early in the program design and/or grant decision-making processes. This type of assessment helps lay out the benefits of collecting and using data, identifies the data-related harms we might be enabling, and asks us to determine how we are intentionally mitigating harm during the design of our data collection, use and sharing. Importantly, this process also asks us to identify who is benefiting from data collection and who is taking on the burden of risk. It then aims to assess whether the benefits of having data outweigh the potential harms. Risks-benefits-harms assessments also help us to ensure we are doing a contextual assessment, which is important because every situation is different. When these assessments are done in a participatory way, they tend to be even more useful and accurate ways to reduce risks in data collection and management. (A rough sketch of what such an assessment might look like as a structured record follows this list.)
  • Hire people within your teams who can help provide technical support to grantees when needed in a friendly — not a punitive — way. Building in a ‘data responsibility by design’ approach can help with that. We need to think about the role of data during the early stages of design. What data is collected? Why? How? By and from whom? What are the potential benefits, risks, and harms of gathering, holding, using and sharing that data? How can we reduce the amount of data that we collect and mitigate potential harms?
  • Be careful with data on your grantees. If you are working with organizations who (because of the nature of their mission) are at risk themselves, it’s imperative that you protect their privacy and don’t expose them to harm by collecting too much data from them or about them. Here’s a good guide for human rights donors on protecting sensitive data.
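
Below is that rough sketch: one possible, purely illustrative way (shown in Python) to capture a risk-benefits-harms assessment as a structured record that can travel alongside a grant decision. The field names, the example program, and the simple “open questions” check are assumptions made for illustration, not a sector standard.

```python
# Illustrative only: a structured way to record a risk-benefits-harms
# assessment so it can be reviewed during grant decision-making.
# Field names and the review rule are assumptions, not a sector standard.
from dataclasses import dataclass
from typing import List

@dataclass
class DataRiskAssessment:
    program: str
    data_collected: List[str]     # what data, and from whom
    purpose: str                  # why the data is needed
    who_benefits: List[str]       # who gains from collection and use
    who_bears_risk: List[str]     # who carries the burden of risk
    potential_harms: List[str]    # harms the data could enable
    mitigations: List[str]        # how harms are intentionally mitigated
    context_notes: str = ""       # every situation is different

    def open_questions(self) -> List[str]:
        """Flag gaps a reviewer should resolve before approving."""
        questions = []
        if len(self.mitigations) < len(self.potential_harms):
            questions.append("Some identified harms have no stated mitigation.")
        if not self.who_bears_risk:
            questions.append("Who carries the risk has not been identified.")
        return questions

# Example use during a (hypothetical) grant review:
assessment = DataRiskAssessment(
    program="Cash transfer pilot",
    data_collected=["names", "phone numbers", "household size"],
    purpose="Verify eligibility and deliver payments",
    who_benefits=["implementing agency", "donor reporting"],
    who_bears_risk=["registered households"],
    potential_harms=["re-identification if the list leaks", "exclusion errors"],
    mitigations=["collect phone numbers only where payment requires them"],
)
for question in assessment.open_questions():
    print(question)
```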

2) Use your power and influence to encourage grantees and investees to handle data more responsibly. If donors are going to push for more data collection, they should also be signaling to grantees and investees that responsible data management matters and encouraging them to think about it in proposals and more broadly in their work.

  • Strengthen grantee capacity as part of the process of raising data management standards. Lower-resourced organizations may not be able to meet higher data privacy requirements, so donors should think about how they can support rather than exclude smaller organizations with less capacity as we all work together to raise data management standards.
  • Invest holistically in both grants and grantees. This starts by understanding grantees’ operational, resource, and technical constraints as well as the real security risks posed to grantee staff, data collectors, and data subjects. For this to work, donors need to create genuinely safe spaces for grantees to voice their concerns and discuss constraints that may limit their ability to safely collect the data that donors are demanding.
  • Invest in grantees’ IT and other systems and provide operational funds that enable these systems to work. There is never enough funding for IT systems, and this puts the data of vulnerable people and groups at risk. One reason that organizations struggle to fund systems and improve data management is that they can’t bill overhead. Perverse incentives prevent investments in responsible data. Donors can work through this and help find solutions.
  • Don’t punish organizations that include budget for better data use, protection and security in their proposals. It takes money and staff and systems to manage data in secure ways. Yet stories abound in the sector about proposals that include these elements being rejected because they turn out to be more expensive. It’s critical to remember that safeguarding of all kinds takes resources!
  • Find out what kind of technical or systems support grantees/investees need to better uphold ethical data use and protection and explore ways that you can provide additional funds and resources to strengthen this area in those grantees and across the wider sector.
  • Remember that we are talking about long-term organizational behavior change. It is urgent to get moving on improving how we all handle data — but this will take some time. It’s not a quick fix because the skills are in short supply and high demand right now as a result of the GDPR and related laws that are emerging in other countries around the world.
  • Don’t ask grantees to collect data that might make vulnerable individuals or groups wary of them. Data is an extension of an individual. Trust in how an organization collects and manages an individual’s data leads to trust in the organization itself. Organizations need to be trusted in order to do their work, and collection of highly sensitive data, misuse of data, or a data breach can really break that trust compact and reduce an organization’s impact.

3) Think about the responsibility you have for what you do, what you fund, and the type of society that we live in. Support awareness and compliance with new regulations and legislation that can protect privacy. Don’t use “innovation” as an excuse for putting historically marginalized individuals and groups at risk or for allowing our societies to advance in ways that only benefit the wealthiest. Question the current pathway of the “Fourth Industrial Revolution” and where it may take us.

I’m sure I’m leaving out some things. What do you think donors and the wider philanthropic community can do to enhance responsible data management and digital safeguarding?


Read Full Post »

The recently announced World Food Programme (WFP) partnership with Palantir, IRIN’s article about it, reactions from the Responsible Data Forum, and WFP’s resulting statement inspired us to pull together a Technology Salon in New York City to discuss the ethics of humanitarian data sharing.

(See this crowdsourced document for more background on the WFP-Palantir partnership and resources for thinking about the ethics of data sharing. Also here is an overview of WFP’s SCOPE system for beneficiary identification, management and tracking.)

Our lead discussants were: Laura Walker McDonald, Global Alliance for Humanitarian Innovation; Mark Latonero, Research Lead for Data & Human Rights, Data & Society; Nathaniel Raymond, Jackson Institute of Global Affairs, Yale University; and Kareem Elbayar, Partnerships Manager, Centre for Humanitarian Data at the United Nations Office for the Coordination of Humanitarian Affairs. We were graciously hosted by The Gov Lab.

What are the concerns about humanitarian data sharing and with Palantir?

Some of the initial concerns expressed by Salon participants about humanitarian data sharing included: data privacy and the permanence of data; biases in data leading to unwarranted conclusions and assumptions; loss of stakeholder engagement when humanitarians move to big data and techno-centric approaches; low awareness and poor practices across humanitarian organizations on data privacy and security; tensions between security of data and utility of data; validity and reliability of data; lack of clarity about the true purposes of data sharing; the practice of ‘ethics outsourcing’ (testing things in places where there is a perceived ‘lower ethical standard’ and less accountability); use of humanitarian data to target and harm aid recipients; disempowerment and extractive approaches to data; lack of checks and balances for safe and productive data sharing; difficulty of securing meaningful consent; and the links between data and surveillance by malicious actors, governments, the private sector, and military or intelligence agencies.

Palantir’s relationships and work with police, the CIA, ICE, the NSA, the US military and the wider intelligence community are among the main concerns about this partnership. Some ask whether a company can legitimately serve the philanthropy, development, social, human rights and humanitarian sectors while also serving the military and intelligence communities, and whether it is ethical for those in the former to engage in partnerships with companies who serve the latter. Others ask if WFP and others who partner with Palantir are fully aware of the company’s background, and if so, why these partnerships have been able to pass through due diligence processes. Yet others wonder if a company like Palantir can be trusted, given its background.

Below is a summary of the key points of the discussion, which happened on February 28, 2019. (Technology Salons are Chatham House affairs, so I have not attributed quotes in this post.)

Why were we surprised by this partnership/type of partnership?

Our first discussant asked why this partnership was a surprise to many. He emphasized the importance of stakeholder conversations, transparency, and wider engagement in the lead-up to these kinds of partnerships. “And I don’t mean in order to warm critics up to the idea, but rather to create a safe and trusted ecosystem. Feedback and accountability are really key to this.” He also highlighted that humanitarian organizations are not experts in advanced technologies and that it’s normal for them to bring in experts in areas that are not their forte. However, we need to remember that tech companies are not experts in humanitarian work, and we need to put the proper checks and balances in place. Bringing in a range of multidisciplinary expertise and distributed intelligence is necessary in a complex information environment. One possible approach is creating technology advisory boards. Another way to ensure more transparency and accountability is to conduct a human rights impact assessment. The next year will be a major test for these kinds of partnerships, given the growing concerns, he said.

One Salon participant said that the fact that the humanitarian sector engages in partnerships with the private sector is not a surprise at all, as the sector has worked through Public-Private Partnerships (PPPs) for several years now and they can bring huge value. The surprise is that WFP chose Palantir as the partner. “They are not the only option, so why pick them?” Another person shared that the WFP partnership went through a full legal review, and so it was not a surprise to everyone. However, communication around the partnership was not well planned or thought out and the process was not transparent and open. Others pointed out that although a legal review covers some bases, it does not assess the potential negative social impact or risk to ‘beneficiaries.’ For some the biggest surprise was WFP’s own surprise at the pushback on this particular partnership and its unsatisfactory reaction to the concerns raised about it. The response from responsible data advocates and the press attention to the WFP-Palantir partnership might be a turning point for the sector to encourage more awareness of the risks in working with certain types of companies. As many noted, this is not only a problem for WFP, it’s something that plagues the wider sector and needs to be addressed urgently.

Organizations need to think beyond reputational harm and consider harm to beneficiaries

“We spend too much time focusing on avoiding risk to institutions and too little time thinking about how to mitigate risk to beneficiaries,” said one person. WFP, for example, has some of the best policies and procedures out there, yet this partnership still passed their internal test. That is a scary thought, because it implies that other agencies who have weaker policies might be agreeing to even more risky partnerships. Are these policies and risk assessments, then, covering all the different types of risk that need consideration? Many at the Salon felt that due diligence and partnership policies focus almost exclusively on organizational and reputational risk with very little attention to the risk that vulnerable populations might face. It’s not just a question of having policies, however, said one person. “Look at the Oxfam Safeguarding situation. Oxfam had some of the best safeguarding policies, yet there were egregious violations that were not addressed by having a policy. It’s a question of power and how decisions get made, and where decision-making power lies and who is involved and listened to.” (Note: one person contacted me pre-Salon to say that there was pushback by WFP country-level representatives about the Palantir partnership, but that it still went ahead. This brings up the same issue of decision-making power, and who has power to decide on these partnerships and why are voices from the frontlines not being heard? Additionally, are those whose data is captured and put into these large data systems ever consulted about what they think?)

Organizations need to assess wider implications, risks, and unintended negative consequences

It’s not only WFP that is putting information into SCOPE, said one person. “Food insecure people have no choice about whether to provide their data if they wish to receive food.” Thus, the question of truly ‘informed consent’ arises. Implementing partners don’t have a lot of choice either, he said. “Implementing agencies are forced to input beneficiary data into SCOPE if they want to work in particular zones or countries.” This means that WFP’s systems and partnerships have an impact on the entire humanitarian community, and therefore the wider sector needs to be consulted on these partnerships and systems. The optical and reputational impact on organizations other than WFP is significant, as they may disagree with the Palantir partnership but are now associated with it by default. This type of harm goes beyond the fear of exploitation of the data in WFP’s “data lake.” It becomes a risk to personnel on the ground who are then seen as collaborating with a CIA contractor by putting beneficiary biometric data into SCOPE. This can also deter food-insecure people from accessing benefits. Additionally, association with the CIA or US military has led to humanitarian agencies and workers being targeted, attacked and killed. That is all in addition to the question of whether these kinds of partnerships violate humanitarian principles, such as impartiality.

“It’s critical to understand the role of rumor in humanitarian contexts,” said one discussant. “Affected populations are trying to figure out what is happening and there is often a lot of rumor going around.”  So, if Palantir has a reputation for giving data to the CIA, people may hear about that and then be afraid to access services for fear of having their data given to the CIA. This can lead to retaliation against humanitarians and humanitarian organizations and escalate their risk of operating. Risk assessments need to go beyond the typical areas of reputation or financial risk. We also need to think about how these partnerships can affect humanitarian access and community trust and how rumors can have wide ripple effects.

The whole sector needs to put better due diligence systems in place. As it is now, noted one person, often it’s someone who doesn’t know much about data who writes up a short summary of the partnership, and there is limited review. “We’ve been struggling for 10 years to get our offices to use data. Now we’re in a situation where they’re just picking up a bunch of data and handing it over to private companies.”

UN immunities and privileges lead to a lack of accountability

The fact that UN agencies have immunities and privileges means that laws such as the EU’s General Data Protection Regulation (GDPR) do not apply to them and they are left to self-regulate. Additionally, there is no common agreement among UN agencies on how the GDPR applies, and each agency interprets it on its own. As one person noted, “There is a troubling sense of exceptionalism and lack of accountability in some of these agencies because ‘a beneficiary cannot take me to court.’” An interesting point, however, is that while UN agencies are immune, those contracted as their data processors are not — so data processors beware!

Demographically Identifiable Information (DII) can lead to serious group harm

The WFP has stated that personally identifiable information (PII) is not technically accessible to Palantir via this partnership. However, some at the Salon consider that the WFP failed in their statement about the partnership when they used the absence of PII as a defense. Demographically Identifiable Information (DII) and the activity patterns that are visible even in commodity data can be extrapolated as training data for future data modeling. “This is prospective modeling of action-based intelligence patterns as part of multiple screeners of intel,” said one discussant. He went on to explain that privacy discussions have moved from centering on property rights in the 19th Century, to individual rights in the 20th Century, to group rights in the 21st Century. We can use existing laws to emphasize protection of groups and to highlight the risks of DII leading to group harm, he said, as there are well-known cases that exemplify the notion of group harms (Plessy v Ferguson, Brown v Board of Education). Even in logistics data (which is the kind of data that WFP says Palantir will access) that contains no PII, it’s very simple to identify groups. “I can look at supply chain information and tell you where there are lactating mothers. If you don’t want refugees to give birth in the country they have arrived to, this information can be used for targeting.”
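
To make the DII point concrete, here is a minimal sketch of how group-level information can fall out of “just logistics data.” The sites, commodities, and quantities below are entirely invented, and real supply chain data is far messier, but the aggregation step is all it takes:

```python
# Hypothetical commodity-distribution records: no names, no PII,
# yet simple aggregation reveals where a demographic group is concentrated.
from collections import Counter

shipments = [
    {"site": "Camp A", "commodity": "maternal supplementary ration", "units": 240},
    {"site": "Camp A", "commodity": "general food basket", "units": 1800},
    {"site": "Camp B", "commodity": "general food basket", "units": 2100},
    {"site": "Camp B", "commodity": "maternal supplementary ration", "units": 15},
    {"site": "Camp C", "commodity": "maternal supplementary ration", "units": 310},
]

maternal_units = Counter()
for s in shipments:
    if "maternal" in s["commodity"]:
        maternal_units[s["site"]] += s["units"]

# Sites ranked by deliveries targeted at pregnant or lactating women --
# group-level (demographically identifiable) information extracted from
# logistics records that contain no personal data at all.
for site, units in maternal_units.most_common():
    print(site, units)
```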

Many in the sector do not trust a company like Palantir

Though it is not clear who was in the room when WFP made the decision to partner with Palantir, the overall sector has concerns that the people making these decisions are not assessing partnerships from all angles: legal, privacy, programmatic, ethical, data use and management, social, protection, etc. Technologists and humanitarian practitioners are often not included in making these decisions, said one participant. “It’s the people with MBAs. They trust a tech company to say ‘this is secure’ but they don’t have the expertise to actually know that. Not to mention that yes, something might be secure, but maybe it’s not ethical. Senior people are signing off without having a full view. We need a range of skill sets reviewing these kinds of partnerships and investments.”

Another question arises: What happens when there is scope creep? Is Palantir in essence “grooming” the sector to then abuse data it accesses once it’s trusted and “allowed in”? Others pointed out that the grooming has already happened and Palantir is already on the inside. They first began partnering with the sector via the Clinton Global Initiative meetings back in 2013 and they are very active at World Economic Forum meetings. “This is not something coming out of the Trump administration, it was happening long before that,” said one person, and the company is already “in.” Another person said “Palantir lobbied their way into this, and they’ve gotten past the point of reputational challenge.” Palantir has approached many humanitarian agencies, including all the UN agencies, added a third person. Now that they have secured this contract with the WFP, the door to future work with a lot of other agencies is open and this is very concerning.

We’re in a new political economy: data brokerage.

“Humanitarians have lost their Geneva values and embraced Silicon Valley values,” said one discussant. They are becoming data brokers within a colonial data paradigm. “We are making decisions in hierarchies of power, often extralegally,” he said. “We make decisions about other people’s data without their involvement, and we need to be asking: is it humanitarian to commodify beneficiaries’ data for monetary or other reasons of value? When is it ethical to trade beneficiary data for something of value?” Another person raised the issue of incentives. “Where are the incentives stacked? There is no incentive to treat beneficiaries better. All the incentives are on efficiency and scale and attracting donors.”

Can this example push the wider sector to do better?

One participant hoped there could be a net gain out of the WFP-Palantir case. “It’s a bad situation. But it’s a reckoning for the whole space. Most agencies don’t have these checks and balances in place. But people are waking up to it in a serious way. There’s an opportunity to step into. It’s hard inside of bureaucratic organizations, but it’s definitely an opportunity to start doing better.”

Another said that we need more transparency across the sector on these partnerships. “What is our process for evaluating something like this? Let’s just be transparent. We need to get these data partnership policies into the open. WFP could have simply said ‘here is our process’. But they didn’t. We should be working with an open and transparent model.” Overall, there is a serious lack of clarity on what data sharing agreements look like across the sector. One person attending the Salon said that their organization has been trying to understand current practice with regard to data sharing, and it’s been very difficult to get any examples, even redacted ones.

What needs to happen? 

In closing we discussed what needs to happen next. One person noted that in her research on Responsible Data, she found a total lack of capacity in terms of technology at non-profit organizations. “It’s the Economist Syndrome. Someone’s boss reads something on the bus and decides they need a blockchain,” someone quipped. In terms of responsible data approaches, research shows that organizations are completely overwhelmed. “They are keeping silent about their low capacity out of fear they will face consequences,” said one person, “and with GDPR, even more so”. At the wider level, we are still focusing on PII as the issue without considering DII and group rights, and this is a mistake, said another.

Organizations have very low capacity, and we are siloed. “Program officers do not have tech capacity. Tech people are kept in offices or ‘labs’ on their own and there is not a lot of porosity. We need protection advisors, lawyers, digital safety advisors, data protection officers, information management specialists, and IT all around the table for this,” noted one discussant. Also, she said, though we do need principles and standards, it’s important that organizations adapt these so that they are their own principles and standards. “We need to adapt these boilerplate standards to our organizations. This has to happen based on our own organizational values. Not everyone is rights-based, not everyone is humanitarian.” So organizations need to take the time to review and adapt standards, policies and procedures to their own vision and mission, to their own situations, contexts and operations, and to generate awareness and buy-in. In conclusion, she said, “if you are not being responsible with data, you are already violating your existing values and codes. Responsible Data is already in your values; it’s a question of living it.”

Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂


Read Full Post »

Karen Palmer is a digital filmmaker and storyteller from London who’s doing a dual residency at ThoughtWorks in Manhattan and TED New York to further develop a project called RIOT, described as an ‘emotionally responsive, live-action film with 3D sound.’ The film uses artificial intelligence, machine learning, various biometric readings, and facial recognition to take a person through a personalized journey during a dangerous riot.

Karen Palmer, the future of immersive filmmaking, Future of Storytelling (FoST) 

Karen describes RIOT as ‘bespoke film that reflects your reality.’ As you watch the film, the film is also watching you and adapting to your experience of viewing it. Using a series of biometric readings (the team is experimenting with eye tracking, facial recognition, gait analysis, infrared to capture body temperature, and an emerging technology that tracks heart rate by monitoring the capillaries under a person’s eyes) the film shifts and changes. The biometrics and AI create a “choose your own adventure” type of immersive film experience, except that the choice is made by your body’s reactions to different scenarios. A unique aspect of Karen’s work is that the viewer doesn’t need to wear any type of gear for the experience. The idea is to make RIOT as seamless and immersive as possible. Read more about Karen’s ideas and how the film is shaping up in this Fast Company article and follow along with the project on the RIOT project blog.
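
Public descriptions of RIOT don’t spell out how the branching works under the hood, so the following is only a deliberately simplified sketch of the general idea: an inferred emotional state selects the next scene. The emotion labels, scene names, and selection rule are all invented for illustration; the actual pipeline (biometric sensing plus machine-learned emotion detection) is far more involved.

```python
# A deliberately simplified sketch of "the film watches you back":
# an inferred emotional state selects the next scene. The states,
# scene names, and rule are invented, not RIOT's real implementation.

NEXT_SCENE = {
    "calm":    "officer_lets_you_pass",
    "fearful": "crowd_surges_towards_you",
    "angry":   "officer_escalates",
}

def next_scene(emotion_scores: dict) -> str:
    """Pick the branch for the dominant detected emotion."""
    dominant = max(emotion_scores, key=emotion_scores.get)
    return NEXT_SCENE.get(dominant, "neutral_transition")

# e.g. scores coming from a facial-emotion model every few seconds:
print(next_scene({"calm": 0.2, "fearful": 0.7, "angry": 0.1}))
```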

When we talked about her project, the first thing I thought of was “The Feelies” in Aldous Huxley’s 1932 classic ‘Brave New World.’ Yet the feelies were pure escapism, whereas Karen’s work aims to draw people into a challenging experience where they face their own emotions.

On Friday, December 15, I had the opportunity to facilitate a Salon discussion with a number of people from related disciplines who are intrigued by RIOT and the various boundaries it tests and explores. We had perspectives from people working in the areas of digital storytelling and narrative, surveillance and activism, media and entertainment, emotional intelligence, digital and immersive theater, brand experience, 3D sound and immersive audio, agency and representation, conflict mediation and non-state actors, film, artificial intelligence, and interactive design.

Karen has been busy over the past month as interest in the project begins to swell. In mid-November, at Montreal’s Phi Centre’s Lucid Realities exhibit, she spoke about how digital storytelling is involving more and more of our senses, bringing an extra layer of power to the experience. This means that artists and creatives have an added layer of responsibility. (Research suggests, for example, that the brain has trouble distinguishing between virtual reality [VR] and actual reality, and that children under the age of 8 have had problems differentiating between a VR experience and actual memory.)

At a recent TED Talk, Karen described the essence of her work as creating experiences where the participant becomes aware of how their emotions affect the narrative of the film while they are in it, and this helps them to see how their emotions affect the narrative of their life. Can this help to create new neural pathways in the brain, she asks. Can it help a person to see how their own emotions are impacting on them but also how others are reading their emotions and reacting to those emotions in real life?

Race and sexuality are at the forefront in the US – and the Trump election has further heightened tensions. Karen believes it’s ever more important to explore different perspectives and fears in the current context, where the potential for unrest is growing. Karen hopes that RIOT can be ‘your own personal riot training tool – a way to become aware of your own reactions and of moving through your fear.’

Core themes that we discussed on Friday include:

How can we harness the power of emotion? Despite our lives being emotionally hyper-charged (especially right now in the US), we keep using facts and data to try to change hearts and minds. This approach is ineffective. In addition, people are less trusting of third-party sources because of the onslaught of misinformation, disinformation and false information. Can we use storytelling to help us get through this period? Can immersive storytelling and creative use of 3D sound help us to trust more, to engage and to witness? Can it help us to think about how we might react during certain events, like police violence? (See Tahera Aziz’ project [re]locate about the murder of Stephen Lawrence in South London in 1993). Can it help us to better understand various perspectives? The final version of RIOT aims to bring in footage from several angles, such as CCTV from a looted store, a police body cam, and someone’s mobile phone footage shot as they ran past, in an effort to show an array of perspectives that would help viewers see things in different lights.

How do we catch the questions that RIOT stirs up in people’s minds? As someone experiences RIOT, they will have all sorts of emotions and thoughts, and these will depend on their identity and lived experiences. At one showing of RIOT, a young white boy said he learned that if he’s feeling scared he should try to stay calm. He also said that when the cop yelled at him in the film, he assumed that he must have done something wrong. A black teenager might have had an entirely different reaction to the police. RIOT is bringing in scent, haze, 3D sound, and other elements which have started to affect people more profoundly. Some have been moved to tears or said that the film triggered anger and other strong emotions for them.

Does the artist have a responsibility to accompany people through the full emotional experience? In traditional VR experiences, a person waits in line, puts on a VR headset, experiences something profound (and potentially something triggering), then takes off the headset and is rushed out so that the next person can try it. Creators of these new and immersive media experiences are just now becoming fully aware of how to manage the emotional side of the experiences and they don’t yet have a good handle on what their responsibilities are toward those who are going through them. How do we debrief people afterwards? How do we give them space to process what has been triggered? How do we bring people into the co-creation process so that we better understand what it means to tell or experience these stories? The Columbia Digital Storytelling Lab is working on gaining a better understanding of all this and the impact it can have on people.

How do we create the grammar and frameworks for talking about this? The technologies and tactics for this type of digital immersive storytelling are entirely new and untested. Creators are only now becoming more aware of the consequences of the experiences that they are creating: ‘What am I making? Why? How will people go through it? How will they leave? What are the structures and how do I make it safe for them?’ The artist can open someone up to an intense experience, but then they are often just ushered out, reeling, and someone else is rushed in. It’s critical to build time for debriefing into the experience and to have some capacity for managing the emotions and reactions that could be triggered.

SAFE Lab, for example, works with students and the community in Chicago, Harlem, and Brooklyn on youth-driven solutions to de-escalation of violence. The project development starts with the human experience and the tech comes in later. Youth are part of the solution space, but along the way they learn hard and soft skills related to emerging tech. The Lab is testing a debriefing process also. The challenge is that this is a new space for everyone; and creation, testing and documentation are happening simultaneously. Rather than just thinking about a ‘user journey,’ creators need to think about the emotionality of the full experience. This means that as opposed to just doing an immersive film – neuroscience, sociology, behavioral psychology, and lots of other fields and research are included in the dialogue. It’s a convergence of industries and sectors.

What about algorithmic bias? It’s not possible to create an unbiased algorithm, because humans all have bias. Even if you could create an unbiased algorithm, as soon as you started inputting human information into it, it would become biased. Also, as algorithms become more complex, it becomes more and more difficult to understand how they arrive at decisions. This results in black boxes that put out decisions that even the humans who build them can’t understand. The RIOT team is working with Dr. Hongying Meng of Brunel University London, an expert in the creation of facial and emotion detection algorithms, to develop an open source algorithm for RIOT. Even if the algorithm itself isn’t neutral, the process by which it computes will be transparent.

Most algorithms are not open. Because the majority of private companies have financial goals rather than social goals in using or creating algorithms, they have little incentive to be transparent about how an algorithm works or what biases are inherent. Ad agencies want to track how a customer reacts to a product. Facebook wants to generate more ad revenue, so it adjusts what news you see on your feed. The justice system wants to save money and time by using sentencing algorithms. Yet the biases in their algorithms can cause serious harm in multiple ways. (See this 2016 report from ProPublica.) The problem with these commercial algorithms is that they are opaque and the biases in them are not shared. This lack of transparency is considered by some to be more problematic than the bias itself.
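
The ProPublica analysis referenced above essentially compared error rates across demographic groups. A minimal sketch of that kind of audit, using invented records rather than any real dataset, looks something like this:

```python
# Minimal sketch of an error-rate audit of the kind ProPublica ran on
# risk scores: compare false positive rates across groups.
# All records below are invented for illustration.
from collections import defaultdict

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("group_a", True,  False), ("group_a", True,  True),
    ("group_a", False, False), ("group_a", True,  False),
    ("group_b", False, False), ("group_b", True,  True),
    ("group_b", False, False), ("group_b", False, True),
]

false_positives = defaultdict(int)  # flagged high risk but did not reoffend
non_reoffenders = defaultdict(int)  # everyone who did not reoffend

for group, predicted_high, reoffended in records:
    if not reoffended:
        non_reoffenders[group] += 1
        if predicted_high:
            false_positives[group] += 1

# A large gap between groups is the kind of harm that stays hidden
# when the algorithm and its error rates are not shared.
for group in non_reoffenders:
    rate = false_positives[group] / non_reoffenders[group]
    print(group, "false positive rate:", round(rate, 2))
```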

Should there be a greater push for regulation of algorithms? People who work in surveillance are often ignored because they are perceived as paranoid. Yet fears that AI will be totally controlled by the military, the private sector and tech companies in ways that are hidden and opaque are real and it’s imperative to find ways to bring the actual dangers home to people. This could be partly accomplished through narrative and stories. (See John Oliver’s interview with Edward Snowden) Could artists create projects that drive conversations around algorithmic bias, help the public see the risks, and push for greater regulation? (Also of note: the New York City government recently announced that it will start a task force to look more deeply into algorithmic bias).

How is the RIOT team developing its emotion recognition algorithm? The RIOT team is collecting data to feed into the algorithm by capturing facial emotions and labeling them. The challenge is that one person may think someone looks calm, scared, or angry and another person may read it a different way. They are also testing self-reported emotions to reduce bias. The purpose of the RIOT facial detection algorithm is to measure what the person is actually feeling and how others perceive that the person is feeling. For example, how would a police officer read your face? How would a fellow protester see you? The team is developing the algorithm with the specific bias that is needed for the narrative itself. The process will be documented in a peer-reviewed research paper that considers these issues from the angle of state control of citizens. Other angles to explore would be how algorithms and biometrics are used by societies of control and/or by non-state actors such as militia in the Middle East or by right wing and/or white supremacist groups in the US. (See this article on facial recognition tools being used to identify sexual orientation)
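
One common way to handle the labeling disagreement described above is to have several people label the same frame, take the majority label, and track both how much the labelers disagreed and whether they match the person’s self-report. This is only a sketch of that general approach, with invented data; it is not a description of the RIOT team’s actual pipeline:

```python
# Sketch: several annotators label the same face, we take the majority
# label, record the level of agreement, and compare with the self-report.
# The frames and labels are invented for illustration.
from collections import Counter

frames = [
    {"labels": ["calm", "calm", "scared"],   "self_report": "scared"},
    {"labels": ["angry", "angry", "angry"],  "self_report": "angry"},
    {"labels": ["scared", "calm", "angry"],  "self_report": "calm"},
]

for i, frame in enumerate(frames):
    counts = Counter(frame["labels"])
    majority, votes = counts.most_common(1)[0]
    agreement = votes / len(frame["labels"])
    matches_self = majority == frame["self_report"]
    # Frames where annotators disagree, or where observers and the subject
    # disagree, are exactly where perception bias shows up.
    print(f"frame {i}: majority={majority} agreement={agreement:.2f} "
          f"matches_self_report={matches_self}")
```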

Stay tuned to hear more…. We’ll be meeting again in the new year to go more in-depth on topics such as responsibly guiding people through VR experiences; exploring potential unintended consequences of these technologies and experiences, especially for certain racial groups; commercial applications for sensory storytelling and elements of scale; global applications of these technologies; practical development and testing of algorithms; prototyping, ideation and foundational knowledge for algorithm development.

Garry Haywood of Kinicho also wrote up his thoughts from the day.

Read Full Post »

On November 14, Technology Salon NYC met to discuss issues related to the role of film and video in development and humanitarian work. Our lead discussants were Ambika Samarthya from Praekelt.org; Lina Srivastava of CIEL; and Rebekah Stutzman from Digital Green’s DC office.

How does film support aid and development work?

Lina proposed that there are three main reasons for using video, film, and/or immersive media (such as virtual reality or augmented reality) in humanitarian and development work:

  • Raising awareness about an issue or a brand and serving as an entry point or a way to frame further actions.
  • Community-led discussion/participatory media, where people take agency and ownership and express themselves through media.
  • Catalyzing movements themselves, where film, video, and other visual arts are used to feed social movements.

Each of the above is aimed at a different audience. “Raising awareness” often only scratches the surface of an issue and can have limited impact if done on its own without additional actions. Community-led efforts tend to go deeper and focus on the learning and impact of the process (rather than the quality of the end product), but they usually reach fewer people (thus have a higher cost per person and less scale). When using video for catalyzing movements, the goal is normally bringing people into a longer-term advocacy effort.

In all three instances, there are issues with who controls access to tools, platforms, and distribution channels. Though social media has changed this to an extent, there are still gatekeepers who impact who gets to be involved and whose voice/whose story is highlighted, funders who determine which work happens, and algorithms that dictate who will see the end products.

Participants suggested additional ways that video and film are used, including:

  • Social-emotional learning, where video is shown and then discussed to expand on new ideas and habits or to encourage behavior change.
  • Personal transformation through engaging with video.

Becky shared Digital Green’s approach, which is participatory: community members use video to help themselves and those around them. The organization supports community members to film videos about their agricultural practices, and these are then taken to nearby communities to share and discuss. (More on Digital Green here). Video doesn’t solve anyone’s development problem all by itself, Becky emphasized. If an agricultural extensionist is no good, having a video as part of their training materials won’t solve that. “If they have a top-down attitude, don’t engage, don’t answer questions, etc., or if people are not open to changing practices, video or no video, it won’t work.”

How can we improve impact measurement?

Questions arose from Salon participants around how to measure the impact of film in a project or wider effort. Overall, impact measurement in the world of film for development is weak, noted one discussant, because change takes a long time and is hard to track. We are often encouraged to focus on the wrong things, like “vanity measurements” such as “likes” and “clicks,” but these don’t speak to the longer-term and deeper impact of a film, and they are often inappropriate for the actual audience of the films (e.g., are we interested in the impact on the local audience affected by the problem, or on the external audience being encouraged to care about it?).

Digital Green measures behavior change based on uptake of new agriculture practices. “After the agriculture extension worker shows a video to a group, they collect data on everyone that’s there. They record the questions that people ask, the feedback about why they can’t implement a particular practice, and in that way they know who is interested in trying a new practice.” The organization sets indicators for implementing the practice. “The extension worker returns to the community to see if the family has implemented a, b, c and if not, we try to find out why. So we have iterative improvement based on feedback from the video.” The organization does post their videos on YouTube but doesn’t know if the content there is having an impact. “We don’t even try to follow it up as we feel online video is much less relevant to our audience.” An organization that is working with social-emotional learning suggested that RCTs could be done to measure which videos are more effective. Others who work on a more individual or artistic level said that the immediate feedback and reactions from viewers were a way to gauge impact.
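
As a rough illustration of the kind of iterative tracking Digital Green describes, the snippet below computes adoption rates per practice and village from follow-up visit records. The field names and example records are invented; this is not Digital Green’s actual data model.

```python
# Rough sketch of adoption tracking: after a screening, the extension
# worker records follow-up visits and whether each household adopted the
# practice. Records and field names are invented for illustration.
from collections import defaultdict

followups = [
    {"village": "V1", "practice": "line sowing", "adopted": True},
    {"village": "V1", "practice": "line sowing", "adopted": False},
    {"village": "V1", "practice": "line sowing", "adopted": True},
    {"village": "V2", "practice": "line sowing", "adopted": False},
    {"village": "V2", "practice": "azolla feed", "adopted": True},
]

totals = defaultdict(int)
adopted = defaultdict(int)
for visit in followups:
    key = (visit["village"], visit["practice"])
    totals[key] += 1
    adopted[key] += visit["adopted"]

# Low adoption rates point to practices or villages where the follow-up
# conversations ("why can't you implement this?") matter most.
for key in totals:
    print(key, f"adoption rate: {adopted[key] / totals[key]:.0%}")
```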

Donors often have different understandings of useful metrics. “What is a valuable metric? How can we gather it? How much do you want us to spend gathering it?” commented one person. Larger, longer-term partners who are not one-off donors will have a better sense of how to measure impact in reasonable ways. One person who formerly worked at a large public television station noted that it was common to have long conversations about measurement, goals, and aligning to the mission. “But we didn’t go by numbers, we focused on qualitative measurement.” She highlighted the importance of having these conversations with donors and asking them “why are you partnering with us?” Being able to say no to donors is important, she said. “If you are not sharing goals and objectives you shouldn’t be working together. Is gathering these stories a benefit to the community? If you can’t communicate your actual intent, it’s very complicated.”

The goal of participatory video is less about engaging external (international) audiences or branding and advocacy. Rather it focuses on building skills and capacities through the process of video making. Here, the impact measurement is more related to individual, and often self-reported, skills such as confidence, finding your voice, public speaking, teamwork, leadership skills, critical thinking and media literacy. The quality of video production in these cases may be low, and videos unsuitable for widespread circulation, however the process and product can be catalysts for local-level change and locally-led advocacy on themes and topics that are important to the video-makers.

Participatory video suffers from low funding levels because it doesn’t reach the kind of scale that is desired by funders, though it can often contribute to deep, personal and community-level change. Some felt that even if community-created videos were of high production quality and translated to many languages, large-scale distribution is not always feasible because they are developed in and speak to/for hyper-local contexts, thus their relevance can be limited to smaller geographic areas. Expectation management with donors can go a long way towards shifting perspectives and understanding of what constitutes “impact.”

Should we re-think compensation?

Ambika noted that there are often challenges related to incentives and compensation when filming with communities for organizational purposes (such as branding or fundraising). Organizations are usually willing to pay people for their time in places such as New York City, and less inclined to do so when working with a rural community that is perceived to benefit from an organization’s services and projects. Perceptions by community members that a filmmaker is financially benefiting from video work can be hard to overcome, and this means that conflict may arise during non-profit filmmaking aimed at fundraising or building a brand. Even when individuals and communities are aware that they will not be compensated directly, there is still often some type of financial expectation, noted one Salon participant, such as the purchase of local goods and products.

Working closely with gatekeepers and community leaders can help to ease these tensions. When filmmaking takes several hours or days, however, participants may be visibly stressed or concerned about household or economic chores that are falling to the side during filming, and this can be challenging to navigate, noted one media professional. Filming in virtual reality can exacerbate this problem, since VR filming is normally over-programmed and repetitive in an effort to appear realistic.

One person suggested a change in how we approach incentives. “We spent about two years in a community filming a documentary about migration. This was part of a longer research project. We were not able to compensate the community, but we were able to invest directly in some of the local businesses and to raise funds for some community projects.” It’s difficult to understand why we would not compensate people for their time and their stories, she said. “This is basically their intellectual property, and we’re stealing it. We need a sector rethink.” Another person agreed, “in the US everyone gets paid and we have rules and standards for how that happens. We should be developing these for our work elsewhere.”

Participatory video tends to have less of a challenge with compensation. “People see the videos, the videos are for their neighbors. They are sharing good agricultural or nutrition approaches with people that they already know. They sometimes love being in the videos and that is partly its own reward. Helping people around them is also an incentive,” said one person.

There were several other rabbit holes to explore in relation to film and development, so look for more Salons in 2018!

To close out the year right, join us for ICT4Drinks on December 14th at Flatiron Hall from 7-9pm. If you’re signed up for Technology Salon emails, you’ll find the invitation in your inbox!

Salons run under Chatham House Rule so no attribution has been made in this post. If you’d like to attend a future Salon discussion, join the list at Technology Salon.


Read Full Post »

Older Posts »