
On Thursday, September 19, we gathered at the OSF offices for the Technology Salon on “Automated Decision Making in Aid: What could possibly go wrong?” with lead discussants Jon Truong and Elyse Voegeli, two of the creators of Automating NYC; and Genevieve Fried and Varoon Mathur, Fellows at the AI Now Institute at NYU.

To start off, we asked participants whether they were optimistic or skeptical about the role of Automated Decision-making Systems (ADS) in the aid space. The response was mixed: about half skeptics and half optimists, most of whom qualified their optimism as “cautious optimism” or “it depends on who I’m talking to” or “it depends on the day and the headlines” or “if we can get the data, governance, and device standards in place.”

What are ADS?

Our next task was to define ADS. (One reason that the New York City ADS task force was unable to advance is that its members could not agree on the definition of an ADS.)

One discussant explained that NYC’s provisional definition was something akin to:

  • Any system that uses data, algorithms, or computer programs to replace or assist a human decision-making process.

This may seem straightforward, yet, as she explained, “if you go too broad you might include something like ‘spellcheck’ which feels like overkill. On the other hand, spellcheck is a good case for considering how complex things can get. What if spellcheck only recognized Western names? That would be an example of encoding bias into the ADS. However, the degree of harm that could come from spellcheck as compared to using ADS for predictive policing is very different. Defining ADS is complex.”

Another element of the definition is that an ADS involves the computational implementation of an algorithm. Algorithms are basically clear instructions or criteria followed in order to make a decision, and algorithms can be carried out manually; what ADS add is the power of computation, noted another discussant. Perhaps a computer and a complex system should be part of the definition as well, along with a decision-making point or cutoff: for example, an algorithm that determines who gets a loan. It is also important to consider statistical modeling and forecasting, which allow for prediction.
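To make the “decision-making point or cutoff” concrete, here is a minimal sketch of a rule-based ADS; all criteria and thresholds are invented for illustration:

```python
# A minimal, hypothetical rule-based ADS: fixed criteria and a cutoff turn
# applicant data into an automated decision. All thresholds are invented.

def loan_decision(income: float, credit_score: int) -> bool:
    """Approve a loan only if the applicant clears both cutoffs."""
    return income >= 20_000 and credit_score >= 600

applicants = [
    {"name": "A", "income": 25_000, "credit_score": 640},
    {"name": "B", "income": 18_000, "credit_score": 710},
]

for a in applicants:
    verdict = loan_decision(a["income"], a["credit_score"])
    print(a["name"], "approved" if verdict else "denied")
```

Even a toy like this shows where bias can enter: whoever sets the thresholds, and whatever historical data informed them, shapes every decision the system makes.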

Using data and criteria for making decisions is nothing new, and it’s often done without specific systems or computers. People make plenty of very bad decisions without computers, and the addition of computers and algorithms is sometimes considered a more objective approach, because instructions can be set and run by a computer.

Why are there issues with ADS?

In practice things are not as clear cut as they might seem, explained one of our discussants. We live in a world where people are treated differently because of their demographic identity, and curation of data can represent some populations over others or misrepresent certain populations because of how they have been treated historically. These current and historic biases make their way into the algorithms, which are created by humans, and this encodes human biases into an ADS. When feeding existing data into a computer so that it can learn, we bring our historical biases into decision-making. The data we feed into an ADS may not reflect changing demographics or shifts in the data, and algorithms may not reflect ongoing institutional policy changes.

As another person said, “systems are touted as being neutral, but they are subject to human fallacies. We live in a world that is full of injustice, and that is reflected in a data set or in an algorithm. The speed of the system, once it’s computerized, replicates injustices more quickly and at greater scale.” When people or institutions believe that the involvement of a computer means the system is neutral, we have a problem. “We need to take ADS with a grain of salt, similar to how we tell children not to believe everything they see on the Internet.”

Many people are unaware of how an algorithm works. Yet over time, we tend to rely on algorithms and believe in them as unbiased truth. When ADS are not monitored, tested, and updated, this becomes problematic. ADS can begin to make decisions for people rather than supporting people in making decisions, and this can go very wrong, for example when decisions are unquestioningly made based on statistical forecasting models.

Are there ways to curb these issues with ADS?

Consistent monitoring. ADS should be monitored constantly over time by humans. One Salon participant suggested setting up checkpoints in the decision-making process to alert humans that something is amiss. Another suggested that research and proof of concept are critical. For example, running the existing human-only system alongside the ADS and comparing the decisions over time helps to flag differences that can then be examined to see which of the processes is working better, and to adjust or discontinue the ADS if it is incorrect. (In some cases, this process may actually flag biases in the human system.) Random checks can be set up, as can control situations where some decisions are made without using an ADS so that results can be compared between the two.
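The comparison step of such a parallel run can be very simple; the hard part is institutional, not technical. A rough sketch (case IDs and decisions are hypothetical):

```python
# A minimal sketch of a parallel run: make decisions with the existing
# human process and with the ADS, then flag cases where they diverge.
# Case IDs and decisions are hypothetical.

human_decisions = {"case-1": "approve", "case-2": "deny", "case-3": "approve"}
ads_decisions = {"case-1": "approve", "case-2": "approve", "case-3": "approve"}

disagreements = [
    case for case, human in human_decisions.items()
    if ads_decisions.get(case) != human
]

rate = len(disagreements) / len(human_decisions)
print(f"disagreement rate: {rate:.0%}; cases to review: {disagreements}")

# A rate above an agreed threshold would trigger human review, and the
# review might reveal bias in either the ADS or the human process.
```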

Recourse and redress. There should be simple and accessible ways for people affected by ADS to raise issues and make complaints. All ADS can make mistakes – there can be false positives (where an error points falsely to a match or the presence of a condition) and false negatives (where an error points to the absence of a match or a condition when indeed it is present). So there needs to be recourse for people affected by errors or in cases where biased data is leading to further discrimination or harm. Anyone creating an ADS needs to build in a way for mistakes to be managed and corrected.

Education and awareness. A person may not be aware that an ADS has affected them, and they likely won’t understand how an ADS works. Even people using ADS for decisions about others often forget that it’s an ADS deciding. This is similar to how people forget that their newsfeed on Facebook is based on their historical choices in content and their ‘likes’ and is not a neutral serving of objective content.

Improving the underlying data. Algorithms will only get better when there are constant feedback loops and new data that help the computer learn, said one Salon participant. Currently most algorithms are trained on highly biased samples that do not reflect marginalized groups and communities. For example, there is very little data about many of the people participating in or eligible for aid and development programs.

So we need proper data sets that are continually updated if we are to use ADS in aid work. This is a problem, however, if the data that is continually fed into the ADS remains biased. One person shared this example: if some communities are policed more because of race, economic status, etc., there will continually be more data showing that people in those communities are committing crimes. In whiter or wealthier communities, where there is less policing, fewer people are arrested. If we update our data continually without changing the fact that some communities are policed more than others (and thus will appear to have higher crime rates), we are simply creating a feedback loop that confirms our existing biases.
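A toy simulation makes this feedback loop visible. In the sketch below (all numbers invented), both communities have the same underlying crime rate, yet the arrest data perpetually “confirms” that the more-policed community is more criminal:

```python
# A toy simulation (all numbers invented) of the feedback loop described
# above. Both communities have the same true crime rate, but one starts out
# patrolled more, so it generates more arrest records, and those records
# are then used to justify keeping patrols there.

TRUE_CRIME_RATE = 0.05                      # identical in both communities
patrol_share = {"community_a": 0.7, "community_b": 0.3}

for year in range(3):
    # Arrests recorded are proportional to patrol presence, not to crime.
    arrests = {c: TRUE_CRIME_RATE * share * 1000
               for c, share in patrol_share.items()}
    total = sum(arrests.values())
    # Next year's patrols are allocated according to the arrest "data".
    patrol_share = {c: n / total for c, n in arrests.items()}
    print(f"year {year}: arrests={arrests}, patrol_share={patrol_share}")

# The 70/30 split never corrects itself: the data confirms the initial bias.
```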

Privacy concerns also enter the picture. We may want to avoid collecting data on race, gender, ethnicity or economic status so that we don’t expose people to discrimination, stigma, or harm. For example, in the case of humanitarian work or conflict zones, sensitive data can make people or groups a target for governments or unfriendly actors. However, it’s hard to make decisions that benefit people if their data is missing. It ends up being a catch-22.

Transparency is another way to improve ADS. “In the aid sector, we never tell people how decisions are made, regardless of whether those are human or machine-made decisions,” said one Salon participant. When the underlying algorithm is obscured, it cannot be reviewed for value judgments. Some compared this to some of the current non-algorithmic decision-making processes in the aid system (which are also not transparent) and suggested that aid systems could get more intelligent if they began to surface their own specific biases.

The objectives of the ADS can be reviewed. Is the system used to further marginalize or discriminate against certain populations, or can this be turned on its head? asked one discussant. ADS could be used to try to determine which police officers might commit violence against civilians rather than to predict which people might commit a crime. (See the Algorithmic Justice League’s work). 

ADS in the aid system – limited to the powerful few?

Because of the underlying data challenges in the aid sector (quality, standards, availability), building and using ADS there remains difficult. One area where data is available and where ADS are being built and used is supply chain management, for example at massive UN agencies like the World Food Programme.

Some questioned whether this exacerbates concentration of power in these large agencies, running counter to agreed-upon sector goals to decentralize power and control to smaller, local organizations who are ‘on the ground’ and working directly in communities. Does ADS then bring even more hierarchy, bias, and exclusion into an already problematic system of power and privilege? Could there be ways of using ADS differently in the aid system that would not replicate existing power structures? Could ADS itself be used to help people see their own biases? “Could we build that into an ADS? Could we have a read out of decisions we came to and then see what possible biases were?” asked one person.

How can we improve trust in ADS?

Most aid workers, national organizations, and affected communities have a limited understanding of ADS, leading to lower levels of trust in ADS and the decisions they produce. Part of the issue is the lack of participation and involvement in the design, implementation, validation, and vetting of ADS. On the other hand, one Salon participant pointed out that given all the issues with bias and exclusion, “maybe they would trust an ADS even less if they understood how an ADS works.”

Involving both users of an ADS and the people affected by ADS decisions is crucial. This needs to happen early in the process, said one person. It shouldn’t be limited to having people complain or report once the ADS has wronged them. They need to be at the table when the system is being developed and trialed.

If trust is to be built, the explainability of an algorithm needs consideration. “How can you explain the algorithm to people who are affected by it? Humanitarian workers cannot describe an ADS if they don’t understand it. We need to find ways to explain ADS to a non-technical audience so that they can be involved,” said one person. “We’ve shown sophisticated models to leaders, and they defaulted to spreadsheets.”

This brought up the need for change management if ADS are introduced. Involving and engaging decision-makers in the design and creation of ADS systems is a critical step for their adoption. This means understanding how decisions are made currently and based on what factors. Technology and data teams need to be in the room to understand the open and hidden nature of decision-making.

Isn’t decision making without ADS also highly biased and obscured?

People are often resistant to talking about or sharing how decisions have been made in the past, however, because those decisions may have been biased or inconsistent, based on faulty data, or made for political or other reasons.

As one person pointed out, both government and the aid system are deeply politicized and suffer from local biases, corruption and elite capture. A spatial analysis of food distribution in two countries, for example, showed extreme biases along local political leader lines. A related analysis of the road network and aid distribution allowed a clear view into the unfairness of food distribution and efficiency losses.

Aid agencies themselves make highly-biased decisions all the time, it was noted. Decisions are often political, situational, or made to enhance the reputation of an individual or agency. These decisions are usually not fully documented. Is this any less transparent than the ‘black box’ of an algorithm? Not to mention that agencies have countless dashboards that are aimed at helping them make efficient, unbiased decisions, yet recommendations based on the data may run counter to what is needed politically or for other reasons in a given moment.

Could (should) the humanitarian sector assume greater leadership on ADS?

Most ADS are built by private sector partners. When they are sold to the public or INGO sector, these companies indemnify themselves against liability and keep their trade secrets. It becomes impossible to hold them to account for any harm produced. One person asked whether the humanitarian sector could lead by bringing in different incentives: transparency, multi-stakeholder design, participation, and a focus on wellbeing. Could we try this, learn from it, and develop and document processes whereby this could be done at scale? Could the aid sector open source how ADS are designed and created so that data scientists and others could improve them?

Some were skeptical about whether the aid sector would be capable of this. “Theoretically we could do this,” said one person, “but it would then likely be concentrated in the hands of these few large agencies. In order to have economies of scale, it will have to be them because automation requires large scale. If that is to happen, then the smaller organizations will have to trust the big ones, but currently the small organizations don’t trust the big ones to manage or protect data.” And what about the involvement of governments, asked another person; we would need to consider the role of the public sector.

“I like the idea of the humanitarian sector leading,” added one person, “but aid agencies don’t have the greatest track record for putting their constituencies in the driving seat. That’s not how it works. A lot of people are trying to correct that, but aid sector employees are not the people who will be affected by these systems in the end. We could think about working with organizations who have the outreach capacity to do work with these groups, but again, these organizations are not made up of the affected people. We have to remember that.”

How can we address governance and accountability?

When you bring in government, private sector, aid agencies, software developers, data, and the like, said another person, you will have issues of intellectual property, ownership, and governance. What are the local laws related to data transmission and storage? Is it enough to open source just the code or ADS framework without any data in it? If you work with local developers and force them to open source the algorithm, what does that mean for them and their own sustainability as local businesses?

Legal agreements? Another person suggested that we focus on open sourcing legal agreements rather than algorithms. “There are always risks, duties, and liabilities listed in contracts and legal agreements. The private sector in particular will always play the indemnity card. And that means there is no commercial incentive to fix the tools that are being used. What if we pivoted this conversation to commercial liability? If a model is developed in Manhattan, it won’t work in Malawi — a company has a commercial duty to flag and recognize that. This type of issue is hidden if we focus the conversation on open software or open models. It’s rare that all the technology will be open and transparent. What we should push for is open contracting, and that could help a lot with governance.”

Certification? Others suggested that we adapt existing audit systems like the LEED certification (which allows engineers and architects to audit whether buildings are actually environmentally sustainable) or the IRB process (external boards that review research to flag ethical issues). “What if there were a team of data scientists and others who could audit ADS and determine the flaws and biases?” suggested one person. “That way the entire thing wouldn’t need to be open, but it could still be audited independently”. This was questioned, however, in that a stamp of approval on a single system could lead people to believe that every system designed by a particular group would pass the test.

Ethical frameworks could be a tool, yet which framework? A recent article cited 84 different ethical frameworks for Artificial Intelligence.

Regulation? Self-regulation has failed, said one person. Why aren’t we talking about actual regulation? The General Data Protection Regulation (GDPR) in Europe has a specific article (Article 22) about ADS, which states that people have the right to know when ADS are used to make decisions that affect them, the right to contest decisions made by ADS, and the right to request that humans review ADS decisions.

SPHERE Standards / Core Humanitarian Standard? Because of the legal complexities of working across multiple countries and with different entities in different jurisdictions (including some like the UN who are exempt from the law), an add-on to the SPHERE standards might be considered, said one person. Or something linked to the Core Humanitarian Standard (CHS), which includes a certification process. Donors will often ask whether an agency is CHS certified.

So, is there any good to come from ADS?

We tend to hold ADS to higher standards than we hold humans, said one Salon participant. Loan officers have been making biased decisions for years. How can we apply the standards of impartiality and transparency to both ADS and human decision making? ADS may be able to fix some of our current faulty and biased decisions. This may be useful for large systems, where we can’t afford to deploy humans at scale. Let’s find some potential bright spots for ADS.

Some positive examples shared by participants included:

  • Human rights organizations are using satellite imagery to identify areas that have been burned or otherwise destroyed during conflict. This application of automated decision making doesn’t deal directly with people or the allocation of resources; it supports human rights research.
  • In California, ADS has been used to expunge the records of people convicted for marijuana-related violations now that marijuana has been legalized. This example supports justice and fairness.
  • During Hurricane Irma, an organization in the Virgin Islands used an Excel spreadsheet to track whether people met the criteria for assistance. Aid workers would interview people and the sheet would calculate automatically whether they were eligible (see the sketch after this list). This was not high tech or sexy, but it was automated and fast. The government created the criteria, and these were openly and transparently communicated to people ahead of time so that if they didn’t receive benefits, they were clear about why.
  • Flood management is an area where there is a lot of data and forecasting. Governments have been using ADS to evacuate people before it’s too late. This sector can gain in efficiency with ADS, which could be expanded to other weather-based hazards. Because it is a straightforward use case that involves satellites and less personal data it may be a less political space, making deployment easier.
  • Drones also use ADS to stitch together hundreds of thousands of photos to create large images of geographical areas. Though drone data still needs to be ground truthed, it is less of an ethical minefield than when personal or household level data is collected, said one participant. Other participants, however, had issues with the portrayal of drones as less of an ethical minefield, citing surveillance, privacy, and challenges with the ownership and governance of the final knowledge product, the data for which was likely collected without people’s consent.
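The Hurricane Irma example is worth dwelling on because the whole logic fits in a few lines. A sketch of that kind of transparent, criteria-based eligibility check (the criteria below are invented; the real ones were set and published by the government):

```python
# A sketch of a transparent, criteria-based eligibility check like the one
# described in the Hurricane Irma example. The criteria below are invented;
# the point is that they are simple, published in advance, and applied
# identically to every household.

def eligible(household: dict) -> bool:
    return (
        household["home_damaged"]
        and household["monthly_income"] < 1500
        and household["dependents"] >= 1
    )

interview = {"home_damaged": True, "monthly_income": 900, "dependents": 2}
print("eligible" if eligible(interview) else "not eligible")
```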

How can the humanitarian sector prepare for ADS?

In conclusion, one participant summed up that decision making has always been with us. As groups like the one at this Salon explore ADS in more depth, delve into the ethics, and improve the systems, there is great potential. ADS will probably never totally replace humans, but they can supplement humans to make better decisions.

How are we in the humanitarian sector preparing people at all levels of the system to engage with these systems, design them ethically, reduce harm, and make them more transparent? How are we working to build capacities at the local level to understand and use ADS? How are we figuring out ways to ensure that the populations who will be affected by ADS are aware of what is happening? How are we ensuring recourse and redress in the case of bad decisions or bias? What jobs might be created (rather than eliminated) with the introduction of more ADS?

ADS are not going to go away, and the humanitarian sector doesn’t have to wait until they are perfected to get involved in shaping and improving them so that they support our work in ethical and useful ways rather than in harmful or unethical ways.

Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂


At our April Technology Salon we discussed the evidence and good practice base for blockchain and Distributed Ledger Technologies (DLTs) in the humanitarian sector. Our discussants were Larissa Fast (co-author with Giulio Coppi of the Global Alliance for Humanitarian Innovation/GAHI’s report on Humanitarian Blockchain, Senior Lecturer at HCRI, University of Manchester and Research Associate at the Humanitarian Policy Group) and Ariana Fowler (UNICEF Blockchain Strategist).

Though blockchain fans suggest DLTs can address common problems of humanitarian organizations, the extreme hype cycle has many skeptics who believe that blockchain and DLTs are simply overblown and for the most part useless for the sector. Until recently, evidence on the utility of blockchain/DLTs in the humanitarian sector has been slim to none, with some calling for the sector to step back and establish a measured approach and a learning agenda in order to determine if blockchain is worth spending time on. Others argue that evaluators misunderstand what to evaluate and how.

The GAHI report provides an excellent overview of blockchain and DLTs in the sector along with recommendations at the project, policy and system levels to address the challenges that would need to be overcome before DLTs can be ethically, safely, appropriately and effectively scaled in humanitarian contexts.

What’s blockchain? What’s a DLT?

We started with a basic explanation of DLTs and Blockchain and how they work. (See page 5 of the GAHI report for more detail).
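For readers who want the one-paragraph version: a blockchain is a ledger in which each new record carries a cryptographic hash of the previous one, which is what makes past entries effectively immutable. A minimal sketch (illustrative only; real DLTs add consensus, signatures, and networking on top):

```python
# A minimal sketch of the core blockchain idea: each block carries the hash
# of the previous block, so altering any past entry breaks every hash after
# it. Illustrative only, not how any of the reviewed systems are built.

import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

chain = []
prev = "0" * 64                                  # genesis placeholder
for entry in ["tx: A pays B 10", "tx: B pays C 4"]:
    prev = block_hash(prev, entry)
    chain.append({"data": entry, "hash": prev})

# Tampering with the first record no longer matches the recorded hash:
tampered = block_hash("0" * 64, "tx: A pays B 99")
print(tampered == chain[0]["hash"])              # False: tampering is visible
```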

The GAHI report aimed to get beyond the potential of Blockchain and DLTs to actual use cases — however, in the humanitarian sector there is still more potential than evidence. Although there were multiple use cases to choose from, the report authors chose to go in-depth on five, selected to provide a sense of the different ways that blockchain is specifically being used in the sector.

These use cases all currently have limited “nodes” (i.e., places where the data is stored) and only a few “controlling entities” (that determine what information is stored or put on the chain). They are all “private” (as opposed to public) blockchains, meaning they are not taking advantage of the DLT potential for dispersed information, and they end up being more like “a very expensive database.”

What’s the deal with private vs public blockchains?

Private versus public blockchains are an ideological sticking point in “deep blockchain culture,” noted one Salon participant. “‘Cryptobros’ and blockchain fundamentalists think private blockchains are the Antichrist.” Private blockchains are considered an oxymoron and completely antithetical to the idea of blockchain.

So why are humanitarian organizations creating private blockchains? “They are being cautious about protecting data as they test out blockchain and DLTs. It’s a conscious choice to proceed in a controlled way, because once information is on the blockchain, it’s immutable — it cannot be removed.” When first trying out a DLT or blockchain, “Humanitarians tend to be cautious. They don’t want to play with the permanency of a public blockchain since they are working with vulnerable populations.”

Because of the blockchain hype cycle, however, there is some skepticism about organizations using private blockchains. “Are they setting up a private blockchain with one node so that they can say that they’re using blockchain just to get funding?”

An issue with private blockchains is that they are not open and transparent. The code is developed behind closed doors, meaning that it’s difficult to make it interoperable, whereas “with a public chain, you can check the code and interact with it.”

Does the humanitarian sector have the capacity to use blockchain?

As one person pointed out, knowledge and capacity around blockchain in the humanitarian sector is very low. There are currently very few people who understand both humanitarian work and the private sector/technology side of blockchain. “We desperately need intermediaries because people in the two sectors talk past each other. They use the same words to mean very different things, and this leads to misunderstandings.” This is a perpetual issue in the “humanitarian tech” space, and it often leads to applications that are not in the best interest of those on the receiving end of humanitarian work.

Capacity challenges also come up with regard to managing partnerships that involve intellectual property. When cooperating with the private sector, organizations are normally required to sign an MOU that gives rights to the company. Often humanitarian agencies do not fully understand what they are signing up for. This can mean that the company uses the humanitarian collaboration to develop technologies that are later used in ways that the humanitarian agency considers unethical or disturbing. Having technology or blockchain expertise within an organization makes it possible to better negotiate those types of situations, but often only the larger INGOs can afford that type of expertise. Similarly, organizations lack expertise in the legal and regulatory space with regard to blockchain.

How will blockchain become locally owned? Should we wait for a user-friendly version?

Technology moves extremely fast, and organizations need a certain level of capacity to create it and maintain it. “I’m an engineer working in the humanitarian space,” said one Salon participant. “Blockchain is such a complex software solution that I’m very skeptical it will ever be at a stage where it could be locally owned and managed. Even with super basic SMS-based services we have maintenance issues and challenges handing off the tech. If in this room we are struggling to understand blockchain, how will this ever work in lower tech and lower resource areas?” Another participant asked a similar question with regard to handing off a blockchain solution to a local government.

Does the sector need to wait for a simplified and “user friendly” version of blockchain before humanitarians get into the space? Some said yes, but other participants said that the technology is moving quickly, and that it is critical for humanitarians to “get in there” to try to slow it down. “Sometimes blockchain is not the solution. Sometimes a database is just fine. We need people to pump the brakes before things get out of control.”

“How can people learn about blockchain? How could a grassroots organization begin to set one up?” asked one person. There is currently no “Squarespace for blockchain,” and the technology remains complicated, but those with a strong drive could learn, according to one person. But although “coders might be able to teach themselves ‘light blockchain,’ there is definitely a barrier to entry.” This is a challenge with the whole area of blockchain. “It skipped the education step. We need a ‘learning revolution’ if we want people to actually use it.”

Enabling environments for learning to use blockchain don’t exist in conflict zones. The knowledge is held by a few individuals, and this makes long-term support and maintenance of DLT and blockchain systems very difficult. How to localize and own the knowledge? How to ensure sustainability? The sector needs to think about what the “Blockchain 101” is. There needs to be more accompaniment, investment and support for the enabling environment if blockchain is to be useful and sustainable in the sector.

Are there any examples of humanitarian blockchain that are working?

The GAHI report talks about five cases in particular. Disberse was highlighted by one Salon participant as an example that seems to be working. Disberse is a private fin-tech company that uses blockchain, but it was started by former humanitarians. “This example works in part because there is a sense of commitment to the humanitarian sector alongside the technical expertise.”

In general, in the humanitarian space, the place where blockchain/ DLTs appear to be the most effective is in back-end use cases. In other words, blockchain is helpful for making behind-the-scenes transactions in humanitarian assistance more efficient. It can eliminate bank transaction fees, and this leads to savings. Agencies can also use blockchain to create efficiencies and benefits for record keeping and auditability. This situation is not unique to blockchain. A recent DIAL baseline study of the global ICT4D ecosystem also found that in the social sector, the main benefits of ICTs were going to organizations, not to vulnerable populations.

“This is all fine,” according to one Salon participant, “but one must be clear that the benefits accrue to the agencies, not the ‘beneficiaries,’ who may not even know that DLTs are being used.” On the one hand, having a seamless backend built on blockchain where users don’t even know that blockchain is involved sounds ideal. However, this can be somewhat problematic. “Are agencies getting meaningful and responsible consent for using blockchain? If executives don’t even understand what the blockchain is, how do you explain that to people more generally?”

Because there is not a simple, accessible way of developing blockchain solutions and there are not a lot of user-friendly interfaces for the general population, for at least the next few years humanitarian applications of blockchain will likely only be useful for back-office operations. This means that it is up to humanitarian organizations to re-invest any money saved by blockchain into program funding, so that “beneficiaries” are accruing the benefits.

What other “social” use cases are there for blockchain?

In the wider social and development sectors, there are plenty of potential use cases, but again, very little documented evidence of their short- and long-term impacts. (Author’s note: I am not talking about financial and private sector use cases; I’m referring very specifically to the social sectors and the international development and humanitarian sector.) For example, Oxfam is tracing supply chains of rice; however, this is a one-off pilot and it’s unclear whether it can scale. IBM has a variety of supply chain examples. Land registries and sustainable fishing are also being explored, as are digital ID, birth registration and civil registries.

According to one Salon participant, “supply chain is the low-hanging fruit of blockchain – just recording something, tracking it, and referencing it. It’s all basically a ledger, a spreadsheet. Even digital ID – it’s a supply chain of movement. Provenance is a good way to use a blockchain solution.” Other areas where blockchain is said to have potential are election transparency and “smart contracts,” where complex contracts are needed and there is a lack of trust amongst the parties. In general, where there is a recurring need for anonymized, disaggregated data, blockchain could be a solution.

The important thing, however, is having a very clear definition of the problem before deciding that blockchain is the solution. “A lot of times people don’t know what their problem is, and the problem is not one that can be fixed with blockchain.” Additionally, accuracy (“garbage in, garbage out”) remains a problem that blockchain on its own cannot solve. “If the off-chain process isn’t accurate, you have a separate problem to figure out before thinking about blockchain: maybe you’re looking at human rights abuses of migrant workers but everything is being fudged, your supply chain is blurry, or the information being put on the blockchain is not verified.”

What about ethics and consent and the Digital Principles?

Are the Digital Principles being used as a way to guide ethical, responsible and sustainable blockchain use in the humanitarian space, asked one Salon participant. The general impression in the room was no. “Deep crypto in the private sector is a black hole in the blockchain space,” according to one person, and the gap between the world of blockchain in the private sector and the world of blockchain in the humanitarian sector is huge. (See this write-up for a taste of one segment of the crypto world.) “The majority of private sector blockchain enthusiasts who are working on humanitarian issues have not heard of any principles. They are operating with no principles, and sometimes it’s largely for PR, because the blockchain hype cycle means they will get a lot of good press from it. You get someone who read an article in Vice about a problem in a place they’ve never heard of, and they decide that blockchain is the solution…. They are often re-inventing the wheel, and fire, and also electricity — they think that no one has ever thought about this problem before.”

Most in the room considered that this type of uninformed application of blockchain is irresponsible, and that these parallel worlds and conversations need to come together. “The humanitarian space has decades of experience with things that have been tried and haven’t worked – but people on the tech side think no one has ever tried solving these problems. We need to improve the dialogue and communication. There is a wealth of knowledge to share, and a huge learning curve on both sides.”

Additionally, one Salon participant pointed out the importance of bringing ethics into the discussion. “It’s not about just using a blockchain. It’s about what the problem is that you’re trying to solve, and does blockchain help address that problem? There are a lot of problems that blockchain is not appropriate for. Do you have the technical capacity or an accessible online environment? That’s important.”

On top of that, “it’s important for people to know that their information is being used in a particular way by a particular technology. We need to grapple with that, or we end up experimenting on people who are already marginalized or vulnerable to begin with. How do we do that? It’s like the Facebook moment. That same thing for blockchain – if you don’t know what’s going on and how your information is being used, it’s problematic.”

A third point is the massive environmental cost of a public blockchain. Currently, the computing power used to verify and validate transactions that happen on public chains is immense. That is part of the ethical challenge related to blockchain. “You can’t get around the massive environmental aspect. And that makes it ironic for blockchain to be used to track carbon offsets.” (Note: there are blockchain companies that say they are working on reducing the environmental impact of blockchain, with “pilots coming very soon,” but it remains to be seen whether this is true or whether it’s another part of the hype cycle.)

What should donors be doing?

In addition to taking into consideration the ethical, intellectual property, environmental, sustainability, ownership, and consent aspects mentioned above and being guided by the Digital Principles, it was suggested that donors make sure they do their homework and conduct thorough due diligence on potential partners and grantees. “The vetting process needs to be heightened with blockchain because of all the hype around it. Companies come and go. They are here one day and disappear the next.” There was deep suspicion in the room because of the many blockchain outfits that are hyped up and do not actually have the staff to truly do blockchain for humanitarian purposes and use this angle just to get investments.

“Before investing, it would be important to talk with someone like Larissa [our lead discussant] who has done vetting,” said one Salon participant. “Don’t fall for the marketing. Do a lot of due diligence and demand evidence. Show us the evidence or we’re not funding you. If you’re saying you want to work with a vulnerable or marginalized population, do you have contact with them right now? Do you know them right now? Or did you just read about them in Vice?”

Recommendations outlined in the GAHI report include providing multi-year financing to humanitarian organizations to allow for the possibility of scaling, and asking for interoperability requirements and guidelines around transparency to be met so that there are not multiple silos governing the sector.

So, are we there yet?

Nope. But at least we’re starting to talk about evidence and learning!


Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂


The recently announced World Food Programme (WFP) partnership with Palantir, IRIN’s article about it, reactions from the Responsible Data Forum, and WFP’s resulting statement inspired us to pull together a Technology Salon in New York City to discuss the ethics of humanitarian data sharing.

(See this crowdsourced document for more background on the WFP-Palantir partnership and resources for thinking about the ethics of data sharing. Also here is an overview of WFP’s SCOPE system for beneficiary identification, management and tracking.)

Our lead discussants were: Laura Walker McDonald, Global Alliance for Humanitarian Innovation; Mark Latonero, Research Lead for Data & Human Rights, Data & Society; Nathaniel Raymond, Jackson Institute of Global Affairs, Yale University; and Kareem Elbayar, Partnerships Manager, Centre for Humanitarian Data at the United Nations Office for the Coordination of Humanitarian Affairs. We were graciously hosted by The Gov Lab.

What are the concerns about humanitarian data sharing and with Palantir?

Some of the initial concerns expressed by Salon participants about humanitarian data sharing included: data privacy and the permanence of data; biases in data leading to unwarranted conclusions and assumptions; loss of stakeholder engagement when humanitarians move to big data and techno-centric approaches; low awareness and poor practices across humanitarian organizations on data privacy and security; tensions between security of data and utility of data; validity and reliability of data; lack of clarity about the true purposes of data sharing; the practice of ‘ethics outsourcing’ (testing things in places where there is a perceived ‘lower ethical standard’ and less accountability); use of humanitarian data to target and harm aid recipients; disempowerment and extractive approaches to data; lack of checks and balances for safe and productive data sharing; difficulty of securing meaningful consent; and the links between data and surveillance by malicious actors, governments, private sector, military or intelligence agencies.

Palantir’s relationships and work with police, the CIA, ICE, the NSA, the US military and wider intelligence community are one of the main concerns about this partnership. Some ask whether a company can legitimately serve philanthropy, development, social, human rights and humanitarian sectors while also serving the military and intelligence communities and whether it is ethical for those in the former to engage in partnerships with companies who serve the latter. Others ask if WFP and others who partner with Palantir are fully aware of the company’s background, and if so, why these partnerships have been able to pass through due diligence processes. Yet others wonder if a company like Palantir can be trusted, given its background.

Below is a summary of the key points of the discussion, which happened on February 28, 2019. (Technology Salons are Chatham House affairs, so I have not attributed quotes in this post.)

Why were we surprised by this partnership/type of partnership?

Our first discussant asked why this partnership was a surprise to many. He emphasized the importance of stakeholder conversations, transparency, and wider engagement in the lead-up to these kinds of partnerships. “And I don’t mean in order to warm critics up to the idea, but rather to create a safe and trusted ecosystem. Feedback and accountability are really key to this.” He also highlighted that humanitarian organizations are not experts in advanced technologies and that it’s normal for them to bring in experts in areas that are not their forte. However, we need to remember that tech companies are not experts in humanitarian work and put the proper checks and balances in place. Bringing in a range of multidisciplinary expertise and distributed intelligence is necessary in a complex information environment. One possible approach is creating technology advisory boards. Another way to ensure more transparency and accountability is to conduct a human rights impact assessment. The next year will be a major test for these kinds of partnerships, given the growing concerns, he said.

One Salon participant said that the fact that the humanitarian sector engages in partnerships with the private sector is not a surprise at all, as the sector has worked through Public-Private Partnerships (PPPs) for several years now and they can bring huge value. The surprise is that WFP chose Palantir as the partner. “They are not the only option, so why pick them?” Another person shared that the WFP partnership went through a full legal review, and so it was not a surprise to everyone. However, communication around the partnership was not well planned or thought out and the process was not transparent and open. Others pointed out that although a legal review covers some bases, it does not assess the potential negative social impact or risk to ‘beneficiaries.’ For some the biggest surprise was WFP’s own surprise at the pushback on this particular partnership and its unsatisfactory reaction to the concerns raised about it. The response from responsible data advocates and the press attention to the WFP-Palantir partnership might be a turning point for the sector to encourage more awareness of the risks in working with certain types of companies. As many noted, this is not only a problem for WFP, it’s something that plagues the wider sector and needs to be addressed urgently.

Organizations need to think beyond reputational harm and consider harm to beneficiaries

“We spend too much time focusing on avoiding risk to institutions and too little time thinking about how to mitigate risk to beneficiaries,” said one person. WFP, for example, has some of the best policies and procedures out there, yet this partnership still passed their internal test. That is a scary thought, because it implies that other agencies who have weaker policies might be agreeing to even more risky partnerships. Are these policies and risk assessments, then, covering all the different types of risk that need consideration? Many at the Salon felt that due diligence and partnership policies focus almost exclusively on organizational and reputational risk with very little attention to the risk that vulnerable populations might face. It’s not just a question of having policies, however, said one person. “Look at the Oxfam Safeguarding situation. Oxfam had some of the best safeguarding policies, yet there were egregious violations that were not addressed by having a policy. It’s a question of power and how decisions get made, and where decision-making power lies and who is involved and listened to.” (Note: one person contacted me pre-Salon to say that there was pushback by WFP country-level representatives about the Palantir partnership, but that it still went ahead. This brings up the same issue of decision-making power, and who has power to decide on these partnerships and why are voices from the frontlines not being heard? Additionally, are those whose data is captured and put into these large data systems ever consulted about what they think?)

Organizations need to assess wider implications, risks, and unintended negative consequences

It’s not only WFP that is putting information into SCOPE, said one person. “Food insecure people have no choice about whether to provide their data if they wish to receive food.” Thus, the question of truly ‘informed consent’ arises. Implementing partners don’t have a lot of choice either, he said. “Implementing agencies are forced to input beneficiary data into SCOPE if they want to work in particular zones or countries.” This means that WFP’s systems and partnerships have an impact on the entire humanitarian community, and therefore the wider sector needs to be consulted more broadly about these partnerships and systems. The optical and reputational impact to organizations aside from WFP is significant, as they may disagree with the Palantir partnership but they are now associated with it by default. This type of harm goes beyond the fear of exploitation of the data in WFP’s “data lake.” It becomes a risk to personnel on the ground who are then seen as collaborating with a CIA contractor by putting beneficiary biometric data into SCOPE. This can also deter food-insecure people from accessing benefits. Additionally, association with the CIA or US military has led to humanitarian agencies and workers being targeted, attacked and killed. That is all in addition to the question of whether these kinds of partnerships violate humanitarian principles, such as that of impartiality.

“It’s critical to understand the role of rumor in humanitarian contexts,” said one discussant. “Affected populations are trying to figure out what is happening and there is often a lot of rumor going around.”  So, if Palantir has a reputation for giving data to the CIA, people may hear about that and then be afraid to access services for fear of having their data given to the CIA. This can lead to retaliation against humanitarians and humanitarian organizations and escalate their risk of operating. Risk assessments need to go beyond the typical areas of reputation or financial risk. We also need to think about how these partnerships can affect humanitarian access and community trust and how rumors can have wide ripple effects.

The whole sector needs to put better due diligence systems in place. As it is now, noted one person, often it’s someone who doesn’t know much about data who writes up a short summary of the partnership, and there is limited review. “We’ve been struggling for 10 years to get our offices to use data. Now we’re in a situation where they’re just picking up a bunch of data and handing it over to private companies.”

UN immunities and privileges lead to a lack of accountability

The fact that UN agencies have immunities and privileges means that laws such as the EU’s General Data Protection Regulation (GDPR) do not apply to them and they are left to self-regulate. Additionally, there is no common agreement among UN agencies on how GDPR applies, and each UN agency interprets it on its own. As one person noted, “There is a troubling sense of exceptionalism and lack of accountability in some of these agencies because ‘a beneficiary cannot take me to court.’” An interesting point, however, is that while UN agencies are immune, those contracted as their data processors are not immune — so data processors beware!

Demographically Identifiable Information (DII) can lead to serious group harm

The WFP has stated that personally identifiable information (PII) is not technically accessible to Palantir via this partnership. However, some at the Salon consider that the WFP failed in their statement about the partnership when they used the absence of PII as a defense. Demographically Identifiable Information (DII) and the activity patterns that are visible even in commodity data can be extrapolated as training data for future data modeling. “This is prospective modeling of action-based intelligence patterns as part of multiple screeners of intel,” said one discussant. He went on to explain that privacy discussions have moved from centering on property rights in the 19th Century, to individual rights in the 20th Century, to group rights in the 21st Century. We can use existing laws to emphasize protection of groups and to highlight the risks of DII leading to group harm, he said, as there are well-known cases that exemplify the notion of group harms (Plessy v Ferguson, Brown v Board of Education). Even in logistics data (which is the kind of data that WFP says Palantir will access) that contains no PII, it’s very simple to identify groups. “I can look at supply chain information and tell you where there are lactating mothers. If you don’t want refugees to give birth in the country they have arrived to, this information can be used for targeting.”
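To see how easily DII falls out of “harmless” logistics data, consider a sketch like the following (all records invented): no names, no IDs, yet a simple aggregation reveals where a specific group is located.

```python
# A sketch (records invented) of how demographic groups surface from
# logistics data that contains no PII: aggregate what commodities go where.

shipments = [
    {"site": "camp_north", "commodity": "therapeutic_milk", "qty": 120},
    {"site": "camp_north", "commodity": "rice", "qty": 900},
    {"site": "camp_south", "commodity": "rice", "qty": 850},
]

# Sites receiving lactation-related supplies likely host nursing mothers.
flagged = {s["site"] for s in shipments if s["commodity"] == "therapeutic_milk"}
print(flagged)  # {'camp_north'}: a group located without any PII
```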

Many in the sector do not trust a company like Palantir

Though it is not clear who was in the room when WFP made the decision to partner with Palantir, the overall sector has concerns that the people making these decisions are not assessing partnerships from all angles: legal, privacy, programmatic, ethical, data use and management, social, protection, etc. Technologists and humanitarian practitioners are often not included in making these decisions, said one participant. “It’s the people with MBAs. They trust a tech company to say ‘this is secure’ but they don’t have the expertise to actually know that. Not to mention that yes, something might be secure, but maybe it’s not ethical. Senior people are signing off without having a full view. We need a range of skill sets reviewing these kinds of partnerships and investments.”

Another question arises: What happens when there is scope creep? Is Palantir in essence “grooming” the sector to then abuse data it accesses once it’s trusted and “allowed in”? Others pointed out that the grooming has already happened and Palantir is already on the inside. They first began partnering with the sector via the Clinton Global Initiative meetings back in 2013 and they are very active at World Economic Forum meetings. “This is not something coming out of the Trump administration, it was happening long before that,” said one person, and the company is already “in.” Another person said “Palantir lobbied their way into this, and they’ve gotten past the point of reputational challenge.” Palantir has approached many humanitarian agencies, including all the UN agencies, added a third person. Now that they have secured this contract with the WFP, the door to future work with a lot of other agencies is open and this is very concerning.

We’re in a new political economy: data brokerage.

“Humanitarians have lost their Geneva values and embraced Silicon Valley values,” said one discussant. They are becoming data brokers within a colonial data paradigm. “We are making decisions in hierarchies of power, often extralegally,” he said. “We make decisions about other people’s data without their involvement, and we need to be asking: is it humanitarian to commodify the data of beneficiaries for monetary gain or other value? When is it ethical to trade beneficiary data for something of value?” Another raised the issue of incentives. “Where are the incentives stacked? There is no incentive to treat beneficiaries better. All the incentives are on efficiency and scale and attracting donors.”

Can this example push the wider sector to do better?

One participant hoped there could be a net gain out of the WFP-Palantir case. “It’s a bad situation. But it’s a reckoning for the whole space. Most agencies don’t have these checks and balances in place. But people are waking up to it in a serious way. There’s an opportunity to step into. It’s hard inside of bureaucratic organizations, but it’s definitely an opportunity to start doing better.”

Another said that we need more transparency across the sector on these partnerships. “What is our process for evaluating something like this? Let’s just be transparent. We need to get these data partnership policies into the open. WFP could have simply said ‘here is our process’. But they didn’t. We should be working with an open and transparent model.” Overall, there is a serious lack of clarity on what data sharing agreements look like across the sector. One person attending the Salon said that their organization has been trying to understand current practice with regard to data sharing, and it’s been very difficult to get any examples, even redacted ones.

What needs to happen? 

In closing we discussed what needs to happen next. One person noted that in her research on Responsible Data, she found a total lack of capacity in terms of technology at non-profit organizations. “It’s the Economist Syndrome. Someone’s boss reads something on the bus and decides they need a blockchain,” someone quipped. In terms of responsible data approaches, research shows that organizations are completely overwhelmed. “They are keeping silent about their low capacity out of fear they will face consequences,” said one person, “and with GDPR, even more so”. At the wider level, we are still focusing on PII as the issue without considering DII and group rights, and this is a mistake, said another.

Organizations have very low capacity, and we are siloed. “Program officers do not have tech capacity. Tech people are kept in offices or ‘labs’ on their own and there is not a lot of porosity. We need protection advisors, lawyers, digital safety advisors, data protection officers, information management specialists, IT all around the table for this,” noted one discussant. Also, she said, though we do need principles and standards, it’s important that organizations adapt these so that they are their own principles and standards. “We need to adapt these boilerplate standards to our organizations. This has to happen based on our own organizational values. Not everyone is rights-based, not everyone is humanitarian.” So organizations need to take the time to review and adapt standards, policies and procedures to their own vision and mission, to their own situations, contexts and operations, and to generate awareness and buy-in. In conclusion, she said, “if you are not being responsible with data, you are already violating your existing values and codes. Responsible Data is already in your values; it’s a question of living it.”

Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂


This is a cross-post from Tom Murphy, editor of the aid blog A View From the Cave. The original article can be found on Humanosphere. The post summarizes discussions at our November 21st New York City Technology Salon: Are Mobile Money Cash Grants the Future of Development? If you’d like to join us for future Salons, sign up here.

by Tom Murphy

Decades ago, some of the biggest NGOs simply gave away money to individuals in communities. People lined up and were just given cash.

The once popular form of aid went out of fashion, but it is now making a comeback.

Over time, coordination became extremely difficult. Traveling from home to home costs time and money for the NGO, and the same problem exists for recipients when they have to go to a central location. More significant was the shift in development thinking that said giving handouts was causing long-term damage.

The backlash against ‘welfare queens’ in the US, UK and elsewhere during the 1980s was reflected in international development programming. Problem was that it was all based on unproven theories of change and anecdotal evidence, rather than hard evidence.

Half a decade later, new research shows that just giving people money can be an effective way to build assets and even incomes. The findings were covered by major players like NPR and the Economist.

While exciting and promising, cash transfers are not a new tool in the development utility belt.

Various forms of transfers have emerged over the past decade. Food vouchers were used by the World Food Programme when responding to the 2011 famine in the Horn of Africa. Like food stamps in the US, people could go buy food from local markets and get exactly what they need while supporting the local economy.

The differences have sparked a sometimes heated debate within the development community as to what the findings about cash transfers mean going forward. A Technology Salon hosted conversation at ThoughtWorks in New York City last week, featured some of the leading researchers and players in the cash transfer sector.

The salon style conversation featured Columbia University and popular aid blogger Chris Blattman, GiveDirectly co-founder and UCSD researcher Paul Neihaus and Plan USA CEO Tessie San Martin. The ensuing discussion, operating under the Chatham House Rule of no attribution, featured representatives from large NGOs, microfinance organizations and UN agencies.

Research from Kenya, Uganda and Liberia show both the promise and shortcomings of cash transfers. For example, giving out cash in addition to training was successful in generating employment in Northern Uganda. Another program, with the backing of the Ugandan government, saw success with the cash alone.

Cash transfers have been argued as the new benchmark for development and aid programs. Advocates in the discussion made the case that programs should be evaluated in terms of impact and cost-effectiveness against just giving people cash.

That idea saw some resistance. The research from Liberia, for example, showed that money given to street youth would not be wasted, but it was not sufficient to generate long-lasting employment or income. There are capacity problems and much larger issues that probably cannot be addressed by cash alone.

An additional concern is the unintended negative consequences caused by cash transfers. One example given was that of refugees in Syria. Money was distributed to families labeled for rent. Despite warnings not to label the transfer, the program went ahead.

As a result, rents increased. The money intended to help reduce the cost incurred by rent was rendered largely useless. One participant raised the concern that cash transfers in such a setting could be ‘taxed’ by rebels or government fighters. There is a potential that aid organizations could help fund fighting by giving unrestricted cash.

The discussion made it clear that the applications of cash transfers are far more nuanced than they might appear. Kenya saw success in part because of the ease of sending money to people through mobile phones. Newer programs in India, for example, rely on what are essentially ATM cards.

Impacts, admitted practitioners, can go beyond simple incomes. There has been care to make sure that implementing cash transfer programs to not dramatically change social structures in ways that cause problems for the community and recipients. In one case, giving women cash allowed for them to participate in the local markets, a benefit to everyone except for the existing shop oligarchs.

Governments in low and middle-income countries are seeing increasing pressure to establish social programs. The success of cash transfer programs in Brazil and Mexico indicate that it can be an effective way to lift people out of poverty. Testing is underway to bring about more efficient and context appropriate cash transfer schemes.

An important component in the re-emergence of cash transfers is looking back to previous efforts, said one NGO official. The individual’s organization is systematically looking back at communities where the NGO used to work in order to see what happened ten years later. The idea is to learn what impacts may or may not have been on that community in order to inform future initiatives.

“Lots of people have concerns about cash, but we should have concerns about all the programs we are doing,” said a participant.

The lessons from the cash transfer research shows that there is increasing need for better evidence across development and aid programs. Researchers in the group argued that the ease of doing evaluations is improving.

Read the “Storified” version of the Technology Salon on Mobiles and Cash Transfers here.

Read Full Post »

This is a guest post from Anna Crowe, Research Officer on the Privacy in the Developing World Project, and  Carly Nyst, Head of International Advocacy at Privacy International, a London-based NGO working on issues related to technology and human rights, with a focus on privacy and data protection. Privacy International’s new report, Aiding Surveillance, which covers this topic in greater depth was released this week.

by Anna Crowe and Carly Nyst

NOV 21 CANON 040

New technologies hold great potential for the developing world, and countless development scholars and practitioners have sung the praises of technology in accelerating development, reducing poverty, spurring innovation and improving accountability and transparency.

Worryingly, however, privacy is presented as a luxury that creates barriers to development, rather than a key aspect to sustainable development. This perspective needs to change.

Privacy is not a luxury, but a fundamental human right

New technologies are being incorporated into development initiatives and programmes relating to everything from education to health and elections, and in humanitarian initiatives, including crisis response, food delivery and refugee management. But many of the same technologies being deployed in the developing world with lofty claims and high price tags have been extremely controversial in the developed world. Expansive registration systems, identity schemes and databases that collect biometric information including fingerprints, facial scans, iris information and even DNA, have been proposed, resisted, and sometimes rejected in various countries.

The deployment of surveillance technologies by development actors, foreign aid donors and humanitarian organisations, however, is often conducted in the complete absence of the type of public debate or deliberation that has occurred in developed countries. Development actors rarely consider target populations’ opinions when approving aid programmes. Important strategy documents such as the UN Office for Humanitarian Affairs’ Humanitarianism in a Networked Age and the UN High-Level Panel on the Post-2015 Development Agenda’s A New Global Partnership: Eradicate Poverty and Transfer Economies through Sustainable Development give little space to the possible impact adopting new technologies or data analysis techniques could have on individuals’ privacy.

Some of this trend can be attributed to development actors’ systematic failure to recognise the risks to privacy that development initiatives present. However, it also reflects an often unspoken view that the right to privacy must necessarily be sacrificed at the altar of development – that privacy and development are conflicting, mutually exclusive goals.

The assumptions underpinning this view are as follows:

  • that privacy is not important to people in developing countries;
  • that the privacy implications of new technologies are not significant enough to warrant special attention;
  • and that respecting privacy comes at a high cost, endangering the success of development initiatives and creating unnecessary work for development actors.

These assumptions are deeply flawed. While it should go without saying, privacy is a universal right, enshrined in numerous international human rights treaties, and matters to all individuals, including those living in the developing world. The vast majority of developing countries have explicit constitutional requirements to ensure that their policies and practices do not unnecessarily interfere with privacy. The right to privacy guarantees individuals a personal sphere, free from state interference, and the ability to determine who has information about them and how it is used. Privacy is also an “essential requirement for the realization of the right to freedom of expression”. It is not an “optional” right that only those living in the developed world deserve to see protected. To presume otherwise ignores the humanity of individuals living in various parts of the world.

Technologies undoubtedly have the potential to dramatically improve the provision of development and humanitarian aid and to empower populations. However, the privacy implications of many new technologies are significant and are not well understood by many development actors. The expectations that are placed on technologies to solve problems need to be significantly circumscribed, and the potential negative implications of technologies must be assessed before their deployment. Biometric identification systems, for example, may assist in aid disbursement, but if they also wrongly exclude whole categories of people, then the objectives of the original development intervention have not been achieved. Similarly, border surveillance and communications surveillance systems may help a government improve national security, but may also enable the surveillance of human rights defenders, political activists, immigrants and other groups.

Asking for humanitarian actors to protect and respect privacy rights must not be distorted as requiring inflexible and impossibly high standards that would derail development initiatives if put into practice. Privacy is not an absolute right and may be limited, but only where limitation is necessary, proportionate and in accordance with law. The crucial aspect is to actually undertake an analysis of the technology and its privacy implications and to do so in a thoughtful and considered manner. For example, if an intervention requires collecting personal data from those receiving aid, the first step should be to ask what information is necessary to collect, rather than just applying a standard approach to each programme. In some cases, this may mean additional work. But this work should be considered in light of the contribution upholding human rights and the rule of law make to development and to producing sustainable outcomes. And in some cases, respecting privacy can also mean saving lives, as information falling into the wrong hands could spell tragedy.

A new framing

While there is an increasing recognition among development actors that more attention needs to be paid to privacy, it is not enough to merely ensure that a programme or initiative does not actively harm the right to privacy; instead, development actors should aim to promote rights, including the right to privacy, as an integral part of achieving sustainable development outcomes. Development is not just, or even mostly, about accelerating economic growth. The core of development is building capacity and infrastructure, advancing equality, and supporting democratic societies that protect, respect and fulfill human rights.

The benefits of development and humanitarian assistance can be delivered without unnecessary and disproportionate limitations on the right to privacy. The challenge is to improve access to and understanding of technologies, ensure that policymakers and the laws they adopt respond to the challenges and possibilities of technology, and generate greater public debate to ensure that rights and freedoms are negotiated at a societal level.

Technologies can be built to satisfy both development and privacy.

Download the Aiding Surveillance report.

Read Full Post »

The use of social media and new technologies in disaster and crisis situations will likely be framed forevermore as ‘Before Haiti’ (BH) and ‘After Haiti’ (AH). The terrible earthquakes that devastated the country and the resulting flood of humanitarian interventions that followed mark the moment that the world woke up to the role that new technology and new media can and do play in the immediate local response to a crisis as well as the ongoing emergency response once international volunteers and aid organizations arrive to the scene.

Still left in the dark? How people in emergencies use communication to survive — and how humanitarian agencies can help,’ is a new policy briefing by BBC Media Action. It does a fantastic job of highlighting the ways that new media is changing disaster response and the fact that the humanitarian industry is not doing enough to keep pace.

The briefing asks: ‘Are humanitarian agencies prepared to respond to, help and engage with those who are communicating with them and who demand better information?’ As the report shows, the answer is ‘not really’.

Many humanitarian agencies continue to see ‘communication’ as something done to raise money or boost the profile of their own efforts, says the report. Yet the sector increasingly needs a clear, strategic focus that responds to the information and communication needs of disaster-affected populations. ‘While responders tend to see communication as a process either of delivering information (‘messaging’) or extracting it, disaster survivors seem to see the ability to communicate and the process of communication itself as every bit as important as the information delivered.’ Communicating with affected populations can improve programming and response, not to mention, as highlighted in the Listening Project and referred to in the BBC report, ‘Listening is seen as an act of respect.’

The briefing is not one-sided, however, and it does a good job of understanding some of the common perceptions around the use of new technologies. There is ICT and social media hype on the one hand (often the position of international ICT specialists, based on very little hard evidence). And there is strong questioning of the value of these tools at all on the other hand (often the position of aid workers who are turned off by the hype and want to see proven practice if they are to consider investing time and resources in new ICTs). The briefing paper provides examples that may serve to convince those who are still unconvinced or uninterested.

Many of us see the reality that social media tools are still quite out of reach of the poorest members of a community or country. But writing new media and technologies off entirely is ‘too crude an analysis,’ says the briefing paper, ‘and one that is being challenged by data from the field.’ In Thailand, for example, use of social media increased by 20 percent in both metropolitan and rural areas when the 2010 floods began. ‘Communities now expect — and demand — interaction,’ the paper emphasizes.

Throughout the paper there are good examples of ways that in the immediate aftermath of a disaster local individuals and companies are quickly creating and rolling out communication platforms that are more relevant, feasible and sustainable than some of the initiatives that have been led by international agencies.

An important point that the report makes is that the most successful use of new media in disaster response combines multiple communication channels — new and old — allowing information to flow in and out, to and from different populations with different media habits and levels of access. For example, local journalists and radio stations may pull information from Twitter and Facebook and broadcast it via the radio. They may also be broadcasting online, meaning that they not only reach their local population, but they also reach the diaspora community. ‘Good communication work,’ the paper notes, ‘is, by definition, multi-platform. It is about integrating media, non mass media and technology tools in a manner that is rooted firmly in an understanding of how communities approach information issues.’

So ‘while aid agencies hesitate, local communities are using communications technology to reshape the way they prepare for and respond to emergencies.’ This hesitation remains in the face of growing evidence, according to the paper, that agencies who invest in meaningful communication with disaster-affected communities garner a range of benefits. ‘The communication process becomes central to effective relationships, mitigating conflict and identifying and preventing rumours and misunderstandings.’ Doing it right, however requires good program design, which in turn, requires aid agencies to focus time and expertise on improving communications with affected populations, which, of course, requires budget allocated to these kinds of activities.

For those agencies that want to look more closely at integrating some of these tools and approaches, the report gives several great examples. The recommendations section is also very useful.

One of the recommendations I think it is especially good to highlight is ‘analyse the communications landscape’ (see InfoAsAid’s materials). I would, however, have liked to see mention of involving members of local communities in a participatory analysis of that landscape. In addition to the technical analysis, it’s important to know who trusts which information sources, and why. For example, the information sources trusted by men may be different than those that women trust. The same goes for children, youth and adolescents. Access is also a critical issue here, especially in terms of reaching more marginalized groups, as the population affected by a disaster is not homogeneous and there will be hierarchies and nuances that should be considered in order to avoid leaving certain people or groups out. I would have liked more mention of the need to take a specialized approach and make a focused effort regarding communication with children and adolescents, with women, with certain ethnic groups or persons with disability, as they may be left out if care is not taken to include them.

Another recommendation I found very useful was ‘think of the whole population, not just beneficiaries.’ The paper cautions that working only with the population that directly benefits from an individual agencies’ funding is a positive angle for transparency and accountability, but it ‘seems to have a profoundly limiting effect in practice.’ Beneficiaries need information about more than just one aid agency, and those who are not benefiting from a particular agency’s programs also have information needs.

The report mentions that sometimes in the aftermath of a disaster, when people use new media to communicate, confidentiality and privacy risks are not fully considered. I’d like to see this aspect addressed a bit more in the future as there is a lack of documented good practice around privacy and protection. (Note: I’ll be involved in some research on ICTs, migration and child protection over the next year, so watch this space if it’s a topic of interest.) Humanitarian agencies may have strong policies here, but this is an area that others may not consider in their rush to offer support following a disaster.

Overall the report is extremely useful for those considering how to improve two-way (or multi-way) communication in a post-disaster situation and I very much enjoyed reading it as it sparked learning and fresh ideas, and documented some good work being done by local as well as international groups.

I don’t have a lot of pity or patience for aid agencies who are ‘left behind’ because they refuse to move with the changing times or because their approaches are not effective or relevant. There are several forces that are ‘disrupting’ humanitarian work these days, and  the report does a great job of identifying some of them. It also highlights the importance of local actors, local private sector and diaspora communities who have always played an important role in aid and development but have only recently been given more recognition.

International agencies still play a critical role in the humanitarian space, but they need to understand and adapt to the changes happening around them. This report can serve as a discussion piece and offer guidance for those agencies who want to use new media and technology to enable them to do a better job of listening to, working with and being accountable to disaster affected populations.

In summary, highly recommended reading!

Read Full Post »

This is a cross-post by Laura Pohl who works at the Bread for the World. The post is a summary of the latest Humanitarian Photography group meeting. The meetings are part of a series of meetings about photography in humanitarian work organized by CORE Group. (The original is posted here)

An employee at the Yemi Hanbok factory near Yanji, China, examines a traditional Korean skirt she is dyeing.The factory’s unique geographic position in China, near North Korea, means it can have work done cheaply in both countries and then sell to customers in South Korea. Photo © Laura Elizabeth Pohl

One of the most common questions I get asked at work is, “Can I use this picture?”

Oh, there are so many answers — and we talked about most of them at our latest Humanitarian Photography Group meeting this past Tuesday. Jim Stipe of Catholic Relief Services, Ann Hendrix-Jenkins of CORE Group and I organized the meeting. I led it, starting off with a talk on figuring out whether a photograph is yours or not. Sounds easy, right?

Well, if someone on your staff shot the picture then yes, the answer is easy: you can use it. But should you use it? (More on that later.) If your organization hired a freelancer to shoot pictures for you, then yes, you can use the pictures but possibly with restrictions. It depends on the type of contract between you and the photographer. The main point here is that the photographer usually still retains copyright to the picture and your organization is licensing the pictures. If this all sounds complicated and legalistic, don’t worry; we’ll be delving into copyright, contracts and licensing in a future meeting. Finally, if you don’t know who shot the photograph, where it came from or who owns the copyright, it’s best not to use the picture.

Next I talked a bit about Creative Commons licenses, which photographers use when they want to let the public publish their pictures without paying a licensing fee. CC-licensed photos abound on Flickr, a frequent source of pictures for nonprofit organizations. Just be sure the picture editor at your organization (or the person in this role) understands all six CC licenses and when a picture is fully copyrighted, all rights reserved. When I was a freelance photographer, an NGO once published my copyrighted photographs from Flickr without my permission. I found the copyright violation and sent the organization a stern letter and invoice for the pictures. The organization paid up. They never apologized, though. The organization had tasked their intern with finding pictures. It turns out he didn’t understand copyright.

Finally, I talked about content and technical criteria for publishing pictures, showing about 20 “bad” photos that Jim and I found. There are many reasons photographs shouldn’t see the light of day but I think the best way to summarize the photo usage criteria listed below is, “Do unto others as you would have them do unto you.” That out-of-focus photograph of 10 bored-looking people sitting in a meeting? You don’t have to use it. That portrait of a person with a strange expression on his face? Just don’t publish it. As Jim often says, “It’s better not to use a photograph than to use a bad photograph.”

  • Dignity – No emaciated children, flies in eyes, violence or stereotypical images.
  • Safety – Could the picture put the photo subjects in danger, even if they agreed to be photographed?
  • Context – No pretending photos of one thing are of something else. Sometimes photos published as a collection make sense together but one photo on its own from the collection doesn’t make sense.
  • Caption – Who’s in the picture? Where was it taken? Why is this picture important? Even if you can’t publish this information because of safety concerns, it’s important to have the information for your internal records.
  • Credit – Who took the picture? It’s just like attributing a quote.
  • Third effect – Two pictures paired together can take on a new meaning.
  • Focus – Is the picture sharp or blurry?
  • Framing – Is the photo framed well?
  • Exposure – Is the picture properly exposed?
  • Color – Does it look natural?

People had great comments and questions at the end. (By the way, Jim did a nice job tweeting the meeting at #NGOphoto.) A woman named Aubrey said her organization works on a lot of construction projects, roads, latrines, etc. They often shoot before and after photographs to show the impact of their projects. Another person asked about setting up or staging shots. Some people wanted to know more about contracts and how to work with country programs to set up photo shoots. These are all meaty topics. I can’t wait to get into them more in future meetings. We’re taking a break for the summer but we’ll meet again in September, so get in touch if you’d like to join us.

You can find summaries of other meetings here:

April: On the ethics of photos in aid and development work

May: Aid, ethics, photography and informed consent

Read Full Post »

Ivan and Massilau working on some mapping in Inhambane, Mozambique.

I had the pleasure of working with Iván Sánchez Ortega in Mozambique earlier this month, and I learned a ton about the broader world of GIS, GPS, FOSS, Ubuntu and Open Street Maps from him. We also shared a few beers, not to mention a harrowing plane ride complete with people screaming and everyone imagining we were going to die! But I suppose it’s all in a day’s work.

Below is a cross-post by Iván about Maps for Mozambique. You can find the original post here, and a version in Spanish here. Note: the opinions expressed below belong to Iván and not to his former, current or future employeers…..

*****

Last week was a small adventure. I went to Mozambique to make maps, as part of the Youth Empowerment through Arts and Media program. The main goal was to train youngsters in order for them to make a basic cartography of the surrounding rural communities.

This travel is part of the Humanitarian OpenStreetMap Team activities. After the successes of Kibera and Haiti, we want to check how much we can help by providing cartography.

Cartography in developing areas provides a great amount of situational awareness – in order to help, one needs to know where the help is needed. In the case of Mozambique rural communities, we’re talking about knowing who has a water well and access to healthcare and education, and who doesn’t.

The problem with rural Mozambique is that the population is very disperse. Each family unit lives in an isolated set of huts, away from the other families in the community. There is so much land available that the majority of the land is neither used or managed.

Which leads to think that, maybe, the successes at Kibera and Haiti are, in part, due to them being dense urban areas, where a kilometer square of information is very useful.

It has been repeated ad nauseam that geographic information is the infrastructure of infrastructures. Large-scale humanitarian problems can’t be tackled without cartographic support – without it, there isn’t situational awareness, nor will coordinating efforts be possible, something very important in an era when aid can get in the way of helping. However, even with agile surveying techniques and massively crowdsourced work, the cost of surveying large areas is still big. And, as in all the other problems, technology isn’t the silver bullet.

—-

That said, the way one has to go to reach the rural communities doesn’t have anything to do with the occidentalized stereotypical image of rural sub-Saharan Africa. There are no lions, nor children with inflated bellies due to starvation.

There is, however, the image of a developed country but in which the public agencies work at half throttle. Mass transit, garbage collection, urbanism, civil protection, environment, job market, education, social security. Everything’s there, but everything works at a much lower level than one could expect. To give out an example, the Administraçao Nacional de Estradas (national roads administration) plans switching of one-way lanes over hand-drawn sketches.

The reasons that explain the situation of the country are not simple, not by far, but in general terms they can be resumed in two: the war of independence of 1964-1975 and the civil war of 1977-1992. Living is not bad, but also not good, and part of the population is expecting international humanitarian aid to magically solve all of their problems.

When one stops to think, the situation eerily reminds of the Spanish movie Welcome, Mr. Marshall. Only that everyone’s black, they don’t dance sevillanas, and instead of railroads they expect healthcare and education.

Wait a moment. A reconstruction 20 years after a civil war, external aid, and the need of cartography for a full country. This reminds me to the 1956-57 Spain general flight, popularly known among cartographers as the American flight.

These aerial photographs, made in collaboration with the U.S. Army Map Service, had a great influence in the topographic maps of that period, and even today they are an invaluable resource to study changes in land use.

—–

Which is, then, the best solution? To inject geospatial technology may be a short-term gain, long-term pain in the form of 9000€/seat software licenses. Mr. Marshall won’t come with a grand orthophotogrameric flight. Military mapping agencies won’t implement SDIs (spatial data infrastructures) overnight. Training aid workers and locals into surveying is possible, but slow and expensive, although it might be the only doable thing.

Related posts on Wait… What?

Mapping Magaiça and other photos from Mozambique

Being a girl in Cumbana

Putting Cumbana on the map — with ethics

Inhambane: land of palm trees and cellular networks

It’s all part of the ICT jigsaw — Plan Mozambique ICT4D workshops

Read Full Post »