
On Thursday September 19, we gathered at the OSF offices for the Technology Salon on “Automated Decision Making in Aid: What could possibly go wrong?” with lead discussants Jon Truong and Elyse Voegeli, two of the creators of Automating NYC; and Genevieve Fried and Varoon Mathur, Fellows at the AI Now Institute at NYU.

To start off, we asked participants whether they were optimistic or skeptical about the role of Automated Decision-making Systems (ADS) in the aid space. The response was mixed: about half skeptics and half optimists, most of whom qualified their optimism as “cautious optimism” or “it depends on who I’m talking to” or “it depends on the day and the headlines” or “if we can get the data, governance, and device standards in place.”

What are ADS?

Our next task was to define ADS. (One reason the New York City ADS task force was unable to advance is that its members could not agree on the definition of an ADS.)

One discussant explained that NYC’s provisional definition was something akin to:

  • Any system that uses data, algorithms, or computer programs to replace or assist a human decision-making process.

This may seem straightforward, yet, as she explained, “if you go too broad you might include something like ‘spellcheck’ which feels like overkill. On the other hand, spellcheck is a good case for considering how complex things can get. What if spellcheck only recognized Western names? That would be an example of encoding bias into the ADS. However, the degree of harm that could come from spellcheck as compared to using ADS for predictive policing is very different. Defining ADS is complex.”

Other elements of the definition: an ADS involves the computational implementation of an algorithm. An algorithm is essentially a clear set of instructions or criteria followed in order to make a decision, and algorithms can be executed manually; what an ADS adds is the power of computation, noted another discussant. Perhaps the definition should also include a computer and a complex system, as well as a decision point or cut-off: for example, an algorithm that determines who gets a loan. Statistical modeling and forecasting, which allow for prediction, are also important to consider.

Using data and criteria to make decisions is nothing new, and it’s often done without specific systems or computers. People make plenty of very bad decisions without computers, and adding computers and algorithms is sometimes considered a more objective approach, because the instructions can be fixed in advance and executed consistently.

Why are there issues with ADS?

In practice things are not as clear cut as they might seem, explained one of our discussants. We live in a world where people are treated differently because of their demographic identity, and the curation of data can over-represent some populations or misrepresent others because of how they have been treated historically. These current and historical biases make their way into algorithms, which are created by humans, and so human biases get encoded into an ADS. When we feed existing data into a computer so that it can learn, we bring our historical biases into decision-making. The data fed into an ADS may not reflect changing demographics or shifts in the underlying situation, and the algorithm may not reflect ongoing institutional policy changes.
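
To make the encoding of historical bias concrete, here is a minimal, hypothetical sketch (the group names and numbers are invented for illustration): a rule “learned” from biased historical decisions knows nothing about individuals and simply reproduces the old pattern.

```python
# Hypothetical illustration: an "algorithm" trained on biased historical
# decisions reproduces the bias. Group names and numbers are invented.
historical_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def learn_approval_rates(records):
    """'Train' by computing the historical approval rate per group."""
    rates = {}
    for group, approved in records:
        approvals, total = rates.get(group, (0, 0))
        rates[group] = (approvals + int(approved), total + 1)
    return {g: a / t for g, (a, t) in rates.items()}

rates = learn_approval_rates(historical_decisions)

def decide(group, threshold=0.5):
    """Approve applicants from groups whose historical rate clears the threshold."""
    return rates[group] >= threshold

# The rule has learned nothing about individual merit -- only the old bias.
print(rates)              # {'group_a': 0.75, 'group_b': 0.25}
print(decide("group_a"))  # True
print(decide("group_b"))  # False
```

Nothing in the code mentions race, gender, or any protected attribute, yet the decision it makes for a new applicant is entirely determined by how that applicant’s group was treated in the past.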

As another person said, “systems are touted as being neutral, but they are subject to human fallacies. We live in a world that is full of injustice, and that is reflected in a data set or in an algorithm. The speed of the system, once it’s computerized, replicates injustices more quickly and at greater scale.” When people or institutions believe that the involvement of a computer means the system is neutral, we have a problem. “We need to take ADS with a grain of salt, similar to how we tell children not to believe everything they see on the Internet.”

Many people are unaware of how an algorithm works. Yet over time, we tend to rely on algorithms and believe in them as unbiased truth. When ADS are not monitored, tested, and updated, this becomes problematic. ADS can begin to make decisions for people rather than supporting people in making decisions, and this can go very wrong, for example when decisions are unquestioningly made based on statistical forecasting models.

Are there ways to curb these issues with ADS?

Consistent monitoring. ADS should be monitored by humans constantly over time. One Salon participant suggested setting up checkpoints in the decision-making process to alert humans that something is amiss. Another suggested that research and proof of concept are critical. For example, running the existing human-only system alongside the ADS and comparing decisions over time helps flag differences, which can then be examined to see which process is working better and to adjust or discontinue the ADS if it is getting things wrong. (In some cases, this process may actually flag biases in the human system.) Random checks can be set up, as can control groups where some decisions are made without the ADS so that results can be compared between the two.
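
One way to implement the parallel-run idea described above is a “shadow mode” comparison, sketched here with invented functions and field names: the ADS runs alongside the existing human process, and only disagreements are surfaced for human review.

```python
# Sketch of running an ADS in "shadow mode" alongside the existing human
# process and flagging disagreements for review. All names are hypothetical.
def shadow_compare(cases, human_decide, ads_decide):
    """Return the cases where the human process and the ADS disagree."""
    flagged = []
    for case in cases:
        human = human_decide(case)
        ads = ads_decide(case)
        if human != ads:
            flagged.append({"case": case, "human": human, "ads": ads})
    return flagged

# Toy example: eligibility by income threshold, where the two processes
# happen to use slightly different cutoffs.
cases = [{"id": 1, "income": 900}, {"id": 2, "income": 1100}, {"id": 3, "income": 1500}]
human = lambda c: c["income"] < 1000   # human rule of thumb
ads = lambda c: c["income"] < 1200     # ADS cutoff

disagreements = shadow_compare(cases, human, ads)
print([d["case"]["id"] for d in disagreements])  # [2]
```

Each flagged case is an opportunity to ask which process got it right, which is exactly the evidence needed to adjust or discontinue the ADS.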

Recourse and redress. There should be simple and accessible ways for people affected by ADS to raise issues and make complaints. Any ADS can make mistakes: false positives (where the system falsely indicates a match or the presence of a condition) and false negatives (where it falsely indicates the absence of a match or condition that is in fact present). So there needs to be recourse for people affected by errors, or in cases where biased data is leading to further discrimination or harm. Anyone creating an ADS needs to build in a way for mistakes to be managed and corrected.

Education and awareness. A person may not be aware that an ADS has affected them, and they likely won’t understand how an ADS works. Even people using ADS for decisions about others often forget that it’s an ADS deciding. This is similar to how people forget that their newsfeed on Facebook is based on their historical choices in content and their ‘likes’ and is not a neutral serving of objective content.

Improving the underlying data. Algorithms will only get better when there are constant feedback loops and new data that help the computer learn, said one Salon participant. Currently most algorithms are trained on highly biased samples that do not reflect marginalized groups and communities. For example, there is very little data about many of the people participating in or eligible for aid and development programs.

So we need proper data sets that are continually updated if we are to use ADS in aid work. This is a problem, however, if the data that is continually fed into the ADS remains biased. One person shared this example: if some communities are policed more because of race, economic status, etc., there will continually be more data showing that people in those communities are committing crimes. In whiter or wealthier communities, where there is less policing, fewer people are arrested. If we update our data continually without changing the fact that some communities are policed more than others (and thus will appear to have higher crime rates), we are simply creating a feedback loop that confirms our existing biases.
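
The policing feedback loop described above can be simulated in a few lines. All numbers here are invented, but the dynamic is the point: two neighborhoods with the same underlying crime rate, where attention is allocated in proportion to past arrests, and more attention generates more arrests.

```python
# Toy simulation of a biased feedback loop (all numbers invented).
# Both neighborhoods have the SAME true crime rate, but patrols are
# allocated in proportion to past arrest counts, so the data keeps
# "confirming" the initial imbalance instead of correcting it.
true_crime_rate = 0.1                # identical in both neighborhoods
arrests = {"a": 60, "b": 40}         # historical imbalance from past over-policing
patrol_budget = 100

for year in range(5):
    total = sum(arrests.values())
    for hood in arrests:
        patrols = patrol_budget * arrests[hood] / total  # attention follows the data
        arrests[hood] += patrols * true_crime_rate       # more patrols, more arrests

share_a = arrests["a"] / sum(arrests.values())
print(round(share_a, 3))  # still 0.6 -- the data never moves toward 50/50
```

Even after five rounds of “continually updating” the data, neighborhood A still accounts for 60% of recorded arrests despite an identical true crime rate, which is exactly the self-confirming loop the participant described.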

Privacy concerns also enter the picture. We may want to avoid collecting data on race, gender, ethnicity or economic status so that we don’t expose people to discrimination, stigma, or harm. For example, in the case of humanitarian work or conflict zones, sensitive data can make people or groups a target for governments or unfriendly actors. However, it’s hard to make decisions that benefit people if their data is missing. It ends up being a catch-22.

Transparency is another way to improve ADS. “In the aid sector, we never tell people how decisions are made, regardless of whether those are human or machine-made decisions,” said one Salon participant. When the underlying algorithm is obscured, it cannot be reviewed for value judgments. Some compared this to some of the current non-algorithmic decision-making processes in the aid system (which are also not transparent) and suggested that aid systems could get more intelligent if they began to surface their own specific biases.

The objectives of the ADS can be reviewed. Is the system used to further marginalize or discriminate against certain populations, or can this be turned on its head? asked one discussant. ADS could be used to try to determine which police officers might commit violence against civilians rather than to predict which people might commit a crime. (See the Algorithmic Justice League’s work). 

ADS in the aid system – limited to the powerful few?

Because of the underlying data challenges in the aid sector (quality, standards, or outright lack of data), deploying ADS there is still difficult. One area where data is available and where ADS are being built and used is supply chain management, for example at massive UN agencies like the World Food Program.

Some questioned whether this exacerbates concentration of power in these large agencies, running counter to agreed-upon sector goals to decentralize power and control to smaller, local organizations who are ‘on the ground’ and working directly in communities. Does ADS then bring even more hierarchy, bias, and exclusion into an already problematic system of power and privilege? Could there be ways of using ADS differently in the aid system that would not replicate existing power structures? Could ADS itself be used to help people see their own biases? “Could we build that into an ADS? Could we have a read out of decisions we came to and then see what possible biases were?” asked one person.

How can we improve trust in ADS?

Most aid workers, national organizations, and affected communities have a limited understanding of ADS, leading to lower levels of trust in ADS and the decisions they produce. Part of the issue is the lack of participation and involvement in the design, implementation, validation, and vetting of ADS. On the other hand, one Salon participant pointed out that given all the issues with bias and exclusion, “maybe they would trust an ADS even less if they understood how an ADS works.”

Involving both users of an ADS and the people affected by ADS decisions is crucial. This needs to happen early in the process, said one person. It shouldn’t be limited to having people complain or report once the ADS has wronged them. They need to be at the table when the system is being developed and trialed.

If trust is to be built, the explainability of an algorithm needs consideration. “How can you explain the algorithm to people who are affected by it? Humanitarian workers cannot describe an ADS if they don’t understand it. We need to find ways to explain ADS to a non-technical audience so that they can be involved,” said one person. “We’ve shown sophisticated models to leaders, and they defaulted to spreadsheets.”

This brought up the need for change management if ADS are introduced. Involving and engaging decision-makers in the design and creation of ADS systems is a critical step for their adoption. This means understanding how decisions are made currently and based on what factors. Technology and data teams need to be in the room to understand the open and hidden nature of decision-making.

Isn’t decision making without ADS also highly biased and obscured?

People are often resistant to talking about or sharing how decisions have been made in the past, however, because those decisions may have been biased or inconsistent, based on faulty data, or made for political or other reasons.

As one person pointed out, both government and the aid system are deeply politicized and suffer from local biases, corruption and elite capture. A spatial analysis of food distribution in two countries, for example, showed extreme biases along local political leader lines. A related analysis of the road network and aid distribution allowed a clear view into the unfairness of food distribution and efficiency losses.

Aid agencies themselves make highly-biased decisions all the time, it was noted. Decisions are often political, situational, or made to enhance the reputation of an individual or agency. These decisions are usually not fully documented. Is this any less transparent than the ‘black box’ of an algorithm? Not to mention that agencies have countless dashboards that are aimed at helping them make efficient, unbiased decisions, yet recommendations based on the data may run counter to what is needed politically or for other reasons in a given moment.

Could (should) the humanitarian sector assume greater leadership on ADS?

Most ADS are built by private sector partners. When they are sold to the public or INGO sector, these companies indemnify themselves against liability and keep their trade secrets. It becomes impossible to hold them to account for any harm produced. One person asked whether the humanitarian sector could lead by bringing in different incentives: transparency, multi-stakeholder design, participation, and a focus on wellbeing. Could we try this, learn from it, and develop and document processes whereby this could be done at scale? Could the aid sector open source how ADS are designed and created so that data scientists and others could improve them?

Some were skeptical about whether the aid sector would be capable of this. “Theoretically we could do this,” said one person, “but it would then likely be concentrated in the hands of these few large agencies. In order to have economies of scale, it will have to be them because automation requires large scale. If that is to happen, then the smaller organizations will have to trust the big ones, but currently the small organizations don’t trust the big ones to manage or protect data.” Another person raised the involvement of governments: the role of the public sector would also need to be considered.

“I like the idea of the humanitarian sector leading,” added one person, “but aid agencies don’t have the greatest track record for putting their constituencies in the driving seat. That’s not how it works. A lot of people are trying to correct that, but aid sector employees are not the people who will be affected by these systems in the end. We could think about working with organizations who have the outreach capacity to do work with these groups, but again, these organizations are not made up of the affected people. We have to remember that.”

How can we address governance and accountability?

When you bring in government, private sector, aid agencies, software developers, data, and the like, said another person, you will have issues of intellectual property, ownership, and governance. What are the local laws related to data transmission and storage? Is it enough to open source just the code or ADS framework without any data in it? If you work with local developers and force them to open source the algorithm, what does that mean for them and their own sustainability as local businesses?

Legal agreements? Another person suggested that we focus on open sourcing legal agreements rather than algorithms. “There are always risks, duties, and liabilities listed in contracts and legal agreements. The private sector in particular will always play the indemnity card. And that means there is no commercial incentive to fix the tools that are being used. What if we pivoted this conversation to commercial liability? If a model is developed in Manhattan, it won’t work in Malawi — a company has a commercial duty to flag and recognize that. This type of issue is hidden if we focus the conversation on open software or open models. It’s rare that all the technology will be open and transparent. What we should push for is open contracting, and that could help a lot with governance.”

Certification? Others suggested that we adapt existing audit systems like the LEED certification (which allows engineers and architects to audit whether buildings are actually environmentally sustainable) or the IRB process (external boards that review research to flag ethical issues). “What if there were a team of data scientists and others who could audit ADS and determine the flaws and biases?” suggested one person. “That way the entire thing wouldn’t need to be open, but it could still be audited independently.” This was questioned, however, in that a stamp of approval on a single system could lead people to believe that every system designed by a particular group would pass the test.

Ethical frameworks could be a tool, yet which framework? A recent article cited 84 different ethical frameworks for Artificial Intelligence.

Regulation? Self-regulation has failed, said one person. Why aren’t we talking about actual regulation? The General Data Protection Regulation (GDPR) in Europe has a specific article (Article 22) on ADS, which gives people the right to know when ADS are used to make decisions that affect them, the right to contest decisions made by ADS, and the right to request that a human review ADS decisions.

SPHERE Standards / Core Humanitarian Standard? Because of the legal complexities of working across multiple countries and with different entities in different jurisdictions (including some like the UN who are exempt from the law), an add-on to the SPHERE standards might be considered, said one person. Or something linked to the Core Humanitarian Standard (CHS), which includes a certification process. Donors will often ask whether an agency is CHS certified.

So, is there any good to come from ADS?

We tend to judge ADS with higher standards than we judge humans, said one Salon participant. Loan officers have been making biased decisions for years. How can we apply the standards of impartiality and transparency to both ADS and human decision making? ADS may be able to fix some of our current faulty and biased decisions. This may be useful for large systems, where we can’t afford to deploy humans at scale. Let’s find some potential bright spots for ADS.

Some positive examples shared by participants included:

  • Human rights organizations are using satellite imagery to identify areas that have been burned or otherwise destroyed during conflict. This application of automated decision making doesn’t deal directly with people or the allocation of resources; rather, it supports human rights research.
  • In California, ADS has been used to expunge the records of people convicted for marijuana-related violations now that marijuana has been legalized. This example supports justice and fairness.
  • During Hurricane Irma, an organization in the Virgin Islands used an Excel spreadsheet to track whether people met the criteria for assistance. Aid workers would interview people and the sheet would automatically calculate whether they were eligible. This was not high tech or sexy, but it was automated and fast. The government created the criteria, and these were openly and transparently communicated to people ahead of time so that if they didn’t receive benefits, they were clear about why.
  • Flood management is an area where there is a lot of data and forecasting. Governments have been using ADS to evacuate people before it’s too late. This sector can gain in efficiency with ADS, which could be expanded to other weather-based hazards. Because it is a straightforward use case that involves satellites and less personal data, it may be a less political space, making deployment easier.
  • Drones also use ADS to stitch together hundreds of thousands of photos to create large images of geographical areas. Though drone data still needs to be ground truthed, it is less of an ethical minefield than when personal or household level data is collected, said one participant. Other participants, however, had issues with the portrayal of drones as less of an ethical minefield, citing surveillance, privacy, and challenges with the ownership and governance of the final knowledge product, the data for which was likely collected without people’s consent.
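
The Hurricane Irma spreadsheet example above shows that “automated” can be as simple as published rules that anyone can inspect. A hypothetical sketch of that idea (the thresholds and field names are invented for illustration), which also returns the reasons for a denial so that recourse is possible:

```python
# Hypothetical eligibility check in the spirit of the Hurricane Irma
# spreadsheet: simple, published criteria that anyone can inspect.
CRITERIA = {
    "max_monthly_income": 1500,  # invented threshold
    "home_damaged": True,        # must report damage
}

def eligible(household):
    """Apply the published criteria and return (decision, reasons for denial)."""
    reasons = []
    if household["monthly_income"] > CRITERIA["max_monthly_income"]:
        reasons.append("income above threshold")
    if not household["home_damaged"]:
        reasons.append("no reported home damage")
    return (len(reasons) == 0, reasons)

print(eligible({"monthly_income": 1200, "home_damaged": True}))   # (True, [])
print(eligible({"monthly_income": 2000, "home_damaged": False}))  # (False, [...])
```

Because the criteria are explicit and the reasons are returned with each decision, people who are denied benefits can see exactly why, which supports the recourse and transparency themes raised earlier.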

How can the humanitarian sector prepare for ADS?

In conclusion, one participant summed up that decision making has always been around. As ADS are explored in more depth by groups like the one at this Salon, and as we delve into the ethics and improve on ADS, there is great potential. ADS will probably never totally replace humans, but they can supplement human judgment to make better decisions.

How are we in the humanitarian sector preparing people at all levels of the system to engage with these systems, design them ethically, reduce harm, and make them more transparent? How are we working to build capacities at the local level to understand and use ADS? How are we figuring out ways to ensure that the populations who will be affected by ADS are aware of what is happening? How are we ensuring recourse and redress in the case of bad decisions or bias? What jobs might be created (rather than eliminated) with the introduction of more ADS?

ADS are not going to go away, and the humanitarian sector doesn’t have to wait until they are perfected to get involved in shaping and improving them so that they support our work in ethical and useful ways rather than in harmful or unethical ways.

Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

 



The recently announced World Food Programme (WFP) partnership with Palantir, IRIN’s article about it, reactions from the Responsible Data Forum, and WFP’s resulting statement inspired us to pull together a Technology Salon in New York City to discuss the ethics of humanitarian data sharing.

(See this crowdsourced document for more background on the WFP-Palantir partnership and resources for thinking about the ethics of data sharing. Also here is an overview of WFP’s SCOPE system for beneficiary identification, management and tracking.)

Our lead discussants were: Laura Walker McDonald, Global Alliance for Humanitarian Innovation; Mark Latonero, Research Lead for Data & Human Rights, Data & Society; Nathaniel Raymond, Jackson Institute of Global Affairs, Yale University; and Kareem Elbayar, Partnerships Manager, Centre for Humanitarian Data at the United Nations Office for the Coordination of Humanitarian Affairs. We were graciously hosted by The Gov Lab.

What are the concerns about humanitarian data sharing and with Palantir?

Some of the initial concerns expressed by Salon participants about humanitarian data sharing included: data privacy and the permanence of data; biases in data leading to unwarranted conclusions and assumptions; loss of stakeholder engagement when humanitarians move to big data and techno-centric approaches; low awareness and poor practices across humanitarian organizations on data privacy and security; tensions between security of data and utility of data; validity and reliability of data; lack of clarity about the true purposes of data sharing; the practice of ‘ethics outsourcing’ (testing things in places where there is a perceived ‘lower ethical standard’ and less accountability); use of humanitarian data to target and harm aid recipients; disempowerment and extractive approaches to data; lack of checks and balances for safe and productive data sharing; difficulty of securing meaningful consent; and the links between data and surveillance by malicious actors, governments, private sector, military or intelligence agencies.

Palantir’s relationships and work with police, the CIA, ICE, the NSA, the US military and the wider intelligence community are among the main concerns about this partnership. Some ask whether a company can legitimately serve the philanthropy, development, social, human rights and humanitarian sectors while also serving the military and intelligence communities, and whether it is ethical for those in the former to engage in partnerships with companies who serve the latter. Others ask if WFP and others who partner with Palantir are fully aware of the company’s background, and if so, why these partnerships have been able to pass through due diligence processes. Yet others wonder if a company like Palantir can be trusted, given its background.

Below is a summary of the key points of the discussion, which happened on February 28, 2019. (Technology Salons are Chatham House affairs, so I have not attributed quotes in this post.)

Why were we surprised by this partnership/type of partnership?

Our first discussant asked why this partnership was a surprise to many. He emphasized the importance of stakeholder conversations, transparency, and wider engagement in the lead-up to these kinds of partnerships. “And I don’t mean in order to warm critics up to the idea, but rather to create a safe and trusted ecosystem. Feedback and accountability are really key to this.” He also highlighted that humanitarian organizations are not experts in advanced technologies and that it’s normal for them to bring in experts in areas that are not their forte. However, we need to remember that tech companies are not experts in humanitarian work and put the proper checks and balances in place. Bringing in a range of multidisciplinary expertise and distributed intelligence is necessary in a complex information environment. One possible approach is creating technology advisory boards. Another way to ensure more transparency and accountability is to conduct a human rights impact assessment. The next year will be a major test for these kinds of partnerships, given the growing concerns, he said.

One Salon participant said that the fact that the humanitarian sector engages in partnerships with the private sector is not a surprise at all, as the sector has worked through Public-Private Partnerships (PPPs) for several years now and they can bring huge value. The surprise is that WFP chose Palantir as the partner. “They are not the only option, so why pick them?” Another person shared that the WFP partnership went through a full legal review, and so it was not a surprise to everyone. However, communication around the partnership was not well planned or thought out, and the process was not transparent and open. Others pointed out that although a legal review covers some bases, it does not assess the potential negative social impact or risk to ‘beneficiaries.’ For some the biggest surprise was WFP’s own surprise at the pushback on this particular partnership and its unsatisfactory reaction to the concerns raised about it. The response from responsible data advocates and the press attention to the WFP-Palantir partnership might be a turning point for the sector to encourage more awareness of the risks in working with certain types of companies. As many noted, this is not only a problem for WFP; it’s something that plagues the wider sector and needs to be addressed urgently.

Organizations need to think beyond reputational harm and consider harm to beneficiaries

“We spend too much time focusing on avoiding risk to institutions and too little time thinking about how to mitigate risk to beneficiaries,” said one person. WFP, for example, has some of the best policies and procedures out there, yet this partnership still passed their internal test. That is a scary thought, because it implies that other agencies who have weaker policies might be agreeing to even more risky partnerships. Are these policies and risk assessments, then, covering all the different types of risk that need consideration? Many at the Salon felt that due diligence and partnership policies focus almost exclusively on organizational and reputational risk with very little attention to the risk that vulnerable populations might face. It’s not just a question of having policies, however, said one person. “Look at the Oxfam Safeguarding situation. Oxfam had some of the best safeguarding policies, yet there were egregious violations that were not addressed by having a policy. It’s a question of power and how decisions get made, and where decision-making power lies and who is involved and listened to.” (Note: one person contacted me pre-Salon to say that there was pushback by WFP country-level representatives about the Palantir partnership, but that it still went ahead. This brings up the same issue of decision-making power, and who has power to decide on these partnerships and why are voices from the frontlines not being heard? Additionally, are those whose data is captured and put into these large data systems ever consulted about what they think?)

Organizations need to assess wider implications, risks, and unintended negative consequences

It’s not only WFP that is putting information into SCOPE, said one person. “Food insecure people have no choice about whether to provide their data if they wish to receive food.” Thus, the question of truly ‘informed consent’ arises. Implementing partners don’t have a lot of choice either, he said. “Implementing agencies are forced to input beneficiary data into SCOPE if they want to work in particular zones or countries.” This means that WFP’s systems and partnerships have an impact on the entire humanitarian community, and therefore they need to be consulted on more broadly with the wider sector. The optical and reputational impact on organizations aside from WFP is significant, as they may disagree with the Palantir partnership but are now associated with it by default. This type of harm goes beyond the fear of exploitation of the data in WFP’s “data lake.” It becomes a risk to personnel on the ground, who may be seen as collaborating with a CIA contractor by putting beneficiary biometric data into SCOPE. It can also deter food-insecure people from accessing benefits. Additionally, association with the CIA or US military has led to humanitarian agencies and workers being targeted, attacked, and killed. That is all in addition to the question of whether these kinds of partnerships violate humanitarian principles, such as impartiality.

“It’s critical to understand the role of rumor in humanitarian contexts,” said one discussant. “Affected populations are trying to figure out what is happening and there is often a lot of rumor going around.”  So, if Palantir has a reputation for giving data to the CIA, people may hear about that and then be afraid to access services for fear of having their data given to the CIA. This can lead to retaliation against humanitarians and humanitarian organizations and escalate their risk of operating. Risk assessments need to go beyond the typical areas of reputation or financial risk. We also need to think about how these partnerships can affect humanitarian access and community trust and how rumors can have wide ripple effects.

The whole sector needs to put better due diligence systems in place. As it is now, noted one person, often it’s someone who doesn’t know much about data who writes up a short summary of the partnership, and there is limited review. “We’ve been struggling for 10 years to get our offices to use data. Now we’re in a situation where they’re just picking up a bunch of data and handing it over to private companies.”

UN immunities and privileges lead to a lack of accountability

The fact that UN agencies have immunities and privileges means that laws such as the EU’s General Data Protection Regulation (GDPR) do not apply to them, and they are left to self-regulate. Additionally, there is no common agreement among UN agencies on how GDPR applies, and each agency interprets it on its own. As one person noted, “There is a troubling sense of exceptionalism and lack of accountability in some of these agencies because ‘a beneficiary cannot take me to court.’” An interesting point, however, is that while UN agencies are immune, those contracted as their data processors are not immune — so data processors beware!

Demographically Identifiable Information (DII) can lead to serious group harm

The WFP has stated that personally identifiable information (PII) is not technically accessible to Palantir via this partnership. However, some at the Salon felt that WFP’s statement about the partnership failed precisely because it used the absence of PII as a defense. Demographically Identifiable Information (DII) and the activity patterns visible even in commodity data can be extrapolated as training data for future data modeling. “This is prospective modeling of action-based intelligence patterns as part of multiple screeners of intel,” said one discussant. He went on to explain that privacy discussions have moved from centering on property rights in the 19th century, to individual rights in the 20th century, to group rights in the 21st century. We can use existing laws to emphasize the protection of groups and to highlight the risks of DII leading to group harm, he said, as there are well-known cases that exemplify the notion of group harms (Plessy v. Ferguson, Brown v. Board of Education). Even logistics data (the kind of data that WFP says Palantir will access), which contains no PII, makes it very simple to identify groups. “I can look at supply chain information and tell you where there are lactating mothers. If you don’t want refugees to give birth in the country they have arrived to, this information can be used for targeting.”
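The group-inference risk described above can be made concrete with a toy example. This is purely illustrative: the data, site names, and commodity labels below are invented, not drawn from WFP or Palantir systems. The point is that even a commodity table with no names, IDs, or biometrics still reveals where a demographic group is located:

```python
# Illustrative only: invented logistics data containing no PII whatsoever.
# Aggregating by site and commodity still exposes the location of a
# demographic group (here, lactating mothers receiving supplementary
# rations) -- this is DII, even though no individual is identifiable.
import pandas as pd

shipments = pd.DataFrame({
    "site":      ["Camp A", "Camp A", "Camp B", "Camp B", "Camp C"],
    "commodity": ["cereal", "supplementary_ration_lactating",
                  "cereal", "cereal", "supplementary_ration_lactating"],
    "quantity_kg": [12000, 800, 9000, 500, 1200],
})

# Any site receiving the lactating-mother ration hosts that group.
flagged = (
    shipments[shipments["commodity"] == "supplementary_ration_lactating"]
    .groupby("site")["quantity_kg"].sum()
)
print(flagged.index.tolist())  # sites hosting lactating mothers
```

A few lines of routine analysis, run over data that passes a “no PII” test, is enough to target a group, which is why the absence of PII alone is a weak defense.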

Many in the sector do not trust a company like Palantir

Though it is not clear who was in the room when WFP made the decision to partner with Palantir, the overall sector has concerns that the people making these decisions are not assessing partnerships from all angles: legal, privacy, programmatic, ethical, data use and management, social, protection, etc. Technologists and humanitarian practitioners are often not included in making these decisions, said one participant. “It’s the people with MBAs. They trust a tech company to say ‘this is secure’ but they don’t have the expertise to actually know that. Not to mention that yes, something might be secure, but maybe it’s not ethical. Senior people are signing off without having a full view. We need a range of skill sets reviewing these kinds of partnerships and investments.”

Another question arises: What happens when there is scope creep? Is Palantir in essence “grooming” the sector to then abuse data it accesses once it’s trusted and “allowed in”? Others pointed out that the grooming has already happened and Palantir is already on the inside. They first began partnering with the sector via the Clinton Global Initiative meetings back in 2013 and they are very active at World Economic Forum meetings. “This is not something coming out of the Trump administration, it was happening long before that,” said one person, and the company is already “in.” Another person said “Palantir lobbied their way into this, and they’ve gotten past the point of reputational challenge.” Palantir has approached many humanitarian agencies, including all the UN agencies, added a third person. Now that they have secured this contract with the WFP, the door to future work with a lot of other agencies is open and this is very concerning.

We’re in a new political economy: data brokerage

“Humanitarians have lost their Geneva values and embraced Silicon Valley values” said one discussant. They are becoming data brokers within a colonial data paradigm. “We are making decisions in hierarchies of power, often extralegally,” he said. “We make decisions about other people’s data without their involvement, and we need to be asking: is it humanitarian to commodify for monetary or reasons of value the data of beneficiaries? When is it ethical to trade beneficiary data for something of value?” Another raised the issue of incentives. “Where are the incentives stacked? There is no incentive to treat beneficiaries better. All the incentives are on efficiency and scale and attracting donors.”

Can this example push the wider sector to do better?

One participant hoped there could be a net gain out of the WFP-Palantir case. “It’s a bad situation. But it’s a reckoning for the whole space. Most agencies don’t have these checks and balances in place. But people are waking up to it in a serious way. There’s an opportunity to step into. It’s hard inside of bureaucratic organizations, but it’s definitely an opportunity to start doing better.”

Another said that we need more transparency across the sector on these partnerships. “What is our process for evaluating something like this? Let’s just be transparent. We need to get these data partnership policies into the open. WFP could have simply said ‘here is our process’. But they didn’t. We should be working with an open and transparent model.” Overall, there is a serious lack of clarity on what data sharing agreements look like across the sector. One person attending the Salon said that their organization has been trying to understand current practice with regard to data sharing, and it’s been very difficult to get any examples, even redacted ones.

What needs to happen? 

In closing, we discussed what needs to happen next. One person noted that in her research on Responsible Data, she found a total lack of technology capacity at non-profit organizations. “It’s the Economist Syndrome. Someone’s boss reads something on the bus and decides they need a blockchain,” someone quipped. In terms of responsible data approaches, research shows that organizations are completely overwhelmed. “They are keeping silent about their low capacity out of fear they will face consequences,” said one person, “and with GDPR, even more so.” At the wider level, we are still focusing on PII as the issue without considering DII and group rights, and this is a mistake, said another.

Organizations have very low capacity, and we are siloed. “Program officers do not have tech capacity. Tech people are kept in offices or ‘labs’ on their own and there is not a lot of porosity. We need protection advisors, lawyers, digital safety advisors, data protection officers, information management specialists, and IT all around the table for this,” noted one discussant. Also, she said, though we do need principles and standards, it’s important that organizations adapt them as their own. “We need to adapt these boilerplate standards to our organizations. This has to happen based on our own organizational values. Not everyone is rights-based, not everyone is humanitarian.” So organizations need to take the time to review and adapt standards, policies and procedures to their own vision and mission, to their own situations, contexts and operations, and to generate awareness and buy-in. In conclusion, she said, “if you are not being responsible with data, you are already violating your existing values and codes. Responsible Data is already in your values; it’s a question of living it.”

Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

 

Read Full Post »

This is a cross-post from Tom Murphy, editor of the aid blog A View From the Cave. The original article can be found on Humanosphere. The post summarizes discussions at our November 21st New York City Technology Salon: Are Mobile Money Cash Grants the Future of Development? If you’d like to join us for future Salons, sign up here.

by Tom Murphy

Decades ago, some of the biggest NGOs simply gave away money to individuals in communities. People lined up and were just given cash.

The once popular form of aid went out of fashion, but it is now making a comeback.

Over time, coordination became extremely difficult. Traveling from home to home cost the NGOs time and money, and recipients faced the same problem when they had to travel to a central location. More significant was the shift in development thinking that held that giving handouts was causing long-term damage.

The backlash against ‘welfare queens’ in the US, UK and elsewhere during the 1980s was reflected in international development programming. The problem was that it was all based on unproven theories of change and anecdotal evidence, rather than hard evidence.

Decades later, new research shows that just giving people money can be an effective way to build assets and even incomes. The findings were covered by major players like NPR and the Economist.

While exciting and promising, cash transfers are not a new tool in the development utility belt.

Various forms of transfers have emerged over the past decade. Food vouchers were used by the World Food Programme when responding to the 2011 famine in the Horn of Africa. Like food stamps in the US, the vouchers let people buy food from local markets and get exactly what they needed while supporting the local economy.

The differences among these approaches have sparked a sometimes heated debate within the development community as to what the findings about cash transfers mean going forward. A Technology Salon conversation hosted at ThoughtWorks in New York City last week featured some of the leading researchers and players in the cash transfer sector.

The salon-style conversation featured Columbia University professor and popular aid blogger Chris Blattman, GiveDirectly co-founder and UCSD researcher Paul Niehaus, and Plan USA CEO Tessie San Martin. The ensuing discussion, operating under the Chatham House Rule of no attribution, included representatives from large NGOs, microfinance organizations and UN agencies.

Research from Kenya, Uganda and Liberia shows both the promise and shortcomings of cash transfers. For example, giving out cash in addition to training was successful in generating employment in Northern Uganda. Another program, with the backing of the Ugandan government, saw success with cash alone.

Some have argued that cash transfers should be the new benchmark for development and aid programs. Advocates in the discussion made the case that programs should be evaluated in terms of impact and cost-effectiveness against simply giving people cash.

That idea saw some resistance. The research from Liberia, for example, showed that money given to street youth would not be wasted, but it was not sufficient to generate long-lasting employment or income. There are capacity problems and much larger issues that probably cannot be addressed by cash alone.

An additional concern is the unintended negative consequences caused by cash transfers. One example given was that of refugees in Syria. Money distributed to families was labeled for rent. Despite warnings not to label the transfer, the program went ahead.

As a result, rents increased. The money intended to help families cover rent was rendered largely useless. One participant raised the concern that cash transfers in such a setting could be ‘taxed’ by rebels or government fighters. There is a potential that aid organizations could end up funding the fighting by giving unrestricted cash.

The discussion made it clear that the applications of cash transfers are far more nuanced than they might appear. Kenya saw success in part because of the ease of sending money to people through mobile phones. Newer programs in India, for example, rely on what are essentially ATM cards.

Impacts, admitted practitioners, can go beyond simple incomes. Care has been taken to make sure that cash transfer programs do not dramatically change social structures in ways that cause problems for the community and recipients. In one case, giving women cash allowed them to participate in the local markets, a benefit to everyone except the existing shop oligarchs.

Governments in low- and middle-income countries are under increasing pressure to establish social programs. The success of cash transfer programs in Brazil and Mexico indicates that they can be an effective way to lift people out of poverty. Testing is underway to bring about more efficient and context-appropriate cash transfer schemes.

An important component in the re-emergence of cash transfers is looking back at previous efforts, said one NGO official. The official’s organization is systematically revisiting communities where it used to work in order to see what happened ten years later. The idea is to learn what impact the programs may or may not have had on those communities in order to inform future initiatives.

“Lots of people have concerns about cash, but we should have concerns about all the programs we are doing,” said a participant.

The lessons from the cash transfer research show that there is an increasing need for better evidence across development and aid programs. Researchers in the group argued that the ease of doing evaluations is improving.

Read the “Storified” version of the Technology Salon on Mobiles and Cash Transfers here.

Read Full Post »

This is a guest post from Anna Crowe, Research Officer on the Privacy in the Developing World Project, and Carly Nyst, Head of International Advocacy at Privacy International, a London-based NGO working on issues related to technology and human rights, with a focus on privacy and data protection. Privacy International’s new report, Aiding Surveillance, which covers this topic in greater depth, was released this week.

by Anna Crowe and Carly Nyst


New technologies hold great potential for the developing world, and countless development scholars and practitioners have sung the praises of technology in accelerating development, reducing poverty, spurring innovation and improving accountability and transparency.

Worryingly, however, privacy is often presented as a luxury that creates barriers to development, rather than as a key aspect of sustainable development. This perspective needs to change.

Privacy is not a luxury, but a fundamental human right

New technologies are being incorporated into development initiatives and programmes relating to everything from education to health and elections, and in humanitarian initiatives, including crisis response, food delivery and refugee management. But many of the same technologies being deployed in the developing world with lofty claims and high price tags have been extremely controversial in the developed world. Expansive registration systems, identity schemes and databases that collect biometric information including fingerprints, facial scans, iris information and even DNA, have been proposed, resisted, and sometimes rejected in various countries.

The deployment of surveillance technologies by development actors, foreign aid donors and humanitarian organisations, however, is often conducted in the complete absence of the type of public debate or deliberation that has occurred in developed countries. Development actors rarely consider target populations’ opinions when approving aid programmes. Important strategy documents such as the UN Office for the Coordination of Humanitarian Affairs’ Humanitarianism in a Networked Age and the UN High-Level Panel on the Post-2015 Development Agenda’s A New Global Partnership: Eradicate Poverty and Transform Economies through Sustainable Development give little space to the possible impact adopting new technologies or data analysis techniques could have on individuals’ privacy.

Some of this trend can be attributed to development actors’ systematic failure to recognise the risks to privacy that development initiatives present. However, it also reflects an often unspoken view that the right to privacy must necessarily be sacrificed at the altar of development – that privacy and development are conflicting, mutually exclusive goals.

The assumptions underpinning this view are as follows:

  • that privacy is not important to people in developing countries;
  • that the privacy implications of new technologies are not significant enough to warrant special attention;
  • and that respecting privacy comes at a high cost, endangering the success of development initiatives and creating unnecessary work for development actors.

These assumptions are deeply flawed. While it should go without saying, privacy is a universal right, enshrined in numerous international human rights treaties, and matters to all individuals, including those living in the developing world. The vast majority of developing countries have explicit constitutional requirements to ensure that their policies and practices do not unnecessarily interfere with privacy. The right to privacy guarantees individuals a personal sphere, free from state interference, and the ability to determine who has information about them and how it is used. Privacy is also an “essential requirement for the realization of the right to freedom of expression”. It is not an “optional” right that only those living in the developed world deserve to see protected. To presume otherwise ignores the humanity of individuals living in various parts of the world.

Technologies undoubtedly have the potential to dramatically improve the provision of development and humanitarian aid and to empower populations. However, the privacy implications of many new technologies are significant and are not well understood by many development actors. The expectations that are placed on technologies to solve problems need to be significantly circumscribed, and the potential negative implications of technologies must be assessed before their deployment. Biometric identification systems, for example, may assist in aid disbursement, but if they also wrongly exclude whole categories of people, then the objectives of the original development intervention have not been achieved. Similarly, border surveillance and communications surveillance systems may help a government improve national security, but may also enable the surveillance of human rights defenders, political activists, immigrants and other groups.

Asking humanitarian actors to protect and respect privacy rights must not be distorted into requiring inflexible and impossibly high standards that would derail development initiatives if put into practice. Privacy is not an absolute right and may be limited, but only where limitation is necessary, proportionate and in accordance with law. The crucial step is to actually undertake an analysis of the technology and its privacy implications, and to do so in a thoughtful and considered manner. For example, if an intervention requires collecting personal data from those receiving aid, the first step should be to ask what information is necessary to collect, rather than applying a standard approach to each programme. In some cases, this may mean additional work. But this work should be considered in light of the contribution that upholding human rights and the rule of law makes to development and to producing sustainable outcomes. And in some cases, respecting privacy can also mean saving lives, as information falling into the wrong hands could spell tragedy.

A new framing

While there is an increasing recognition among development actors that more attention needs to be paid to privacy, it is not enough to merely ensure that a programme or initiative does not actively harm the right to privacy; instead, development actors should aim to promote rights, including the right to privacy, as an integral part of achieving sustainable development outcomes. Development is not just, or even mostly, about accelerating economic growth. The core of development is building capacity and infrastructure, advancing equality, and supporting democratic societies that protect, respect and fulfill human rights.

The benefits of development and humanitarian assistance can be delivered without unnecessary and disproportionate limitations on the right to privacy. The challenge is to improve access to and understanding of technologies, ensure that policymakers and the laws they adopt respond to the challenges and possibilities of technology, and generate greater public debate to ensure that rights and freedoms are negotiated at a societal level.

Technologies can be built to satisfy both development and privacy.

Download the Aiding Surveillance report.

Read Full Post »

Last Friday I had the opportunity to take part in a panel discussion on “Designing New Narratives: from poverty porn to agency” with Leah Chung, Maharam Fellow and RISD student, and Victor Dzidzienyo, Associate Dean of the College of Engineering, Architecture and Computer Sciences at Howard University. The panel was part of the “A Better World by Design” conference, planned and run by a committee of students from Brown University and the Rhode Island School of Design, here in Providence.

Photo from http://www.affenstunde.com article on One Laptop Per Child.

I was responsible for setting the stage and moderating, wearing my Regarding Humanity hat. I showed a number of images and narratives that I find questionable – from aid agency fundraising campaigns to children receiving free shoes to famous musicians visiting Ethiopia to models posing in front of poor children to photos of them with imported technological “solutions”. The theme of the conference was “Pause and Effect.” My point was that we should pause and think about the long-term effects of these kinds of images and narratives on people we say we are helping, supporting or partnering with. Beyond fundraising, advocacy and branding for our organizations, what is the impact of these narratives? What long-term effects do they have when there is no strong competing narrative or variety of narratives that enable a more complex, nuanced and varied story?

Some of the ways that we can help change the narrative include:

Leah followed, sharing highlights from research she conducted this summer in Uganda on what Ugandans think about how they and Africans in general are represented in the Western media. With support from Hive Co-Lab, Leah and her research partner, Joseph Wanda, researched how people working in local NGOs and living in rural communities and informal settlement areas view ads like these:

People were asked which of these images they would prefer in a fundraising campaign. (Image courtesy of Leah Chung)

Image courtesy of Leah Chung

Perhaps not surprisingly, a large percentage of the adults interviewed said they preferred the sad photo, because it would be more effective at showing a story of need and raising funds. Interestingly, however, 66% of the children and adolescents interviewed preferred the happy one. Leah said that a good number of people used the opportunity of her presence to include a story of their own needs and a personal appeal for funding or help during the interview process.

Adults selected the sad image more often, whereas children and adolescents selected the happy one. (Image courtesy of Leah Chung)

Which of these images would you prefer for a fund-raising campaign? Image courtesy of Leah Chung

Overall, 76% of people Leah and Joseph interviewed said that they were not happy with the way that Africa is represented in the Western world.

The majority of people who participated in the study were unhappy with how Africa is represented in the "Western" world. (Image courtesy of Leah Chung)

Image courtesy of Leah Chung

Leah noted that through her research, she grew to understand that images are only the symptom of deeper dysfunction within the aid industry and its colonial legacy. She also noted that people all over the world hold stereotypes about others. She was viewed as someone bringing in resources to help, and called “Chinese,” although she is actually Korean.

Victor continued the topic by sharing his own story of living in DC as a child, and being one of the people that others wanted to come in to help. “For you as the outsider who comes in to save me, I have some questions,” he said. “For the folks who want to go on a ‘free trip’ to help, my question is: you are going there for what? What are the skill sets that you have that can make a difference? If you don’t have a skill set to offer, you should just stay home.” He recommended hiring local people for the various jobs needed during reconstruction after a disaster rather than sending over students with limited understanding of the local context and limited skills to work in it.

Victor emphasized the similarities in architectural design and designing programs aimed at helping after a flood or an earthquake. For both, a good understanding of the environment, the cultural context, the complexity of social structures, and the local beliefs and norms is required. He questioned whether academic institutions are doing enough to prepare students for working in these environments.

The ensuing comments and discussions made their way across a variety of related topics, with active participation from the room:

  • What can media and development professionals do to support agency? How can we move beyond satire and critique? The bullet points above are a start but what else can be done? We need to change our language, for one thing, and stop using phrases like “we are empowering people, giving them a voice, giving them agency.” We also need to remember that using poverty porn takes away agency from those who donate. The entire cycle is disempowering.
  • Local people are not passive in this: Communities and individuals can be very adept at manipulating this system. Local NGOs also have their own agendas and the aid industry also ties them in knots and makes it difficult for them to function, to be effective and to have a real impact.
  • How issues are framed and by whom matters. There is a great deal of exposure to the Western world and its viewpoints, and often the issues and narrative are framed by outsiders. Local work does not get the spotlight and credit; that normally goes to slick graphic design and social media campaigns like Kony 2012.
  • Should we help locally or internationally? The issues and problems in the world are global problems and they are interlinked at the global level, so where a person helps is not the issue. Location matters less than the underlying motives and levels of respect for people’s own agency, and level of ownership that local people have in the process. Going in to help a community in your own neighborhood or country that you do not understand or that you view as ‘lesser’ is not much different than doing that in a community abroad.
  • Poverty porn is a symptom of much larger issues in the international aid and development industry. The causes go much deeper and require a major shift in a number of areas.
  • Is it possible to change things or do we need to start over? Do we need a new model? It’s likely that international aid and development organizations will be disrupted and disintermediated by a number of forces and changes happening right now, from social entrepreneurs to global economic and power changes to technology to changes in “developing” country economies and attitudes. The problems are not going to go away and the market is not working for everyone, but the nature of how we address these issues will most likely change.
  • Poverty porn is profitable, how can we change this? How can we make the idea of agency and elevating other voices as profitable as poverty porn? How can we take a more comprehensive look at the system and where it’s not working? How can we change what the general public responds to and switch the general consciousness of people who care to a new way of looking at things? How can we re:see, re:listen and re:frame the narrative and get people excited about stories from people who know and live these issues? As intermediaries, our job is to provide platforms and to work to make these voices visible, not to tell other people’s stories. Can we engage people better by showing impact and change rather than miserable situations that victimize and provoke feelings of guilt?
  • Sharing and dialogue is one way that people can learn from each other and build strength in numbers to change things. Supporting “south-south” discussion and learning is key, as is discussion and dialogue between policy makers and practitioners.
  • People (we) need to be aware of their (our) own privilege. People give out of guilt. Until they (we) understand their (our) own power and privilege and step out of it, we will never move forward. Educational institutions confirm and allow people to benefit from their privilege. Going on a semester abroad to “help” people ends up looking good on a student’s resume, ultimately helping the student get a job rather than really helping those they went to “help.” So the volunteer ends up being the one who profits from these situations, in a way.
  • Empathy matters, but how do we take it a step further than sleeping outside for a night to understand homelessness? Do these small efforts towards empathy add up to a larger awareness and behavior change, or are they meager attempts to experience life as “the other” without a real examination of power and privilege? How do we take this conversation a step wider also and look at how the West perpetrates and causes poverty by our own policies and consumption patterns?

Read Full Post »

policy forum

This past Monday I had the opportunity to join Engineers without Borders (EWB) in Calgary, Canada, at their Annual Policy Forum on Global Development to discuss “How can open government contribute to community and economic development?”

Morning panels covered some examples of open government initiatives from Finland, Ghana and Canada. In the afternoon we heard about some of the challenges with open data, open government and the International Aid Transparency Initiative. Table discussions followed both of the panels. The group was a mix of Canadian and African government representatives, people from organizations and groups working in different countries on open government and open data initiatives, and young people who are connected with EWB. The session was under Chatham House Rule in order to encourage frank conversation.

Drawing from such documents as the Open Government Partnership’s Open Government Declaration, Harlan Yu and David G. Robinson’s “The New Ambiguity of ‘Open Government,’” Beth Noveck’s “What’s in a Name? Open Gov and Good Gov,” and Nathaniel Heller’s “A Working Definition of ‘Open Government,’” the following definition of open government was used to frame the discussions.

EWB Definition of Open Government

Below (in a very-much-longer-than-you-are-supposed-to-write-in-a-blogpost summary) are the highlights and points I found interesting and useful related to Open Development, Open Data, Open Government and the International Aid Transparency Initiative (IATI).

1.  Participation thresholds need to be as low as possible for people to engage in open government or open data initiatives. You need to understand well which engagement tools are most useful or comfortable for different groups. In some places, you can engage the public with tools such as Etherpad, wiki platforms, Google Docs, and other open, online collaboration spaces. In other places and with other populations, regardless of country, you may be more successful with face-to-face methods or with traditional media like television and radio, but these need to be enhanced with different types of feedback channels, like phone calls, surveys, or going house to house, so that your information is not traveling only one way. Community organizing skills are key to this work, regardless of whether the tools are digital or not.

2.  Literacy remains a huge challenge in many countries, hindering access to information and citizens’ ability to hold government accountable. This is why face-to-face engagement is important, as well as radio and other popular or broad-based communication channels. One participant asked, “How can you make open government a rural, rather than an urban-only, phenomenon?” This question resonated for participants from all countries.

3.  Language is still a critical issue. Language poses a big challenge for these kinds of initiatives, from the grassroots level to the global level, within and among countries, for citizens, governments, and anyone trying to share or collect data or information. It was noted that all the countries who have published data to IATI are publishing in English. All the IATI standards are in English, as is the entire support system for IATI. As one participant noted, this raises the question of who the information in IATI is actually designed for and serving, and who its expected users are. Open data initiatives should consider the political and practical implications of the language they publish in.

4.  Open data can serve to empower the already empowered. As one speaker noted, "the idea that everyone has the potential to make use of open data is simply not true." Access to digital infrastructure and educational resources may be missing, meaning that many do not have the ability to access, interpret or use data for their own purposes. Governments can also manipulate data and selectively release data that serves their own interests. Some questioned government motives, citing the example of a government that released "data" saying its unemployment rate was 10% when, as one participant put it, "everyone knew this to be false, and people grumbled but we did not feel empowered to challenge that statement." Concern was expressed over the lack of an independent body or commission in some countries to oversee open data and open government processes. Some did not trust the government bodies currently in charge of collecting and opening information, saying that, due to politics, these bodies would never release information that made their party or their government look bad.

5.  Privacy rights can be violated if data is opened without data protection laws and without efforts to build capacity in anonymizing data. Citizens may not even be aware when their rights are being violated, so raising awareness of data rights should also be addressed.

6.  Too much open data discussion takes place without a power analysis, as one participant commented, making some of the ideas around open data and open government somewhat naïve. “Those who have the greatest stake will be the most determined to push their point of view and to make sure it prevails.”

7.  Open data needs to become open data 2.0. According to one participant, open data is still mostly one-way information delivery. In some cases there isn't even any delivery – information is opened on a portal but no one knows it's there, what it refers to, or why it would be useful. When will open data, open government and open aid become more of a dialogue? When will data be released that answers questions citizens have, rather than the government deciding what it will release? The importance of working with community groups to strengthen their capacity to ask questions and build critical consciousness to question the data was emphasized. A counterpoint was that government is not necessarily there to start collecting information or creating data sets according to public demand. Governments collect certain data to help them function.

8.  Intermediaries working on open government should be careful of real or perceived bias. Non-profits have their own agendas, and 'open data' and 'open information' are not immune to being interpreted in non-objective ways. Those working on civic engagement initiatives need to be careful that they are not biased in their support for citizen initiatives. One presenter who works on a platform that encourages citizens to petition for new laws for consideration in Parliament said, "Our software is open source so that anyone can set up a similar process to compete with us if they feel we are biased towards one or another type of agenda."

9.  Technology-based engagement tools change who is participating. Whether in Finland, Canada, Ghana or Malawi, it’s critical to think about reaching those who are not active already online, those who are not the typical early adopters. To reach a broader public, one speaker noted “We are going to remote places, doing events in smaller towns and cities to see how people want to influence and take part in this. Making sure the website is accessible and understandable.”

10. Technological platforms are modifying how political parties and democratic processes operate. This may or may not be a good thing. Normally, priorities arise and are discussed within political parties. Will people now bypass the party process and use 'direct democracy' channels if they are passionate about an issue but do not want to enter into negotiation around it? Will this weaken political parties or longer-standing democratic processes? One speaker considered this change positive: people are not happy with voting only every four years, and they want opportunities to participate between election cycles and a more direct voice in how priorities are decided. Others questioned whether bypassing official processes could lead to less participation and more apathy overall on national issues. Some questioned whether, within fairly long-standing democracies, open data will have any real impact, considering existing levels of apathy and the lack of political participation.

11. Strong information, statistical, monitoring and evaluation systems are critical for open data and open government processes, and for ensuring more effective management of development results. This is still a challenge for some countries, which need to review their mechanisms and improve their tools and processes for data collection and dissemination. If there is no data, or no current data, there is not much point in opening it. In addition, there are capacity and technical competency challenges within institutions in some countries. One participant mentioned a lack of current government geological information about gold and oil deposits, which weakens government capacity to negotiate with the private-sector extraction industry and to ensure partnerships and earnings contribute to national development. More evidence is also needed on the impact, use, and outcomes of open data; at the moment it's quite difficult to say with any real authority what the outcomes and impact of open data and open government have been.

12. IATI (International Aid Transparency Initiative) needs more partners. Government representatives noted that they are opening their data, but they can only open the data they possess. In order for data on aid to be useful, more data is needed, especially that of NGOs who are implementing programs. Not many NGOs have published their information to the IATI standard at this point. “The really interesting thing will be when we can start mashing up and mapping out the different kinds of information,” as one speaker noted, “for example, this is the goal of the Open Aid Partnership. It will involve combining information from the donor, development indicators from the World Bank, and country information, and this will open up amazing possibilities once this is all geo-coded.” There are reporting challenges related to IATI and open government data, however, because at times countries and NGOs do not see the benefits of reporting – it feels like just one more top-down administrative burden. There are also issues with donor governments reporting their committed intentions and amounts, recipient governments reporting back, and communications with citizens on both sides (donor and recipient countries). One example that was reported to be enjoying some success was the multi-donor budget support initiative in Ghana, where development partners and government work together to establish development indicators and commitments. If the government delivers on the indicators, the development partners will then provide them with the funding. Development partners can also earmark funding to particular areas if there is government agreement.

13. We need more accountability towards 'beneficiaries.' Currently many of these initiatives are perceived as being focused on donors and donor publics. As one participant noted, "the interesting thing is less about government and more about getting regular people involved in these processes. When you engage the public, you'll get government leaders thinking they will need to change to respond to what citizens are asking for." Another noted that the essential issue is the link between transparency/accountability and citizens and their own governments. In addition, as one participant asked, "How can you strengthen capacity among citizens to ask the right questions about the data that's being opened?" For example, citizens may ask about the number of schools being built, but not about the quality of education being provided. Public education was a strong focus of the discussions around citizen engagement during the policy forum.

14. Should citizens be consulted on everything? This was one big question. The public at large may not understand the ramifications of its own deep misunderstandings of particular issues, and may be weighing in from a viewpoint that lacks scientific evidence or fact. "It's one thing to have an opinion about whether your child should be able to drink energy drinks before age 16; it's another to weigh in on technical questions like the best policy for green energy," commented one group.

15. Can citizens really have greater participation if government is still in control of data? was another big question. An example was given of an open consultative process that became unwieldy for a local government, which then shut down the consultation process and changed the nature of the documents to ‘administrative’ and therefore no longer open. Others asked why governments pat themselves on the back over being part of the Open Government Partnership yet they do not have Freedom of Information Acts (FOIA) or they prosecute those who open data in alternative ways, such as Bradley Manning and Aaron Swartz.

16. If citizens don’t get a response from government (or if they don’t like the response, or feel it’s biased or manipulated), apathy and cynicism will increase. It’s important to make sure that ‘open government’ is not just a box that gets ticked off, but rather a long-term change in mentality of those in power and deeper expectations and efforts by citizens for openness and participation in conversations of national importance.

The conclusion was that Open Government is somewhat of a paradox, rooted in aims that are not necessarily new. Open Government strives to enable leaders to create change and transform their own lives and those of the people in their communities. It is a complex process that involves many actors with multiple conflicting goals and interests. It's also something new that we are all learning about and experimenting with, yet we are very impatient to know what works and what the impact is. In the room, the feeling was one of 'radical pragmatism,' as one participant put it. Open Government is a big idea that represents a big change. It's something that can transform communities at the global level, and there is a great deal of hope and excitement around it. At the same time, we need to acknowledge the challenges associated with it in order to address them and move things forward.

I'll do a follow-up post with the points I made during the panel, as this post is clearly way too long already. Kudos if you are still reading, and a huge thanks to the organizers and participants in the EWB policy forum.

Read Full Post »

The Technology Salon* hosted at IREX on Thursday, June 6, focused on what the International Aid Transparency Initiative (IATI) would mean for international development, especially for US-based NGOs and government contractors.

Tony Pipa, Deputy Assistant Administrator for Policy, Planning and Learning at USAID, started the Salon off by noting that IATI is an inter-agency US government commitment, not only a USAID commitment. USAID is the lead agency for developing the IATI implementation plan, building on existing agreements on transparency and enhancing the US Government's commitments to transparency, openness and accountability. A key element of these efforts is the Foreign Assistance Dashboard, which places the data into the public realm in a user-friendly way, making it easier to understand visually and also more accessible and easier to find. The goal is not only transparency, but greater accountability. The US Government hopes to streamline reporting so that a single effort meets the requirements of a range of international and national reporting standards. For USAID, the goal is making aid more useful for development.

Steve Davenport from AidData followed, giving some background on IATI itself. IATI was initially sponsored largely by DFID, but has since grown as a partnership. Over 75% of development assistance is now represented by signatories to IATI. Eight donors are now publishing, and twenty-three developing countries have signed on (involving partner countries at the local level as well). Different groups are conducting pilots to see how to implement the standard as IATI gains more traction. For this reason, it would be a good move for US INGOs and contractors to get in front of the transparency and accountability curve rather than get hit by the wave. Better transparency allows organizations to better show their results. The IATI standard can lead to better coordination among the different actors, making it easier to broaden our collective impact. This is especially important now, given that aid budgets are being reduced. IATI can be thought of as a group of people, a set of commitments, and an XML standard for moving data from point A to point B. Application developers are beginning to pick this up and develop tools that allow for new ways of visualizing the data, making it actionable and improving accessibility, which can lead to better accountability.
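Since IATI is, at its core, an XML standard, it may help to see what "moving data from point A to point B" looks like in practice. Below is a minimal, hypothetical sketch: the element names follow the general shape of the IATI activity standard, but the identifier, project title and amounts are invented for illustration, and a real IATI file carries many more fields.

```python
import xml.etree.ElementTree as ET

# A tiny, made-up IATI-style activity record. Real published files are far
# richer (sectors, participating organisations, dates, documents, etc.).
SAMPLE = """
<iati-activities>
  <iati-activity>
    <iati-identifier>XX-EXAMPLE-12345</iati-identifier>
    <title>Rural water and sanitation programme</title>
    <recipient-country code="GH">Ghana</recipient-country>
    <transaction>
      <transaction-type code="D">Disbursement</transaction-type>
      <value currency="USD" value-date="2012-03-01">250000</value>
    </transaction>
  </iati-activity>
</iati-activities>
"""

def summarize_activities(xml_text):
    """Return (title, country, total transaction value) for each activity."""
    root = ET.fromstring(xml_text)
    rows = []
    for activity in root.findall("iati-activity"):
        title = activity.findtext("title", default="")
        country = activity.findtext("recipient-country", default="")
        # Sum all transaction values for this activity.
        total = sum(
            float(v.text)
            for v in activity.findall("transaction/value")
            if v.text
        )
        rows.append((title, country, total))
    return rows

print(summarize_activities(SAMPLE))
# → [('Rural water and sanitation programme', 'Ghana', 250000.0)]
```

This is the kind of machine-readable structure that lets the application developers mentioned above aggregate, map and visualize aid flows across many publishers without hand-processing each donor's reports.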

Larry Nowels (consultant to Hewlett and the ONE Campaign) spoke about Hewlett's experience with IATI. Hewlett has made a large investment in transparency and accountability, supporting US and European organizations as well as startups in Africa and Asia over the past 10 years. Transparency is a key building block: it lets governments and their citizens know what is being spent, where and on what, and make better decisions about resources and reducing waste. It also allows citizens to hold their governments accountable. Hewlett was one of the original signatories and the second publisher to the IATI standard. A key question remains: what's in it for an organization that publishes according to the standard? For some teams, IATI makes all the sense in the world; for others it seems a waste of resources. The Obama Administration's initiatives (the Open Government Directive, the Open Government Partnership, the Foreign Assistance Dashboard) all show a strong commitment to transparency. The tough part is implementation of the IATI standards, and the details of the ideal approach are still being worked out.

Larry considers a central repository ideal, but there are issues with quality control and the Foreign Assistance Dashboard does not add data that was not already publicly available. In addition, many US Government agencies have not been added to the Dashboard yet, and getting them on board will be difficult if they are less dedicated than USAID or State. It’s critical to institutionalize IATI and related initiatives and internalize them, given that we cannot assume Obama’s will be a multi-term presidency. In the past 3 years, a number of bills around the theme of accountability and transparency have been introduced by both parties. The Poe-Berman Bill (HR 3159) provides a law to entrench the use of tools like the Dashboard. The Administration, especially the State Department, however, has not engaged Congress enough on these issues, and this has led to some roadblocks. White House pressure could help strengthen support for this initiative; however, there may be pushback by Republicans who generally oppose the US subscribing to international standards.

Discussion**

What is the overlap between the Open Government Partnership (OGP) and IATI?

What is the practical, on-the-ground use or application of IATI data? What does it look like when it is working how it should? What would it ideally look like 5 years from now?

  • There is enormous need for data sharing in a crisis – it is essential for coordinating and understanding the unfolding situations in real-time in order to save lives. There is much more scrutiny as well as a need for rapid coordination and response during a humanitarian crisis, so it requires a higher level of transparency than development work. One way that has been suggested for getting more organizations on board is to start sharing more information during crises and draw the lessons over to development.
  • A project in Mexico City has run investigative campaigns on spending. This has led to the prosecution and resignations of political figures and even some threats against staff, which demonstrates how unsettling this open information can be to the powers that be. It is not about transparency for transparency’s sake. It’s about having a tool that can be used to inform, interpret situations and hold governments and donors accountable. It opens the system up for sharing information.
  • Currently this type of information isn’t available to Country Governments for coordination. Countries need to plan their fiscal year budgets, but rely heavily on donors, and both run on a different fiscal calendar. If donor information were more readily available, countries could plan better.
  • On a 5-year horizon, we would ideally see aid tracking down to the beneficiary level. Tools like IATI can help collect data in more automated ways. Open data can help us track both where funding is allocated and also what is actually being implemented. Additional work is needed on this side; for example, training journalists to understand how to use this data, how to access it – handing them a data file isn’t a very useful thing in and of itself.

That's great, but great for whom? What does it mean? Does this lead to better aid? Better spending? And what if it creates unrealistic timelines, where development becomes more like a for-profit company that must demonstrate impact within a fiscal quarter? We all know that development initiatives and impact take much longer than three months. Will IATI mean that we stop doing things that take longer? Things that cannot simply be ticked off a checklist? Will we actually lower the quality of our programs by doing this?

  • IATI, like any form of transparency, is only one element of a whole stream of things. The new USAID monitoring and evaluation system is a breakthrough for actually learning from evaluations and data. It's a longer-term investment than Congress is used to, so it's a matter of convincing Congress that it is worth the investment. There is a better chance of USAID admitting failure in the future if the systems are in place to demonstrate these failures in hard data and learn from them. It's about discovering why we failed – if we spend money and it doesn't work, we can at least identify weaknesses and address them. Showing failures also demonstrates credibility and a willingness to move forward positively.
  • We can err on the side of openness and transparency and engage Congress and the public, making a distinction between performance management and the long-term impact of development projects. There is no way of holding back on publishing information until it is in a format that will be readily understandable to Congress and the public. This is a reality that we are going to have to live with; we have to put the data out and build on it. This can help to start important conversations. IATI is important for closing the loop, not just on public resources but also private resources (which is why Hewlett's commitment is important). As private development resources increase, USAID becomes less dominant in the development landscape. Making sure data from many sources comes in a common format will make it easy to compare, and bringing this data together will help us understand what is going on. The way to visualize and think about it now is different because we are still in the early days. IATI will begin to change the approach for how you evaluate impact.
  • IATI data itself does not tell the whole story, so it’s important to look at additional sources of information beyond it. IATI is only one part of the monitoring and evaluation effort, only one part of the transparency and accountability effort.

How do you overcome conflicts of interest? If development outcomes or data that is opened are not in the interest of the country government, how do we know the data can be trusted, or how does it feed back to the public in each country?

  • China's investment in Africa, for example, may make it more difficult to understand aid flows in some ways. Extending and enforcing the standards will take a while and cannot be rushed, but we can draw the BRICs into the conversation, and we are working with them on these topics.

The hard part is the implementation. So what are the time lines? How soon do we think we will see the US publish data to IATI?

  • At this time, the US Government hasn't created an implementation timeline, so the first order of business is to get IATI institutionalized, and not to rush this. It's a larger issue than just USAID, so it must be done carefully and tactfully so that it stays in place over the long term. USAID is working on getting obligation and spending data, as well as project-level data, onto the Dashboard. USAID is trying to balance this with consistency and quality control. How do you produce quality data when you are publishing regularly? These issues must be addressed while the systems are being developed. Once USAID puts data on the Dashboard, it will begin to be converted to IATI data.

IATI is still a donor-led initiative. NGO involvement opens this data up to use by communities. Training individuals to use this information is not necessarily sufficient. Are there plans to build institutions or civil society organizations that support making the data useful for communities and the general public?

  • The data can assist with the development of watchdog organizations who provide a platform for citizens to act together for accountability. Examples of organizations that are currently receiving funding to do this are Sodnet and Twaweza. There has also been support to think tanks throughout Africa to build the capacity of objective, independent policy analysts who write critiques of government initiatives.
  • There is a definite need to mainstream IATI and bring everyone together into one single conversation instead of setting up parallel structures.

So how do you build these institutions, watchdogs, etc? Will USAID really put out RFPs that offer funding to train people to criticize them?

  • This is where Hewlett and other organizations come in. They can run these trainings and build capacities. The Knight News Challenge is doing a lot of work around data-driven journalism, for example.

This is going to put a lot of pressure on people to be more efficient and might drive down resources in these spheres. There is a limited amount of incentive for organizations to involve themselves. Is there a way to incentivize it?

  • It will also drive some internal efficiencies, creating greater internal coherence within development organizations. It’s very hard to pinpoint impact within organizations because there isn’t an easy way to draw comparisons between projects, implementation strategies, etc. People always worry: What if we find something that makes us look bad? So IATI is just one part of a bigger effort to push for commitment to transparency across the board. Committing to IATI can lead to a mindset which focuses organizations on efficiency, transparency and accountability.
  • Filling out the Dashboard will be helpful in many respects, and it will make information more accessible to the general public, as well as congressional staffers, etc. It can serve multiple constituencies while making data more usable and transparent. USAID is going to be as aggressive as possible to get information on the dashboard into IATI format. There has not been a conversation about requiring implementing partners to meet IATI standards, but USAID itself is committed.

***

Thanks to IREX for hosting the Salon, our fantastic lead discussants and participants for stimulating discussion, Wayan Vota for inviting me to coordinate the Salon and Anna Shaw for sharing her Salon notes which were the basis for this blog post.

Sign up here if you’d like to be on the invitation list for future Salons.

*The Technology Salon is sponsored by Inveneo in collaboration with IREX, Plan International USA, Jhpiego and ARM.

**The Salon runs by Chatham House Rule, so no attribution has been made for the discussion portion of the Salon.

Read Full Post »

Older Posts »