
At our April Technology Salon we discussed the evidence and good practice base for blockchain and Distributed Ledger Technologies (DLTs) in the humanitarian sector. Our discussants were Larissa Fast (co-author, with Giulio Coppi, of the Global Alliance for Humanitarian Innovation/GAHI report on Humanitarian Blockchain; Senior Lecturer at HCRI, University of Manchester; and Research Associate at the Humanitarian Policy Group) and Ariana Fowler (UNICEF Blockchain Strategist).

Though blockchain fans suggest DLTs can address common problems of humanitarian organizations, the extreme hype cycle has many skeptics who believe that blockchain and DLTs are simply overblown and for the most part useless for the sector. Until recently, evidence on the utility of blockchain/DLTs in the humanitarian sector has been slim to none, with some calling for the sector to step back and establish a measured approach and a learning agenda in order to determine if blockchain is worth spending time on. Others argue that evaluators misunderstand what to evaluate and how.

The GAHI report provides an excellent overview of blockchain and DLTs in the sector along with recommendations at the project, policy and system levels to address the challenges that would need to be overcome before DLTs can be ethically, safely, appropriately and effectively scaled in humanitarian contexts.

What’s blockchain? What’s a DLT?

We started with a basic explanation of DLTs and Blockchain and how they work. (See page 5 of the GAHI report for more detail).

The GAHI report aimed to get beyond the potential of Blockchain and DLTs to actual use cases — however, in the humanitarian sector there is still more potential than evidence. Although there were multiple use cases to choose from, the report authors chose to go in-depth on five, selected to provide a sense of the different ways that blockchain is specifically being used in the sector.

These use cases all currently have limited “nodes” (i.e., places where the data is stored) and only a few “controlling entities” (that determine what information is stored or put on the chain). They are all “private” (as opposed to public) blockchains, meaning they are not taking advantage of DLT potential for dispersed information, and they end up being more like “a very expensive database.”

What’s the deal with private vs public blockchains?

Private versus public blockchains are an ideological sticking point in “deep blockchain culture,” noted one Salon participant. “‘Cryptobros’ and blockchain fundamentalists think private blockchains are the Antichrist.” Private blockchains are considered an oxymoron and completely antithetical to the idea of blockchain.

So why are humanitarian organizations creating private blockchains? “They are being cautious about protecting data as they test out blockchain and DLTs. It’s a conscious choice to proceed in a controlled way, because once information is on the blockchain, it’s immutable — it cannot be removed.” When first trying out a DLT or blockchain, “Humanitarians tend to be cautious. They don’t want to play with the permanency of a public blockchain since they are working with vulnerable populations.”
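For readers newer to the technology, the mechanics behind that immutability claim are simple enough to sketch in a few lines of code. Below is a toy, illustrative hash-linked ledger (an assumption-laden sketch with made-up record fields, not any agency's actual system): each block commits to the hash of the block before it, so a retroactive edit can always be detected, even though nothing physically prevents someone from attempting one.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps({"record": record, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    """Append a block that commits to the hash of the block before it."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev,
                  "hash": block_hash(record, prev)})

def verify(chain: list) -> bool:
    """Recompute every link; any edited record breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev or \
           block["hash"] != block_hash(block["record"], block["prev_hash"]):
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, {"voucher": "A-001", "amount": 50})  # hypothetical records
append_block(chain, {"voucher": "A-002", "amount": 75})
print(verify(chain))                 # True
chain[0]["record"]["amount"] = 5000  # attempt a retroactive edit
print(verify(chain))                 # False: tampering is detectable
```

Note that when a single organization controls the only node, this tamper-evidence is about all that distinguishes the chain from an ordinary append-only database, which is exactly the “very expensive database” critique above.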

Because of the blockchain hype cycle, however, there is some skepticism about organizations using private blockchains. “Are they setting up a private blockchain with one node so that they can say that they’re using blockchain just to get funding?”

An issue with private blockchains is that they are not open and transparent. The code is developed behind closed doors, meaning that it’s difficult to make it interoperable, whereas “with a public chain, you can check the code and interact with it.”

Does the humanitarian sector have the capacity to use blockchain?

As one person pointed out, knowledge and capacity around blockchain in the humanitarian sector is very low. There are currently very few people who understand both humanitarian work and the private sector/technology side of blockchain. “We desperately need intermediaries because people in the two sectors talk past each other. They use the same words to mean very different things, and this leads to misunderstandings.” This is a perpetual issue in the “humanitarian tech” space, and it often leads to applications that are not in the best interest of those on the receiving end of humanitarian work.

Capacity challenges also come up with regard to managing partnerships that involve intellectual property. When cooperating with the private sector, organizations are normally required to sign an MOU that gives rights to the company. Often humanitarian agencies do not fully understand what they are signing up for. This can mean that the company uses the humanitarian collaboration to develop technologies that are later used in ways that the humanitarian agency considers unethical or disturbing. Having technology or blockchain expertise within an organization makes it possible to better negotiate those types of situations, but often only the larger INGOs can afford that type of expertise. Similarly, organizations lack expertise in the legal and regulatory space with regard to blockchain.

How will blockchain become locally owned? Should we wait for a user-friendly version?

Technology moves extremely fast, and organizations need a certain level of capacity to create it and maintain it. “I’m an engineer working in the humanitarian space,” said one Salon participant. “Blockchain is such a complex software solution that I’m very skeptical it will ever be at a stage where it could be locally owned and managed. Even with super basic SMS-based services we have maintenance issues and challenges handing off the tech. If in this room we are struggling to understand blockchain, how will this ever work in lower tech and lower resource areas?” Another participant asked a similar question with regard to handing off a blockchain solution to a local government.

Does the sector need to wait for a simplified and “user-friendly” version of blockchain before humanitarians get into the space? Some said yes, but other participants said that the technology is moving quickly, and that it is critical for humanitarians to “get in there” to try to slow it down. “Sometimes blockchain is not the solution. Sometimes a database is just fine. We need people to pump the brakes before things get out of control.”

“How can people learn about blockchain? How could a grassroots organization begin to set one up?” asked one person. There is currently no “Square Space for Blockchain,” and the technology remains complicated, but those with a strong drive could learn, according to one person. But although “coders might be able to teach themselves ‘light blockchain,’ there is definitely a barrier to entry.” This is a challenge with the whole area of blockchain. “It skipped the education step. We need a ‘learning revolution’ if we want people to actually use it.”

Enabling environments for learning to use blockchain don’t exist in conflict zones. The knowledge is held by a few individuals, and this makes long-term support and maintenance of DLT and blockchain systems very difficult. How to localize and own the knowledge? How to ensure sustainability? The sector needs to think about what the “Blockchain 101” is. There needs to be more accompaniment, investment and support for the enabling environment if blockchain is to be useful and sustainable in the sector.

Are there any examples of humanitarian blockchain that are working?

The GAHI report talks about five cases in particular. Disberse was highlighted by one Salon participant as an example that seems to be working. Disberse is a private fin-tech company that uses blockchain, but it was started by former humanitarians. “This example works in part because there is a sense of commitment to the humanitarian sector alongside the technical expertise.”

In general, in the humanitarian space, the place where blockchain/DLTs appear to be the most effective is in back-end use cases. In other words, blockchain is helpful for making behind-the-scenes transactions in humanitarian assistance more efficient. It can eliminate bank transaction fees, and this leads to savings. Agencies can also use blockchain to create efficiencies and benefits for record keeping and auditability. This situation is not unique to blockchain. A recent DIAL baseline study of the global ICT4D ecosystem similarly found that in the social sector, the main benefits of ICTs were going to organizations, not to vulnerable populations.

“This is all fine,” according to one Salon participant, “but one must be clear that the benefits accrue to the agencies, not the ‘beneficiaries,’ who may not even know that DLTs are being used.” On the one hand, having a seamless backend built on blockchain where users don’t even know that blockchain is involved sounds ideal. However, this can be somewhat problematic. “Are agencies getting meaningful and responsible consent for using blockchain? If executives don’t even understand what the blockchain is, how do you explain that to people more generally?”

Because there is not a simple, accessible way of developing blockchain solutions and there are not a lot of user-friendly interfaces for the general population, for at least the next few years, humanitarian applications of blockchain will likely only be useful for back-office operations. This means that it is up to humanitarian organizations to re-invest any money saved by blockchain into program funding, so that “beneficiaries” are accruing the benefits.

What other “social” use cases are there for blockchain?

In the wider social sector and development sector, there are plenty of potential use cases, but again, very little documented evidence of their short- and long-term impacts. (Author’s note: I am not talking about financial and private sector use cases, I’m referring very specifically to social sectors and the international development and humanitarian sector). For example, Oxfam is tracing supply chains of rice; however, this is a one-off pilot and it’s unclear whether it can scale. IBM has a variety of supply chain examples. Land registries and sustainable fishing are also being explored, as are digital ID, birth registration and civil registries.

According to one Salon participant, “supply chain is the low-hanging fruit of blockchain – just recording something, tracking it, and referencing it. It’s all basically a ledger, a spreadsheet. Even digital ID – it’s a supply chain of movement. Provenance is a good way to use a blockchain solution.” Other areas where blockchain is said to have potential are election transparency and “smart contracts,” where complex contracts are needed and there is a lack of trust amongst the parties. In general, where there is a recurring need for anonymized, disaggregated data, blockchain could be a solution.

The important thing, however, is having a very clear definition of the problem before deciding that blockchain is the solution. “A lot of times people don’t know what their problem is, and the problem is not one that can be fixed with blockchain.” Additionally, accuracy (“garbage in, garbage out”) remains a problem that blockchain on its own cannot solve. “If the off-chain process isn’t accurate, you have a separate problem to figure out before thinking about blockchain. Maybe you’re looking at human rights abuses of migrant workers, but everything is being fudged. Maybe your supply chain is blurry, or the information being put on the blockchain is not verified.”

What about ethics and consent and the Digital Principles?

One Salon participant asked whether the Digital Principles are being used to guide ethical, responsible and sustainable blockchain use in the humanitarian space. The general impression in the room was that they are not. “Deep crypto in the private sector is a black hole in the blockchain space,” according to one person, and the gap between the world of blockchain in the private sector and the world of blockchain in the humanitarian sector is huge. (See this write-up for a taste of one segment of the crypto-world.) “The majority of private sector blockchain enthusiasts who are working on humanitarian issues have not heard of any principles. They are operating with no principles, and sometimes it’s largely for PR because the blockchain hype cycle means they will get a lot of good press from it. You get someone who read an article in Vice about a problem in a place they’ve never heard of, and they decide that blockchain is the solution…. They are often re-inventing the wheel, and fire, and also electricity — they think that no one has ever thought about this problem before.”

Most in the room considered that this type of uninformed application of blockchain is irresponsible, and that these parallel worlds and conversations need to come together. “The humanitarian space has decades of experience with things that have been tried and haven’t worked – but people on the tech side think no one has ever tried solving these problems. We need to improve the dialogue and communication. There is a wealth of knowledge to share, and a huge learning curve on both sides.”

Additionally, one Salon participant pointed out the importance of bringing ethics into the discussion. “It’s not about just using a blockchain. It’s about what the problem is that you’re trying to solve, and does blockchain help address that problem? There are a lot of problems that blockchain is not appropriate for. Do you have the technical capacity or an accessible online environment? That’s important.”

On top of that, “it’s important for people to know that their information is being used in a particular way by a particular technology. We need to grapple with that, or we end up experimenting on people who are already marginalized or vulnerable to begin with. How do we do that? It’s like the Facebook moment. That same thing for blockchain – if you don’t know what’s going on and how your information is being used, it’s problematic.”

A third point is the massive environmental footprint of public blockchains. Currently, the computing power used to verify and validate transactions that happen on public chains is immense. That is part of the ethical challenge related to blockchain. “You can’t get around the massive environmental aspect. And that makes it ironic for blockchain to be used to track carbon offsets.” (Note: there are blockchain companies who say they are working on reducing the environmental impact of blockchain with “pilots coming very soon” but it remains to be seen whether this is true or whether it’s another part of the hype cycle.)

What should donors be doing?

In addition to taking into consideration the ethical, intellectual property, environmental, sustainability, ownership, and consent aspects mentioned above and being guided by the Digital Principles, it was suggested that donors make sure they do their homework and conduct thorough due diligence on potential partners and grantees. “The vetting process needs to be heightened with blockchain because of all the hype around it. Companies come and go. They are here one day and disappear the next.” There was deep suspicion in the room of the many blockchain outfits that are hyped up, do not actually have the staff to truly do blockchain for humanitarian purposes, and use this angle just to attract investment.

“Before investing, it would be important to talk with someone like Larissa [our lead discussant] who has done vetting,” said one Salon participant. “Don’t fall for the marketing. Do a lot of due diligence and demand evidence. Show us the evidence or we’re not funding you. If you’re saying you want to work with a vulnerable or marginalized population, do you have contact with them right now? Do you know them right now? Or did you just read about them in Vice?”

Recommendations outlined in the GAHI report include providing multi-year financing to humanitarian organizations to allow for the possibility of scaling, and asking for interoperability requirements and guidelines around transparency to be met so that there are not multiple silos governing the sector.

So, are we there yet?

Nope. But at least we’re starting to talk about evidence and learning!

Resources

In addition to the GAHI report, the following resources may be useful:

Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

 

 

 


In the search for evidence of impact, donors and investors are asking that more and more data be generated by grantees and those they serve. Some of those driving this conversation talk about the “opportunity cost” of not collecting, opening and sharing as much data as possible. Yet we also need to talk about the real and tangible risks of collecting and sharing data and the long-term impacts of reduced data privacy and security rights, especially for the vulnerable individuals and groups with whom we work.

This week I’m at the Global Philanthropy Forum Conference in the heart of Silicon Valley speaking on a panel titled “Civil Liberties and Data Philanthropy: When NOT to Ask for More.” It’s often donor requests for innovation or for proof of impact that push implementors to collect more and more data. So donors and investors have a critical role to play in encouraging greater respect and protection of the data of vulnerable individuals and groups. Philanthropists, grantees, and investees can all help to reduce these risks by bringing a values-based responsible data approach to their work.

Here are three suggestions for philanthropists on how to contribute to more responsible data management:

1) Enhance your own awareness and expertise on the potential benefits and harms associated with data. 

  • Adopt processes that take a closer look at the possible risks and harms of collecting and holding data and how to mitigate them. Ensure those aspects are reviewed and considered during investments and grant making.
  • Conduct risk-benefits-harms assessments early in the program design and/or grant decision-making processes. This type of assessment helps lay out the benefits of collecting and using data, identifies the data-related harms we might be enabling, and asks us to determine how we are intentionally mitigating harm during the design of our data collection, use and sharing. Importantly, this process also asks us to identify who is benefiting from data collection and who is taking on the burden of risk. It then aims to assess whether the benefits of having data outweigh the potential harms. Risks-benefits-harms assessments also help us to ensure we are doing a contextual assessment, which is important because every situation is different. When these assessments are done in a participatory way, they tend to be even more useful and accurate ways to reduce risks in data collection and management.
  • Hire people within your teams who can help provide technical support to grantees when needed in a friendly — not a punitive — way. Building in a ‘data responsibility by design’ approach can help with that. We need to think about the role of data during the early stages of design. What data is collected? Why? How? By and from whom? What are the potential benefits, risks, and harms of gathering, holding, using and sharing that data? How can we reduce the amount of data that we collect and mitigate potential harms?
  • Be careful with data on your grantees. If you are working with organizations who (because of the nature of their mission) are at risk themselves, it’s imperative that you protect their privacy and don’t expose them to harm by collecting too much data from them or about them. Here’s a good guide for human rights donors on protecting sensitive data.

2) Use your power and influence to encourage grantees and investees to handle data more responsibly. If donors are going to push for more data collection, they should also be signaling to grantees and investees that responsible data management matters and encouraging them to think about it in proposals and more broadly in their work.

  • Strengthen grantee capacity as part of the process of raising data management standards. Lower-resourced organizations may not be able to meet higher data privacy requirements, so donors should think about how they can support rather than exclude smaller organizations with less capacity as we all work together to raise data management standards.
  • Invest holistically in both grants and grantees. This starts by understanding grantees’ operational, resource, and technical constraints as well as the real security risks posed to grantee staff, data collectors, and data subjects. For this to work, donors need to create genuinely safe spaces for grantees to voice their concerns and discuss constraints that may limit their ability to safely collect the data that donors are demanding.
  • Invest in grantees’ IT and other systems and provide operational funds that enable these systems to work. There is never enough funding for IT systems, and this puts the data of vulnerable people and groups at risk. One reason that organizations struggle to fund systems and improve data management is because they can’t bill overhead. Perverse incentives prevent investments in responsible data. Donors can work through this and help find solutions.
  • Don’t punish organizations that include budget for better data use, protection and security in their proposals. It takes money and staff and systems to manage data in secure ways. Yet stories abound in the sector about proposals that include these elements being rejected because they turn out to be more expensive. It’s critical to remember that safeguarding of all kinds takes resources!
  • Find out what kind of technical or systems support grantees/investees need to better uphold ethical data use and protection and explore ways that you can provide additional funds and resources to strengthen this area in those grantees and across the wider sector.
  • Remember that we are talking about long-term organizational behavior change. It is urgent to get moving on improving how we all handle data — but this will take some time. It’s not a quick fix because the skills are in short supply and high demand right now as a result of the GDPR and related laws that are emerging in other countries around the world.
  • Don’t ask grantees to collect data that might make vulnerable individuals or groups wary of them. Data is an extension of an individual. Trust in how an organization collects and manages an individual’s data leads to trust in an organization itself. Organizations need to be trusted in order to do our work, and collection of highly sensitive data, misuse of data or a data breach can really break that trust compact and reduce an organization’s impact.

3) Think about the responsibility you have for what you do, what you fund, and the type of society that we live in. Support awareness and compliance with new regulations and legislation that can protect privacy. Don’t use “innovation” as an excuse for putting historically marginalized individuals and groups at risk or for allowing our societies to advance in ways that only benefit the wealthiest. Question the current pathway of the “Fourth Industrial Revolution” and where it may take us.

I’m sure I’m leaving out some things. What do you think donors and the wider philanthropic community can do to enhance responsible data management and digital safeguarding?

 

 

 

The recently announced World Food Programme (WFP) partnership with Palantir, IRIN’s article about it, reactions from the Responsible Data Forum, and WFP’s resulting statement inspired us to pull together a Technology Salon in New York City to discuss the ethics of humanitarian data sharing.

(See this crowdsourced document for more background on the WFP-Palantir partnership and resources for thinking about the ethics of data sharing. Also here is an overview of WFP’s SCOPE system for beneficiary identification, management and tracking.)

Our lead discussants were: Laura Walker McDonald, Global Alliance for Humanitarian Innovation; Mark Latonero, Research Lead for Data & Human Rights, Data & Society; Nathaniel Raymond, Jackson Institute of Global Affairs, Yale University; and Kareem Elbayar, Partnerships Manager, Centre for Humanitarian Data at the United Nations Office for the Coordination of Humanitarian Affairs. We were graciously hosted by The Gov Lab.

What are the concerns about humanitarian data sharing and with Palantir?

Some of the initial concerns expressed by Salon participants about humanitarian data sharing included:

  • data privacy and the permanence of data;
  • biases in data leading to unwarranted conclusions and assumptions;
  • loss of stakeholder engagement when humanitarians move to big data and techno-centric approaches;
  • low awareness and poor practices across humanitarian organizations on data privacy and security;
  • tensions between security of data and utility of data;
  • validity and reliability of data;
  • lack of clarity about the true purposes of data sharing;
  • the practice of ‘ethics outsourcing’ (testing things in places where there is a perceived ‘lower ethical standard’ and less accountability);
  • use of humanitarian data to target and harm aid recipients;
  • disempowerment and extractive approaches to data;
  • lack of checks and balances for safe and productive data sharing;
  • difficulty of securing meaningful consent;
  • the links between data and surveillance by malicious actors, governments, private sector, military or intelligence agencies.

Palantir’s relationships and work with police, the CIA, ICE, the NSA, the US military and the wider intelligence community are among the main concerns about this partnership. Some ask whether a company can legitimately serve philanthropy, development, social, human rights and humanitarian sectors while also serving the military and intelligence communities, and whether it is ethical for those in the former to engage in partnerships with companies who serve the latter. Others ask if WFP and others who partner with Palantir are fully aware of the company’s background, and if so, why these partnerships have been able to pass through due diligence processes. Yet others wonder if a company like Palantir can be trusted, given its background.

Below is a summary of the key points of the discussion, which happened on February 28, 2019. (Technology Salons are Chatham House affairs, so I have not attributed quotes in this post.)

Why were we surprised by this partnership/type of partnership?

Our first discussant asked why this partnership was a surprise to many. He emphasized the importance of stakeholder conversations, transparency, and wider engagement in the lead-up to these kinds of partnerships. “And I don’t mean in order to warm critics up to the idea, but rather to create a safe and trusted ecosystem. Feedback and accountability are really key to this.” He also highlighted that humanitarian organizations are not experts in advanced technologies and that it’s normal for them to bring in experts in areas that are not their forte. However, we need to remember that tech companies are not experts in humanitarian work and put the proper checks and balances in place. Bringing in a range of multidisciplinary expertise and distributed intelligence is necessary in a complex information environment. One possible approach is creating technology advisory boards. Another way to ensure more transparency and accountability is to conduct a human rights impact assessment. The next year will be a major test for these kinds of partnerships, given the growing concerns, he said.

One Salon participant said that the fact that the humanitarian sector engages in partnerships with the private sector is not a surprise at all, as the sector has worked through Public-Private Partnerships (PPPs) for several years now and they can bring huge value. The surprise is that WFP chose Palantir as the partner. “They are not the only option, so why pick them?” Another person shared that the WFP partnership went through a full legal review, and so it was not a surprise to everyone. However, communication around the partnership was not well planned or thought out and the process was not transparent and open. Others pointed out that although a legal review covers some bases, it does not assess the potential negative social impact or risk to ‘beneficiaries.’ For some the biggest surprise was WFP’s own surprise at the pushback on this particular partnership and its unsatisfactory reaction to the concerns raised about it. The response from responsible data advocates and the press attention to the WFP-Palantir partnership might be a turning point for the sector to encourage more awareness of the risks in working with certain types of companies. As many noted, this is not only a problem for WFP, it’s something that plagues the wider sector and needs to be addressed urgently.

Organizations need to think beyond reputational harm and consider harm to beneficiaries

“We spend too much time focusing on avoiding risk to institutions and too little time thinking about how to mitigate risk to beneficiaries,” said one person. WFP, for example, has some of the best policies and procedures out there, yet this partnership still passed their internal test. That is a scary thought, because it implies that other agencies who have weaker policies might be agreeing to even more risky partnerships. Are these policies and risk assessments, then, covering all the different types of risk that need consideration? Many at the Salon felt that due diligence and partnership policies focus almost exclusively on organizational and reputational risk with very little attention to the risk that vulnerable populations might face. It’s not just a question of having policies, however, said one person. “Look at the Oxfam Safeguarding situation. Oxfam had some of the best safeguarding policies, yet there were egregious violations that were not addressed by having a policy. It’s a question of power and how decisions get made, and where decision-making power lies and who is involved and listened to.” (Note: one person contacted me pre-Salon to say that there was pushback by WFP country-level representatives about the Palantir partnership, but that it still went ahead. This brings up the same issue of decision-making power, and who has power to decide on these partnerships and why are voices from the frontlines not being heard? Additionally, are those whose data is captured and put into these large data systems ever consulted about what they think?)

Organizations need to assess wider implications, risks, and unintended negative consequences

It’s not only WFP that is putting information into SCOPE, said one person. “Food insecure people have no choice about whether to provide their data if they wish to receive food.” Thus, the question of truly ‘informed consent’ arises. Implementing partners don’t have a lot of choice either, he said. “Implementing agencies are forced to input beneficiary data into SCOPE if they want to work in particular zones or countries.” This means that WFP’s systems and partnerships have an impact on the entire humanitarian community, and therefore the wider sector needs to be consulted on these partnerships and systems. The optical and reputational impact on organizations other than WFP is significant, as they may disagree with the Palantir partnership but are now associated with it by default. This type of harm goes beyond the fear of exploitation of the data in WFP’s “data lake.” It becomes a risk to personnel on the ground who are then seen as collaborating with a CIA contractor by putting beneficiary biometric data into SCOPE. This can also deter food-insecure people from accessing benefits. Additionally, association with the CIA or US military has led to humanitarian agencies and workers being targeted, attacked and killed. That is all in addition to the question of whether these kinds of partnerships violate humanitarian principles, such as that of impartiality.

“It’s critical to understand the role of rumor in humanitarian contexts,” said one discussant. “Affected populations are trying to figure out what is happening and there is often a lot of rumor going around.”  So, if Palantir has a reputation for giving data to the CIA, people may hear about that and then be afraid to access services for fear of having their data given to the CIA. This can lead to retaliation against humanitarians and humanitarian organizations and escalate their risk of operating. Risk assessments need to go beyond the typical areas of reputation or financial risk. We also need to think about how these partnerships can affect humanitarian access and community trust and how rumors can have wide ripple effects.

The whole sector needs to put better due diligence systems in place. As it is now, noted one person, often it’s someone who doesn’t know much about data who writes up a short summary of the partnership, and there is limited review. “We’ve been struggling for 10 years to get our offices to use data. Now we’re in a situation where they’re just picking up a bunch of data and handing it over to private companies.”

UN immunities and privileges lead to a lack of accountability

The fact that UN agencies have immunities and privileges means that laws such as the EU’s General Data Protection Regulation (GDPR) do not apply to them and they are left to self-regulate. Additionally, there is no common agreement among UN agencies on how GDPR applies, and each UN agency interprets it on its own. As one person noted, “There is a troubling sense of exceptionalism and lack of accountability in some of these agencies because ‘a beneficiary cannot take me to court.’” An interesting point, however, is that while UN agencies are immune, those contracted as their data processors are not immune — so data processors beware!

Demographically Identifiable Information (DII) can lead to serious group harm

The WFP has stated that personally identifiable information (PII) is not technically accessible to Palantir via this partnership. However, some at the Salon consider that the WFP failed in their statement about the partnership when they used the absence of PII as a defense. Demographically Identifiable Information (DII) and the activity patterns that are visible even in commodity data can be extrapolated as training data for future data modeling. “This is prospective modeling of action-based intelligence patterns as part of multiple screeners of intel,” said one discussant. He went on to explain that privacy discussions have moved from centering on property rights in the 19th Century, to individual rights in the 20th Century, to group rights in the 21st Century. We can use existing laws to emphasize protection of groups and to highlight the risks of DII leading to group harm, he said, as there are well-known cases that exemplify the notion of group harms (Plessy v Ferguson, Brown v Board of Education). Even in logistics data (which is the kind of data that WFP says Palantir will access) that contains no PII, it’s very simple to identify groups. “I can look at supply chain information and tell you where there are lactating mothers. If you don’t want refugees to give birth in the country they have arrived to, this information can be used for targeting.”
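The lactating mothers example is worth making concrete, because the mechanics are trivial. Here is a minimal sketch using entirely hypothetical, PII-free records and made-up field names; the point is that a one-line aggregation over logistics data is enough to locate a demographic group.

```python
from collections import Counter

# Hypothetical, PII-free logistics records: no names, no IDs,
# just a distribution site and a commodity type per shipment.
shipments = [
    {"site": "Camp A", "commodity": "rice"},
    {"site": "Camp A", "commodity": "supplementary food, lactating mothers"},
    {"site": "Camp B", "commodity": "rice"},
    {"site": "Camp A", "commodity": "supplementary food, lactating mothers"},
]

# One aggregation reveals where a specific demographic group is located.
sites = Counter(s["site"] for s in shipments if "lactating" in s["commodity"])
print(sites)  # Counter({'Camp A': 2}) -- group location inferred with zero PII
```

This is the essence of the DII argument: nothing in those records identifies an individual, yet the aggregate pattern identifies, and can be used to target, a group.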

Many in the sector do not trust a company like Palantir

Though it is not clear who was in the room when WFP made the decision to partner with Palantir, the overall sector has concerns that the people making these decisions are not assessing partnerships from all angles: legal, privacy, programmatic, ethical, data use and management, social, protection, etc. Technologists and humanitarian practitioners are often not included in making these decisions, said one participant. “It’s the people with MBAs. They trust a tech company to say ‘this is secure’ but they don’t have the expertise to actually know that. Not to mention that yes, something might be secure, but maybe it’s not ethical. Senior people are signing off without having a full view. We need a range of skill sets reviewing these kinds of partnerships and investments.”

Another question arises: What happens when there is scope creep? Is Palantir in essence “grooming” the sector to then abuse data it accesses once it’s trusted and “allowed in”? Others pointed out that the grooming has already happened and Palantir is already on the inside. They first began partnering with the sector via the Clinton Global Initiative meetings back in 2013 and they are very active at World Economic Forum meetings. “This is not something coming out of the Trump administration, it was happening long before that,” said one person, and the company is already “in.” Another person said “Palantir lobbied their way into this, and they’ve gotten past the point of reputational challenge.” Palantir has approached many humanitarian agencies, including all the UN agencies, added a third person. Now that they have secured this contract with the WFP, the door to future work with a lot of other agencies is open and this is very concerning.

We’re in a new political economy: data brokerage.

“Humanitarians have lost their Geneva values and embraced Silicon Valley values,” said one discussant. They are becoming data brokers within a colonial data paradigm. “We are making decisions in hierarchies of power, often extralegally,” he said. “We make decisions about other people’s data without their involvement, and we need to be asking: is it humanitarian to commodify beneficiaries’ data for money or other things of value? When is it ethical to trade beneficiary data for something of value?” Another raised the issue of incentives. “Where are the incentives stacked? There is no incentive to treat beneficiaries better. All the incentives are on efficiency and scale and attracting donors.”

Can this example push the wider sector to do better?

One participant hoped there could be a net gain out of the WFP-Palantir case. “It’s a bad situation. But it’s a reckoning for the whole space. Most agencies don’t have these checks and balances in place. But people are waking up to it in a serious way. There’s an opportunity to step into. It’s hard inside of bureaucratic organizations, but it’s definitely an opportunity to start doing better.”

Another said that we need more transparency across the sector on these partnerships. “What is our process for evaluating something like this? Let’s just be transparent. We need to get these data partnership policies into the open. WFP could have simply said ‘here is our process’. But they didn’t. We should be working with an open and transparent model.” Overall, there is a serious lack of clarity on what data sharing agreements look like across the sector. One person attending the Salon said that their organization has been trying to understand current practice with regard to data sharing, and it’s been very difficult to get any examples, even redacted ones.

What needs to happen? 

In closing we discussed what needs to happen next. One person noted that in her research on Responsible Data, she found a total lack of capacity in terms of technology at non-profit organizations. “It’s the Economist Syndrome. Someone’s boss reads something on the bus and decides they need a blockchain,” someone quipped. In terms of responsible data approaches, research shows that organizations are completely overwhelmed. “They are keeping silent about their low capacity out of fear they will face consequences,” said one person, “and with GDPR, even more so”. At the wider level, we are still focusing on PII as the issue without considering DII and group rights, and this is a mistake, said another.

Organizations have very low capacity, and we are siloed. “Program officers do not have tech capacity. Tech people are kept in offices or ‘labs’ on their own and there is not a lot of porosity. We need protection advisors, lawyers, digital safety advisors, data protection officers, information management specialists, IT all around the table for this,” noted one discussant. Also, she said, though we do need principles and standards, it’s important that organizations adapt these so that they are their own principles and standards. “We need to adapt these boilerplate standards to our organizations. This has to happen based on our own organizational values. Not everyone is rights-based, not everyone is humanitarian.” So organizations need to take the time to review and adapt standards, policies and procedures to their own vision and mission and to their own situations, contexts and operations and to generate awareness and buy-in. In conclusion, she said, “if you are not being responsible with data, you are already violating your existing values and codes. Responsible Data is already in your values, it’s a question of living it.”

Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

 

If you work in the aid and development sector, you’ll have done some soul searching and had a few difficult conversations with friends, donors, and colleagues* about ‘the Oxfam scandal’ this past week. Much has been written about the topic already. Here’s a (growing) compilation of 60+ posts (of varying degrees of quality).

Many in the sector are now scrambling to distance themselves from Oxfam. They want to send a message, rid themselves of stain-by-association, and avoid the fallout. Some seem to want to punish Oxfam for bringing shame upon the aid industry.

These responses, however, compound an existing problem in the sector — a focus on short-term fixes rather than long-term solutions. Actions and statements that treat Oxfam as the problem overlook the fact that it is one part of a broken system in desperate need of fixing.

I’ve worked in the sector for a long time. We all have stories about gender discrimination; sexual harassment, abuse and exploitation; racial discrimination; mistreatment; and mismanagement. We all know ‘that guy’ who got promoted, showed up at a partner or donor organization, or was put out to pasture after a massive screwup, abuse, or generally poor performance that remained an open secret.

The issues go wide and deep, and we talk about them a lot — publicly and privately. Yet the sector never seems able or willing to address them at the core. Instead, we watch the manifestations of these core issues being hushed up — and sometimes we are brave enough to report things. Why do we stay on? Because despite all the warts and all our frustrations with our organizations and our donors, we know that there are parts of this work that really matter.

The UK Charity Commission has launched an investigation into the Oxfam situation. Oxfam itself says it will set up an independent commission to review its practices and culture. It will also create “a global database of accredited referees to end the use of forged, dishonest or unreliable references by past or current Oxfam staff” and invest resources in its safeguarding processes.

These are good steps for Oxfam. But much more is needed to address the underlying issues across the sector. One systemic fix, for example, might be a global database that is open to all agencies who are hiring, rather than limiting it to Oxfam.

But what next?

We’ll have another big scandal within a day or two, and social media will target its opinions and outrage at something new. In addition to breathing a sigh of relief, leadership across organizations and funders should grapple seriously with the question of how to overhaul the entire sector. We need profound changes that force the industry to live its professed values.

This does not mean dumping more responsibilities on safeguarding, protection, gender, participation, and human resources teams without the corresponding resources and seniority. Staff working in these areas are usually women, and they often do their jobs with little glory or fanfare. This is part of the problem. Rather than handing over clean-up to the ‘feminine’ sectors and walking away, leadership should be placing these thematic areas and functions at the heart of organizations where they have some power. And donors should be funding this in meaningful ways.

Virtually every institution in the US is going through a systematic revealing of its deepest and most entrenched issues of racism, classism, and sexism. It’s no secret that the aid and development sectors were built on colonialism. Will the ‘Oxfam scandal’ push us to finally do something to unravel and deal with that at the global level?

Can we get serious and do the deep work required to address our own institutional racism and gender discrimination and unacceptable power dynamics? Will we work actively to shift internal power structures that reward certain ages, genders, races, classes, and cultures? Will this include how we hire? How we promote? How we listen? How we market and fundraise? How we live our lives both in and outside of our workdays? Are we prepared to go further than the superficial?

Will we actually involve and engage the people we work with (our ‘beneficiaries’) as equals? Will we go beyond ‘feedback mechanisms’ to create the safe and trusted environments that are needed in order for someone to actually provide input, feedback, or report wrongdoing? Will we change our structures to become open and responsive to feedback? Will we follow up on feedback and make real changes in how we operate? In how funding is allocated?

Reforming the sector will require focused attention and conviction. We’ll have uncomfortable conversations about power, and then we’ll need to actually do something about those conversations. We’ll need to unpack the whole industry, including donors, and the dynamics inherent in funding and receiving funding. Addressing these issues in practice might mean that our program timelines are longer and our efforts cost more (update: this post gets at many of those logistics issues – recommended read!). It won’t be just another standardized code of conduct to sign or half-hearted yearly training. Openness and accountability will need to be rewarded, not punished and scandalized.

We will need to resist the urge to shout: #notallaidworkers! Now is not the time to tell ourselves that we are different than the rest of the sector or to run individual PR campaigns to fix our image. Rather, it’s time to open up and examine our institutions and organizations and the wider ecosystem and its incentives so that we can make real change happen.

We have an opportunity – #metoo, #blacklivesmatter, and other movements have prepared the way. Will we dig in and do the work in an honest way, or will we hold our breath and hope it all goes away so we can go back to business as usual?

 

*Thanks to the friends and colleagues who have had these conversations with me this week and the past two decades, and thanks also to those who reviewed and provided input on this post (Tom, Lina, Wayan and J.)!

Karen Palmer is a digital filmmaker and storyteller from London who’s doing a dual residency at ThoughtWorks in Manhattan and TED New York to further develop a project called RIOT, described as an ‘emotionally responsive, live-action film with 3D sound.’ The film uses artificial intelligence, machine learning, various biometric readings, and facial recognition to take a person through a personalized journey during a dangerous riot.

Karen Palmer, the future of immersive filmmaking, Future of Storytelling (FoST) 

Karen describes RIOT as ‘bespoke film that reflects your reality.’ As you watch the film, the film is also watching you and adapting to your experience of viewing it. Using a series of biometric readings (the team is experimenting with eye tracking, facial recognition, gait analysis, infrared to capture body temperature, and an emerging technology that tracks heart rate by monitoring the capillaries under a person’s eyes), the film shifts and changes. The biometrics and AI create a “choose your own adventure” type of immersive film experience, except that the choice is made by your body’s reactions to different scenarios. A unique aspect of Karen’s work is that the viewer doesn’t need to wear any type of gear for the experience. The idea is to make RIOT as seamless and immersive as possible. Read more about Karen’s ideas and how the film is shaping up in this Fast Company article and follow along with the project on the RIOT project blog.

When we talked about her project, the first thing I thought of was “The Feelies” in Aldous Huxley’s 1932 classic ‘Brave New World.’ Yet the feelies were pure escapism, and Karen’s work aims to draw people into a challenging experience where they face their own emotions.

On Friday, December 15, I had the opportunity to facilitate a Salon discussion with a number of people from related disciplines who are intrigued by RIOT and the various boundaries it tests and explores. We had perspectives from people working in the areas of digital storytelling and narrative, surveillance and activism, media and entertainment, emotional intelligence, digital and immersive theater, brand experience, 3D sound and immersive audio, agency and representation, conflict mediation and non-state actors, film, artificial intelligence, and interactive design.

Karen has been busy over the past month as interest in the project begins to swell. In mid-November, at Montreal’s Phi Centre’s Lucid Realities exhibit, she spoke about how digital storytelling is involving more and more of our senses, bringing an extra layer of power to the experience. This means that artists and creatives have an added layer of responsibility. (Research suggests, for example, that the brain has trouble distinguishing between virtual reality [VR] and actual reality, and children under the age of 8 have had problems differentiating between a VR experience and actual memory.)

At a recent TED Talk, Karen described the essence of her work as creating experiences where the participant becomes aware of how their emotions affect the narrative of the film while they are in it, and this helps them to see how their emotions affect the narrative of their life. Can this help to create new neural pathways in the brain, she asks. Can it help a person to see how their own emotions are impacting on them but also how others are reading their emotions and reacting to those emotions in real life?

Race and sexuality are at the forefront in the US – and the Trump election further heightened the tensions. Karen believes it’s ever more important to explore different perspectives and fears in the current context, where the potential for unrest is growing. Karen hopes that RIOT can be ‘your own personal riot training tool – a way to become aware of your own reactions and of moving through your fear.’

Core themes that we discussed on Friday include:

How can we harness the power of emotion? Despite our lives being emotionally hyper-charged (especially right now in the US), we keep using facts and data to try to change hearts and minds. This approach is ineffective. In addition, people are less trusting of third-party sources because of the onslaught of misinformation, disinformation and false information. Can we use storytelling to help us get through this period? Can immersive storytelling and creative use of 3D sound help us to trust more, to engage and to witness? Can it help us to think about how we might react during certain events, like police violence? (See Tahera Aziz’ project [re]locate about the murder of Stephen Lawrence in South London in 1993). Can it help us to better understand various perspectives? The final version of RIOT aims to bring in footage from several angles, such as CCTV from a looted store, a police body cam, and someone’s mobile phone footage shot as they ran past, in an effort to show an array of perspectives that would help viewers see things in different lights.

How do we catch the questions that RIOT stirs up in people’s minds? As someone experiences RIOT, they will have all sorts of emotions and thoughts, and these will depend on their identity and lived experiences. At one showing of RIOT, a young white boy said he learned that if he’s feeling scared he should try to stay calm. He also said that when the cop yelled at him in the film, he assumed that he must have done something wrong. A black teenager might have had an entirely different reaction to the police. RIOT is bringing in scent, haze, 3D sound, and other elements which have started to affect people more profoundly. Some have been moved to tears or said that the film triggered anger and other strong emotions for them.

Does the artist have a responsibility to accompany people through the full emotional experience? In traditional VR experiences, a person waits in line, puts on a VR headset, experiences something profound (and potentially something triggering), then takes off the headset and is rushed out so that the next person can try it. Creators of these new and immersive media experiences are just now becoming fully aware of how to manage the emotional side of the experiences and they don’t yet have a good handle on what their responsibilities are toward those who are going through them. How do we debrief people afterwards? How do we give them space to process what has been triggered? How do we bring people into the co-creation process so that we better understand what it means to tell or experience these stories? The Columbia Digital Storytelling Lab is working on gaining a better understanding of all this and the impact it can have on people.

How do we create the grammar and frameworks for talking about this? The technologies and tactics for this type of digital immersive storytelling are entirely new and untested. Creators are only now becoming more aware of the consequences of the experiences that they are creating: ‘What am I making? Why? How will people go through it? How will they leave? What are the structures and how do I make it safe for them?’ The artist can open someone up to an intense experience, but then they are often just ushered out, reeling, and someone else is rushed in. It’s critical to build time for debriefing into the experience and to have some capacity for managing the emotions and reactions that could be triggered.

SAFE Lab, for example, works with students and the community in Chicago, Harlem, and Brooklyn on youth-driven solutions to de-escalation of violence. The project development starts with the human experience and the tech comes in later. Youth are part of the solution space, but along the way they learn hard and soft skills related to emerging tech. The Lab is testing a debriefing process also. The challenge is that this is a new space for everyone; and creation, testing and documentation are happening simultaneously. Rather than just thinking about a ‘user journey,’ creators need to think about the emotionality of the full experience. This means that as opposed to just doing an immersive film – neuroscience, sociology, behavioral psychology, and lots of other fields and research are included in the dialogue. It’s a convergence of industries and sectors.

What about algorithmic bias? It’s not possible to create an unbiased algorithm, because humans all have bias. Even if you could create an unbiased algorithm, as soon as you started inputting human information into it, it would become biased. Also, as algorithms become more complex, it becomes more and more difficult to understand how they arrive at decisions. This results in black boxes that are putting out decisions that even the humans who build them can’t understand. The RIOT team is working with Dr. Hongying Meng of Brunel University London, an expert in the creation of facial and emotion detection algorithms, to develop an open source algorithm for RIOT. Even if the algorithm itself isn’t neutral, the process by which it computes will be transparent.

Most algorithms are not open. Because the majority of private companies have financial goals rather than social goals in using or creating algorithms, they have little incentive for being transparent about how an algorithm works or what biases are inherent. Ad agencies want to track how a customer reacts to a product. Facebook wants to generate more ad revenue so it adjusts what news you see on your feed. The justice system wants to save money and time by using sentencing algorithms. Yet the biases in their algorithms can cause serious harm in multiple ways. (See this 2016 report from ProPublica). The problem with these commercial algorithms is that they are opaque and the biases in them are not shared. This lack of transparency is considered by some to be more problematic than the bias itself.

Should there be a greater push for regulation of algorithms? People who work on surveillance issues are often dismissed as paranoid. Yet fears that AI will be controlled by the military, the private sector, and tech companies in hidden and opaque ways are real, and it’s imperative to find ways to bring the actual dangers home to people. This could be partly accomplished through narrative and stories. (See John Oliver’s interview with Edward Snowden.) Could artists create projects that drive conversations around algorithmic bias, help the public see the risks, and push for greater regulation? (Also of note: the New York City government recently announced a task force to look more deeply into algorithmic bias.)

How is the RIOT team developing its emotion recognition algorithm? The RIOT team is collecting data to feed into the algorithm by capturing facial emotions and labeling them. The challenge is that one person may read a face as calm while another reads the same face as scared or angry. They are also testing self-reported emotions to reduce bias. The purpose of the RIOT facial detection algorithm is to measure both what a person is actually feeling and how others perceive that person to be feeling. For example, how would a police officer read your face? How would a fellow protester see you? The team is developing the algorithm with the specific bias that the narrative itself requires. The process will be documented in a peer-reviewed research paper that considers these issues from the angle of state control of citizens. Other angles to explore would be how algorithms and biometrics are used by societies of control and/or by non-state actors such as militias in the Middle East or right-wing and/or white supremacist groups in the US. (See this article on facial recognition tools being used to identify sexual orientation.)
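As a concrete illustration of that labeling challenge, here is a minimal sketch (hypothetical, not the RIOT team’s actual pipeline) of quantifying inter-annotator agreement with Cohen’s kappa, a standard statistic for how often two labelers agree beyond what chance alone would produce:

```python
# Minimal sketch: measuring how much two annotators agree when labeling
# the same faces. Hypothetical labels -- not RIOT data.
from sklearn.metrics import cohen_kappa_score

EMOTIONS = ["calm", "scared", "angry"]

# Two annotators' labels for the same ten face images.
annotator_a = ["calm", "calm", "angry", "scared", "calm",
               "angry", "calm", "scared", "calm", "angry"]
annotator_b = ["calm", "scared", "angry", "angry", "calm",
               "angry", "calm", "scared", "scared", "angry"]

# Cohen's kappa corrects raw agreement for chance agreement:
# 1.0 = perfect agreement, 0.0 = no better than chance.
kappa = cohen_kappa_score(annotator_a, annotator_b, labels=EMOTIONS)
print(f"Cohen's kappa: {kappa:.2f}")
```

Low agreement on observer labels like these is exactly why self-reported emotions are worth collecting alongside them: what a person feels and what others read on their face are different measures, which is the distinction RIOT wants to surface.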

Stay tuned to hear more…. We’ll be meeting again in the new year to go more in-depth on topics such as responsibly guiding people through VR experiences; exploring potential unintended consequences of these technologies and experiences, especially for certain racial groups; commercial applications for sensory storytelling and elements of scale; global applications of these technologies; practical development and testing of algorithms; prototyping, ideation and foundational knowledge for algorithm development.

Garry Haywood of Kinicho also wrote up his thoughts from the day.

On November 14 Technology Salon NYC met to discuss issues related to the role of film and video in development and humanitarian work. Our lead discussants were Ambika Samarthya from Praekelt.org, Lina Srivastava of CIEL, and Rebekah Stutzman from Digital Green’s DC office.

How does film support aid and development work?

Lina proposed that there are three main reasons for using video, film, and/or immersive media (such as virtual reality or augmented reality) in humanitarian and development work:

  • Raising awareness about an issue or a brand and serving as an entry point or a way to frame further actions.
  • Community-led discussion/participatory media, where people take agency and ownership and express themselves through media.
  • Catalyzing movements themselves, where film, video, and other visual arts are used to feed social movements.

Each of the above is aimed at a different audience. “Raising awareness” often only scratches the surface of an issue and can have limited impact if done on its own without additional actions. Community-led efforts tend to go deeper and to focus on the learning and impact of the process (rather than the quality of the end product), but they usually reach fewer people (thus a higher cost per person and less scale). When using video to catalyze movements, the goal is normally to bring people into a longer-term advocacy effort.

In all three instances, there are issues around who controls access to tools, platforms, and distribution channels. Though social media has changed this to an extent, there are still gatekeepers who influence who gets to be involved and whose voice/whose story is highlighted, funders who determine which work happens, and algorithms that dictate who will see the end products.

Participants suggested additional ways that video and film are used, including:

  • Social-emotional learning, where video is shown and then discussed to expand on new ideas and habits or to encourage behavior change.
  • Personal transformation through engaging with video.

Becky shared Digital Green’s approach, which is participatory and in which community members use video to help themselves and those around them. The organization supports community members to film videos about their agricultural practices, and these are then taken to nearby communities to share and discuss. (More on Digital Green here). Video doesn’t solve anyone’s development problem all by itself, Becky emphasized. If an agricultural extensionist is no good, having a video as part of their training materials won’t solve that. “If they have a top-down attitude, don’t engage, don’t answer questions, etc., or if people are not open to changing practices, video or no video, it won’t work.”

How can we improve impact measurement?

Questions arose from Salon participants around how to measure the impact of film in a project or wider effort. Overall, impact measurement in the world of film for development is weak, noted one discussant, because change takes a long time and is hard to track. We are often encouraged to focus on the wrong things, like “vanity measurements” such as “likes” and “clicks,” but these don’t speak to the longer-term, deeper impact of a film, and they are often inappropriate given who the audience for the actual films is. (E.g., are we interested in the impact on the local audience affected by the problem, or on the external audience being encouraged to care about it?)

Digital Green measures behavior change based on uptake of new agricultural practices. “After the agriculture extension worker shows a video to a group, they collect data on everyone that’s there. They record the questions that people ask, the feedback about why they can’t implement a particular practice, and in that way they know who is interested in trying a new practice.” The organization sets indicators for implementing the practice. “The extension worker returns to the community to see if the family has implemented a, b, c and if not, we try to find out why. So we have iterative improvement based on feedback from the video.” The organization does post its videos on YouTube but doesn’t know whether the content there is having an impact. “We don’t even try to follow it up as we feel online video is much less relevant to our audience.” A participant working on social-emotional learning suggested that RCTs could be run to measure which videos are more effective. Others who work at a more individual or artistic level said that immediate feedback and reactions from viewers were a way to gauge impact.
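To illustrate the iterative feedback loop described above, here is a minimal sketch (with hypothetical field names; not Digital Green’s actual data model) of the kind of screening record that supports it: who watched, what they asked, whether they later adopted the practice, and what barrier they reported if not.

```python
# Minimal sketch of a post-screening record supporting iterative follow-up.
# Field names are hypothetical, not Digital Green's actual schema.
from dataclasses import dataclass, field

@dataclass
class ScreeningRecord:
    video_id: str
    household: str
    questions: list[str] = field(default_factory=list)
    adopted: bool | None = None        # None until the follow-up visit
    barrier: str | None = None         # reason given if not adopted

records = [
    ScreeningRecord("compost-01", "hh-17", ["How much water?"], True),
    ScreeningRecord("compost-01", "hh-22", [], False, "no cart to haul residue"),
    ScreeningRecord("compost-01", "hh-31", ["Cost of materials?"]),  # not yet visited
]

# Follow-up visits target households that haven't (yet) adopted; the
# recorded barriers feed back into the next round of videos.
to_follow_up = [r.household for r in records if r.adopted is not True]
print("Follow up with:", to_follow_up)   # -> ['hh-22', 'hh-31']
```

The design point is that the record captures the “why not” alongside the “did they,” so each screening round improves the next video rather than just counting views.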

Donors often have different understandings of useful metrics. “What is a valuable metric? How can we gather it? How much do you want us to spend gathering it?” commented one person. Larger, longer-term partners who are not one-off donors will have a better sense of how to measure impact in reasonable ways. One person who formerly worked at a large public television station noted that it was common to have long conversations about measurement, goals, and aligning to the mission. “But we didn’t go by numbers, we focused on qualitative measurement.” She highlighted the importance of having these conversations with donors and asking them “why are you partnering with us?” Being able to say no to donors is important, she said. “If you are not sharing goals and objectives, you shouldn’t be working together. Is gathering these stories a benefit to the community? If you can’t communicate your actual intent, it’s very complicated.”

The goal of participatory video is less about engaging external (international) audiences or branding and advocacy. Rather, it focuses on building skills and capacities through the process of video making. Here, impact measurement relates more to individual, often self-reported, skills such as confidence, finding your voice, public speaking, teamwork, leadership, critical thinking, and media literacy. The quality of video production in these cases may be low, and the videos unsuitable for widespread circulation; however, the process and product can be catalysts for local-level change and locally led advocacy on themes and topics that are important to the video makers.

Participatory video suffers from low funding levels because it doesn’t reach the kind of scale that funders desire, though it can often contribute to deep personal and community-level change. Some felt that even if community-created videos were of high production quality and translated into many languages, large-scale distribution is not always feasible because they are developed in and speak to/for hyper-local contexts, so their relevance can be limited to smaller geographic areas. Expectation management with donors can go a long way towards shifting perspectives and understandings of what constitutes “impact.”

Should we re-think compensation?

Ambika noted that there are often challenges related to incentives and compensation when filming with communities for organizational purposes (such as branding or fundraising). Organizations are usually willing to pay people for their time in places such as New York City, and less inclined to do so when working with a rural community that is perceived to benefit from an organization’s services and projects. Perceptions by community members that a filmmaker is financially benefiting from video work can be hard to overcome, which means that conflict may arise during non-profit filmmaking aimed at fundraising or building a brand. Even when individuals and communities are aware that they will not be compensated directly, there is still often some type of financial expectation, noted one Salon participant, such as the purchase of local goods and products.

Working closely with gatekeepers and community leaders can help to ease these tensions. When filmmaking takes several hours or days, however, participants may be visibly stressed or worried about household or economic chores left undone during filming, and this can be challenging to navigate, noted one media professional. Filming in virtual reality can exacerbate this problem, since VR filming is normally over-programmed and repetitive in the effort to appear realistic.

One person suggested a change in how we approach incentives. “We spent about two years in a community filming a documentary about migration. This was part of a longer research project. We were not able to compensate the community, but we were able to invest directly in some of the local businesses and to raise funds for some community projects.” It’s difficult to understand why we would not compensate people for their time and their stories, she said. “This is basically their intellectual property, and we’re stealing it. We need a sector rethink.” Another person agreed, “in the US everyone gets paid and we have rules and standards for how that happens. We should be developing these for our work elsewhere.”

Participatory video tends to have less of a challenge with compensation. “People see the videos, the videos are for their neighbors. They are sharing good agricultural or nutrition approaches with people that they already know. They sometimes love being in the videos and that is partly its own reward. Helping people around them is also an incentive,” said one person.

There were several other rabbit holes to explore in relation to film and development, so look for more Salons in 2018!

To close out the year right, join us for ICT4Drinks on December 14th at Flatiron Hall from 7-9pm. If you’re signed up for Technology Salon emails, you’ll find the invitation in your inbox!

Salons run under Chatham House Rule so no attribution has been made in this post. If you’d like to attend a future Salon discussion, join the list at Technology Salon.


(Joint post from Linda Raftree, MERL Tech and Megan Colnar, Open Society Foundations)

The American Evaluation Association Conference happens once a year and offers literally hundreds of sessions. It can take a while to sort through all of them, and with so many sessions it’s easy to feel a bit lost in the crowds of people and content.

So, Megan Colnar (Open Society Foundations) and I thought we’d share some of the sessions that caught our eye.

I’m on the lookout for innovative tech applications, responsible and gender-sensitive data collection practices, and virtual or online/social media-focused evaluation techniques and methods. Megan plans to tune into sessions on policy change, complexity-aware techniques, and better MEL practices for funders.

We both can’t wait to learn about evaluation in the post-truth and fake news era. Full disclosure: our sessions are also featured below.

Hope we see you there!

Wednesday, November 8th

3.15-4.15

4.30-6.00

We also think a lot of the ignite talks during this session in the Thurgood Salon South look interesting, like:

6.15-7.15

7.00-8.30

Tour of a few poster sessions before dinner. Highlights might include:

  • M&E for Journalism (51)
  • Measuring Advocacy (3)
  • Survey measures of corruption (53)
  • Theory of change in practice (186)
  • Using social networks as a decision-making tool (225)


Thursday, Nov 9th

8.00-9.00 – early risers are rewarded with some interesting options

9.15-10.15

10.30-11.15

12.15-1.15

1.15-2.00

2.15-3.00

3.15-4.15

4.30-5.15


Friday, Nov 10th

8.00-9.30 – early risers rewarded again!

11.00-11.45

1.45-3.15

3.30-4.15

4.30-5.15

5.30-6.15 – if you can hold out for one more on a Friday evening

6.30-7.15


Saturday, Nov 11th – you’re on your own! Let us know what treasures you discover.