Posts Tagged ‘risk’

In the search for evidence of impact, donors and investors are asking grantees and those they serve to generate more and more data. Some of those driving this conversation talk about the “opportunity cost” of not collecting, opening and sharing as much data as possible. Yet we also need to talk about the real and tangible risks of collecting and sharing data, and the long-term impacts of reduced data privacy and security rights, especially for the vulnerable individuals and groups with whom we work.

This week I’m at the Global Philanthropy Forum Conference in the heart of Silicon Valley speaking on a panel titled “Civil Liberties and Data Philanthropy: When NOT to Ask for More.” It’s often donor requests for innovation or for proof of impact that push implementers to collect more and more data, so donors and investors have a critical role to play in encouraging greater respect and protection of the data of vulnerable individuals and groups. Philanthropists, grantees, and investees can all help to reduce these risks by bringing a values-based responsible data approach to their work.

Here are three suggestions for philanthropists on how to contribute to more responsible data management:

1) Enhance your own awareness and expertise on the potential benefits and harms associated with data. 

  • Adopt processes that take a closer look at the possible risks and harms of collecting and holding data and how to mitigate them. Ensure those aspects are reviewed and considered during investments and grant making.
  • Conduct risk-benefits-harms assessments early in the program design and/or grant decision-making processes. This type of assessment lays out the benefits of collecting and using data, identifies the data-related harms we might be enabling, and asks us to determine how we are intentionally mitigating harm during the design of our data collection, use and sharing. Importantly, this process also asks us to identify who is benefiting from data collection and who is taking on the burden of risk, and then to assess whether the benefits of having data outweigh the potential harms. Risks-benefits-harms assessments also help us to ensure we are doing a contextual assessment, which is important because every situation is different. When these assessments are done in a participatory way, they tend to be even more useful and accurate ways to reduce risks in data collection and management. (A rough sketch of how such an assessment might be structured appears after this list.)
  • Hire people within your teams who can help provide technical support to grantees when needed in a friendly — not a punitive — way. Building in a ‘data responsibility by design’ approach can help with that. We need to think about the role of data during the early stages of design. What data is collected? Why? How? By and from whom? What are the potential benefits, risks, and harms of gathering, holding, using and sharing that data? How can we reduce the amount of data that we collect and mitigate potential harms?
  • Be careful with data on your grantees. If you are working with organizations who (because of the nature of their mission) are at risk themselves, it’s imperative that you protect their privacy and don’t expose them to harm by collecting too much data from them or about them. Here’s a good guide for human rights donors on protecting sensitive data.
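
To make the risk-benefits-harms idea above more concrete, here is a minimal, hypothetical sketch of how such an assessment could be captured in a structured, reviewable form alongside a grant or program design. The field names and the simple "unmitigated harms" check are illustrative assumptions, not a standard instrument used by any donor.

```python
# Hypothetical sketch only: field names and the simple check below are
# illustrative assumptions, not the assessment format of any real donor.
from dataclasses import dataclass, field

@dataclass
class DataAssessment:
    purpose: str                                        # why the data is collected
    beneficiaries: list = field(default_factory=list)   # who benefits from the data
    risk_bearers: list = field(default_factory=list)    # who carries the risk
    benefits: list = field(default_factory=list)
    harms: list = field(default_factory=list)           # potential harms identified
    mitigations: dict = field(default_factory=dict)     # harm -> planned mitigation

    def unmitigated_harms(self):
        """Harms with no documented mitigation -- flag these before approving."""
        return [h for h in self.harms if h not in self.mitigations]

assessment = DataAssessment(
    purpose="Measure attendance in a girls' education program",
    beneficiaries=["donor reporting", "program design team"],
    risk_bearers=["adolescent girls", "field enumerators"],
    benefits=["better targeting of support"],
    harms=["re-identification of participants", "data demanded by authorities"],
    mitigations={"re-identification of participants": "aggregate before sharing"},
)
print(assessment.unmitigated_harms())  # -> ['data demanded by authorities']
```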

2) Use your power and influence to encourage grantees and investees to handle data more responsibly. If donors are going to push for more data collection, they should also be signaling to grantees and investees that responsible data management matters and encouraging them to think about it in proposals and more broadly in their work.

  • Strengthen grantee capacity as part of the process of raising data management standards. Lower-resourced organizations may not be able to meet higher data privacy requirements on their own, so donors should think about how they can support, rather than exclude, smaller organizations with less capacity as the whole sector works to raise these standards.
  • Invest holistically in both grants and grantees. This starts by understanding grantees’ operational, resource, and technical constraints as well as the real security risks posed to grantee staff, data collectors, and data subjects. For this to work, donors need to create genuinely safe spaces for grantees to voice their concerns and discuss constraints that may limit their ability to safely collect the data that donors are demanding.
  • Invest in grantees’ IT and other systems and provide operational funds that enable these systems to work. There is never enough funding for IT systems, and this puts the data of vulnerable people and groups at risk. One reason that organizations struggle to fund systems and improve data management is that they can’t bill overhead. Perverse incentives prevent investments in responsible data. Donors can work through this and help find solutions.
  • Don’t punish organizations that include budget for better data use, protection and security in their proposals. It takes money and staff and systems to manage data in secure ways. Yet stories abound in the sector about proposals that include these elements being rejected because they turn out to be more expensive. It’s critical to remember that safeguarding of all kinds takes resources!
  • Find out what kind of technical or systems support grantees/investees need to better uphold ethical data use and protection and explore ways that you can provide additional funds and resources to strengthen this area in those grantees and across the wider sector.
  • Remember that we are talking about long-term organizational behavior change. It is urgent to get moving on improving how we all handle data — but this will take some time. It’s not a quick fix because the skills are in short supply and high demand right now as a result of the GDPR and related laws that are emerging in other countries around the world.
  • Don’t ask grantees to collect data that might make vulnerable individuals or groups wary of them. Data is an extension of an individual. Trust in how an organization collects and manages an individual’s data leads to trust in the organization itself. Organizations need to be trusted in order to do their work, and collection of highly sensitive data, misuse of data or a data breach can really break that trust compact and reduce an organization’s impact.

3) Think about the responsibility you have for what you do, what you fund, and the type of society that we live in. Support awareness and compliance with new regulations and legislation that can protect privacy. Don’t use “innovation” as an excuse for putting historically marginalized individuals and groups at risk or for allowing our societies to advance in ways that only benefit the wealthiest. Question the current pathway of the “Fourth Industrial Revolution” and where it may take us.

I’m sure I’m leaving out some things. What do you think donors and the wider philanthropic community can do to enhance responsible data management and digital safeguarding?

Read Full Post »

The recently announced World Food Programme (WFP) partnership with Palantir, IRIN’s article about it, reactions from the Responsible Data Forum, and WFP’s resulting statement inspired us to pull together a Technology Salon in New York City to discuss the ethics of humanitarian data sharing.

(See this crowdsourced document for more background on the WFP-Palantir partnership and resources for thinking about the ethics of data sharing. Also here is an overview of WFP’s SCOPE system for beneficiary identification, management and tracking.)

Our lead discussants were: Laura Walker McDonald, Global Alliance for Humanitarian Innovation; Mark Latonero, Research Lead for Data & Human Rights, Data & Society; Nathaniel Raymond, Jackson Institute of Global Affairs, Yale University; and Kareem Elbayar, Partnerships Manager, Centre for Humanitarian Data at the United Nations Office for the Coordination of Humanitarian Affairs. We were graciously hosted by The Gov Lab.

What are the concerns about humanitarian data sharing and with Palantir?

Some of the initial concerns expressed by Salon participants about humanitarian data sharing included:

  • data privacy and the permanence of data;
  • biases in data leading to unwarranted conclusions and assumptions;
  • loss of stakeholder engagement when humanitarians move to big data and techno-centric approaches;
  • low awareness and poor practices across humanitarian organizations on data privacy and security;
  • tensions between security of data and utility of data;
  • validity and reliability of data;
  • lack of clarity about the true purposes of data sharing;
  • the practice of ‘ethics outsourcing’ (testing things in places where there is a perceived ‘lower ethical standard’ and less accountability);
  • use of humanitarian data to target and harm aid recipients;
  • disempowerment and extractive approaches to data;
  • lack of checks and balances for safe and productive data sharing;
  • difficulty of securing meaningful consent;
  • and the links between data and surveillance by malicious actors, governments, the private sector, military or intelligence agencies.

Palantir’s relationships and work with police, the CIA, ICE, the NSA, the US military and the wider intelligence community are among the main concerns about this partnership. Some ask whether a company can legitimately serve the philanthropy, development, social, human rights and humanitarian sectors while also serving the military and intelligence communities, and whether it is ethical for those in the former to engage in partnerships with companies that serve the latter. Others ask if WFP and others who partner with Palantir are fully aware of the company’s background, and if so, why these partnerships have been able to pass through due diligence processes. Yet others wonder if a company like Palantir can be trusted, given its background.

Below is a summary of the key points of the discussion, which happened on February 28, 2019. (Technology Salons are Chatham House affairs, so I have not attributed quotes in this post.)

Why were we surprised by this partnership/type of partnership?

Our first discussant asked why this partnership was a surprise to many. He emphasized the importance of stakeholder conversations, transparency, and wider engagement in the lead-up to these kinds of partnerships. “And I don’t mean in order to warm critics up to the idea, but rather to create a safe and trusted ecosystem. Feedback and accountability are really key to this.” He also highlighted that humanitarian organizations are not experts in advanced technologies and that it’s normal for them to bring in experts in areas that are not their forte. However, we need to remember that tech companies are not experts in humanitarian work, and that proper checks and balances need to be put in place. Bringing in a range of multidisciplinary expertise and distributed intelligence is necessary in a complex information environment. One possible approach is creating technology advisory boards. Another way to ensure more transparency and accountability is to conduct a human rights impact assessment. The next year will be a major test for these kinds of partnerships, given the growing concerns, he said.

One Salon participant said that the fact that the humanitarian sector engages in partnerships with the private sector is not a surprise at all, as the sector has worked through Public-Private Partnerships (PPPs) for several years now and they can bring huge value. The surprise is that WFP chose Palantir as the partner. “They are not the only option, so why pick them?” Another person shared that the WFP partnership went through a full legal review, and so it was not a surprise to everyone. However, communication around the partnership was not well planned or thought out and the process was not transparent and open. Others pointed out that although a legal review covers some bases, it does not assess the potential negative social impact or risk to ‘beneficiaries.’ For some the biggest surprise was WFP’s own surprise at the pushback on this particular partnership and its unsatisfactory reaction to the concerns raised about it. The response from responsible data advocates and the press attention to the WFP-Palantir partnership might be a turning point for the sector to encourage more awareness of the risks in working with certain types of companies. As many noted, this is not only a problem for WFP, it’s something that plagues the wider sector and needs to be addressed urgently.

Organizations need to think beyond reputational harm and consider harm to beneficiaries

“We spend too much time focusing on avoiding risk to institutions and too little time thinking about how to mitigate risk to beneficiaries,” said one person. WFP, for example, has some of the best policies and procedures out there, yet this partnership still passed their internal test. That is a scary thought, because it implies that other agencies with weaker policies might be agreeing to even more risky partnerships. Are these policies and risk assessments, then, covering all the different types of risk that need consideration? Many at the Salon felt that due diligence and partnership policies focus almost exclusively on organizational and reputational risk with very little attention to the risk that vulnerable populations might face. It’s not just a question of having policies, however, said one person. “Look at the Oxfam Safeguarding situation. Oxfam had some of the best safeguarding policies, yet there were egregious violations that were not addressed by having a policy. It’s a question of power and how decisions get made, and where decision-making power lies and who is involved and listened to.” (Note: one person contacted me pre-Salon to say that there was pushback by WFP country-level representatives about the Palantir partnership, but that it still went ahead. This brings up the same issue of decision-making power: who has the power to decide on these partnerships, and why are voices from the frontlines not being heard? Additionally, are those whose data is captured and put into these large data systems ever consulted about what they think?)

Organizations need to assess wider implications, risks, and unintended negative consequences

It’s not only WFP that is putting information into SCOPE, said one person. “Food insecure people have no choice about whether to provide their data if they wish to receive food.” Thus, the question of truly ‘informed consent’ arises. Implementing partners don’t have a lot of choice either, he said. “Implementing agencies are forced to input beneficiary data into SCOPE if they want to work in particular zones or countries.” This means that WFP’s systems and partnerships have an impact on the entire humanitarian community, and the wider sector therefore needs to be consulted more broadly about these partnerships and systems. The optical and reputational impact on organizations other than WFP is significant, as they may disagree with the Palantir partnership but are now associated with it by default. This type of harm goes beyond the fear of exploitation of the data in WFP’s “data lake.” It becomes a risk to personnel on the ground, who may then be seen as collaborating with a CIA contractor by putting beneficiary biometric data into SCOPE. This can also deter food-insecure people from accessing benefits. Additionally, association with the CIA or US military has led to humanitarian agencies and workers being targeted, attacked and killed. That is all in addition to the question of whether these kinds of partnerships violate humanitarian principles, such as that of impartiality.

“It’s critical to understand the role of rumor in humanitarian contexts,” said one discussant. “Affected populations are trying to figure out what is happening and there is often a lot of rumor going around.” So, if Palantir has a reputation for giving data to the CIA, people may hear about that and then be afraid to access services for fear of having their data given to the CIA. This can lead to retaliation against humanitarians and humanitarian organizations and increase the risks they face in operating. Risk assessments need to go beyond the typical areas of reputational or financial risk. We also need to think about how these partnerships can affect humanitarian access and community trust, and how rumors can have wide ripple effects.

The whole sector needs to put better due diligence systems in place. As it is now, noted one person, often it’s someone who doesn’t know much about data who writes up a short summary of the partnership, and there is limited review. “We’ve been struggling for 10 years to get our offices to use data. Now we’re in a situation where they’re just picking up a bunch of data and handing it over to private companies.”

UN immunities and privileges lead to a lack of accountability

The fact that UN agencies have immunities and privileges means that laws such as the EU’s General Data Protection Regulation (GDPR) do not apply to them and they are left to self-regulate. Additionally, there is no common agreement among UN agencies on how GDPR applies, and each UN agency interprets it on its own. As one person noted, “There is a troubling sense of exceptionalism and lack of accountability in some of these agencies because ‘a beneficiary cannot take me to court.’” An interesting point, however, is that while UN agencies are immune, those contracted as their data processors are not immune — so data processors beware!

Demographically Identifiable Information (DII) can lead to serious group harm

The WFP has stated that personally identifiable information (PII) is not technically accessible to Palantir via this partnership. However, some at the Salon felt that WFP failed in its statement about the partnership when it used the absence of PII as a defense. Demographically Identifiable Information (DII) and the activity patterns that are visible even in commodity data can be extrapolated as training data for future data modeling. “This is prospective modeling of action-based intelligence patterns as part of multiple screeners of intel,” said one discussant. He went on to explain that privacy discussions have moved from centering on property rights in the 19th Century, to individual rights in the 20th Century, to group rights in the 21st Century. We can use existing laws to emphasize protection of groups and to highlight the risks of DII leading to group harm, he said, as there are well-known cases that exemplify the notion of group harms (Plessy v Ferguson, Brown v Board of Education). Even in logistics data (the kind of data that WFP says Palantir will access), which contains no PII, it’s very simple to identify groups. “I can look at supply chain information and tell you where there are lactating mothers. If you don’t want refugees to give birth in the country they have arrived in, this information can be used for targeting.”
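
To illustrate the discussant’s point about DII, here is a toy sketch with invented data (not WFP’s SCOPE or any real system) showing how logistics records containing no PII can still reveal where a specific group is concentrated once they are filtered and aggregated.

```python
# Toy illustration with invented data (not WFP's SCOPE or any real system):
# even "non-personal" logistics records can expose a group's locations.
import pandas as pd

shipments = pd.DataFrame([
    {"site": "Camp A", "commodity": "Supplementary food (lactating mothers)", "qty_kg": 1200},
    {"site": "Camp A", "commodity": "Rice", "qty_kg": 9000},
    {"site": "Camp B", "commodity": "Rice", "qty_kg": 7000},
    {"site": "Camp C", "commodity": "Supplementary food (lactating mothers)", "qty_kg": 300},
])

# No names, no IDs -- yet filtering on one commodity pinpoints where the group is.
target = shipments[shipments["commodity"].str.contains("lactating")]
print(target.groupby("site")["qty_kg"].sum())
```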

Many in the sector do not trust a company like Palantir

Though it is not clear who was in the room when WFP made the decision to partner with Palantir, the overall sector has concerns that the people making these decisions are not assessing partnerships from all angles: legal, privacy, programmatic, ethical, data use and management, social, protection, etc. Technologists and humanitarian practitioners are often not included in making these decisions, said one participant. “It’s the people with MBAs. They trust a tech company to say ‘this is secure’ but they don’t have the expertise to actually know that. Not to mention that yes, something might be secure, but maybe it’s not ethical. Senior people are signing off without having a full view. We need a range of skill sets reviewing these kinds of partnerships and investments.”

Another question arises: What happens when there is scope creep? Is Palantir in essence “grooming” the sector to then abuse data it accesses once it’s trusted and “allowed in”? Others pointed out that the grooming has already happened and Palantir is already on the inside. They first began partnering with the sector via the Clinton Global Initiative meetings back in 2013 and they are very active at World Economic Forum meetings. “This is not something coming out of the Trump administration, it was happening long before that,” said one person, and the company is already “in.” Another person said “Palantir lobbied their way into this, and they’ve gotten past the point of reputational challenge.” Palantir has approached many humanitarian agencies, including all the UN agencies, added a third person. Now that they have secured this contract with the WFP, the door to future work with a lot of other agencies is open and this is very concerning.

We’re in a new political economy: data brokerage.

“Humanitarians have lost their Geneva values and embraced Silicon Valley values,” said one discussant. They are becoming data brokers within a colonial data paradigm. “We are making decisions in hierarchies of power, often extralegally,” he said. “We make decisions about other people’s data without their involvement, and we need to be asking: is it humanitarian to commodify beneficiaries’ data for money or for other kinds of value? When is it ethical to trade beneficiary data for something of value?” Another raised the issue of incentives. “Where are the incentives stacked? There is no incentive to treat beneficiaries better. All the incentives are on efficiency and scale and attracting donors.”

Can this example push the wider sector to do better?

One participant hoped there could be a net gain out of the WFP-Palantir case. “It’s a bad situation. But it’s a reckoning for the whole space. Most agencies don’t have these checks and balances in place. But people are waking up to it in a serious way. There’s an opportunity to step into. It’s hard inside of bureaucratic organizations, but it’s definitely an opportunity to start doing better.”

Another said that we need more transparency across the sector on these partnerships. “What is our process for evaluating something like this? Let’s just be transparent. We need to get these data partnership policies into the open. WFP could have simply said ‘here is our process’. But they didn’t. We should be working with an open and transparent model.” Overall, there is a serious lack of clarity on what data sharing agreements look like across the sector. One person attending the Salon said that their organization has been trying to understand current practice with regard to data sharing, and it’s been very difficult to get any examples, even redacted ones.

What needs to happen? 

In closing we discussed what needs to happen next. One person noted that in her research on Responsible Data, she found a total lack of capacity in terms of technology at non-profit organizations. “It’s the Economist Syndrome. Someone’s boss reads something on the bus and decides they need a blockchain,” someone quipped. In terms of responsible data approaches, research shows that organizations are completely overwhelmed. “They are keeping silent about their low capacity out of fear they will face consequences,” said one person, “and with GDPR, even more so”. At the wider level, we are still focusing on PII as the issue without considering DII and group rights, and this is a mistake, said another.

Organizations have very low capacity, and we are siloed. “Program officers do not have tech capacity. Tech people are kept in offices or ‘labs’ on their own and there is not a lot of porosity. We need protection advisors, lawyers, digital safety advisors, data protection officers, information management specialists, and IT staff all around the table for this,” noted one discussant. Also, she said, though we do need principles and standards, it’s important that organizations adapt these so that they become their own principles and standards. “We need to adapt these boilerplate standards to our organizations. This has to happen based on our own organizational values. Not everyone is rights-based, not everyone is humanitarian.” So organizations need to take the time to review and adapt standards, policies and procedures to their own vision and mission and to their own situations, contexts and operations, and to generate awareness and buy-in. In conclusion, she said, “if you are not being responsible with data, you are already violating your existing values and codes. Responsible Data is already in your values, it’s a question of living it.”

Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂


Read Full Post »

Development, humanitarian and human rights organizations increasingly collect and use digital data at the various stages of their programming. This type of data has the potential to yield great benefit, but it can also increase individual and community exposure to harm and privacy risks. How can we as a sector better balance data collection and open data sharing with privacy and security, especially when it involves the most vulnerable?

A number of donors, humanitarian and development organizations (including Oxfam, CRS, UN bodies and others) have developed or are in the process of developing guidelines to help them to be more responsible about collection, use, sharing and retention of data from those who participate in their programs.

I’m part of a team (including mStar, Sonjara, Georgetown University, the USAID Global Development Lab, and an advisory committee that includes several shining stars from the ‘responsible data’ movement) that is conducting research on existing practices, policies, systems, and legal frameworks through which international development data is collected, used, shared, and released. Based on this research, we’ll develop ‘responsible data’ practice guidelines for USAID that aim to help:

  • Mitigate privacy and security risks for beneficiaries and others
  • Improve performance and development outcomes through use of data
  • Promote transparency, accountability and public good through open data

The plan is to develop draft guidelines and then to test their application on real programs.

We are looking for digital development projects to assess how our draft guidelines would work in real-world settings. Once the projects are selected, members of the research team will visit them to better understand “on-the-ground” contexts and project needs. We’ll apply draft practice guidelines to each case with the goal of identifying what parts of the guidelines are useful/applicable, and where the gaps are in the guidelines. We’ll also capture feedback from the project management team and partners on implications for project costs and timelines, and we’ll document existing digital data-related good practices and lessons. These findings will further refine USAID’s Responsible Data Practice guidelines.

What types of projects are we looking for?

  • Ongoing or recently concluded projects that are using digital technologies to collect, store, analyze, manage, use and share individuals’ data.
  • Cases where data collected is sensitive or may put project participants at risk.
  • The project should have informal or formal processes for privacy/security risk assessment and mitigation especially with respect to field implementation of digital technologies (listed above) as part of their program. These may be implicit or explicit (i.e. documented or written). They potentially include formal review processes conducted by ethics review boards or institutional review boards (IRBs) for projects.
  • All sectors of international development and all geographies are welcome to submit case studies. We are looking for diversity in context and programming.
  • We prefer case studies from USAID-funded projects but are open to receiving case studies from other donor-supported projects.

If you have a project or an activity that falls into the above criteria, please let us know here. We welcome multiple submissions from one organization; just reuse the form for each proposed case study.

Please submit your projects by February 15, 2017.

And please share this call with others who may be interested in contributing case studies.

Click here to submit your case study.

Also feel free to get in touch with me if you have questions about the project or the call!


Read Full Post »

Our December 2015 Technology Salon discussion in NYC focused on approaches to girls’ digital privacy, safety and security. By extension, the discussion included ways to reduce risk for other vulnerable populations. Our lead discussants were Ximena Benavente, Girl Effect Mobile (GEM), and Jonathan McKay, Praekelt Foundation. I also shared a draft Girls’ Digital Privacy, Safety and Security Policy and Toolkit I’ve been working on with both organizations over the past year.

Girls’ digital privacy, safety and security risks

Our first discussant highlighted why it’s important to think specifically about girls and digital security. In part, this is because different factors and vulnerabilities combine, exacerbating girls’ levels of risk. For example, girls living on less than $2 per day likely only have access to basic mobile phones, which are often borrowed from parents or siblings. The organization she works with always starts with deep research on aspects like ownership vs. borrowship and whether girls’ mobile usage is free/unlimited and un-supervised or controlled by gatekeepers such as parents, brothers, or other relatives. This helps to design better tools, services and platforms and to design for safety and security, she said. “Gatekeepers are very restrictive in many cases, but parental oversight is not necessarily a bad thing. We always work with parents and other gatekeepers as well as with girls themselves when we design and test.” When girls are living in more traditional or conservative societies, she said, we also need to think about how content might affect girls both online and offline. For example, “is content sufficiently progressive in terms of girls’ rights, yet safe for girls to read, comment on or discuss with friends and family without severe retaliation?”

Research suggests that girls who are more vulnerable offline (due to poverty or other forms of marginalization) are likely also more vulnerable to certain risks online, so we design with that in mind, she said. “When we started off on this project, our team members were experts in digital, but we had less experience with the safety and privacy aspects when it comes to girls living under $2/day or who were otherwise vulnerable.” “Having additional guidance and developing a policy on this aspect has helped immensely – but has also slowed our processes down and sometimes made them more expensive,” she noted. “We had to go back to everything and add additional layers of security to make it as safe as possible for girls. We have also made sure to work very closely with our local partners to be sure that everyone involved in the project is aware of girls’ safety and security.”

Social media sites: Open, Closed, Private, Anonymous?

One issue that came up was safety for children and youth on social media networks. A Salon participant said his organization had thought about developing this type of a network several years back but decided in the end that the security risks outweighed the advantages. Participants discussed whether social media networks can ever be safe. One school of thought is that the more open a platform, the safer it is, as “there is no interaction in private spaces that cannot be constantly monitored or moderated.” Some worry about open sites, however, and set up smaller, closed, private groups that were closely monitored. “We work with victims of violence to share their stories and coping mechanisms, so, for us, private groups are a better option.”

Some suggested that anonymity on a social media site can protect girls and other vulnerable groups, however there is also research showing that Internet anonymity contributes to an increase in activities such as bullying and harassment. Some Salon participants felt that it was better to leverage existing platforms and try to use them safely. Others felt that there are no existing social media platforms that have enough security for girls or other vulnerable groups to use with appropriate levels of risk. “We sometimes recruit participants via existing social media platforms,” said one discussant, “but we move people off of those sites to our own more secure sites as soon as we can.”

Moderation and education on safety

Salon participants working with vulnerable populations said that they moderate their sites very closely and remove comments if users share personal information or use offensive language. “Some project budgets allow us to have a moderator check every 2 hours. For others, we sweep accounts once a day and remove offensive content within 24 hours.” One discussant uses moderation to educate the community. “We always post an explanation about why a comment was removed in order to educate the larger user base about appropriate ways to use the social network,” he said.

Close moderation becomes difficult and costly, however, as the user base grows and a platform scales. This means individual comments cannot be screened and pre-approved, because that would take too long and defeat the purpose of an engaging platform. “We need to acknowledge the very real tension between building a successful and engaging community and maintaining privacy and security,” said one Salon participant. “The more you lock it down and the more secure it is, the harder you find it is to create a real and active community.”

Another participant noted that they use their safe, closed youth platform to educate and reinforce messaging about what is safe and positive use of social media in hopes that young people will practice safe behaviors when they use other platforms. “We know that education and awareness raising can only go so far, however,” she said, “and we are not blind to that fact.” She expressed concern about risk for youth who speak out about political issues, because more and more governments are passing laws that punish critics and censor information. The organization, however, does not want to encourage youth to stop voicing opinions or participating politically.

Data breaches and project close-out

One Salon participant asked if organizations had examples of actual data breaches, and how they had handled them. Though no one shared examples, it was recommended that every organization have a contingency plan in place for accidental data leaks or a data breach or data hack. “You need to assume that you will get hacked,” said one person, “and develop your systems with that as a given.”

In addition to the day-to-day security issues, we need to think about project close-out, said one person. “Most development interventions are funded for a short, specific period of time. When a project finishes, you get a report, you do your M&E, and you move on. However, the data lives on, and the effects of the data live on. We really need to think more about budgeting for proper project wind-down and ensure that we are accountable beyond the lifetime of a project.”

Data security, anonymization, consent

Another question was related to using and keeping girls’ (and others’) data safe. “Consent to collect and use data on a website or via a mobile platform can be tricky, especially if we don’t know how to explain what we might do with the data,” said one Salon participant. Others suggested it would be better not to collect any data at all. “Why do we even need to collect this data? Who is it for?” he asked. Others countered that this data is often the only way to understand what people are doing on the site, to make adjustments and to measure impact.

One scenario was shared where several partner organizations discussed opening up a country’s cell phone data records to help contain a massive public health epidemic, but the privacy and security risks were too great, so the idea was scrapped. “Some said we could anonymize the data, but you can never really and truly anonymize data. It would have been useful to have a policy or a rubric that would have guided us in making that decision.”
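
The claim that data can rarely be “truly” anonymized can be illustrated with a toy simulation: in a synthetic set of call-record-style traces, a handful of coarse (location, hour) observations is usually enough to single out one “anonymous” person. The parameters below are invented purely for illustration and are not drawn from any real dataset.

```python
# Toy simulation with invented parameters: how often is an "anonymous" trace
# unique when described by only four coarse (cell tower, hour) observations?
import random
from collections import Counter

random.seed(0)
towers, hours, people = 50, 24, 10_000

traces = [
    frozenset((random.randrange(towers), random.randrange(hours)) for _ in range(4))
    for _ in range(people)
]

counts = Counter(traces)
unique = sum(1 for t in traces if counts[t] == 1)
print(f"{unique / people:.0%} of simulated records are unique on just 4 points")
```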

Policy and Guidelines on Girls’ Privacy, Security and Safety

Policy guidelines related to responsible data for NGOs, data security, privacy and other aspects of digital security in general do exist. (Here are some that we compiled along with some other resources). Most IT departments also have strict guidelines when it comes to donor data (in the case of credit card and account information, for example). This does not always cross over to program-level ICT or M&E efforts that involve the populations that NGOs are serving through their programming.

General awareness around digital security is increasing, in part due to recent major corporate data hacks (e.g., Target, Sony) and the Edward Snowden revelations from a few years back, but much more needs to be done to educate NGO staff and management on the type of privacy and security measures that need to be taken to protect the data and mitigate risk for those who participate in their programs.  There is an argument that NGOs should have specific digital privacy, safety and security policies that are tailored to their programming and that specifically focus on the types of digital risks that girls, women, children or other vulnerable people face when they are involved in humanitarian or development programs.

One such policy (focusing on vulnerable girls) and toolkit (its accompanying principles and values, guidelines, checklists and a risk matrix template) was shared at the Salon. (Disclosure: this policy toolkit is one that I am working on. It should be ready to share in early 2016.) The policy and toolkit take program implementers through a series of issues and questions to help them assess potential risks and tradeoffs in a particular context, and to document decisions and improve accountability. The toolkit covers:

  1. data privacy and security – using approaches like Privacy by Design, setting limits on the data that is collected, and achieving meaningful consent.
  2. platform content and design – ensuring that content produced for girls, or that girls produce or volunteer, is not putting girls at risk.
  3. partnerships – vetting and managing partners who may be providing online/offline services or who may partner on an initiative and want access to data, and handling any monetizing of girls’ data.
  4. monitoring, evaluation, research and learning (MERL) – how program implementers will gather and store digital data when they are collecting it directly or through third parties for organizational MERL purposes.

Privacy, Security and Safety Implications

Our final discussant spoke about the implications of implementing the above-mentioned girls’ privacy, safety and security policy. He started out by saying that the policy opens with a manifesto: We will not compromise a girl in any way, nor will we opt for solutions that cut corners in terms of cost, process or time at the expense of her safety. “I love having this as part of our project manifesto,” he said. “It’s really inspiring! On the flip side, however, it makes everything I do more difficult, time consuming and expensive!”

To demonstrate some of the trade-offs and decisions required when working with vulnerable girls, he gave examples of how the current project (implemented with girls’ privacy and security as a core principle) differed from that of a commercial social media platform and advertising campaign he had previously worked on (where the main concern was the reputation of the corporation, not that of the users of the platform and the potential risks they might put themselves in by using the platform).

Moderation

On the private sector platform, said the discussant, “we didn’t have the option of pre-moderating comments because of the budget and because we had 800 thousand users. To meet the campaign goals, it was more important for users to be engaged than to ensure content was safe. We focused on removing pornographic photos within 24 hours, using algorithms based on how much skin tone was in the photo.” In the fields of marketing and social media, it’s a fairly well-known issue that heavy-handed moderation kills platform engagement. “The more we educated and informed users about comment moderation, or removed comments, the deader the community became. The more draconian the moderation, the lower the engagement.”
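
For readers curious what a skin-tone-based filter of the kind described might look like, below is a rough, hypothetical sketch of such a heuristic. It is illustrative only: the RGB rule of thumb and the threshold are assumptions, real moderation pipelines are far more sophisticated, and simple color rules like this are known to be biased and easy to fool.

```python
# Rough, hypothetical sketch of a skin-tone-ratio heuristic; the RGB rule of
# thumb and the 0.4 threshold are illustrative assumptions, not a real system.
from PIL import Image

def looks_like_skin(r, g, b):
    # Crude, commonly cited RGB rule of thumb for skin-like colors.
    return r > 95 and g > 40 and b > 20 and r > g and r > b and (r - min(g, b)) > 15

def flag_for_review(path, threshold=0.4):
    img = Image.open(path).convert("RGB").resize((64, 64))  # downsample for speed
    pixels = list(img.getdata())
    ratio = sum(looks_like_skin(r, g, b) for r, g, b in pixels) / len(pixels)
    return ratio, ratio > threshold  # send to a human moderator if above threshold

# Example (hypothetical file name):
# ratio, flagged = flag_for_review("uploaded_photo.jpg")
```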

The discussant had also worked on a platform for youth to discuss and learn about sexual health and practices, where he said that users responded angrily to moderators and to comments that restricted their participation. “We did expose our participants to certain dangers, but we also knew that social digital platforms are more successful when they provide their users with a sense of ownership and control. So we identified users that exhibited desirable behaviors and created a different tier of users who could take ownership (super users), policing and flagging comments as inappropriate or temporarily banning users.” This allowed a 25% decrease in moderation. The organization discovered, however, that they had to be careful about how much power these super users had. “They ended up creating certain factions on the platform, and we then had to develop safeguards and additional mechanisms by which we moderated our super users!”

Direct Messages among users

In the private sector project example, engagement was measured by the number of direct or private messages sent between platform users. In the current scenario, however, said the discussant, “we have not allowed any direct messages between platform users because of the potential risks to girls of having places on the site that are hidden from moderators. So as you can see, we are removing some of our metrics by disallowing features because of risk. These activities are all things that would make the platform more engaging but there is a big fear that they could put girls at risk.”

Adopting a privacy, security, and safety policy

One discussant highlighted the importance of having privacy, safety and security policies before a project or program begins. “If you start thinking about it later on, you may have to go back and rebuild things from scratch because your security holes are in the design….” The way a database is set up to capture user data can make it difficult to query in the future or for users to have any control of what information is or is not being shared about them. “If you don’t set up the database with security and privacy in mind from the beginning, it might be impossible to make the platform safe for girls without starting from scratch all over again,” he said.

He also cautioned that when making more secure choices from the start, platform and tool development generally takes longer and costs more. It can be harder to budget because designers may not have experience with costing and developing the more secure options.

“A valuable lesson is that you have to make sure that what you’re trying to do in the first place is worth it if it’s going to be that expensive. Is it worth a girl’s while to use a platform if she first has to wade through a 5-page terms and conditions document on a small mobile phone screen? Are those terms and conditions even relevant to her personally or within her local context? Every click you ask a user to make will reduce their interest in reaching the platform. And if we don’t imagine that a girl will want to click through 5 screens of terms and conditions, the whole effort might not be worth it.” Clearly, aspects such as terms and conditions and consent processes need to be designed specifically to fit new contexts and new kinds of users.

Making responsible tradeoffs

The Girls’ Privacy, Security and Safety policy and toolkit shared at the Salon includes a risk matrix where project implementers rank the intensity and probability of risks as high, medium or low. Based on how a situation, feature or other potential aspect is ranked, and on whether serious risks can be mitigated, decisions are made to proceed or not. There will always be areas with a certain level of risk to the user. The key is in making decisions and trade-offs that balance the level of risk with the potential benefits or rewards of the tool, service, or platform. The toolkit can also help project designers to imagine potential unintended consequences and mitigate the risks related to them. The policy also offers a way to systematically and proactively consider potential risks, decide how to handle them, and document decisions so that organizations and project implementers are accountable to girls, peers and partners, and organizational leadership.
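
As a rough illustration (not the actual toolkit), the intensity-and-probability logic described above might be sketched as follows, with the rankings and the possibility of mitigation driving a proceed / mitigate-first / do-not-proceed decision. The scoring rule is an assumption made for the example.

```python
# Illustrative sketch, not the Salon toolkit itself: combining intensity and
# probability rankings (high/medium/low) into a proceed-or-not decision.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def decide(intensity, probability, mitigable):
    score = LEVELS[intensity] * LEVELS[probability]
    if score >= 6:        # e.g. high x medium, high x high
        return "mitigate first" if mitigable else "do not proceed"
    if score >= 3:
        return "proceed with mitigation and monitoring"
    return "proceed"

risks = [
    ("direct messages expose girls to strangers", "high", "medium", False),
    ("moderators can see users' personal details", "medium", "high", True),
    ("platform logs coarse location data", "low", "medium", True),
]
for name, intensity, probability, mitigable in risks:
    print(f"{name}: {decide(intensity, probability, mitigable)}")
```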

“We’ve started to change how we talk about user data in our organization,” said one discussant. “We have stopped thinking about it as something WE create and own, and more as something GIRLS own. Banks don’t own people’s money – they borrow it for a short time. We are trying to think about data that way in the conversations we’re having about data, funding, business models, proposals and partnerships. You don’t get to own your users’ data, and we’re not going to share de-anonymized data with you. We’re also seeing data legislation in some of the countries where we work heading in that direction, so it’s good to be thinking about this now and getting prepared.”

Take a look at our list of resources on the topic and add anything we may have missed!


Thanks to our friends at ThoughtWorks for hosting this Salon! If you’d like to join discussions like this one, sign up at Technology Salon. Salons are held under Chatham House Rule, therefore no attribution has been made in this post.

Read Full Post »

This post is copied from an email that my colleague Kelly Hawrylyshyn sent to me. Kelly works on disaster risk reduction (DRR) with Plan UK. If you work on DRR and gender, go on, get yourself on the map!

Women and girls make a major contribution to disaster risk reduction and yet their role and involvement often go unacknowledged. In recognition of this gap, the Gender & Disaster Network, the Huairou Commission, Oxfam International and Plan International are facilitating the greater visibility of women and girls as part of the International Day for Disaster Risk Reduction (DRR), October 13th, 2012.

Gender inequalities around the world mean that women and girls are most severely affected by disaster. However, they also have significant experience and knowledge to contribute to disaster prevention and to the resilience of communities.

With this in mind, our efforts aim to move beyond portraying women and girls as mere victims of disasters and to provide spaces and opportunities for women and girls to connect and partner freely with local governments and organizations. We aim to showcase how women and girls around the world are carrying out disaster reduction and prevention actions; engaging and leading in climate change awareness activities; taking part in demonstrations and simulations; promoting resilient cities initiatives; and mapping risks.

Using crowdsourcing and crowdmapping tools, we aim to generate greater visibility and recognition of local initiatives by women and girls worldwide for disaster risk reduction.

Visit our map and report your own examples, in advance of the International Day for Disaster Reduction, October 13th, 2012.

We need your help to “put on the map” the numerous research initiatives, media events, publications, training materials, advocacy, workshops, networks/associations, and other activities that are happening and need to be made VISIBLE!

Contributions from both individual women and girls and organizations engaged in DRR are welcomed.

And who knows, you may get to find out about some interesting work taking place in your country, or miles away from you!

Join Us to make visible Women and Girls on the Map!

Read Full Post »

When working with women and girls in conflict or displacement situations (actually, when working with anyone, in any situation), we often make assumptions. In this case, the assumption is that “economic opportunities for women and adolescent girls have positive roll-on effects”, according to Mendy Marsh, UNICEF’s Gender Based Violence (GBV) Specialist in Emergencies.

Slide from Marsh’s presentation.

We assume that when women and older adolescent girls have income, they are safer. We assume that when households have income, children are more likely to be in school, that they are accessing healthcare, and that they are better fed, says Marsh.

But do we know whether that is true or not? What does the evidence say?

I took an hour today to listen to Marsh along with Dale Buscher, Senior Director for Programs at the Women’s Refugee Commission (WRC), talk about WRC’s “Peril or Protection: Making Work Safe” Campaign (watch the recording here).

GBV happens in all communities, including stable ones. But when situations become unstable, Marsh noted, a number of additional factors combine to make women and adolescent girls in conflict or displacement settings vulnerable to violence.

Slide from Marsh’s presentation.

These factors include:

  • Inadequate legal frameworks – e.g., impunity for those committing GBV and a lack of awareness of rights
  • Lack of basic survival needs – e.g., food, non-food items, fuel, water, safe shelter
  • Lack of opportunities – e.g., women’s and girls’ financial dependence, potential for exploitative work
  • Sociocultural aspects – e.g., harmful practices, domestic violence, early and forced marriage
  • Insecurity – e.g., flight and displacement, no lighting, no safe shelter, non-separate latrines or hygiene facilities for men, women, boys and girls, or facilities that don’t lock or are insecure; dependency on males for information

Emphasis during conflict situations tends to be on response, not prevention, said Marsh. Different agencies and sectors often work in isolation, but no single agency or sector can address GBV. It needs to be addressed across all sectors with strong community participation, including that of men and boys.

Often, she noted, livelihoods programs are brought in as a response to women’s needs and based on the assumptions above. There can be unintended negative effects from these programs and we need to be aware of them so that they can be mitigated.

Following Marsh’s introduction, Buscher explained that because WRC wanted to better understand any potential unintended consequences from livelihoods programs aimed at women in conflict or displacement situations, in 2009 they conducted research and produced “Peril or Protection: The Link between Livelihoods and Gender-Based Violence in Displacement Settings.”

There is a very weak evidence base in terms of the links between gender based violence and livelihoods programming, he said.

WRC found that in some cases livelihood programs implemented by NGOs actually increased women’s and adolescent girls’ risks of GBV because of factors such as their entering the public sphere, going to market, using unsafe transportation and domestic conflict. The economic opportunities heightened the risks that women and girls faced. Providing them with income generation opportunities did not necessarily make women and girls safer or give them more control over resources.

Slide from Buscher’s presentation.

The answer is not to stop creating economic opportunities, however. Rather it is to design and implement these kinds of programs in responsible ways that do no harm and that are based on in-depth consultations with women, girls and their communities, livelihoods practitioners and GBV specialists.

Based on their research and with input from different stakeholders, WRC designed a toolkit to help those creating livelihoods programs for and with women and adolescent girls to do so in a way that lessens the risk of GBV.

The process outlined in the toolkit includes secondary research, safety mapping, a safety tool, and a decision chart.

Based on the secondary research, practitioners work with adolescent girls, women and the wider community to map the places that are important for livelihoods, explained Buscher. For example, the bus, a taxi stand, a supply shop, the fields.

Community members discuss where women and girls are safe and where they are not. They describe the kinds of violence and abuse that girls and women experience in these different places.

They identify strategies for protection based on when GBV takes place in the different locations. For example, does it happen year-round? At certain times of year? Only at night? Only on weekends?

They identify and discuss the most risky situations. Is a girl or woman most at risk when she is selling by the side of the road? Alone in a shop?

They also discuss which relationships are the most prone to GBV. Bosses? Suppliers? Buyers? Intimate Partners? Together the women and girls share and discuss the strategies that they use to protect themselves.

An additional tool identifies the social safety net that a woman or adolescent girl has, considering that social networks are important both for livelihoods and for protection. Ways to strengthen them are discussed.

Finally, a decision chart is created with a list of livelihood activities and the information from the previous charts and discussions to determine the levels of risk in the different kinds of livelihood activities and the potential strategies for mitigating GBV.

Decisions are also made by the adolescent girls and women regarding which risks they are willing to take for which levels of livelihoods.

Marsh and Buscher concluded that safe, dignified work may be the most effective form of protection because it can help mitigate negative coping strategies such as transactional sex, child labor, pulling children out of school, and selling rations.

Livelihoods, however, should not be thought of as a little bit of money to supplement daily rations. They should be sustainable and help meet basic needs in an ongoing way, and they should lead to dignified work. The amount earned and the risks involved for women and girls need to be worth it for them, considering all the other domestic chores that they are required to do. NGOs need to consult with and listen to girls and women to better understand their needs and coping strategies.

If you’d like to learn more about the research and the toolkit, WRC offers a free e-learning tool on how to make work safe.

You can also follow the #safelivelihoods conversation on Twitter.

Read Full Post »

Coming from the viewpoint that accountability and transparency, citizen engagement and public debate are critical for good development, I posted yesterday on 5 ways that ICTs can support the MDGs. I got to thinking that I would be remiss not to also post something on the ways that poor or questionable use of ICTs (information and communication technologies) and social media can hinder development.

It’s not really the fault of the technology. ICTs are tools, and the real issues lie behind the tools — they lie with people who create, market and use the tools. People cannot be separated from cultures and societies and power and money and politics. And those are the things that tend to hinder development, not really the ICTs themselves. However, the combination of human tendencies and the possibilities that ICTs and social media offer can sometimes lead us down a shaky path to development or actually cause harm to the people that we are working with.

When do I start getting nervous about ICTs and social media for social good?

1) When the hype wins out over the real benefits of the technology. Sometimes the very idea of a cool and innovative technology wins out over an actual and realistic analysis of its impact and success. Here I pose the cases of the so-called Iran Twitter Revolution and One Laptop per Child (and I’ll throw in Play Pumps for good measure, though it’s not an ICT project, it’s an acknowledged hype and failure case). There are certainly other excellent examples. So many examples in fact that there are events called Fail Faires being organized to discuss these failures openly and learn how to avoid them in the future.

2) When it’s about the technology, not the information and communications needs. When you hear someone say “We want to do an mHealth project” or “We need to have a Facebook page” or “We have a donor who wants to give us a bunch of mobile phones–do you know of something we can do with them?” you can be pretty sure that you have things backwards and are going to run into trouble down the road, wasting resources and energy on programs that are resting on weak foundations. Again, we can cite the One Laptop per Child (OLPC) initiative, where you have a tool, but the context for using it isn’t there (connectivity, power, teacher training and content). There is debate over whether OLPC was a total failure or whether it paved the way for netbooks, cheap computers and other technologies that we use today. I’ll still say that the grand plan of OLPC (a low-cost laptop for every child leading to development advances) had issues from the start because it was technology-led.

3) When technology is designed from afar and parachuted in. If you don’t regularly involve people who will use your new technology, in the context where you’re planning for it to be used, you’re probably going to find yourself in a bind. True for ICTs and for a lot of other types of innovations out there. There’s a great conversation on Humanitarian Design vs. Design Imperialism that says it all. ICTs are no different. Designing information and communication systems in one place and imposing them on people in another place can hinder their uptake and/or waste time and money that could be spent in other ways that would better contribute to achieving development goals.

4) When the technology is part of a larger hidden agenda. I came across two very thought-provoking posts this week: one on the US Government’s Internet Freedom agenda and another from youth activists in the Middle East and North Africa region who criticize foundations and other donors for censoring their work when it doesn’t comply with US foreign policy messages. Clearly there are hidden political agendas at work that can derail the use of ICTs for human rights work and build mistrust instead of democracy. Another example of a potential hidden agenda is the donation of proprietary hardware and software by large technology companies to NGOs and NGO consortia in order to lock in business (for example in mHealth or eHealth) and crowd out free and open source tools; these donated systems can end up being costly to maintain, upgrade and license in the long term.

5) When tech innovations put people and lives at risk. I’d encourage you to read this story about Haystack, software that was hyped as a way to circumvent government censorship of the social media tools that activists use to organize. After the US government fast-tracked it for use in Iran, huge security holes were found that could put activists in great danger. In our desire to see things as cool and cutting edge, and perhaps to be seen as cool and cutting edge ourselves, those of us suggesting and promoting ICTs for reporting human rights abuses or in other sensitive areas of work can cause more harm than we might imagine. It’s dangerous to push new technologies that haven’t been properly piloted and evaluated, and it’s very easy to get caught up in coolness and forget the nuts and bolts and the time it takes to develop and test something new.

6) When technologists and humanitarians work in silos. A clear example of this is the Crisis Camps that sprang up immediately after the Haiti earthquake in 2010. The outpouring of good will was phenomenal, and there were some positive results. The tech community got together to see how to help, which is a good thing. However, communication between the tech community and those working on the ground was not always conducive to developing tech solutions that were actually helpful. Here is an interesting overview by Ethan Zuckerman of some of the challenges Crisis Commons faced. I remember attending a Crisis Camp and feeling confused about why one guy was building an iPhone application for local communities to gather data. It was a cool application for sure, but from what people I knew on the ground were saying, most people in local communities in Haiti don’t have iPhones. With better coordination among the sectors, people could put their talents and expertise to real use rather than to busy work that makes them feel good.

7) When short attention spans give rise to vigilante development interventions. Because most of us in the West no longer have a full attention span (self included), we want bite-sized bits of information. But the reality of development is complicated, complex and deep. Social media has been heralded as a way to engage donors, supporters and youth; as a way to get people to help and to care. However, in most cases the story being told has not gotten any deeper or more realistic than the 30-second television commercials or Live Aid concerts that shaped perceptions of the developing world 25 years ago. The myth of the simple story and the simple solution propagates perhaps even further because of how quickly the message spreads. This gives rise to the public perception that aid organizations are just giant bureaucracies (kind of true) and that a simple person with a simple idea could just go in and fix things without so much hullabaloo (not the case most of the time). The quick-fix culture, supported and enhanced by social media, can be detrimental to the public’s patience with development, giving rise to apathy or to what I might call vigilante development interventions — whereby people in the West (cough, cough, Sean Penn) parachute into a developing country or disaster scene to take development into their own hands because they can’t understand why it’s not happening as fast as the media tells them it should.

8) When DIY disregards proven practice. In line with the above, there are serious concerns in the aid and development community about the ‘amateurization’ of humanitarian and development work. The Internet allows people to link and communicate globally very easily; anyone can throw up a website or a Facebook page and start a non-profit that way, regardless of their understanding of local dynamics or of the good development practices built through years of experience in this line of work. Many see criticism from development workers as a form of elitism rather than as a call for caution about meddling in other people’s lives or taking on work one may not be prepared for or understand well enough. The greater awareness and desire to use ‘social media for social good’ may be a positive thing, but it may also lead to good intentions gone awry and, again, a waste of time and resources for people in communities, or even harm. There’s probably no better example of this phenomenon than #1millionshirts, originally promoted by Mashable, and really a terrible idea. See Good Intents for discussion around this phenomenon and tools to help donors educate themselves.

9) When the goal is not development but brand building through social media. Cause campaigns have been all the rage for the past several years. They are seen as a way for for-profit companies and non-profits to join together for the greater good, and social media and new ICTs have helped this along by making cause campaigns cheap and easy to do. However, many ‘social media for social good’ efforts are simply bad development and can end up actually doing ‘social harm’. Perhaps a main reason for some of the bad ideas is that most social media cause campaigns are not actually designed to do social good. As Mashable says, through this type of campaign, ‘small businesses can gain exposure without breaking the bank, and large companies can reach millions of consumers in a matter of hours.’ When ‘social good’ goals are secondary to ‘exposure for my brand’ goals, I really question the benefits and contribution to development.

10) When new media increases voyeurism, sensationalism or risk. In their rush to be the most innovative or hard-hitting in the competition for scarce donor dollars, organizations sometimes expose communities to child protection risks or come up with cutesy or edgy social media ideas that invade and interrupt people’s lives; for example, ideas like putting a live web camera in a community so that donors can log on 24/7 and see what’s happening in a ‘real live community.’ (This reminds me a bit of the Procrastination Pit’s 8 Cutest and Weirdest Live Animal Cams). Or when opportunities for donors to chat with people in communities become gimmicks and interrupt people in communities from their daily lives and work. Even professional journalists sometimes engage in questionable new media practices that can endanger their sources or trivialize their stories. With the Internet, stories stick around a lot longer and travel a lot farther and reach their fingers back to where they started a lot more easily than they used to. Here I will suggest two cases: Nick Kristof’s naming and fully identifying a 9-year-old victim of rape in the DRC and @MacClelland’s ‘live tweeting’ for Mother Jones of a rape survivor’s visit to the doctor in Haiti.

Update: Feb 22, 2011 – adding a 10a!

10a) When new media and new technologies put human rights activists at risk of identification and persecution. New privacy and anonymity issues are coming up due to the increasing ubiquity of video in human rights documentation. This was clearly seen in the February 2011 uprisings in Egypt, Tunisia, Libya, Bahrain and elsewhere. From Sam Gregory’s excellent piece on privacy and anonymity in the digital age: “In the case of video (or photos), a largely unaddressed question arises. What about the rights to anonymity and privacy for those people who appear, intentionally or not, in visual recordings originating in sites of individual or mass human rights violations? Consider the persecution later faced by bystanders and people who stepped in to film or assist Neda Agha-Soltan as she lay dying during the election protests in Iran in 2009. People in video can be identified by old-fashioned investigative techniques, by crowd-sourcing (as with the Iran example noted above…) or by face detection/recognition software. The latter is now even built into consumer products like the Facebook Photos, thus exposing activists using Facebook to a layer of risk largely beyond their control.”

11) When ICTs and new media turn activism into slacktivism. Quoting Evgeny Morozov: “‘Slacktivism’ is the ideal type of activism for a lazy generation: why bother with sit-ins and the risk of arrest, police brutality, or torture if one can be as loud campaigning in the virtual space? Given the media’s fixation on all things digital — from blogging to social networking to Twitter — every click of your mouse is almost guaranteed to receive immediate media attention, as long as it’s geared towards the noble causes. That media attention doesn’t always translate into campaign effectiveness is only of secondary importance.” ‘Nuff said.

I’ll leave you with this kick-ass Le Tigre video, “Get Off the Internet”… knowing full well that I’m probably the first one who needs to take that advice.

‘It feels so 80s… or early 90s…  to be political… where are my friends? GET OFF THE INTERNET, I’ll meet you in the streets….’


Related posts on Wait… What?

3 ways to integrate ICTs into development work

5 ways ICTs can support the MDGs

7 (or more) questions to ask before adding ICTs

Amateurs, professionals, innovations and smart aid

I and C, then T

MDGs through a child rights lens


Read Full Post »