Posts Tagged ‘data’

Over the past few months, I’ve been working with CARE to develop a Responsible Data Maturity Model. This “RDMM” joins a growing set of tools (created by a wide variety of organizations) aimed at supporting organizations to move towards more responsible data management.

Responsible Data is a concept developed by the Responsible Data Forum. It outlines the collective duty to prioritize and respond to the ethical, legal, social and privacy-related challenges that come from using data. Responsible Data encompasses a variety of issues which are sometimes thought about separately, like data privacy and data protection, or ethical challenges. For any of these to be truly addressed, they need to be considered together.

CARE’s model identifies five levels of Responsible Data maturity:

  • Unaware: when an organization has not thought about Responsible Data much at all.
  • Ad-Hoc: when some staff or teams are raising the issue or doing something on their own, but there is no institutionalization of Responsible Data.
  • Developing: when there is some awareness, but the organization is only beginning to put policy, guidelines, procedures and governance in place.
  • Mastering: when the organization has its own house in order and is supporting its partners to do the same.
  • Leading: when the organization is looked to as a Responsible Data leader amongst its peers, setting an example of good practice, and influencing the wider field. Ideally an organization would be close to ‘mastering’ before placing itself in the ‘leading’ stage.

The main audience for the RDMM is the point person who is tasked with moving an organization or team forward to improve data practices and data ethics. The model can be adapted and used in ways that are appropriate for other team members who do not have Responsible Data as their main, day-to-day focus.

The RDMM has multiple other uses as well, for example:

  • As a diagnostic or baseline and planning tool for organizations to see where they are now, where they would like to be in 3 or 5 years, and where they need to put more support/resources.
  • As an audit framework for Responsible Data.
  • As a retroactive, after-action assessment or case study tool for looking at a particular program, seeing which Responsible Data elements were in place and contributed to good data practices, and then developing a case study to highlight good practices and gaps.
  • As an evaluation tool, comparing a baseline and end-line of organizational approaches to Responsible Data.
  • In workshops as a participatory self-assessment tool to 1) help people see that moving towards a more responsible data approach is incremental and 2) identify what a possible ideal state might look like. The tool can be adapted to what an organization sees as its ideal future state.
  • To help management understand and budget for a more responsible data approach.
  • With an adapted context, “persona,” or work stream approach that helps identify what Responsible Data maturity might look like for a particular project or program or for a particular role within a team or organization. For example, for headquarters versus for a country office, for the board versus for frontline implementers. It could also help organizations identify what parts of Responsible Data different positions or teams should be concerned with and accountable for.
  • As an investment roadmap for headquarters, leadership or donors to get a sense of the investment necessary to reach Responsible Data maturity.
  • As an iterative pathway to action, and a way to establish indicators or markers to mainstream Responsible Data throughout an organization.
  • In any other way you might think of! The RDMM is published with a Creative Commons License that allows you to modify and adapt it to suit your needs.

Over the past few months, we’ve tested the model with teams at headquarters, in country offices, in mixed teams of people from different offices in one organization, and with groups from different organizations. We asked them to go through the different areas of the model and self-assess which level they currently place themselves at and which level they would like to achieve within a set time frame, for example 3 or 5 years. Then we worked with them to develop action points that would allow them to arrive at the desired level.

Teams found the exercise useful because:

  • It allowed them to break Responsible Data into discrete pieces that could be assigned to different parts of an organization or different members of a team.
  • It helped to lay out indicators or “markers” related to Responsible Data that could be integrated throughout an organization.
  • It allowed both teams and management to see that Responsible Data is a marathon, not a sprint, and will require that multiple work streams be addressed over time with the involvement of different skill sets and different parts of the organization (strategy, operations and IT, legal, programs, M&E, innovations, HR, fundraising and partnerships, etc.).
  • It helped teams with limited resources to see how to make incremental steps forward without feeling pressured to make Responsible Data their only focus.

We hope others will find the RDMM useful as well! It’s published under a Creative Commons license, so feel free to use it and adapt it in ways that suit your needs.

We’re in the process of translating it into French and Spanish. We’d love to know if you use it, how, and if it is helpful to you! Please get in touch with me or Kelly Church at CARE for more information.

Download the Responsible Data Maturity Model as a Word file.

Download the Responsible Data Maturity Model as a PDF.

Read Full Post »

In the search for evidence of impact, donors and investors are asking that more and more data be generated by grantees and those they serve. Some of those driving this conversation talk about the “opportunity cost” of not collecting, opening and sharing as much data as possible. Yet we also need to talk about the real and tangible risks of data collection and sharing, and the long-term impacts of reduced data privacy and security rights, especially for the vulnerable individuals and groups with whom we work.

This week I’m at the Global Philanthropy Forum Conference in the heart of Silicon Valley speaking on a panel titled “Civil Liberties and Data Philanthropy: When NOT to Ask for More.” It’s often donor requests for innovation or for proof of impact that push implementers to collect more and more data. So donors and investors have a critical role to play in encouraging greater respect for and protection of the data of vulnerable individuals and groups. Philanthropists, grantees, and investees can all help to reduce these risks by bringing a values-based responsible data approach to their work.

Here are three suggestions for philanthropists on how to contribute to more responsible data management:

1) Enhance your own awareness and expertise on the potential benefits and harms associated with data. 

  • Adopt processes that take a closer look at the possible risks and harms of collecting and holding data and how to mitigate them. Ensure those aspects are reviewed and considered during investments and grant making.
  • Conduct risk-benefits-harms assessments early in the program design and/or grant decision-making processes. This type of assessment lays out the benefits of collecting and using data, identifies the data-related harms we might be enabling, and asks us to determine how we are intentionally mitigating harm during the design of our data collection, use and sharing. Importantly, this process also asks us to identify who is benefiting from data collection and who is taking on the burden of risk. It then aims to assess whether the benefits of having data outweigh the potential harms. Risks-benefits-harms assessments also help us to ensure we are doing a contextual assessment, which is important because every situation is different. When these assessments are done in a participatory way, they tend to be even more useful and accurate ways to reduce risks in data collection and management. (A simple structured template is sketched after this list.)
  • Hire people within your teams who can help provide technical support to grantees when needed in a friendly — not a punitive — way. Building in a ‘data responsibility by design’ approach can help with that. We need to think about the role of data during the early stages of design. What data is collected? Why? How? By and from whom? What are the potential benefits, risks, and harms of gathering, holding, using and sharing that data? How can we reduce the amount of data that we collect and mitigate potential harms?
  • Be careful with data on your grantees. If you are working with organizations who (because of the nature of their mission) are at risk themselves, it’s imperative that you protect their privacy and don’t expose them to harm by collecting too much data from them or about them. Here’s a good guide for human rights donors on protecting sensitive data.
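
As a concrete illustration, here is a minimal sketch of how a risks-benefits-harms assessment could be recorded so that unaddressed harms stay visible. The class, field names and example values are illustrative assumptions, not an established instrument:

```python
from dataclasses import dataclass, field

# Hypothetical structure for one risks-benefits-harms assessment entry.
# All names here are illustrative, not a standard framework.
@dataclass
class DataAssessment:
    data_item: str        # what data is collected
    purpose: str          # why it is collected
    beneficiaries: list   # who benefits from the data
    risk_bearers: list    # who carries the burden of risk
    mitigations: dict = field(default_factory=dict)  # harm -> mitigation ("" = none yet)

def unmitigated_harms(a: DataAssessment) -> list:
    """List harms that have no mitigation recorded yet."""
    return [harm for harm, fix in a.mitigations.items() if not fix]

assessment = DataAssessment(
    data_item="household income",
    purpose="targeting cash transfers",
    beneficiaries=["program team", "donor"],
    risk_bearers=["program participants"],
    mitigations={
        "re-identification if the dataset is shared": "de-identify before sharing",
        "exposure of income data in a breach": "",  # not yet addressed
    },
)
print(unmitigated_harms(assessment))
# -> ['exposure of income data in a breach']
```

Whether the benefits outweigh the harms remains a contextual, participatory judgment; a structure like this simply keeps the question, and any unanswered mitigations, in view.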

2) Use your power and influence to encourage grantees and investees to handle data more responsibly. If donors are going to push for more data collection, they should also be signaling to grantees and investees that responsible data management matters and encouraging them to think about it in proposals and more broadly in their work.

  • Strengthen grantee capacity as part of the process of raising data management standards. Lower-resourced organizations may not be able to meet higher data privacy requirements, so donors should think about how they can support rather than exclude smaller organizations with less capacity as we all work together to raise data management standards.
  • Invest holistically in both grants and grantees. This starts by understanding grantees’ operational, resource, and technical constraints as well as the real security risks posed to grantee staff, data collectors, and data subjects. For this to work, donors need to create genuinely safe spaces for grantees to voice their concerns and discuss constraints that may limit their ability to safely collect the data that donors are demanding.
  • Invest in grantees’ IT and other systems and provide operational funds that enable these systems to work. There is never enough funding for IT systems, and this puts the data of vulnerable people and groups at risk. One reason that organizations struggle to fund systems and improve data management is that they can’t bill overhead. Perverse incentives prevent investments in responsible data. Donors can work through this and help find solutions.
  • Don’t punish organizations that include budget for better data use, protection and security in their proposals. It takes money and staff and systems to manage data in secure ways. Yet stories abound in the sector about proposals that include these elements being rejected because they turn out to be more expensive. It’s critical to remember that safeguarding of all kinds takes resources!
  • Find out what kind of technical or systems support grantees/investees need to better uphold ethical data use and protection and explore ways that you can provide additional funds and resources to strengthen this area in those grantees and across the wider sector.
  • Remember that we are talking about long-term organizational behavior change. It is urgent to get moving on improving how we all handle data — but this will take some time. It’s not a quick fix because the skills are in short supply and high demand right now as a result of the GDPR and related laws that are emerging in other countries around the world.
  • Don’t ask grantees to collect data that might make vulnerable individuals or groups wary of them. Data is an extension of an individual. Trust in how an organization collects and manages an individual’s data leads to trust in an organization itself. Organizations need to be trusted in order to do our work, and collection of highly sensitive data, misuse of data or a data breach can really break that trust compact and reduce an organization’s impact.

3) Think about the responsibility you have for what you do, what you fund, and the type of society that we live in. Support awareness and compliance with new regulations and legislation that can protect privacy. Don’t use “innovation” as an excuse for putting historically marginalized individuals and groups at risk or for allowing our societies to advance in ways that only benefit the wealthiest. Question the current pathway of the “Fourth Industrial Revolution” and where it may take us.

I’m sure I’m leaving out some things. What do you think donors and the wider philanthropic community can do to enhance responsible data management and digital safeguarding?

Read Full Post »

The recently announced World Food Programme (WFP) partnership with Palantir, IRIN’s article about it, reactions from the Responsible Data Forum, and WFP’s resulting statement inspired us to pull together a Technology Salon in New York City to discuss the ethics of humanitarian data sharing.

(See this crowdsourced document for more background on the WFP-Palantir partnership and resources for thinking about the ethics of data sharing. Also here is an overview of WFP’s SCOPE system for beneficiary identification, management and tracking.)

Our lead discussants were: Laura Walker McDonald, Global Alliance for Humanitarian Innovation; Mark Latonero, Research Lead for Data & Human Rights, Data & Society; Nathaniel Raymond, Jackson Institute of Global Affairs, Yale University; and Kareem Elbayar, Partnerships Manager, Centre for Humanitarian Data at the United Nations Office for the Coordination of Humanitarian Affairs. We were graciously hosted by The Gov Lab.

What are the concerns about humanitarian data sharing and with Palantir?

Some of the initial concerns expressed by Salon participants about humanitarian data sharing included: data privacy and the permanence of data; biases in data leading to unwarranted conclusions and assumptions; loss of stakeholder engagement when humanitarians move to big data and techno-centric approaches; low awareness and poor practices across humanitarian organizations on data privacy and security; tensions between security of data and utility of data; validity and reliability of data; lack of clarity about the true purposes of data sharing; the practice of ‘ethics outsourcing’ (testing things in places where there is a perceived ‘lower ethical standard’ and less accountability); use of humanitarian data to target and harm aid recipients; disempowerment and extractive approaches to data; lack of checks and balances for safe and productive data sharing; difficulty of securing meaningful consent; and the links between data and surveillance by malicious actors, governments, private sector, military or intelligence agencies.

Palantir’s relationships and work with police, the CIA, ICE, the NSA, the US military and the wider intelligence community are among the main concerns about this partnership. Some ask whether a company can legitimately serve the philanthropy, development, social, human rights and humanitarian sectors while also serving the military and intelligence communities, and whether it is ethical for those in the former to engage in partnerships with companies who serve the latter. Others ask whether WFP and others who partner with Palantir are fully aware of the company’s background, and if so, why these partnerships have been able to pass through due diligence processes. Yet others wonder if a company like Palantir can be trusted, given its background.

Below is a summary of the key points of the discussion, which happened on February 28, 2019. (Technology Salons are Chatham House affairs, so I have not attributed quotes in this post.)

Why were we surprised by this partnership/type of partnership?

Our first discussant asked why this partnership was a surprise to many. He emphasized the importance of stakeholder conversations, transparency, and wider engagement in the lead-up to these kinds of partnerships. “And I don’t mean in order to warm critics up to the idea, but rather to create a safe and trusted ecosystem. Feedback and accountability are really key to this.” He also highlighted that humanitarian organizations are not experts in advanced technologies and that it’s normal for them to bring in experts in areas that are not their forte. However, we need to remember that tech companies are not experts in humanitarian work and put the proper checks and balances in place. Bringing in a range of multidisciplinary expertise and distributed intelligence is necessary in a complex information environment. One possible approach is creating technology advisory boards. Another way to ensure more transparency and accountability is to conduct a human rights impact assessment. The next year will be a major test for these kinds of partnerships, given the growing concerns, he said.

One Salon participant said that the fact that the humanitarian sector engages in partnerships with the private sector is not a surprise at all, as the sector has worked through Public-Private Partnerships (PPPs) for several years now and they can bring huge value. The surprise is that WFP chose Palantir as the partner. “They are not the only option, so why pick them?” Another person shared that the WFP partnership went through a full legal review, and so it was not a surprise to everyone. However, communication around the partnership was not well planned or thought out and the process was not transparent and open. Others pointed out that although a legal review covers some bases, it does not assess the potential negative social impact or risk to ‘beneficiaries.’ For some the biggest surprise was WFP’s own surprise at the pushback on this particular partnership and its unsatisfactory reaction to the concerns raised about it. The response from responsible data advocates and the press attention to the WFP-Palantir partnership might be a turning point for the sector to encourage more awareness of the risks in working with certain types of companies. As many noted, this is not only a problem for WFP, it’s something that plagues the wider sector and needs to be addressed urgently.

Organizations need to think beyond reputational harm and consider harm to beneficiaries

“We spend too much time focusing on avoiding risk to institutions and too little time thinking about how to mitigate risk to beneficiaries,” said one person. WFP, for example, has some of the best policies and procedures out there, yet this partnership still passed their internal test. That is a scary thought, because it implies that other agencies with weaker policies might be agreeing to even riskier partnerships. Are these policies and risk assessments, then, covering all the different types of risk that need consideration? Many at the Salon felt that due diligence and partnership policies focus almost exclusively on organizational and reputational risk with very little attention to the risk that vulnerable populations might face. It’s not just a question of having policies, however, said one person. “Look at the Oxfam safeguarding situation. Oxfam had some of the best safeguarding policies, yet there were egregious violations that were not addressed by having a policy. It’s a question of power and how decisions get made, and where decision-making power lies and who is involved and listened to.” (Note: one person contacted me pre-Salon to say that there was pushback by WFP country-level representatives about the Palantir partnership, but that it still went ahead. This brings up the same issue of decision-making power: who has the power to decide on these partnerships, and why are voices from the frontlines not being heard? Additionally, are those whose data is captured and put into these large data systems ever consulted about what they think?)

Organizations need to assess wider implications, risks, and unintended negative consequences

It’s not only WFP that is putting information into SCOPE, said one person. “Food insecure people have no choice about whether to provide their data if they wish to receive food.” Thus, the question of truly ‘informed consent’ arises. Implementing partners don’t have a lot of choice either, he said. “Implementing agencies are forced to input beneficiary data into SCOPE if they want to work in particular zones or countries.” This means that WFP’s systems and partnerships have an impact on the entire humanitarian community, and therefore these partnerships and systems need to be consulted on more broadly with the wider sector. The optical and reputational impact on organizations aside from WFP is significant, as they may disagree with the Palantir partnership but are now associated with it by default. This type of harm goes beyond the fear of exploitation of the data in WFP’s “data lake.” It becomes a risk to personnel on the ground who are then seen as collaborating with a CIA contractor by putting beneficiary biometric data into SCOPE. This can also deter food-insecure people from accessing benefits. Additionally, association with the CIA or US military has led to humanitarian agencies and workers being targeted, attacked and killed. That is all in addition to the question of whether these kinds of partnerships violate humanitarian principles, such as that of impartiality.

“It’s critical to understand the role of rumor in humanitarian contexts,” said one discussant. “Affected populations are trying to figure out what is happening and there is often a lot of rumor going around.”  So, if Palantir has a reputation for giving data to the CIA, people may hear about that and then be afraid to access services for fear of having their data given to the CIA. This can lead to retaliation against humanitarians and humanitarian organizations and escalate their risk of operating. Risk assessments need to go beyond the typical areas of reputation or financial risk. We also need to think about how these partnerships can affect humanitarian access and community trust and how rumors can have wide ripple effects.

The whole sector needs to put better due diligence systems in place. As it is now, noted one person, often it’s someone who doesn’t know much about data who writes up a short summary of the partnership, and there is limited review. “We’ve been struggling for 10 years to get our offices to use data. Now we’re in a situation where they’re just picking up a bunch of data and handing it over to private companies.”

UN immunities and privileges lead to a lack of accountability

The fact that UN agencies have immunities and privileges means that laws such as the EU’s General Data Protection Regulation (GDPR) do not apply to them and they are left to self-regulate. Additionally, there is no common agreement among UN agencies on how GDPR applies, and each UN agency interprets it on its own. As one person noted, “There is a troubling sense of exceptionalism and lack of accountability in some of these agencies because ‘a beneficiary cannot take me to court.’” An interesting point, however, is that while UN agencies are immune, those contracted as their data processors are not immune — so data processors beware!

Demographically Identifiable Information (DII) can lead to serious group harm

The WFP has stated that personally identifiable information (PII) is not technically accessible to Palantir via this partnership. However, some at the Salon consider that the WFP failed in their statement about the partnership when they used the absence of PII as a defense. Demographically Identifiable Information (DII) and the activity patterns that are visible even in commodity data can be extrapolated as training data for future data modeling. “This is prospective modeling of action-based intelligence patterns as part of multiple screeners of intel,” said one discussant. He went on to explain that privacy discussions have moved from centering on property rights in the 19th Century, to individual rights in the 20th Century, to group rights in the 21st Century. We can use existing laws to emphasize protection of groups and to highlight the risks of DII leading to group harm, he said, as there are well-known cases that exemplify the notion of group harms (Plessy v Ferguson, Brown v Board of Education). Even in logistics data (which is the kind of data that WFP says Palantir will access) that contains no PII, it’s very simple to identify groups. “I can look at supply chain information and tell you where there are lactating mothers. If you don’t want refugees to give birth in the country they have arrived to, this information can be used for targeting.”
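
To make the discussant’s point concrete, here is a minimal sketch showing how a group can be located from commodity data alone. The shipment records and commodity names are entirely invented:

```python
# Sketch: logistics data with no PII can still reveal where a
# vulnerable group is located (Demographically Identifiable Information).
# These shipment records are made up for illustration.
shipments = [
    {"site": "Camp A", "commodity": "supplementary food for pregnant/lactating women"},
    {"site": "Camp A", "commodity": "cereal"},
    {"site": "Camp B", "commodity": "cereal"},
]

# One filter over supply-chain rows locates the group in question.
sites_with_lactating_mothers = {
    row["site"]
    for row in shipments
    if "lactating" in row["commodity"]
}
print(sites_with_lactating_mothers)  # -> {'Camp A'}
```

No individual is named anywhere in the data, yet the group-level inference, and the targeting risk that comes with it, falls out of a three-line filter.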

Many in the sector do not trust a company like Palantir

Though it is not clear who was in the room when WFP made the decision to partner with Palantir, the overall sector has concerns that the people making these decisions are not assessing partnerships from all angles: legal, privacy, programmatic, ethical, data use and management, social, protection, etc. Technologists and humanitarian practitioners are often not included in making these decisions, said one participant. “It’s the people with MBAs. They trust a tech company to say ‘this is secure’ but they don’t have the expertise to actually know that. Not to mention that yes, something might be secure, but maybe it’s not ethical. Senior people are signing off without having a full view. We need a range of skill sets reviewing these kinds of partnerships and investments.”

Another question arises: What happens when there is scope creep? Is Palantir in essence “grooming” the sector to then abuse data it accesses once it’s trusted and “allowed in”? Others pointed out that the grooming has already happened and Palantir is already on the inside. They first began partnering with the sector via the Clinton Global Initiative meetings back in 2013 and they are very active at World Economic Forum meetings. “This is not something coming out of the Trump administration, it was happening long before that,” said one person, and the company is already “in.” Another person said “Palantir lobbied their way into this, and they’ve gotten past the point of reputational challenge.” Palantir has approached many humanitarian agencies, including all the UN agencies, added a third person. Now that they have secured this contract with the WFP, the door to future work with a lot of other agencies is open and this is very concerning.

We’re in a new political economy: data brokerage

“Humanitarians have lost their Geneva values and embraced Silicon Valley values,” said one discussant. They are becoming data brokers within a colonial data paradigm. “We are making decisions in hierarchies of power, often extralegally,” he said. “We make decisions about other people’s data without their involvement, and we need to be asking: is it humanitarian to commodify the data of beneficiaries for monetary gain or other reasons of value? When is it ethical to trade beneficiary data for something of value?” Another raised the issue of incentives. “Where are the incentives stacked? There is no incentive to treat beneficiaries better. All the incentives are on efficiency and scale and attracting donors.”

Can this example push the wider sector to do better?

One participant hoped there could be a net gain out of the WFP-Palantir case. “It’s a bad situation. But it’s a reckoning for the whole space. Most agencies don’t have these checks and balances in place. But people are waking up to it in a serious way. There’s an opportunity to step into. It’s hard inside of bureaucratic organizations, but it’s definitely an opportunity to start doing better.”

Another said that we need more transparency across the sector on these partnerships. “What is our process for evaluating something like this? Let’s just be transparent. We need to get these data partnership policies into the open. WFP could have simply said ‘here is our process’. But they didn’t. We should be working with an open and transparent model.” Overall, there is a serious lack of clarity on what data sharing agreements look like across the sector. One person attending the Salon said that their organization has been trying to understand current practice with regard to data sharing, and it’s been very difficult to get any examples, even redacted ones.

What needs to happen? 

In closing we discussed what needs to happen next. One person noted that in her research on Responsible Data, she found a total lack of capacity in terms of technology at non-profit organizations. “It’s the Economist Syndrome. Someone’s boss reads something on the bus and decides they need a blockchain,” someone quipped. In terms of responsible data approaches, research shows that organizations are completely overwhelmed. “They are keeping silent about their low capacity out of fear they will face consequences,” said one person, “and with GDPR, even more so”. At the wider level, we are still focusing on PII as the issue without considering DII and group rights, and this is a mistake, said another.

Organizations have very low capacity, and we are siloed. “Program officers do not have tech capacity. Tech people are kept in offices or ‘labs’ on their own and there is not a lot of porosity. We need protection advisors, lawyers, digital safety advisors, data protection officers, information management specialists, IT all around the table for this,” noted one discussant. Also, she said, though we do need principles and standards, it’s important that organizations adapt these so that they are their own principles and standards. “We need to adapt these boilerplate standards to our organizations. This has to happen based on our own organizational values. Not everyone is rights-based, not everyone is humanitarian.” So organizations need to take the time to review and adapt standards, policies and procedures to their own vision and mission and to their own situations, contexts and operations, and to generate awareness and buy-in. In conclusion, she said, “if you are not being responsible with data, you are already violating your existing values and codes. Responsible Data is already in your values, it’s a question of living it.”

Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

Read Full Post »

(I’ve been blogging a little bit over at MERLTech.org. Here’s a repost.)

It can be overwhelming to get your head around all the different kinds of data and the various approaches to collecting or finding data for development and humanitarian monitoring, evaluation, research and learning (MERL).

Though there are many ways of categorizing data, lately I find myself conceptually organizing data streams into four general buckets when thinking about MERL in the aid and development space:

  1. ‘Traditional’ data. How we’ve been doing things for(pretty much)ever. Researchers, evaluators and/or enumerators are in relative control of the process. They design a specific questionnaire or a data gathering process and go out and collect qualitative or quantitative data; they send out a survey and request feedback; they do focus group discussions or interviews; or they collect data on paper and eventually digitize the data for analysis and decision-making. Increasingly, we’re using digital tools for all of these processes, but they are still quite traditional approaches (and there is nothing wrong with traditional!).
  2. ‘Found’ data. The Internet, digital data and open data have made it much easier to find, share, and re-use datasets collected by others, whether internally in our own organizations, with partners or just in general. These tend to be datasets collected in traditional ways, such as government or agency data sets. Where datasets are digitized, properly described, have clear provenance, were collected with consent for use/re-use, and have been carefully de-identified, they can eliminate the need to collect the same data over again. Data hubs are springing up that aim to collect and organize these data sets to make them easier to find and use.
  3. ‘Seamless’ data. Development and humanitarian agencies are increasingly using digital applications and platforms in their work — whether bespoke or commercially available ones. Data generated by users of these platforms can provide insights that help answer specific questions about their behaviors, and the data is not limited to quantitative data. This data is normally used to improve applications and platform experiences, interfaces, content, etc., but it can also provide clues into a host of other online and offline behaviors, including knowledge, attitudes, and practices. One cautionary note is that because this data is collected seamlessly, users of these tools and platforms may not realize that they are generating data or understand the degree to which their behaviors are being tracked and used for MERL purposes (even if they’ve checked “I agree” to the terms and conditions). This has big implications for privacy that organizations should think about, especially as new regulations are being developed, such as the EU’s General Data Protection Regulation (GDPR). The commercial sector is great at this type of data analysis, but the development sector is only just starting to get more sophisticated at it.
  4. ‘Big’ data. In addition to data generated ‘seamlessly’ by platforms and applications, there are also ‘big data’ and data that exist on the Internet that can be ‘harvested’ if one only knows how (see the sketch after this list). The term ‘big data’ describes the application of analytical techniques to search, aggregate, and cross-reference large data sets in order to develop intelligence and insights. (See this post for a good overview of big data and some of the associated challenges and concerns). Data harvesting is a term used for the process of finding and turning ‘unstructured’ content (message boards, a webpage, a PDF file, Tweets, videos, comments) into ‘semi-structured’ data so that it can then be analyzed. (Estimates are that 90 percent of the data on the Internet exists as unstructured content). Currently, big data seems to be more apt for predictive modeling than for looking backward at how well a program performed or what impact it had. Development and humanitarian organizations (self included) are only just starting to better understand concepts around big data and how it might be used for MERL. (This is a useful primer).
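
As a concrete (if toy) illustration of harvesting, here is a minimal Python sketch that turns a couple of made-up free-text posts into semi-structured rows. The sample messages, extracted fields and output file are all illustrative assumptions:

```python
import csv
import re

# Made-up unstructured content; in practice this might come from
# message boards, tweets or scraped web pages.
messages = [
    "Clinic in Kisumu closed 2024-03-01, no vaccines available #health",
    "Borehole repaired 2024-03-04 thanks to community group #water",
]

rows = []
for text in messages:
    date = re.search(r"\d{4}-\d{2}-\d{2}", text)  # pull out a date, if any
    tags = re.findall(r"#(\w+)", text)            # pull out hashtags as topics
    rows.append({
        "date": date.group(0) if date else "",
        "topics": ";".join(tags),
        "text": text,
    })

# Write the semi-structured result so it can be analyzed like any dataset.
with open("harvested.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["date", "topics", "text"])
    writer.writeheader()
    writer.writerows(rows)
```

Real harvesting pipelines are messier (scraping, language detection, deduplication), but the core move is the same: impose just enough structure on free-form content that standard analysis tools can work with it.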

Thinking about these four buckets of data can help MERL practitioners to identify data sources and how they might complement one another in a MERL plan. Categorizing them as such can also help to map out how the different kinds of data will be responsibly collected/found/harvested, stored, shared, used, and maintained/retained/destroyed. Each type of data also has certain implications in terms of privacy, consent and use/re-use and how it is stored and protected. Planning for the use of different data sources and types can also help organizations choose the data management systems needed and identify the resources, capacities and skill sets required (or needing to be acquired) for modern MERL.

Organizations and evaluators are increasingly comfortable using mobile phones and/or tablets to do traditional data gathering, but they often are not using ‘found’ datasets. This may be because these datasets are not very ‘find-able,’ because organizations are not creating them, re-using data is not a common practice for them, the data are of questionable quality/integrity, there are no descriptors, or a variety of other reasons.

The use of ‘seamless’ data is something that development and humanitarian agencies might want to get better at. Even though large swaths of the populations that we work with are not yet online, this is changing. And if we are using digital tools and applications in our work, we shouldn’t let that data go to waste if it can help us improve our services or better understand the impact and value of the programs we are implementing. (At the very least, we had better understand what seamless data the tools, applications and platforms we’re using are collecting so that we can manage data privacy and security of our users and ensure they are not being violated by third parties!)

Big data is also new to the development sector, and there may be good reason it is not yet widely used. Many of the populations we are working with are not producing much data — though this is also changing as digital financial services and mobile phone use have become almost universal and the use of smart phones is on the rise. Normally organizations require new knowledge, skills, partnerships and tools to access and use existing big data sets or to do any data harvesting. Some say that big data along with ‘seamless’ data will one day replace our current form of MERL. As artificial intelligence and machine learning advance, who knows… (and it’s not only MERL practitioners who will be out of a job, but that’s a conversation for another time!)

Not every organization needs to be using all four of these kinds of data, but we should at least be aware that they are out there and consider whether they are of use to our MERL efforts, depending on what our programs look like, who we are working with, and what kind of MERL we are tasked with.

I’m curious how other people conceptualize their buckets of data, and where I’ve missed something or defined these buckets erroneously…. Thoughts?

Read Full Post »

This post is co-authored by Emily Tomkys, Oxfam GB; Danna Ingleton, Amnesty International; and me (Linda Raftree, Independent)

At the MERL Tech conference in DC this month, we ran a breakout session on rethinking consent in the digital age. Most INGOs have not updated their consent forms and policies for many years, yet the growing use of technology in our work, for many different purposes, raises many questions and insecurities that are difficult to address. Our old ways of requesting and managing consent need to be modernized to meet the new realities of digital data and its changing nature. Is informed consent even possible when data is digital and/or opened? Do we have any way of controlling what happens with that data once it is digital? How often are organizations violating national and global data privacy laws? Can technology be part of the answer?

Let’s take a moment to clarify what kind of consent we are talking about in this post. Being clear on this point is important because there are many synchronous conversations on consent in relation to technology. For example, there are people exploring the use of consent frameworks or rhetoric in ICT user agreements – asking whether signing such user agreements can really be considered consent. There are others exploring the issue of consent for content distribution online, in particular personal or sensitive content such as private videos and photographs. And while these (and other) consent debates are related and important to this post, what we are specifically talking about is how we, our organizations and projects, address the issue of consent when we are collecting and using data from those who participate in programs or monitoring, evaluation, research and learning (MERL) that we are implementing.

No matter how someone is engaging with data, how they do so and the decisions they make will impact on what is disclosed to the data subject.

This is as timely as ever because introducing new technologies and kinds of data means we need to change how we build consent into project planning and implementation. In fact, it gives us an amazing opportunity to build consent into our projects in ways that our organizations may not have considered in the past. While it used to be that informed consent was the domain of frontline research staff, the reality is that getting informed consent – where there is disclosure, voluntariness, comprehension and competence of the data subject – is the responsibility of anyone ‘touching’ the data.

Here we share examples from two organizations who have been exploring consent issues in their tech work.

Over the past two years, Girl Effect has been incorporating a number of mobile and digital tools into its programs. These include both the Girl Effect Mobile (GEM) and the Technology Enabled Girl Ambassadors (TEGA) programs.

Girl Effect Mobile is a global digital platform that is active in 49 countries and 26 languages. It is being developed in partnership with Facebook’s Free Basics initiative. GEM aims to provide a platform that connects girls to vital information, entertaining content, and each other. Girl Effect’s digital privacy, safety and security policy directs the organization to review and revise its terms and conditions to ensure that they are ‘girl-friendly’ and respond to local context and realities, and that in addition to protecting the organization (as many T&Cs are designed to do), they also protect girls and their rights. The GEM terms and conditions were initially a standard T&C. They were too long to expect girls to read them on a mobile, the language was legalese, and they seemed one-sided. So the organization developed a new T&C with simplified language and removed some of the legal clauses that were irrelevant to the various contexts in which GEM operates. Consent language was added to cover polls and surveys, since Girl Effect uses the platform to conduct research and for its monitoring, evaluation and learning work. In addition, summary points are highlighted in a shorter version of the T&Cs with a link to the full T&Cs. Girl Effect also develops short articles about online safety, privacy and consent as part of the GEM content as a way of engaging girls with these ideas as well.

TEGA is a girl-operated mobile-enabled research tool currently operating in Northern Nigeria. It uses data-collection techniques and mobile technology to teach girls aged 18-24 how to collect meaningful, honest data about their world in real time. TEGA provides Girl Effect and partners with authentic peer-to-peer insights to inform their work. Because Girl Effect was concerned that girls being interviewed may not understand the consent they were providing during the research process, they used the mobile platform to expand on the consent process. They added a feature where the TEGA girl researchers play an audio clip that explains the consent process. Afterwards, girls who are being interviewed answer multiple choice follow up questions to show whether they have understood what they have agreed to. (Note: The TEGA team report that they have incorporated additional consent features into TEGA based on examples and questions shared in our session).

Oxfam, in addition to developing its Responsible Program Data Policy, has been exploring ways in which technology can help address contemporary consent challenges. The organization had doubts about how much its informed consent statement (which explains who the organization is, what the research is about and why Oxfam is collecting data, and asks whether the participant is willing to be interviewed) was understood, and whether informed consent is really possible in the digital age. All the same, the organization wanted to be sure that the consent information was being read out in full by enumerators (the interviewers). There were questions about what the variation might be on this between enumerators as well as in different contexts and countries of operation. To explore whether communities were hearing the consent statement fully, Oxfam is using mobile data collection with audio recordings in the local language and using speed violations to know whether the time spent on the consent page is sufficient, according to the length of the audio file played. This is by no means foolproof, but what Oxfam has found so far is that the audio file is often not played in full or not at all.
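
As a rough illustration of this kind of check, the sketch below flags interviews where the consent screen was open for less time than the consent audio itself lasts. The column names, audio length and file name are hypothetical assumptions, not Oxfam’s actual setup:

```python
import csv

# Assumed length of the recorded consent statement, in seconds.
AUDIO_LENGTH_SECONDS = 95

def flag_speed_violations(path):
    """Return IDs of interviews where time on the consent screen was
    shorter than the consent audio itself (a 'speed violation')."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # consent_screen_seconds and interview_id are hypothetical
            # column names in the exported survey data.
            if float(row["consent_screen_seconds"]) < AUDIO_LENGTH_SECONDS:
                flagged.append(row["interview_id"])
    return flagged

print(flag_speed_violations("interviews.csv"))
```

A check like this cannot prove the statement was understood, only that it could not have been played in full, which matches the caveat above that the approach is by no means foolproof.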

Efforts like these are only the beginning, but they help to develop a resource base and stimulate more conversations that can help organizations and specific projects think through consent in the digital age.

Additional resources include this framework for Consent Policies developed at a Responsible Data Forum gathering.

Because of how quickly technology and data use is changing, one idea that was shared was that rather than using informed consent frameworks, organizations may want to consider defining and meeting a ‘duty of care’ around the use of the data they collect. This can be somewhat accomplished through the creation of organizational-level Responsible Data Policies. There are also interesting initiatives exploring new ways of enabling communities to define consent themselves – like this data licenses prototype.

The development and humanitarian sectors really need to take notice, adapt and update their thinking constantly to keep up with technology shifts. We should also be doing more sharing about these experiences. By working together on these types of wicked challenges, we can advance without duplicating our efforts.

Read Full Post »

This post was written with input from Maliha Khan, Independent Consultant; Emily Tomkys, Oxfam GB; Siobhan Green, Sonjara; and Zara Rahman, The Engine Room.

A friend reminded me earlier this month at the MERL Tech Conference that a few years ago when we brought up the need for greater attention to privacy, security and ethics when using ICTs and digital data in humanitarian and development contexts, people pointed us to Tor, encryption and specialized apps. “No, no, that’s not what we mean!” we kept saying. “This is bigger. It needs to be holistic. It’s not just more tools and tech.”

So, even if as a sector we are still struggling to understand and address all the different elements of what’s now referred to as “Responsible Data” (thanks to the great work of the Engine Room and key partners), at least we’ve come a long way towards framing and defining the areas we need to tackle. We understand the increasing urgency of the issue: the volume of data in the world is growing exponentially, and the data in our sector is becoming more and more digitalized.

This year’s MERL Tech included several sessions on Responsible Data, including Responsible Data Policies, the Human Element of the Data Cycle, The Changing Nature of Informed Consent, Remote Monitoring in Fragile Environments and plenary talks that mentioned ethics, privacy and consent as integral pieces of any MERL Tech effort.

The session on Responsible Data Policies was a space to share with participants why, how, and what policies some organizations have put in place in an attempt to be more responsible. The presenters spoke about the different elements and processes their organizations have followed, and the reasoning behind the creation of these policies. They spoke about early results from the policies, though it is still early days when it comes to implementing them.

What do we mean by Responsible Data?

Responsible data is about more than just privacy or encryption. It’s a wider concept that includes attention to the data cycle at every step, and puts the rights of people reflected in the data first:

  • Clear planning and purposeful collection and use of data with the aim of improving humanitarian and development approaches and results for those we work with and for
  • Responsible treatment of the data and respectful and ethical engagement with people we collect data from, including privacy and security of data and careful attention to consent processes and/or duty of care
  • Clarity on data sharing – what data, from whom and with whom and under what circumstances and conditions
  • Attention to transparency and accountability efforts in all directions (upwards, downwards and horizontally)
  • Responsible maintenance, retention or destruction of data.

Existing documentation and areas to explore

There is a huge bucket of concepts, frameworks, laws and policies that already exist in various other sectors and that can be used, adapted and built on to develop responsible approaches to data in development and humanitarian work. Some of these are in conflict with one another, however, and those conflicts need to be worked out or at least recognized if we are to move forward as a sector and/or in our own organizations.

Some areas to explore when developing a Responsible Data policy include:

  • An organization’s existing policies and practices (IT and equipment; downloading; storing of official information; confidentiality; monitoring, evaluation and research; data collection and storage for program administration, finance and audit purposes; consent and storage for digital images and communications; social media policies).
  • Local and global laws that relate to collection, storage, use and destruction of data, such as: Freedom of information acts (FOIA); consumer protection laws; data storage and transfer regulations; laws related to data collection from minors; privacy regulations such as the latest from the EU.
  • Donor grant requirements related to data privacy and open data, such as USAID’s ADS Chapter 579 or International Aid Transparency Initiative (IATI) stipulations.

Experiences with Responsible Data Policies

At the MERL Tech Responsible Data Policy session, organizers and participants shared their experiences. The first step for everyone developing a policy was establishing wide agreement and buy-in for why their organizations should care about Responsible Data. This was done by developing Values and Principles that form the foundation for policies and guidance.

Oxfam’s Responsible Data policy has a focus on rights, since Oxfam is a rights-based organization. The organization’s existing values made it clear that ethical use and treatment of data was something the organization must consider to hold true to its ethos. It took around six months to get all of the global affiliates to agree on the Responsible Program Data policy, a quick turnaround compared to other globally agreed documents, because all the global executive directors recognized that this policy was critical. A core point for Oxfam was the belief that digital identities and access will become increasingly important for inclusion in the future, and so the organization did not want to stand in the way of people being counted and heard. However, it wanted to be sure that this was done in a way that balanced inclusion with privacy and security considerations.

The policy is a short document that is now in the process of operationalization in all the countries where Oxfam works. Because many of Oxfam’s affiliate headquarters reside in the European Union, it needs to consider the new EU regulations on data, which are extremely strict, for example, providing everyone with an option for withdrawing consent. This poses a challenge for development agencies, who normally do not have the type of detailed databases on ‘beneficiaries’ that they have on private donors. Shifting thinking about ‘beneficiaries’ and treating them more as clients may be in order as one result of these new regulations. As Oxfam moves into implementation, challenges continue to arise. For example, data protection in Yemen is different than data protection in Haiti. Knowing all the national level laws and frameworks and mapping these out alongside donor requirements and internal policies is extremely complicated, and providing guidance to country staff is difficult given that each country has different laws.

Girl Effect’s policy has a focus on privacy, security and safety of adolescent girls, who are the core constituency of the organization. The policy became clearly necessary because although the organization had a strong girl safeguarding policy and practice, the effect of digital data had not previously been considered, and the number of programs that involve digital tools and data is increasing. The Girl Effect policy currently has four core chapters: privacy and security during design of a tool, service or platform; content considerations; partner vetting; and MEAL considerations. Girl Effect looks at not only the privacy and security elements, but also aims to spur thinking about potential risks and unintended consequences for girls who access and use digital tools, platforms and content. One core goal is to stimulate implementers to think through a series of questions that help them to identify risks. Another is to establish accountability for decisions around digital data.

The policy has been in process of implementation with one team for a year and will be updated and adapted as the organization learns. It has proven to have good uptake so far from team members and partners, and has become core to how the teams and the wider organization think about digital programming. Cost and time for implementation increase with the incorporation of stricter policies, however, and it is challenging to find a good balance between privacy and security, the ability to safely collect and use data to adapt and improve tools and platforms, and user friendliness/ease of use.

Catholic Relief Services has an existing set of eight organizational principles: Sacredness and Dignity of the Human Person; Rights and Responsibilities; Social Nature of Humanity; The Common Good; Subsidiarity; Solidarity; Option for the Poor; and Stewardship. It was a natural fit to see how these values that are already embedded in the organization could extend to the idea of Responsible Data. Data is an extension of the human person; therefore, it should be afforded the same respect as the individual. The principle of the common good easily extends to responsible data sharing. The notion of subsidiarity says that decision-making should happen as close as possible to the place where the impact of the decision will be the strongest, and this is nicely linked with the idea of sharing data back with communities where CRS works and engaging them in decision-making. The option for the poor urges CRS to place a preferential value on the privacy, security and safety of the data of the poor over the data demands of other entities.

The organization is at the initial phase of creating its Responsible Data Policy. The process includes the development of the values and principles, two country learning visits to understand the practices of country programs and their concerns about data, development of the policy, and a set of guidelines to support staff in following the policy.

USAID recently embarked on its process of developing practical Responsible Data guidance to pair with its efforts in the area of open data. (See ADS 579). More information will be available soon on this initiative.

Where are we now?

Though several organizations are moving towards the development of policies and guidelines, it was clear from the session that uncertainties are the order of the day, as Responsible Data is an ethical question, often relying on tradeoffs and decisions that are not hard and fast. Policies and guidelines generally aim to help implementers ask the right questions, sort through a range of possibilities and weigh potential risks and benefits.

Another critical aspect that was raised at the MERL Tech session was the financial and staff resources that can be required to be responsible about data. On the other hand, for those organizations receiving funds from the European Union or residing in the EU or the UK (where despite Brexit, organizations will likely need to comply with EU Privacy Regulations), the new regulations mean that NOT being responsible about data may result in hefty fines and potential legal action.

Going from policy to implementation is a challenge that involves both capacity strengthening in this new area as well as behavior change and a better understanding of emerging concepts and multiple legal frameworks. The nuances by country, organization and donor make the process difficult to get a handle on.

Because staff and management are already overburdened, the trick to developing and implementing Responsible Data Policies and Practice will be finding ways to strengthen staff capacity and to provide guidance in ways that do not feel overwhelmingly complex. Though each situation will be different, finding ongoing ways to share resources and experiences so that we can advance as a sector will be one key step for moving forward.

Read Full Post »

Crowdsourcing our Responsible Data questions, challenges and lessons. (Photo by Amy O’Donnell).

At Catholic Relief Services’ ICT4D Conference in May 2016, I worked with Amy O’Donnell (Oxfam GB) and Paul Perrin (CRS) to facilitate a participatory session that explored notions of Digital Privacy, Security and Safety. We had a full room, with a widely varied set of experiences and expertise.

The session kicked off with stories of privacy and security breaches. One person told of having personal data stolen when a federal government clearance database was compromised. We also discussed how a researcher in Denmark scraped very personal data from the OkCupid online dating site and opened it up to the public.

A comparison was made between the OK Cupid data situation and the work that we do as development professionals. When we collect very personal information from program participants, they may not expect that their household level income, health data or personal habits would be ‘opened’ at some point.

Our first task was to explore and compare the meaning of the terms: Privacy, Security and Safety as they relate to “digital” and “development.”

What do we mean by privacy?

The “privacy” group talked quite a bit about the contextuality of data ownership. They noted that some aspects of privacy cut across different groups of people in different societies, while others may be culturally specific. Privacy is concerned with ownership of data and protection of one’s information, they said: who owns data, who collects and protects it, and to whom it belongs. Private information is that which may be known by some but not by all. Privacy is also a temporal notion: private information should be protected indefinitely over time. In addition, privacy is constantly changing. Because we are using data on our mobile phones, said one person, “Safaricom knows we are all in this same space, but we don’t know that they know.”

Another said that in today’s world, “You assume others can’t know something about you, but things are actually known about you that you don’t even know that others can know. There are some facts about you that you don’t think anyone should know or be able to know, but they do.” The group mentioned website terms and conditions, corporate ownership of personal data, and the lack of control individuals now have over their own privacy. Some felt that we are unable to maintain our privacy today, whereas others felt that one could opt out of social media and other technologies to remain in control of it. The group noted that “privacy is about the appropriate use of data for its intended purpose. If that purpose shifts and I haven’t consented, then it’s a violation of privacy.”

What do we mean by security?

The Security group considered security to relate to an individual’s information. “It’s your information, and security of it means that what you’re doing is protected, confidential, and access is only for authorized users.” Security was also related to where a person’s information is hosted and the legal parameters around it. Other aspects were related to “a barrier – an anti-virus program or some kind of encryption software, something that protects you from harm…. It’s about setting roles and permissions on software and installing firewalls, role-based permissions for accessing data, and cloud security of individuals’ data.” A broader aspect of security was linked to the effects of hacking that lead to offline vulnerability, to a lack of emotional security, or to feeling intimidated in an online space. Lastly, the group noted that “we, not the systems, are the weakest link in security – what we click on, what we view, what we’ve done. We are our own worst enemies in terms of keeping ourselves and our data secure.”
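
To make the group’s point about roles and permissions concrete, here is a minimal sketch of role-based access to a single record, written in Python. Everything in it (the role names, the fields, the record itself) is hypothetical and invented for illustration; real systems enforce the same idea at the database or application layer.

    # A minimal, hypothetical sketch of role-based access to a data record.
    # The role names and fields are invented purely for illustration.
    ROLE_PERMISSIONS = {
        "enumerator": {"household_id"},
        "analyst": {"household_id", "income", "health"},
        "admin": {"household_id", "income", "health", "name"},
    }

    def read_record(record, role):
        """Return only the fields this role is authorized to see."""
        allowed = ROLE_PERMISSIONS.get(role, set())
        return {field: value for field, value in record.items() if field in allowed}

    record = {"household_id": 42, "name": "A. Person", "income": 1200, "health": "ok"}
    print(read_record(record, "analyst"))  # the 'name' field is withheld from analysts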

What do we mean by safety?

The Safety group noted that it’s difficult to pin down the difference between safety and security. “Safety evokes something highly personal. Like privacy… it’s related to being free from harm personally, physically and emotionally.” The group raised examples of protecting children from harmful online content or from people seeking to harm vulnerable users of online tools. Keeping one’s online financial information safe, and feeling confident that a service is ‘safe’ to use, were also raised. Safety was considered to be linked to the concept of risk. “Safety engenders a level of trust, which is at the heart of safety online,” said one person.

In the context of data collection with the communities we work with, safety was connected to data minimization and linked with vulnerability, including the compounded vulnerability that arises with online risk. “If one person’s data is not safely maintained it puts others at risk,” noted the group. “And pieces of information that are innocuous on their own may become harmful when combined.” Lastly, the group raised the notion of safety as related to offline risk, or risk to an individual due to a specific online behavior or data breach.

It was noted that all three terms (privacy, security and safety) carry an element of power, and that in this type of work a power relations analysis is critical.

The Digital Data Life Cycle

After unpacking the above terms, Amy took the group through an analysis of the data life cycle (courtesy of the Engine Room’s Responsible Data website) in order to highlight the different moments where the three concepts (privacy, security and safety) come into play.

  • Plan/Design
  • Collect/Find/Acquire
  • Store
  • Transmit
  • Access
  • Share
  • Analyze/Use
  • Retention
  • Disposal
  • Afterlife

Participants added additional stages in the data life cycle that they passed through in their work (coordinate, monitor the process, monitor compliance with data privacy and security policies). We placed the points of the data life cycle on the wall, and invited participants to:

  • Place a pink sticky note under the stage in the data life cycle that resonates or interests them most and think about why.
  • Place a green sticky note under the stage that is the most challenging or troublesome for them or their organizations and think about why.
  • Place a blue sticky note under the stage where they have the most experience, and to share a particular experience or tip that might help others to better manage their data life cycle in a private, secure and safe way.

Challenges, concerns and lessons

Design, as well as policy, is important!

  • Design drives everything else. We often start from the point of collection, when really it’s at the design stage that we should think about the burden of data collection and define the minimum we can ask of people. How we design, and even how we get consent, can inform how the whole process happens.
  • When we get part-way through the data life cycle, we often wish we had thought about the whole cycle at the beginning, during the design phase.
  • In addition to good design, coordination of data collection needs to be thought about early in the process so that duplication can be reduced. This can also reduce fatigue for people who are asked over and over for their data.
  • Informed consent is a critical issue that needs to be linked with the design of the whole data life cycle. How do you explain to people that you will be giving their data away, anonymizing it, separating it out, encrypting it? There are often flow-down clauses in contracts that shift responsibilities for data protection and security, and it’s not always clear who is responsible for those data processes. How can you be sure they are being carried out properly and in a painstaking way?
  • Anonymization is also an issue. It’s hard to know to what level to anonymize things like call data records — to the individual? Township? District level? And for how long will anonymization actually hold up? (A minimal sketch of location generalization follows after this list.)
  • The lack of good design and policy contributes to overlapping efforts and poor coordination of data collection efforts across agencies. We often collect too much data in poorly designed databases.
  • Policy is not enough – we need to do a much better job of monitoring compliance with policy.
  • Institutional Review Boards (IRBs) and compliance aspects need to be updated to the new digital data reality. At the same time, sometimes IRBs are not the right instrument for what we are aiming to achieve.
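
On the anonymization question above, one common approach is to strip direct identifiers and generalize precise locations to a coarser geographic unit before sharing. The sketch below is hypothetical (the field names and the cell-to-district lookup are invented), and generalization alone is no guarantee: research on call data records suggests that even coarsened mobility traces can remain re-identifiable.

    # Hypothetical sketch: generalizing call-data-record locations before sharing.
    # Field names and the cell-to-district lookup are invented for illustration.
    CELL_TO_DISTRICT = {
        "cell_0412": "District A",
        "cell_0413": "District A",
        "cell_0907": "District B",
    }

    def generalize_cdr(record, level="district"):
        """Drop the direct identifier and coarsen the location field."""
        out = dict(record)
        del out["subscriber_id"]  # remove the direct identifier
        if level == "district":
            out["location"] = CELL_TO_DISTRICT.get(out.pop("cell_id"), "unknown")
        return out

    cdr = {"subscriber_id": "8821-0000", "cell_id": "cell_0412", "timestamp": "2016-05-20T11:02"}
    print(generalize_cdr(cdr))  # subscriber ID dropped, location coarsened to 'District A'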

Data collection needs more attention.

  • Data collection is the easy part – where institutions struggle is with analyzing and doing something with the data we collect.
  • Organizations often don’t have a well-structured or systematic process for data collection.
  • We need to be clearer about what type of information we are collecting and why.
  • We need to update our data protection policy.

Reasons for data sharing are not always clear.

  • How can we share data securely and efficiently without building duplicative systems? We should be thinking more during the design and collection phases about whether the data will be interoperable and who needs to access it.
  • How can we strike the right balance in terms of data sharing? Some donors push for information that can put people in real danger – like details of people who have participated in particular programs that would put them at risk with their home governments. Organizations really need to push back against this; it’s an education issue with donors. Middle management and intermediaries are often the ones pushing for this type of data, because they don’t really have a handle on the risk it represents. They are the weak points because of the demands they place on people. This is a challenge for open data policies: leaving risk assessment up to individuals often means the laziest possible job is done of thinking through the potential risks of that data.
  • There are legal aspects of sharing too, such as the USAID open data policy, under which those collecting data have to share it with the government. But we don’t have a clear understanding of what international laws say about data sharing.
  • There are so many pressures to share data but they are not all fully thought through!

Data analysis and use of data are key weak spots for organizations.

  • We are just beginning to think through what it means to capture lots of data.
  • Data is collected but not always used. Too often it’s extractive data collection. We don’t have the feedback loops in place, and when there are feedback loops we often don’t use the feedback to make changes.
  • We often forget to go back to the people who provided us with data to share it back with them. It’s rare that we hold a consultation with the community to really involve them in how the data can be used.

Secure storage is a challenge.

  • We have hundreds of databases across the agency in various formats, hard drives and states of security, privacy and safety. Are we able to keep these secure?
  • We need to think more carefully about where we hold our data and who has access to it. Sometimes our data is held by external consultants. How should we be addressing that?

Disposing of data properly in a global context is hard!

  • It’s difficult to dispose of data when there are multiple versions of it and a data footprint.
  • Disposal is an issue. We’re doing a lot of server upgrades, and many of these are in remote locations. How do we ensure that the right disposal process is happening globally, short of physically seeing that hard drives are smashed up?
  • We need to do a better job of disposal on personal laptops. I’ve done a lot of data collection on my personal laptop, and no one has ever followed up to see if I’ve deleted it. How are we handling data handover? How do you really dispose of data? (A best-effort sketch follows after this list.)
  • Our organization hasn’t even thought about this yet!
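
As a best-effort answer to “how do you really dispose of data?” on a personal machine, the sketch below overwrites a file with random bytes before deleting it. This is an illustration under stated assumptions, not a definitive method: on SSDs, journaling file systems and synced cloud folders, in-place overwrites may not destroy every copy, and full-disk encryption with key destruction is generally the safer strategy.

    # Hypothetical sketch: best-effort overwrite-then-delete of a local file.
    # Note: on SSDs and journaling file systems this offers no hard guarantee.
    import os

    def overwrite_and_delete(path, passes=3):
        """Overwrite a file's contents with random bytes, then remove it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())
        os.remove(path)

    # overwrite_and_delete("old_survey_export.csv")  # hypothetical file name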

Tips and recommendations from participants

  • Organizations should be using more secure tools. They should be using Pretty Good Privacy (PGP) techniques rather than relying on free or commercial tools like Google or Skype. (See the encryption sketch after this list.)
  • People can be your weakest link if they are not aware or they don’t care about privacy and security. We send an email out to all staff on a weekly basis that talks about taking adequate measures. We share tips and stories. That helps to keep privacy and security front and center.
  • Even if you have a policy, the hard part is enforcement, accountability and policy reform. If our organizations are not actively developing policy around best practices in this area, then it’s on us to be sure we understand what best practice is, and to advocate for it. Let’s do what we can before the policy catches up.
  • The Responsible Data Forum and Tactical Tech have a great set of resources.
  • Oxfam has a Responsible Data Policy, and Girl Effect has developed a Girls’ Digital Privacy, Security and Safety Toolkit that can also offer guidance.
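
To make the PGP suggestion above concrete, the sketch below encrypts a data file before it is shared. It assumes GnuPG is installed, the recipient’s public key has already been imported, and the third-party python-gnupg wrapper is available (pip install python-gnupg); the file name and email address are hypothetical.

    # Minimal sketch: PGP-encrypting a file with GnuPG via python-gnupg.
    # Assumes GnuPG is installed and the recipient's public key is imported.
    import gnupg

    gpg = gnupg.GPG()
    with open("survey_data.csv", "rb") as f:  # hypothetical file
        result = gpg.encrypt_file(
            f,
            recipients=["colleague@example.org"],  # hypothetical recipient key
            output="survey_data.csv.gpg",
        )
    print("encrypted" if result.ok else result.status)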

In conclusion, participants agreed that development agencies and NGOs need to take privacy, security and safety seriously, and can no longer afford to implement security at a lower level than corporations do. “Times are changing and hackers are no longer just interested in financial information. People’s data is very valuable. We need to change and take security as seriously as corporates do!” said one person.

Read Full Post »

Older Posts »