The recently announced World Food Programme (WFP) partnership with Palantir, IRIN’s article about it, reactions from the Responsible Data Forum, and WFP’s resulting statement inspired us to pull together a Technology Salon in New York City to discuss the ethics of humanitarian data sharing.

(See this crowdsourced document for more background on the WFP-Palantir partnership and resources for thinking about the ethics of data sharing. Also, here is an overview of WFP’s SCOPE system for beneficiary identification, management and tracking.)

Our lead discussants were: Laura Walker McDonald, Global Alliance for Humanitarian Innovation; Mark Latonero, Research Lead for Data & Human Rights, Data & Society; Nathaniel Raymond, Jackson Institute of Global Affairs, Yale University; and Kareem Elbayar, Partnerships Manager, Centre for Humanitarian Data at the United Nations Office for the Coordination of Humanitarian Affairs. We were graciously hosted by The Gov Lab.

What are the concerns about humanitarian data sharing and with Palantir?

Some of the initial concerns expressed by Salon participants about humanitarian data sharing included:

  • data privacy and the permanence of data;
  • biases in data leading to unwarranted conclusions and assumptions;
  • loss of stakeholder engagement when humanitarians move to big data and techno-centric approaches;
  • low awareness and poor practices across humanitarian organizations on data privacy and security;
  • tensions between the security of data and the utility of data;
  • validity and reliability of data;
  • lack of clarity about the true purposes of data sharing;
  • the practice of ‘ethics outsourcing’ (testing things in places where there is a perceived ‘lower ethical standard’ and less accountability);
  • use of humanitarian data to target and harm aid recipients;
  • disempowerment and extractive approaches to data;
  • lack of checks and balances for safe and productive data sharing;
  • difficulty of securing meaningful consent;
  • and the links between data and surveillance by malicious actors, governments, the private sector, and military or intelligence agencies.

Palantir’s relationships and work with police, the CIA, ICE, the NSA, the US military and the wider intelligence community are among the main concerns about this partnership. Some ask whether a company can legitimately serve the philanthropy, development, social, human rights and humanitarian sectors while also serving the military and intelligence communities, and whether it is ethical for those in the former to engage in partnerships with companies that serve the latter. Others ask whether WFP and others who partner with Palantir are fully aware of the company’s background, and if so, why these partnerships have been able to pass through due diligence processes. Yet others wonder whether a company like Palantir can be trusted, given its background.

Below is a summary of the key points of the discussion, which happened on February 28, 2019. (Technology Salons are Chatham House affairs, so I have not attributed quotes in this post.)

Why were we surprised by this partnership/type of partnership?

Our first discussant asked why this partnership was a surprise to many. He emphasized the importance of stakeholder conversations, transparency, and wider engagement in the lead-up to these kinds of partnerships. “And I don’t mean in order to warm critics up to the idea, but rather to create a safe and trusted ecosystem. Feedback and accountability are really key to this.” He also highlighted that humanitarian organizations are not experts in advanced technologies and that it’s normal for them to bring in experts in areas that are not their forte. However, we need to remember that tech companies are not experts in humanitarian work, and we need to put the proper checks and balances in place. Bringing in a range of multidisciplinary expertise and distributed intelligence is necessary in a complex information environment. One possible approach is creating technology advisory boards. Another way to ensure more transparency and accountability is to conduct a human rights impact assessment. The next year will be a major test for these kinds of partnerships, given the growing concerns, he said.

One Salon participant said that the fact that the humanitarian sector engages in partnerships with the private sector is not a surprise at all, as the sector has worked through Public-Private Partnerships (PPPs) for several years now, and these can bring huge value. The surprise is that WFP chose Palantir as the partner. “They are not the only option, so why pick them?” Another person shared that the WFP partnership went through a full legal review, and so it was not a surprise to everyone. However, communication around the partnership was not well planned or thought out, and the process was not transparent and open. Others pointed out that although a legal review covers some bases, it does not assess the potential negative social impact or risk to ‘beneficiaries.’ For some, the biggest surprise was WFP’s own surprise at the pushback on this particular partnership and its unsatisfactory reaction to the concerns raised about it. The response from responsible data advocates and the press attention to the WFP-Palantir partnership might be a turning point for the sector, encouraging more awareness of the risks in working with certain types of companies. As many noted, this is not only a problem for WFP; it plagues the wider sector and needs to be addressed urgently.

Organizations need to think beyond reputational harm and consider harm to beneficiaries

“We spend too much time focusing on avoiding risk to institutions and too little time thinking about how to mitigate risk to beneficiaries,” said one person. WFP, for example, has some of the best policies and procedures out there, yet this partnership still passed their internal test. That is a scary thought, because it implies that other agencies with weaker policies might be agreeing to even riskier partnerships. Are these policies and risk assessments, then, covering all the different types of risk that need consideration? Many at the Salon felt that due diligence and partnership policies focus almost exclusively on organizational and reputational risk, with very little attention to the risk that vulnerable populations might face. It’s not just a question of having policies, however, said one person. “Look at the Oxfam safeguarding situation. Oxfam had some of the best safeguarding policies, yet there were egregious violations that were not addressed by having a policy. It’s a question of power and how decisions get made, and where decision-making power lies and who is involved and listened to.” (Note: one person contacted me pre-Salon to say that there was pushback by WFP country-level representatives about the Palantir partnership, but that it still went ahead. This raises the same issue of decision-making power: who has the power to decide on these partnerships, and why are voices from the frontlines not being heard? Additionally, are those whose data is captured and put into these large data systems ever consulted about what they think?)

Organizations need to assess wider implications, risks, and unintended negative consequences

It’s not only WFP that is putting information into SCOPE, said one person. “Food insecure people have no choice about whether to provide their data if they wish to receive food.” Thus, the question of truly ‘informed consent’ arises. Implementing partners don’t have a lot of choice either, he said. “Implementing agencies are forced to input beneficiary data into SCOPE if they want to work in particular zones or countries.” This means that WFP’s systems and partnerships have an impact on the entire humanitarian community, and therefore they need to be consulted on more broadly with the wider sector. The optical and reputational impact on organizations other than WFP is significant: they may disagree with the Palantir partnership, but they are now associated with it by default. This type of harm goes beyond the fear of exploitation of the data in WFP’s “data lake.” It becomes a risk to personnel on the ground, who may be seen as collaborating with a CIA contractor by putting beneficiary biometric data into SCOPE. It can also deter food-insecure people from accessing benefits. Additionally, association with the CIA or the US military has led to humanitarian agencies and workers being targeted, attacked and killed. All of this is in addition to the question of whether these kinds of partnerships violate humanitarian principles, such as impartiality.

“It’s critical to understand the role of rumor in humanitarian contexts,” said one discussant. “Affected populations are trying to figure out what is happening and there is often a lot of rumor going around.” So, if Palantir has a reputation for giving data to the CIA, people may hear about that and then be afraid to access services for fear of having their data handed over. This can lead to retaliation against humanitarians and humanitarian organizations and raise the risks of operating. Risk assessments need to go beyond the typical areas of reputational or financial risk. We also need to think about how these partnerships can affect humanitarian access and community trust, and how rumors can have wide ripple effects.

The whole sector needs to put better due diligence systems in place. As it is now, noted one person, often it’s someone who doesn’t know much about data who writes up a short summary of the partnership, and there is limited review. “We’ve been struggling for 10 years to get our offices to use data. Now we’re in a situation where they’re just picking up a bunch of data and handing it over to private companies.”

UN immunities and privileges lead to a lack of accountability

The fact that UN agencies have immunities and privileges means that laws such as the EU’s General Data Protection Regulation (GDPR) do not apply to them; they are left to self-regulate. Additionally, there is no common agreement among UN agencies on how GDPR applies, and each agency interprets it on its own. As one person noted, “There is a troubling sense of exceptionalism and lack of accountability in some of these agencies because ‘a beneficiary cannot take me to court.’” An interesting point, however, is that while UN agencies are immune, those contracted as their data processors are not, so data processors beware!

Demographically Identifiable Information (DII) can lead to serious group harm

WFP has stated that personally identifiable information (PII) is not technically accessible to Palantir via this partnership. However, some at the Salon felt that WFP’s statement about the partnership failed when it used the absence of PII as a defense. Demographically Identifiable Information (DII) and the activity patterns that are visible even in commodity data can be extrapolated as training data for future data modeling. “This is prospective modeling of action-based intelligence patterns as part of multiple screeners of intel,” said one discussant. He went on to explain that privacy discussions have moved from centering on property rights in the 19th century, to individual rights in the 20th century, to group rights in the 21st century. We can use existing laws to emphasize protection of groups and to highlight the risks of DII leading to group harm, he said, as there are well-known cases that exemplify the notion of group harms (Plessy v. Ferguson, Brown v. Board of Education). Even in logistics data (the kind of data that WFP says Palantir will access), which contains no PII, it’s very simple to identify groups. “I can look at supply chain information and tell you where there are lactating mothers. If you don’t want refugees to give birth in the country they have arrived in, this information can be used for targeting.”
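To make the risk concrete, below is a minimal sketch of how a group could be located from commodity data alone. All of the column names, sites, commodities and quantities are invented for illustration; this is not WFP’s data model, just the general pattern the discussant described.

```python
import pandas as pd

# Hypothetical supply-chain records: no names, no IDs, just logistics.
shipments = pd.DataFrame({
    "site": ["Camp A", "Camp A", "Camp B", "Camp C", "Camp C"],
    "commodity": [
        "cereal ration",
        "supplementary feeding blend",   # used in maternal/child programmes
        "cereal ration",
        "supplementary feeding blend",
        "cereal ration",
    ],
    "quantity_kg": [12000, 800, 9000, 1500, 7000],
})

# Commodities tied to maternal/child feeding act as a proxy for the
# presence of pregnant and lactating women at a site.
proxy = shipments["commodity"].str.contains("supplementary feeding")
group_presence = shipments[proxy].groupby("site")["quantity_kg"].sum()

# Sites receiving these commodities very likely host the group, and the
# volumes even hint at its size: demographic targeting with zero PII.
print(group_presence.sort_values(ascending=False))
```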

Many in the sector do not trust a company like Palantir

Though it is not clear who was in the room when WFP made the decision to partner with Palantir, the overall sector has concerns that the people making these decisions are not assessing partnerships from all angles: legal, privacy, programmatic, ethical, data use and management, social, protection, etc. Technologists and humanitarian practitioners are often not included in making these decisions, said one participant. “It’s the people with MBAs. They trust a tech company to say ‘this is secure’ but they don’t have the expertise to actually know that. Not to mention that yes, something might be secure, but maybe it’s not ethical. Senior people are signing off without having a full view. We need a range of skill sets reviewing these kinds of partnerships and investments.”

Another question arises: What happens when there is scope creep? Is Palantir in essence “grooming” the sector to then abuse data it accesses once it’s trusted and “allowed in”? Others pointed out that the grooming has already happened and Palantir is already on the inside. They first began partnering with the sector via the Clinton Global Initiative meetings back in 2013 and they are very active at World Economic Forum meetings. “This is not something coming out of the Trump administration, it was happening long before that,” said one person, and the company is already “in.” Another person said “Palantir lobbied their way into this, and they’ve gotten past the point of reputational challenge.” Palantir has approached many humanitarian agencies, including all the UN agencies, added a third person. Now that they have secured this contract with the WFP, the door to future work with a lot of other agencies is open and this is very concerning.

We’re in a new political economy: data brokerage.

“Humanitarians have lost their Geneva values and embraced Silicon Valley values,” said one discussant. They are becoming data brokers within a colonial data paradigm. “We are making decisions in hierarchies of power, often extralegally,” he said. “We make decisions about other people’s data without their involvement, and we need to be asking: is it humanitarian to commodify beneficiaries’ data for money or other things of value? When is it ethical to trade beneficiary data for something of value?” Another raised the issue of incentives. “Where are the incentives stacked? There is no incentive to treat beneficiaries better. All the incentives are on efficiency and scale and attracting donors.”

Can this example push the wider sector to do better?

One participant hoped there could be a net gain out of the WFP-Palantir case. “It’s a bad situation. But it’s a reckoning for the whole space. Most agencies don’t have these checks and balances in place. But people are waking up to it in a serious way. There’s an opportunity to step into. It’s hard inside of bureaucratic organizations, but it’s definitely an opportunity to start doing better.”

Another said that we need more transparency across the sector on these partnerships. “What is our process for evaluating something like this? Let’s just be transparent. We need to get these data partnership policies into the open. WFP could have simply said ‘here is our process’. But they didn’t. We should be working with an open and transparent model.” Overall, there is a serious lack of clarity on what data sharing agreements look like across the sector. One person attending the Salon said that their organization has been trying to understand current practice with regard to data sharing, and it’s been very difficult to get any examples, even redacted ones.

What needs to happen? 

In closing we discussed what needs to happen next. One person noted that in her research on Responsible Data, she found a total lack of capacity in terms of technology at non-profit organizations. “It’s the Economist Syndrome. Someone’s boss reads something on the bus and decides they need a blockchain,” someone quipped. In terms of responsible data approaches, research shows that organizations are completely overwhelmed. “They are keeping silent about their low capacity out of fear they will face consequences,” said one person, “and with GDPR, even more so”. At the wider level, we are still focusing on PII as the issue without considering DII and group rights, and this is a mistake, said another.

Organizations have very low capacity, and we are siloed. “Program officers do not have tech capacity. Tech people are kept in offices or ‘labs’ on their own and there is not a lot of porosity. We need protection advisors, lawyers, digital safety advisors, data protection officers, information management specialists, and IT all around the table for this,” noted one discussant. Also, she said, though we do need principles and standards, it’s important that organizations adapt these so that they become their own principles and standards. “We need to adapt these boilerplate standards to our organizations. This has to happen based on our own organizational values. Not everyone is rights-based, not everyone is humanitarian.” So organizations need to take the time to review and adapt standards, policies and procedures to their own vision and mission, to their own situations, contexts and operations, and to generate awareness and buy-in. In conclusion, she said, “if you are not being responsible with data, you are already violating your existing values and codes. Responsible Data is already in your values; it’s a question of living it.”

Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

 


Our December 2015 Technology Salon discussion in NYC focused on approaches to girls’ digital privacy, safety and security. By extension, the discussion included ways to reduce risk for other vulnerable populations. Our lead discussants were Ximena Benavente, Girl Effect Mobile (GEM), and Jonathan McKay, Praekelt Foundation. I also shared a draft Girls’ Digital Privacy, Safety and Security Policy and Toolkit I’ve been working on with both organizations over the past year.

Girls’ digital privacy, safety and security risks

Our first discussant highlighted why it’s important to think specifically about girls and digital security. In part, this is because different factors and vulnerabilities combine, exacerbating girls’ levels of risk. For example, girls living on less than $2 per day likely only have access to basic mobile phones, which are often borrowed from parents or siblings. The organization she works with always starts with deep research on aspects like ownership vs. borrowing, and whether girls’ mobile usage is free, unlimited and unsupervised, or controlled by gatekeepers such as parents, brothers, or other relatives. This helps to design better tools, services and platforms, and to design for safety and security, she said. “Gatekeepers are very restrictive in many cases, but parental oversight is not necessarily a bad thing. We always work with parents and other gatekeepers as well as with girls themselves when we design and test.” When girls are living in more traditional or conservative societies, she said, we also need to think about how content might affect girls both online and offline. For example, “is content sufficiently progressive in terms of girls’ rights, yet safe for girls to read, comment on or discuss with friends and family without severe retaliation?”

Research suggests that girls who are more vulnerable offline (due to poverty or other forms of marginalization) are likely also more vulnerable to certain risks online, so we design with that in mind, she said. “When we started off on this project, our team members were experts in digital, but we had less experience with the safety and privacy aspects when it comes to girls living under $2/day or who were otherwise vulnerable. Having additional guidance and developing a policy on this aspect has helped immensely, but it has also slowed our processes down and sometimes made them more expensive,” she noted. “We had to go back to everything and add additional layers of security to make it as safe as possible for girls. We have also made sure to work very closely with our local partners to be sure that everyone involved in the project is aware of girls’ safety and security.”

Social media sites: Open, Closed, Private, Anonymous?

One issue that came up was safety for children and youth on social media networks. A Salon participant said his organization had thought about developing this type of network several years back but decided in the end that the security risks outweighed the advantages. Participants discussed whether social media networks can ever be safe. One school of thought is that the more open a platform, the safer it is, as “there is no interaction in private spaces that cannot be constantly monitored or moderated.” Some worry about open sites, however, and set up smaller, closed, private groups that are closely monitored. “We work with victims of violence to share their stories and coping mechanisms, so, for us, private groups are a better option.”

Some suggested that anonymity on a social media site can protect girls and other vulnerable groups; however, there is also research showing that Internet anonymity contributes to an increase in activities such as bullying and harassment. Some Salon participants felt that it was better to leverage existing platforms and try to use them safely. Others felt that no existing social media platform has enough security for girls or other vulnerable groups to use at an appropriate level of risk. “We sometimes recruit participants via existing social media platforms,” said one discussant, “but we move people off of those sites to our own more secure sites as soon as we can.”

Moderation and education on safety

Salon participants working with vulnerable populations said that they moderate their sites very closely and remove comments if users share personal information or use offensive language. “Some project budgets allow us to have a moderator check every 2 hours. For others, we sweep accounts once a day and remove offensive content within 24 hours.” One discussant uses moderation to educate the community. “We always post an explanation about why a comment was removed in order to educate the larger user base about appropriate ways to use the social network,” he said.

Close moderation becomes difficult and costly, however, as the user base grows and a platform scales. This means individual comments cannot be screened and pre-approved, because that would take too long and defeat the purpose of an engaging platform. “We need to acknowledge the very real tension between building a successful and engaging community and maintaining privacy and security,” said one Salon participant. “The more you lock it down and the more secure it is, the harder you find it is to create a real and active community.”

Another participant noted that they use their safe, closed youth platform to educate and reinforce messaging about what is safe and positive use of social media in hopes that young people will practice safe behaviors when they use other platforms. “We know that education and awareness raising can only go so far, however,” she said, “and we are not blind to that fact.” She expressed concern about risk for youth who speak out about political issues, because more and more governments are passing laws that punish critics and censor information. The organization, however, does not want to encourage youth to stop voicing opinions or participating politically.

Data breaches and project close-out

One Salon participant asked if organizations had examples of actual data breaches, and how they had handled them. Though no one shared examples, it was recommended that every organization have a contingency plan in place for accidental data leaks or a data breach or data hack. “You need to assume that you will get hacked,” said one person, “and develop your systems with that as a given.”

In addition to the day-to-day security issues, we need to think about project close-out, said one person. “Most development interventions are funded for a short, specific period of time. When a project finishes, you get a report, you do your M&E, and you move on. However, the data lives on, and the effects of the data live on. We really need to think more about budgeting for proper project wind-down and ensure that we are accountable beyond the lifetime of a project.”

Data security, anonymization, consent

Another question related to using and keeping girls’ (and others’) data safe. “Consent to collect and use data on a website or via a mobile platform can be tricky, especially if we don’t know how to explain what we might do with the data,” said one Salon participant. Another suggested it would be better not to collect any data at all. “Why do we even need to collect this data? Who is it for?” he asked. Others countered that this data is often the only way to understand what people are doing on the site, to make adjustments and to measure impact.

One scenario was shared where several partner organizations discussed opening up a country’s cell phone data records to help contain a massive public health epidemic, but the privacy and security risks were too great, so the idea was scrapped. “Some said we could anonymize the data, but you can never really and truly anonymize data. It would have been useful to have a policy or a rubric that would have guided us in making that decision.”
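As an illustration of why ‘true’ anonymization is so hard, here is a minimal sketch of a k-anonymity check over quasi-identifiers in a toy call-record table. All columns and values are invented; the point is that any attribute combination appearing only once can be re-identified by linkage with outside knowledge, even when the table holds no names or phone numbers.

```python
import pandas as pd

# Toy "anonymized" records: direct identifiers removed, but the
# remaining attributes (quasi-identifiers) still combine distinctively.
records = pd.DataFrame({
    "home_tower":  ["T1", "T1", "T2", "T2", "T3"],
    "age_band":    ["20-29", "20-29", "30-39", "30-39", "20-29"],
    "night_tower": ["T9", "T9", "T4", "T5", "T8"],
})

quasi_identifiers = ["home_tower", "age_band", "night_tower"]
group_sizes = records.groupby(quasi_identifiers).size()

# k-anonymity: every quasi-identifier combination should describe at
# least k people; any group of size 1 is uniquely re-identifiable.
k = group_sizes.min()
unique_rows = (group_sizes == 1).sum()
print(f"k = {k}; {unique_rows} of {len(records)} rows are unique, "
      "hence linkable to a real person by anyone with side knowledge")
```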

Policy and Guidelines on Girls’ Privacy, Security and Safety

Policy guidelines on responsible data for NGOs, data security, privacy and other aspects of digital security in general do exist. (Here are some that we compiled, along with some other resources.) Most IT departments also have strict guidelines when it comes to donor data (in the case of credit card and account information, for example). These guidelines do not always cross over to program-level ICT or M&E efforts that involve the populations NGOs serve through their programming.

General awareness around digital security is increasing, in part due to recent major corporate data hacks (e.g., Target, Sony) and the Edward Snowden revelations from a few years back, but much more needs to be done to educate NGO staff and management on the type of privacy and security measures that need to be taken to protect the data and mitigate risk for those who participate in their programs.  There is an argument that NGOs should have specific digital privacy, safety and security policies that are tailored to their programming and that specifically focus on the types of digital risks that girls, women, children or other vulnerable people face when they are involved in humanitarian or development programs.

One such policy (focusing on vulnerable girls) and toolkit (its accompanying principles and values, guidelines, checklists and a risk matrix template) was shared at the Salon. (Disclosure: this policy toolkit is one that I am working on. It should be ready to share in early 2016.) The policy and toolkit take program implementers through a series of issues and questions to help them assess potential risks and tradeoffs in a particular context, and to document decisions and improve accountability. The toolkit covers:

  1. data privacy and security – using approaches like Privacy by Design, setting limits on the data that is collected, and achieving meaningful consent.
  2. platform content and design – ensuring that content produced for girls, or that girls produce or volunteer, does not put girls at risk.
  3. partnerships – vetting and managing partners who may provide online/offline services or who may partner on an initiative and want access to data, and the monetizing of girls’ data.
  4. monitoring, evaluation, research and learning (MERL) – how program implementers will gather and store digital data, whether collecting it directly or through third parties, for organizational MERL purposes.

Privacy, Security and Safety Implications

Our final discussant spoke about the implications of implementing the above-mentioned girls’ privacy, safety and security policy. He started out saying that the policy opens with a manifesto: We will not compromise a girl in any way, nor will we opt for solutions that cut corners in terms of cost, process or time at the expense of her safety. “I love having this as part of our project manifesto,” he said. “It’s really inspiring! On the flip side, however, it makes everything I do more difficult, time consuming and expensive!”

To demonstrate some of the trade-offs and decisions required when working with vulnerable girls, he gave examples of how the current project (implemented with girls’ privacy and security as a core principle) differed from a commercial social media platform and advertising campaign he had previously worked on, where the main concern was the reputation of the corporation, not that of the platform’s users and the risks they might face by using it.

Moderation

On the private sector platform, said the discussant, “we didn’t have the option of pre-moderating comments because of the budget and because we had 800,000 users. To meet the campaign goals, it was more important for users to be engaged than to ensure content was safe. We focused on removing pornographic photos within 24 hours, using algorithms based on how much skin tone was in the photo.” In the fields of marketing and social media, it’s a fairly well-known issue that heavy-handed moderation kills platform engagement. “The more we educated and informed users about comment moderation, or removed comments, the deader the community became. The more draconian the moderation, the lower the engagement.”
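As a rough, hypothetical illustration of the kind of heuristic described: the sketch below flags images whose share of skin-colored pixels crosses a threshold. The RGB rule and threshold are generic textbook values, not the discussant’s actual algorithm, and rules like this are notoriously crude across lighting conditions and skin tones, which is partly why human review within 24 hours was still needed.

```python
from PIL import Image  # pip install Pillow

def skin_ratio(path: str) -> float:
    """Fraction of pixels that a crude RGB rule classifies as skin."""
    img = Image.open(path).convert("RGB").resize((128, 128))

    def looks_like_skin(rgb):
        r, g, b = rgb
        # A classic rule-of-thumb skin classifier in RGB space.
        return (r > 95 and g > 40 and b > 20
                and r > g and r > b and r - min(g, b) > 15)

    pixels = list(img.getdata())
    return sum(looks_like_skin(p) for p in pixels) / len(pixels)

def flag_for_review(path: str, threshold: float = 0.4) -> bool:
    # Images above the threshold go into the human moderation queue.
    return skin_ratio(path) > threshold
```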

The discussant had also worked on a platform for youth to discuss and learn about sexual health and practices, where he said users responded angrily to moderators and to comments that restricted their participation. “We did expose our participants to certain dangers, but we also knew that social digital platforms are more successful when they provide their users with a sense of ownership and control. So we identified users who exhibited desirable behaviors and created a different tier of users who could take ownership (super users) to police the platform, flag comments as inappropriate, or temporarily ban users.” This allowed a 25% decrease in moderation. The organization discovered, however, that they had to be careful about how much power these super users had. “They ended up creating certain factions on the platform, and we then had to develop safeguards and additional mechanisms by which we moderated our super users!”
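A hypothetical sketch of that tiering idea follows: super users can flag content and issue temporary bans, while staff retain the ability to audit the super users themselves. The roles, fields and rules are invented for illustration only.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class User:
    name: str
    role: str = "member"              # "member", "super_user" or "staff"
    banned_until: datetime | None = None

@dataclass
class Flag:
    comment_id: str
    flagged_by: str
    audited: bool = False             # staff later audit super-user actions

flags: list[Flag] = []

def flag_comment(actor: User, comment_id: str) -> None:
    if actor.role not in ("super_user", "staff"):
        raise PermissionError("only super users and staff may flag")
    flags.append(Flag(comment_id, actor.name))

def temp_ban(actor: User, target: User, hours: int = 24) -> None:
    # The safeguard: super users can only issue *temporary* bans, and
    # every action lands in a log that staff review for faction abuse.
    if actor.role not in ("super_user", "staff"):
        raise PermissionError("only super users and staff may ban")
    target.banned_until = datetime.utcnow() + timedelta(hours=hours)
```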

Direct Messages among users

In the private sector project example, engagement was measured by the number of direct or private messages sent between platform users. In the current scenario, however, said the discussant, “we have not allowed any direct messages between platform users because of the potential risks to girls of having places on the site that are hidden from moderators. So as you can see, we are removing some of our metrics by disallowing features because of risk. These activities are all things that would make the platform more engaging but there is a big fear that they could put girls at risk.”

Adopting a privacy, security, and safety policy

One discussant highlighted the importance of having privacy, safety and security policies before a project or program begins. “If you start thinking about it later on, you may have to go back and rebuild things from scratch because your security holes are in the design….” The way a database is set up to capture user data can make it difficult to query in the future or for users to have any control of what information is or is not being shared about them. “If you don’t set up the database with security and privacy in mind from the beginning, it might be impossible to make the platform safe for girls without starting from scratch all over again,” he said.
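Here is a minimal sketch, using SQLite, of what ‘privacy in the database design from day one’ can mean: the schema itself refuses to hold raw identifiers or fine-grained locations, and consent is an explicit opt-in column rather than an afterthought. Table and column names are hypothetical. (Note that identifiers with a small keyspace, like phone numbers, are only weakly protected by hashing; a keyed hash raises the bar, but the real protection is architectural.)

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE participants (
        id INTEGER PRIMARY KEY,
        phone_hash TEXT NOT NULL,   -- keyed hash, never the raw number
        district   TEXT,            -- coarse area only, no GPS points
        consent_to_share INTEGER NOT NULL DEFAULT 0  -- opt-in, not opt-out
    )
""")

SECRET_KEY = b"per-deployment secret, stored outside the database"

def phone_digest(phone: str) -> str:
    # Keyed hash so the raw number never touches the schema.
    return hashlib.sha256(SECRET_KEY + phone.encode()).hexdigest()

conn.execute(
    "INSERT INTO participants (phone_hash, district) VALUES (?, ?)",
    (phone_digest("+0000000000"), "District 9"),
)
conn.commit()
```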

He also cautioned that when making more secure choices from the start, platform and tool development generally takes longer and costs more. It can be harder to budget because designers may not have experience with costing and developing the more secure options.

“A valuable lesson is that you have to make sure that what you’re trying to do in the first place is worth it if it’s going to be that expensive. Is it worth a girl’s while to use a platform if she first has to wade through five pages of terms and conditions on a small mobile phone screen? Are those terms and conditions even relevant to her personally or within her local context? Every click you ask a user to make will reduce their interest in reaching the platform. And if we don’t imagine that a girl will want to click through five screens of terms and conditions, the whole effort might not be worth it.” Clearly, aspects such as terms and conditions and consent processes need to be designed specifically to fit new contexts and new kinds of users.

Making responsible tradeoffs

The Girls’ Privacy, Security and Safety policy and toolkit shared at the Salon includes a risk matrix in which project implementers rank the intensity and probability of risks as high, medium or low. Based on how a situation, feature or other potential aspect is ranked, and on the possibility of mitigating serious risks, decisions are made to proceed or not. There will always be areas with a certain level of risk to the user. The key is in making decisions and trade-offs that balance the level of risk with the potential benefits or rewards of the tool, service, or platform. The toolkit can also help project designers to imagine potential unintended consequences and mitigate the risks related to them. The policy also offers a way to systematically and proactively consider potential risks, decide how to handle them, and document decisions so that organizations and project implementers are accountable to girls, peers, partners, and organizational leadership.
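To illustrate (not reproduce) how such a matrix works in practice, here is a toy version: each risk gets an intensity and a probability ranking, the combination drives the decision, and the record itself becomes part of the accountability trail. The thresholds and example risks are invented.

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def assess(risk: str, intensity: str, probability: str) -> str:
    score = LEVELS[intensity] * LEVELS[probability]
    if score >= 6:
        decision = "redesign or drop the feature"
    elif score >= 3:
        decision = "proceed only with documented mitigation"
    else:
        decision = "proceed and monitor during implementation"
    # Returning the full record keeps a trail for accountability.
    return f"{risk} [{intensity} intensity x {probability} probability]: {decision}"

print(assess("direct messages hidden from moderators", "high", "medium"))
print(assess("usernames visible on a public leaderboard", "medium", "low"))
```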

“We’ve started to change how we talk about user data in our organization,” said one discussant. “We have stopped thinking about it as something WE create and own, and more as something GIRLS own. Banks don’t own people’s money; they borrow it for a short time. We are trying to think about data that way in the conversations we’re having about data, funding, business models, proposals and partnerships. You don’t get to own your users’ data, and we’re not going to share de-anonymized data with you. We’re seeing legislation in some of the countries where we work going that way also, so it’s good to be thinking about this now and getting prepared.”

Take a look at our list of resources on the topic and add anything we may have missed!

 

Thanks to our friends at ThoughtWorks for hosting this Salon! If you’d like to join discussions like this one, sign up at Technology Salon. Salons are held under Chatham House Rule, so no attribution has been made in this post.


This is a guest post from Anna Crowe, Research Officer on the Privacy in the Developing World Project, and Carly Nyst, Head of International Advocacy at Privacy International, a London-based NGO working on issues related to technology and human rights, with a focus on privacy and data protection. Privacy International’s new report, Aiding Surveillance, which covers this topic in greater depth, was released this week.

by Anna Crowe and Carly Nyst


New technologies hold great potential for the developing world, and countless development scholars and practitioners have sung the praises of technology in accelerating development, reducing poverty, spurring innovation and improving accountability and transparency.

Worryingly, however, privacy is presented as a luxury that creates barriers to development, rather than as a key aspect of sustainable development. This perspective needs to change.

Privacy is not a luxury, but a fundamental human right

New technologies are being incorporated into development initiatives and programmes relating to everything from education to health and elections, and in humanitarian initiatives, including crisis response, food delivery and refugee management. But many of the same technologies being deployed in the developing world with lofty claims and high price tags have been extremely controversial in the developed world. Expansive registration systems, identity schemes and databases that collect biometric information including fingerprints, facial scans, iris information and even DNA, have been proposed, resisted, and sometimes rejected in various countries.

The deployment of surveillance technologies by development actors, foreign aid donors and humanitarian organisations, however, is often conducted in the complete absence of the type of public debate or deliberation that has occurred in developed countries. Development actors rarely consider target populations’ opinions when approving aid programmes. Important strategy documents such as the UN Office for the Coordination of Humanitarian Affairs’ Humanitarianism in a Networked Age and the UN High-Level Panel on the Post-2015 Development Agenda’s A New Global Partnership: Eradicate Poverty and Transform Economies through Sustainable Development give little space to the possible impact that adopting new technologies or data analysis techniques could have on individuals’ privacy.

Some of this trend can be attributed to development actors’ systematic failure to recognise the risks to privacy that development initiatives present. However, it also reflects an often unspoken view that the right to privacy must necessarily be sacrificed at the altar of development – that privacy and development are conflicting, mutually exclusive goals.

The assumptions underpinning this view are as follows:

  • that privacy is not important to people in developing countries;
  • that the privacy implications of new technologies are not significant enough to warrant special attention;
  • and that respecting privacy comes at a high cost, endangering the success of development initiatives and creating unnecessary work for development actors.

These assumptions are deeply flawed. While it should go without saying, privacy is a universal right, enshrined in numerous international human rights treaties, and matters to all individuals, including those living in the developing world. The vast majority of developing countries have explicit constitutional requirements to ensure that their policies and practices do not unnecessarily interfere with privacy. The right to privacy guarantees individuals a personal sphere, free from state interference, and the ability to determine who has information about them and how it is used. Privacy is also an “essential requirement for the realization of the right to freedom of expression”. It is not an “optional” right that only those living in the developed world deserve to see protected. To presume otherwise ignores the humanity of individuals living in various parts of the world.

Technologies undoubtedly have the potential to dramatically improve the provision of development and humanitarian aid and to empower populations. However, the privacy implications of many new technologies are significant and are not well understood by many development actors. The expectations that are placed on technologies to solve problems need to be significantly circumscribed, and the potential negative implications of technologies must be assessed before their deployment. Biometric identification systems, for example, may assist in aid disbursement, but if they also wrongly exclude whole categories of people, then the objectives of the original development intervention have not been achieved. Similarly, border surveillance and communications surveillance systems may help a government improve national security, but may also enable the surveillance of human rights defenders, political activists, immigrants and other groups.

Asking humanitarian actors to protect and respect privacy rights must not be distorted into requiring inflexible and impossibly high standards that would derail development initiatives if put into practice. Privacy is not an absolute right and may be limited, but only where limitation is necessary, proportionate and in accordance with law. The crucial aspect is to actually undertake an analysis of the technology and its privacy implications, and to do so in a thoughtful and considered manner. For example, if an intervention requires collecting personal data from those receiving aid, the first step should be to ask what information is necessary to collect, rather than just applying a standard approach to each programme. In some cases, this may mean additional work. But this work should be considered in light of the contribution that upholding human rights and the rule of law makes to development and to producing sustainable outcomes. And in some cases, respecting privacy can also mean saving lives, as information falling into the wrong hands could spell tragedy.

A new framing

While there is an increasing recognition among development actors that more attention needs to be paid to privacy, it is not enough to merely ensure that a programme or initiative does not actively harm the right to privacy; instead, development actors should aim to promote rights, including the right to privacy, as an integral part of achieving sustainable development outcomes. Development is not just, or even mostly, about accelerating economic growth. The core of development is building capacity and infrastructure, advancing equality, and supporting democratic societies that protect, respect and fulfill human rights.

The benefits of development and humanitarian assistance can be delivered without unnecessary and disproportionate limitations on the right to privacy. The challenge is to improve access to and understanding of technologies, ensure that policymakers and the laws they adopt respond to the challenges and possibilities of technology, and generate greater public debate to ensure that rights and freedoms are negotiated at a societal level.

Technologies can be built to satisfy both development and privacy.

Download the Aiding Surveillance report.


This post was originally published on the Open Knowledge Foundation blog

A core theme that the Open Development track covered at September’s Open Knowledge Conference was Ethics and Risk in Open Development. There were more questions than answers in the discussions, summarized below, and the Open Development working group plans to further examine these issues over the coming year.

Informed consent and opting in or out

Ethics around ‘opt in’ and ‘opt out’ when working with people in communities with fewer resources, lower connectivity, and/or less of an understanding about privacy and data are tricky. Yet project implementers have a responsibility to work to the best of their ability to ensure that participants understand what will happen with their data in general, and what might happen if it is shared openly.

There are some concerns around how these decisions are currently being made and by whom. Can an NGO make the decision to share or open data from/about program participants? Is it OK for an NGO to share ‘beneficiary’ data with the private sector in return for funding to help make a program ‘sustainable’? What liabilities might donors or program implementers face in the future as these issues develop?

Issues related to private vs. public good need further discussion, and there is no one right answer because concepts and definitions of ‘private’ and ‘public’ data change according to context and geography.

Informed participation, informed risk-taking

The ‘do no harm’ principle is applicable in emergency and conflict situations, but is it realistic to apply it to activism? There is concern that organizations implementing programs that rely on newer ICTs and open data are not ensuring that activists have enough information to make an informed choice about their involvement. At the same time, assuming that activists don’t know enough to decide for themselves can come across as paternalistic.

As one participant at OK Con commented, “human rights and accountability work are about changing power relations. Those threatened by power shifts are likely to respond with violence and intimidation. If you are trying to avoid all harm, you will probably not have any impact.” There is also the concept of transformative change: “things get worse before they get better. How do you include that in your prediction of what risks may be involved? There also may be a perception gap in terms of what different people consider harm to be. Whose opinion counts and are we listening? Are the right people involved in the conversations about this?”

A key point is that whoever assumes the risk needs to be involved in assessing that potential risk and deciding what the actions should be, but people also need to be fully informed. With new tools coming into play all the time, can people be truly ‘informed’, and are outsiders who come in with new technologies doing a good enough job of facilitating discussions about possible implications and risks with those who will face the consequences? Are community members and activists themselves included in risk analysis, assumption testing, threat modeling and risk mitigation work? Is there a way to predict the likelihood of harm? For example, can we determine whether releasing ‘x’ data will likely lead to ‘y’ harm happening? How can participants, practitioners and program designers get better at identifying and mitigating risks?

When things get scary…

Even when risk analysis is conducted, it is impossible to predict or foresee every possible way that a program can go wrong during implementation. Then the question becomes what to do when you are in the middle of something that is putting people at risk or leading to extremely negative unintended consequences. Who can you call for help? What do you do when there is no mitigation possible and you need to pull the plug on an effort? Who decides that you’ve reached that point? This is not an issue that exclusively affects programs that use open data, but open data may create new risks with which practitioners, participants and activists have less experience, thus the need to examine it more closely.

Participants felt that there is not enough honest discussion on this aspect. There is a pop culture of ‘admitting failure’ but admitting harm is different because there is a higher sense of liability and distress. “When I’m really scared shitless about what is happening in a project, what do I do?” asked one participant at the OK Con discussion sessions. “When I realize that opening data up has generated a huge potential risk to people who are already vulnerable, where do I go for help?” We tend to share our “cute” failures, not our really dismal ones.

Academia has done some work around research ethics, informed consent, human subject research and the use of Institutional Review Boards (IRBs). What aspects of this can or should be applied to mobile data gathering, crowdsourcing, open data work and the like? What about when citizens are their own source of information and they voluntarily share data without a clear understanding of what happens to the data, or what the possible implications are?

Do we need to think about updating and modernizing the concept of IRBs? A major issue is that many people who are conducting these kinds of data collection and sharing activities using new ICTs are unaware of research ethics and IRBs and don’t consider what they are doing to be ‘research’. How can we broaden this discussion and engage those who may not be aware of the need to integrate informed consent, risk analysis and privacy awareness into their approaches?

The elephant in the room

Despite our good intentions to do better planning and risk management, one big problem is donors, according to some of the OK Con participants. Do donors require enough risk assessment and mitigation planning in their program proposal designs? Do they allow organizations enough time to develop a well-thought-out and participatory Theory of Change along with a rigorous risk assessment together with program participants? Are funding recipients required to report back on risks and how they played out? As one person put it, “talk about failure is currently more like a ‘cult of failure’ and there is no real learning from it. Systematically we have to report up the chain on money and results and all the good things happening, and no one up at the top really wants to know about the bad things. The most interesting learning doesn’t get back to the donors or permeate across practitioners. We never talk about all the work-arounds and backdoor negotiations that make development work happen. This is a serious systemic issue.”

Greater transparency can actually be a deterrent to talking about some of these complexities, because “the last thing donors want is more complexity as it raises difficult questions.”

Reporting upwards into government representatives in Parliament or Congress leads to continued aversion to any failures or ‘bad news’. Though funding recipients are urged to be innovative, they still need to hit numeric targets so that the international aid budget can be defended in government spaces. Thus, the message is mixed: “Make sure you are learning and recognizing failure, but please don’t put anything too serious in the final report.” There is awareness that rigid program planning doesn’t work and that we need to be adaptive, yet we are asked to “put it all into a log frame and make sure the government aid person can defend it to their superiors.”

Where to from here?

It was suggested that monitoring and evaluation (M&E) could be used as a tool for examining some of these issues, but M&E needs to be seen as a learning component, not only an accountability one. M&E needs to feed into the choices people are making along the way and linking it in well during program design may be one way to include a more adaptive and iterative approach. M&E should force practitioners to ask themselves the right questions as they design programs and as they assess them throughout implementation. Theory of Change might help, and an ethics-based approach could be introduced as well to raise these questions about risk and privacy and ensure that they are addressed from the start of an initiative.

Practitioners have also expressed the need for additional resources to help them predict and manage possible risk: case studies, a safe space for sharing concerns during implementation, people who can help when things go pear-shaped, a menu of methodologies, a set of principles or questions to ask during program design, or even an ICT4D Implementation Hotline or a forum for questions and discussion.

These ethical issues around privacy and risk are not exclusive to Open Development. Similar issues were raised last week at the Open Government Partnership Summit sessions on whistle blowing, privacy, and safeguarding civic space, especially in light of the Snowden case. They were also raised at last year’s Technology Salon on Participatory Mapping.

A number of groups are looking more deeply into this area, including the Capture the Ocean Project, The Engine Room, IDRC’s research network, The Open Technology Institute, Privacy International, GSMA, those working on “Big Data,” those in the Internet of Things space, and others.

I’m looking forward to further discussion with the Open Development working group on all of this in the coming months, and will also be putting a little time into mapping out existing initiatives and identifying gaps when it comes to these cross-cutting ethics, power, privacy and risk issues in open development and other ICT-enabled data-heavy initiatives.

Please do share information, projects, research, opinion pieces and more if you have them!


The February 5 Technology Salon in New York City asked “What are the ethics in participatory digital mapping?” Judging by the packed Salon and long waiting list, many of us are struggling with these questions in our work.

Some of the key ethical points raised at the Salon related to the benefits of open data vs privacy and the desire to do no harm. Others were about whether digital maps are an effective tool in participatory community development or if they are mostly an innovation showcase for donors or a backdrop for individual egos to assert their ‘personal coolness’. The absence of research and ethics protocols for some of these new kinds of data gathering and sharing was also an issue of concern for participants.

During the Salon we were only able to scratch the surface, and we hope to get together soon for a more in-depth session (or maybe 2 or 3 sessions – stay tuned!) to further unpack the ethical issues around participatory digital community mapping.

The points raised by discussants and participants included:

1) Showcasing innovation

Is digital mapping really about communities, or are we really just using communities as a backdrop to showcase our own innovation and coolness or that of our donors?

2) Can you do justice to both process and product?

Maps should be less an “in-out tool” and more part of a broader program. External agents should be supporting communities to articulate and to be full partners in saying, doing, and knowing what they want to do with maps. Digital mapping may not be better than hand-drawn maps, if we consider that the process of mapping is just as or more important than the final product. Hand-drawn maps can allow for important discussions to happen while people draw. This seems to happen much less with the digital mapping process, which is more technical, and it happens even less when outside agents are doing the mapping. A hand-drawn map can be imbued with meaning in terms of the size, color or placement of objects or borders. Important meaning may be missed when hand-drawn maps are replaced with digital ones.

Digital maps, however, can be printed and further enhanced with comments and drawings and discussed in the community, as some noted. And digital maps can lend a sense of professionalism to community members and help them make a stronger case to authorities and decision makers. Some participants raised concerns about power relations during mapping processes, and worried that using digital tools could reinforce them.

3) The ethics of wasting people’s time.

Community mapping is difficult. The goal of external agents should be to train local people so that they can be owners of the process and sustain it in the long term. This takes time. Often, however, mapping experts are flown in for a week or two to train community members. They leave people with some knowledge, but not enough to fully manage the mapping process and tools. If people end up only half-trained and without local options to continue training, their time has essentially been wasted. In addition, if young people see the training as a pathway to a highly demanded skill set yet are left partially trained and without access to tools and equipment, they will also feel they have wasted their time.

4) Data extraction

When agencies, academics and mappers come in with their clipboards or their GPS units and conduct the same surveys and studies over and over with the same populations, people’s time is also wasted. Open digital community mapping comes from a viewpoint that an open map and open data are one way to make sure that data taken from or created by communities is made available to those communities for their own use, and can be accessed by others so that the same data is not collected repeatedly. Though there are privacy concerns around opening data, there is a counterbalancing ethical dilemma related to how much time gets wasted by keeping data closed.

5) The (missing) link between data and action

Related to the issue of time wasting is the common issue of a missing link between data collected and/or mapped, action and results. Making a map identifying issues is certainly no guarantee that the government will come and take care of those issues. Maps are a means to an end, but often the end is not clear. What do we really hope the data leads to? What does the community hope for? Mapping can be a flashy technology that brings people to the table, but that is no guarantee that something will happen to resolve the issues the map is aimed at solving.

6) Intermediaries are important

One way to ensure a link between data and action is to identify stakeholders who have the ability to use, understand and re-interpret the data. One case was mentioned where health workers collected data and then asked: “What do we do now? How does this affect the work that we do? How do we present this information to community health workers in a way that it is useful to our work?” It’s important to distill the data and make them understandable to the base population, and to present them in a way that is useful to people working at local institutions. Each audience will need the data visualized or shared in a different, contextually appropriate way if they are going to use them for decision-making. It’s possible to provide the same data in different ways across different platforms, from paper to high tech (a minimal sketch of this idea follows below). The challenge of keeping all the data and the different sharing platforms updated, however, can’t be overlooked.
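To make this concrete, here is a minimal, hypothetical sketch (in Python) of rendering the same records two ways: a plain-language summary that could be printed or read aloud for community discussion, and a CSV export for staff at local institutions. The dataset, field names and figures are invented for illustration.

```python
# Hypothetical sketch: one dataset, two audience-specific renderings.
import csv
import io

# Invented example records: clinic visits logged by community mappers
clinic_visits = [
    {"village": "Village A", "month": "2013-01", "visits": 42},
    {"village": "Village B", "month": "2013-01", "visits": 17},
]

def plain_summary(records):
    """A short, readable summary for community discussion."""
    return "\n".join(
        f"In {r['month']}, {r['village']} recorded {r['visits']} clinic visits."
        for r in records
    )

def csv_export(records):
    """The same records as CSV, for analysts at local institutions."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["village", "month", "visits"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

print(plain_summary(clinic_visits))
print(csv_export(clinic_visits))
```

The design point is simply that there is one canonical dataset and many audience-specific renderings, so updates only need to happen in one place.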

7) What does informed consent actually mean in today’s world?

There is a viewpoint that data must be open and that locking up data is unethical. On the other hand, there are questions about research ethics and protocols when doing mapping projects and sharing or opening data. Are those who do mapping getting informed consent from people to use or open their data? This is the cornerstone of ethics when doing research with human beings. One must be able to explain and be clear about the risks of this data collection, or it is impossible to get truly informed consent. What consent do community mappers need from other community members if they are opening data or information? What about when people are volunteering their information and self-reporting? What does informed consent mean in those cases? And what needs to be done to ensure that consent is truly informed? How can open data and mapping be explained to those who have not used the Internet before? How can we have informed consent if we cannot promise anyone that their data are really secure? Do we have ethics review boards for these new technological ways of gathering data?

8) Not having community data also has ethical implications

It may seem like time wasting, and there may be privacy and protection questions, but there are also ethical implications of not having community data. When tools like satellite remote sensing are used to do slum mapping, for example, the data are dehumanized and can lead to sterile decision-making. Data that come from a community itself can make these maps more human and the resulting decisions more humane. But there is a balance between the human/humanizing side and the need to protect. Standards are needed for bringing in community and/or human data in an anonymized way, because there are ethical implications on both ends.

9) The problem with donors…

Big donors are not asking the tough questions, according to some participants. There is a lack of understanding around the meaning, use and value of the data being collected and the utility of maps. “If the data is crap, you’ll have crap GIS and a crap map. If you are just doing a map to do a map, there’s an issue.” There is great incentive from the donor side to show maps and to demonstrate value, because maps are a great photo op, a great visual. But how to go a level down to make a map really useful? Are the M&E folks raising the bar and asking these hard questions? Often from the funder’s perspective, mapping is seen as something that can be done quickly. “Get the map up and the project is done. Voila! And if you can do it in 3 weeks, even better!”

Some participants felt the need for greater donor awareness of these ethical questions because many of them are directly related to funding issues. As one participant noted, whether you coordinate, whether it’s participatory, whether you communicate and share back the information, whether you can do the right thing with the privacy issue — these all depend on what you can convince a donor to fund. Often it’s faster to reinvent the wheel because doing it the right way – coordinating, learning from past efforts, involving the community — takes more time and money. That’s often the hard constraint on these questions of ethics.

Check this link for some resources on the topic, and add yours to the list.

Many thanks to our lead discussants, Robert Banick from the American Red Cross and Erica Hagen from Ground Truth, and to Population Council for hosting us for this month’s Salon!

The next Technology Salon NYC will be coming up in March. Stay tuned for more information, and if you’d like to receive notifications about future salons, sign up for the mailing list!

Read Full Post »

The November 14, 2012, Technology Salon NYC focused on ways that ICTs can support work with children who migrate. Our lead discussants were: Sarah Engebretsen and Kate Barker from Population Council, and Brian Root and Enrique Piracés from Human Rights Watch.

This post summarizes discussions that surfaced around the Population Council’s upcoming Girls on the Move report, which looks at adolescent girls’ (ages 10-19) internal and regional migration in ‘developing’ countries, including opportunity and risk. (In a second blog post I will cover Human Rights Watch’s points and resulting discussions.)

The Girls on the Move report (to be released in February 2013) will synthesize current evidence, incorporate results of specially commissioned research, illustrate experiences of migrant girls, provide examples of promising policies and programs, and offer concrete action-oriented recommendations.

1) How are migrant girls using ICTs?

While the report’s focus is not technology, the research team notes that there is some evidence showing that adolescent girls are using ICTs for:

  • Extending social networks. In China and Southeast Asia, migrant girls are building and accessing personal networks through mobiles and texting. This is especially pronounced among girls who work long hours in tedious jobs in factories, and who do not have much time with family and friends. Text messaging helps them maintain connections with existing social networks. It also gives them space for flirtation, which may not be something they can do in their former rural context because of cultural norms that look down on flirtatious behavior.
  • Finding new jobs. Both boys and girls use mobiles and text messaging to exchange quick news about job openings. This suggests an opening for program interventions that connect with migrant children through texting and supply information on community resources (for example, where to go in cases of threat or emergency), information that might then propagate across migrant virtual networks.
  • Sending remittances. Based on research with adolescent girls and drawing from examples of adult migrants, it seems likely that a vast majority of migrant girls save money and send it to their families. Evidence on how girl migrants use remittances is limited, but a survey conducted in Kenya found that 90% of adult migrants had sent money home to families in other parts of Kenya via mobile phone in the 30 days before the survey. More research is needed on adolescent girls’ remittance patterns, on their access to and use of mobile phones, and on whether the phones they use are their own or borrowed from the handset owner. Remittances, however, as one participant pointed out, are obviously only sent by mobile in countries with functioning mobile money systems.
  • Keeping in touch with family back home. In Western Kenya, migrant brides who are very isolated placed great importance on mobiles to stay in touch with family and friends back home. Facebook is very popular in some countries for keeping in touch with families and friends back home. In Johannesburg and Somalia, for example, one participant said “Facebook is huge.” Migrating adolescent girls and domestic working girls in Burkina Faso, however, do not have Internet access at all, via mobiles or otherwise.

2) Areas where ICTs could support work on child protection and migration

  • Child Protection Systems. There is a general global move towards developing child protection systems that work for different kinds of vulnerable children. These efforts are important in the transit phase and right upon arrival, as these phases are particularly risky for children who migrate. ICTs can play a role in managing information that is part of these systems. Ways to connect community child protection systems into district and national systems need more investigation.
  • Reporting abuse and getting help. One example of ICTs supporting child protection in India and several other countries is child helplines. ChildLine India had received almost 23 million calls as of March 2012, with 62% of callers between the ages of 11 and 18. The helplines provide vulnerable groups of children and youth with referrals to local services, and in the best cases they are public-private partnerships that link with national and state governments. Of note is that boys call in more often than girls, which raises questions about girls’ access to phones to make a call and obtain support. It also points to the need for differentiated strategies to reach both boys and girls.

3) Technology and exclusion

  • Social exclusion and access. The pronounced social exclusion of many migrant girls, particularly those who are married or working in socially isolated jobs such as child domestic work, makes access a specific challenge. Girls in these situations may not have any access to technology at all, including mobile phones. Girls and women especially tend to have less access than men, and they are often not the owners of devices. There is a research gap here, as no one actually knows how many migrating adolescent girls can access mobiles or borrow a phone for use. It is not clear whether girls have their own phones, or whether they use an employer’s phone, a friend’s phone or a public call box. This is a key factor for working with adolescent girls, understanding risk, and designing programs.
  • Technology should build on – not be seen as a replacement for – social networks. Girls’ access to social capital is a huge underlying topic. There is normally a rupture in social networks when girls move. They become socially isolated, and this puts them at great risk. Domestic girl workers leave home and become more vulnerable to exploitation — they have no friends or family around them, and they may not be able to access communication technologies. For this reason it is critical to understand that technology cannot replace social networks. A social network is needed first; then ICTs can allow girls to remain in touch with those in their network. It is very important to understand and/or build social networks before pushing the idea of technology or incorporating technologies.

4) ICTs and potential risk to child migrants

  • SMS, anonymity and privacy. According to a study one participant was involved in, some children and youth report feeling that they can speak up more freely by SMS, since they can text privately even in close quarters. Others noted that some organizations are incorporating online counseling services for similar reasons. A study on this same topic is ongoing in Nigeria, and in Southeast Asia it has been shown that girls often use text messages to flirt using an alternate identity.
  • Retaliation. Concerns were raised about the possibility of retaliation if a child reports abuse or uses a mobile for flirting and the phone is confiscated. Practices of self-protection and message deleting are not well established in most cases. A participant noted that some of the low-end phones in Tanzania and Kenya periodically delete outgoing messages and keep only 15 messages on the phone at a time (see the sketch below). This can help somewhat, though it is not a feature aimed at protection and privacy; rather, it is a function of low memory space. Safer Mobile is one initiative that looks at risk and privacy; however, like most efforts looking at risk, it is focused on political conflict and human rights situations, not on privacy and protection for child migrants or other types of abuse reporting that children may be involved in.
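The auto-deleting outbox the participant describes behaves like a bounded queue. Below is a minimal Python sketch; the 15-message limit comes from the discussion above, and everything else is illustrative.

```python
# Sketch of a phone outbox that keeps only the 15 most recent messages.
from collections import deque

outbox = deque(maxlen=15)  # oldest entries are discarded automatically

for i in range(20):
    outbox.append(f"message {i}")

print(len(outbox))   # 15: the queue never grows past its limit
print(outbox[0])     # "message 5": messages 0-4 were silently dropped
```

As noted, this is a side effect of low memory rather than a deliberate privacy feature, so it cannot be relied on for protection.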

5) Research gaps and challenges

  • Migration contexts. It was emphasized that migration during an emergency is very different from voluntary or seasonal migration. Work is being done around communication with disaster- and emergency-affected populations via the Communication with Disaster Affected Communities (CDAC) Network, but this theme does not seem to be widely discussed among US-based NGOs and humanitarian organizations.
  • Migrants are not necessarily disadvantaged; however, a bias exists in that researchers tend to look for disadvantage or for those who are disadvantaged. They often ask questions that are biased towards showing that migrants are disadvantaged, but this is not always the case. Sometimes migrating children are the most advantaged. In some contexts migrating requires family support and funds, and those with the fewest resources may not be able to move. In some cases migrant children have a huge, strong family structure. In others, children are escaping early marriage, the death of their parents, or other difficult situations.
  • Integrated information and data crossing. One issue with research on migrants is that most of it looks solely at migrants and does not cross migration data with other information. Many girls migrate with the idea that they will be able to get an education, for example, but there is little information on whether migrating girls have more or less access to education. The literature tends to focus on girls in the worst situations. In addition, although there are four times as many internal migrants as international migrants, the focus tends to be on international migration.

In a second post, I will cover Human Rights Watch’s work on using data visualization to advocate for the rights of immigrants in the US.

Many thanks to our lead discussants from the Population Council and to the Women’s Refugee Commission and the International Rescue Committee for hosting! The next Technology Salon NYC will be coming up in January 2013. Stay tuned for more information and if you’d like to receive notifications about future salons, sign up for the mailing list!

Also, if you have research or examples of how child and youth migrants are using ICTs before, during or after their journey, or information on how organizations are using ICTs to support the process, please let me know.

Related posts and resources:

How can ICTs support and protect children who migrate?

New communication tools and disaster affected communities

Empowering communities with technology tools to protect children

Children on the Move website

Read Full Post »

On August 14, @zehrarizvi and I co-hosted a Twitter chat on ways that ICTs can support and protect “children on the move,” e.g., children and youth who migrate, are displaced, or move around (or are moved). Background information on the issue and the research.

We discovered some new angles on the topic and some resources and studies that are out or will be coming out soon, for example an upcoming Girls Count study on Girls on the Move by Population Council, a UNHCR paper on ICTs and urban refugee protection in Cairo, and Amnesty’s Technology, People and Solutions work.

Thanks to everyone who participated – whether you were actively tweeting or just observing. We hope it was as useful to you all as it was to us! If you think of anything new to add, please tweet it using #CoMandICT or email me at linda.raftree [at] planusa.org.

Read the Storify version here (with a bonus picture of @chrisalbon holding up a ‘burner’ phone). Don’t know what a ‘burner’ phone is? Click and take a look.

Read Full Post »

‘Breast flattening,’ also known as ‘breast ironing’ or ‘breast massage,’ is a practice whereby a young girl’s developing breasts are massaged, pounded, pressed, or patted with an object, usually heated over a wood fire, to make them stop developing, grow more slowly, or disappear completely.

Rebecca Tapscott* spent last August in the area of Bafut, in the Northwest (Anglophone) region of Cameroon, where a 2006 study by the German Society for International Cooperation (GIZ) found an 18% prevalence rate of breast flattening. The practice has been known as breast ‘ironing’; however, Rebecca opts to use the term ‘flattening’ to decrease stigma and encourage open conversation about a practice that remains largely hidden.** Rebecca wanted to understand the role of breast flattening in the broader context of adolescence in Cameroon, including past and present motivations for the practice; its cultural foundations; its relation to other forms of gender-based violence such as female genital mutilation or cutting (FGM/C); how and where it is practiced; and its psychological and physical implications for individual girls.

Rebecca published her findings via the Feinstein International Center in May 2012, in a paper titled: Understanding Breast “Ironing”: A Study of the Methods, Motivations, and Outcomes of Breast Flattening Practices in Cameroon.

She visited our office late last year to share the results of her work with staff at the Plan USA office, explaining that there is currently very little research or documentation of breast flattening. The GIZ study is the only quantitative study on the practice, and there do not seem to be any medical studies on breast flattening. It is a practice believed to affect 1 out of 4 girls in Cameroon, more commonly in some regions than in others. Of note is that research on breast flattening has so far only been conducted in Cameroon, creating the impression that the practice is uniquely Cameroonian when, in reality, it may be a regional phenomenon.

Organizations working to end breast flattening do so with the aim of protecting girls. However, the question of protection is an interesting one depending on who you talk with. Rebecca found that those who practice breast flattening also believe they are protecting girls. “Many mothers believe that unattended, a girl’s breasts will attract advances of men who believe physically developed girls are ‘ripe’ for sex. Breast flattening is seen as a way to keep girls safe from men so that they are able to stay in school and avoid pregnancy.”

According to Rebecca, breast flattening is practiced out of a desire to delay a girl’s physical development and reduce the risk of promiscuous behavior. Proponents of the practice consider that “men will pursue ‘developed’ girls and that girl children are not able to cope with or deter men’s attention. They see that promiscuity can result in early pregnancy, which limits educational, career, and marriage opportunities, shames the family, increases costs to family (newborn, loss of bride price, health complications from early childbirth or unsafe abortion).” In addition, there is the belief that girls are not sufficiently intellectually developed to learn about puberty, and therefore should not yet develop breasts. Another reason given for the practice is the belief that girls who develop before their classmates will be the target of teasing and become social outcasts. There is also, for some, the belief that large breasts are unattractive or not fashionable.

Rebecca cites a poor understanding of human biology as one reason that the practice persists. Some of the beliefs around it include: “Belief that sensitivity and pain during breast development indicates that the growth is too early and must be artificially delayed until the girl is older. Belief that when a girl develops breasts she will stop growing taller. Belief that puberty can be delayed by delaying breast development. Belief that for breasts to grow properly, the first growth should always be repressed (like baby teeth). Belief that if breasts begin growing at a young age, they will grow too large or be misshapen, resulting in a displeasing or disproportionate female form. Belief that breasts and other outward signs of sexuality develop in accordance with a girl’s interest in sex, and therefore a developed girl is soliciting sexual advances or is a ‘bad’ girl. Belief that when a girl develops breasts, she believes she has matured and subsequently, she becomes less obedient. Belief that it will improve breast feeding at a later age.”

“…For mothers there is the perception that we should delay the development of girls as much as possible, believing that physical development shows maturity. Men look at girls and talk amongst themselves and say, “she’s ripe for sex.” They are not looking for marriage prospects…Men are aggressive. In pidgin, they say “she got get done big,” meaning, she’s matured and ready for sex. I can go after her now. Women, on the other hand, know that their daughters are just kids.” ~ Journalist interviewed by Rebecca

Rebecca also heard explanations such as “A girl will stop growing taller because the weight of her breasts will hold her down. One can delay puberty by delaying breast growth. If the breasts hurt during puberty, they are developing too early.” There are also many beliefs related to a girl’s development and her sexual reputation. Some of these beliefs include that a girl with large breasts is a ‘bad girl’. This stems from the belief that breasts start to develop when a man touches them or if a girl is thinking about sex, going to night clubs or watching pornography. Rebecca found that many of these beliefs were held across all segments of society, from the very well educated to those with no formal education.

“The body responds to psychological ideas. If a girl looks for a “friend,” her breasts will grow faster. If she is interested in boys or watches pornography, her body will develop faster.” ~ Interviewee at the Ministry of Social Affairs

“If a girl is interested in sex and thinks about it a lot, she will develop faster. I saw two girls of 12 years, one of whom was very developed physically and the other was not. The one who was developed could speak very frankly about sex, showing that she was knowledgeable from some experience, while the other girl was very naive and shy.” ~ Teacher

Because of potential stigma or harm to girls who talked about undergoing breast flattening, Rebecca only interviewed a few teenaged girls directly during this phase of her research. Instead, she mainly interviewed older women, as well as a few boys and men. She found that most women who had experienced breast flattening didn’t seem to remember the experience as extremely traumatic; however, she commented, “most are remembering from back in the day. Most at first said didn’t really hurt but then after a while into the interview, they’d remember, well, yes it hurt.” Rebecca was surprised at the number of people who said that heated leaves were used, because the media normally reports that a grinding stone or pestle is used for breast flattening.

“When I was 11 years old, my grandmother did a form of breast flattening to me. This was in 1984. I was walking around with my shoulders hunched forward to hide my developing chest, so my grandmother called me to the kitchen. She warmed some fresh leaves on the fire and said something in the dialect, like a pleading to the ancestors or spirits. Then she applied the leaves to my chest, and used them to rub and pat my breasts flat. She would also rub and pat my back, so as to make the chest flat and even on the front and the back. It hurts because the chest is so sensitive then—but they are not pressing too hard.” ~ 38-year-old woman from the community

“To do the practice, I warmed the pestle in the fire, and used it in a circular motion to press the breasts flat. I did it once per day for over a week—maybe 8 or 9 days. I massaged them well, and they went back for seven years.” ~ 53-year-old woman at a maternal health clinic in Bafut, there with a different daughter whose breasts she did not flatten

“Until about a year ago, I believed that when a girl is interested in sex, watches porn, or lets boys touch their breasts, her breasts will grow larger. I think my mother must believe this. My ideas changed when I saw my own friends—I knew they were virgins, but they had large breasts. Also when my own breasts got bigger, and it was not because a man was touching them.” ~ Journalist in Bamenda

There is currently no consensus on whether the practice is effective at reducing the size of a girl’s breasts or if it has long-term effects on breast size. “People told me completely different things gave the same result, or people cited the very same practices as yielding different results,” said Rebecca.

Rebecca’s findings indicate that the practice of breast flattening is not a longstanding ‘traditional’ practice. Many people reported that it became more popular with urbanization. “People [who immigrated to cities] didn’t know their neighbors and they were worried about the safety of their daughters. It seems that an old practice that was used for ‘shaping’ was repurposed and adapted. Breast flattening is a very intimate and personal thing between mother and daughter. It doesn’t happen to all daughters. It’s very difficult to track, and there is no association with ethnic groups, education, socioeconomic levels, religion, etc.,” she explained.

“Most men don’t know about it. One boy said he thought it was good to protect girls.” When she talked holistically with men about what they look for in a woman, Rebecca said, “an interest in chastity and virginity came out very clearly. In Cameroon the average age for girls to lose virginity is 13-17, and it’s the same for boys. According to studies, for most, the first sexual experience is unwilling.”

Most doctors that Rebecca talked to had never heard of the practice. One doctor in Yaoundé said he had seen first- and second-degree burns as a result. Some cite edema as a possible result. Development organizations like GIZ say the practice can cause cysts, cancer, and other difficulties, but also cite that only 8% of women report suffering negative side effects, while 18% report that their breasts “fell” or sagged at an early age.

Many women whom Rebecca interviewed considered the practice a very low concern compared to other problems that impact their development, such as illiteracy, sexual exploitation, poverty and unemployment. “The women that I talked to,” said Rebecca, “often asked me, ‘Why are you asking me about this? It’s such a small thing compared to other things we have to face.’”

Given that there is not much research or consensus around breast flattening in Cameroon and the broader region, Rebecca’s work may be of use to local and international organizations that are working to promote women’s and children’s rights. Rebecca emphasizes that most people who engage in what are commonly referred to as ‘harmful traditional practices’ including female genital cutting (FGC) and breast flattening, actually do so with their child’s best interest in mind, as a means of protecting and promoting the child within the community and following social norms. Therefore, frightening or berating people may not be the best approach to discourage the practice of breast flattening. Community input will be needed to identify the root causes of the practice for it to become obsolete. Better sex education and a reduction in stigma around talking about sexual reproductive health may also help. Men will need to be part of the solution, and so will mothers who currently feel the need to take protection of their daughters into their own hands. Like many similar practices, it’s not likely that breast flattening will end until other systems and environments that set the stage for it also change.

*Rebecca worked with Plan Cameroon for several weeks on the Youth Empowerment through Technology, Arts and Media (YETAM) Project last June and July. She traveled to Bafut following her internship to conduct independent research.

**This distinction is similar to the difference between the terms female genital ‘mutilation’ and female genital ‘cutting.’

Read the full report.

Contact Rebecca for more information: rebecca.tapscott [at] gmail.com

Read Full Post »

Bamboo Shoots training manual

I like to share good training guides when I come across them, so here is a quick summary and a link to Bamboo Shoots. It was originally created by Plan in Cambodia.

Bamboo Shoots is a training manual on child rights, child centered community development and child-led community actions for facilitators working with children and youth groups. You can download it here.

Bamboo Shoots was developed to: Increase children’s understanding of their rights as defined by the United Nations Convention on the Rights of the Child (UNCRC); raise children’s awareness of their rights and build their capacities to claim them; create opportunities for children to recognize, identify and prioritize issues and problems or gaps in relation to child rights violations; and provide opportunities for children to influence agendas and action regarding identified and prioritized child rights violations.

Bamboo Shoots takes complicated concepts and breaks them down into easy language and engaging, interactive sessions. It also offers good resources and background material for facilitators so that they can manage the sessions well.

Part One:

I like this manual because it starts off right in the first chapter with the importance of setting the tone and the context for good child and youth participation. It offers ideas on selecting participants and facilitators, describes what makes a good facilitator, and gives recommendations on the setting and practical considerations for managing a workshop with children, as well as a useful paragraph on when and when not to include other adults in the training.

The guide walks through six principles for making child participation a reality:

  1. Non-discrimination and inclusiveness
  2. Democracy and equality of opportunity
  3. Physical, emotional and psychological safety of participants
  4. Adult responsibility
  5. Voluntarism, informed consent and transparency
  6. Participation as an enjoyable and stimulating experience for children

It shares Plan’s code of ethics on child participation and important steps to follow in working with children, as well as tips on how to establish a good working relationship with children, help children learn and develop their potential, build self-confidence and self-esteem, and encourage children to develop a responsible attitude towards others and a sense of community. There is also a section on how to keep children safe and an explanation of a facilitator’s ‘duty of care’.

The last section of part one lists common facilitation techniques and tools, such as role-play, working in pairs and groups, idea storming, whole-group discussion, questioning, projects, buzz sessions, drawing, photographs, video, word association, recreating information and more, and gives ideas on when each is most useful.

Part Two:

Section 9 on community mapping

The next section contains comprehensive sessions on:

  • the concept of rights
  • the history of human rights, and international treaties on rights
  • children’s rights as human rights
  • duties and responsibilities in relation to child rights
  • making sure children are involved
  • child rights and daily realities and making a child rights map
  • gaps in fulfilling child rights
  • setting priority problems and violations of child rights
  • creating an action agenda and proposed solutions to the gaps identified

Each session comes complete with a pre-training assessment, reading material for facilitators and handouts for participants.

Part Three:

The last section of the manual helps facilitators take children through the steps to child-led community action, including children’s participation in all the program and project cycles: assessment, planning, implementation, monitoring and evaluation.

Needs-based vs. Rights-based

It also explains Plan’s rights-based child-centered community development approach, the foundations of that approach, and the difference between needs-based approaches and rights-based approaches. It goes on to cover planning and supporting child-led community action.

The last section of the guide offers a list of resources and references.

For anyone working with children, or even anyone looking for an excellent comprehensive community training package on rights and community-led action, I really recommend checking out Bamboo Shoots. Whether you are working through media and ICTs or using more traditional means for engaging children, this is a great guide on how to do it well from start to finish. I’ll be referring to it often.

Additional Resources:

Minimum standards for child participation in national and regional consultation events

Protocols and documents to help ensure good quality child participation

United Nations Convention on the Rights of the Child

Insight Share’s rights-based approach to participatory video toolkit

Related posts on Wait… What?

Child participation at events: getting it right

Community based child protection

Child protection, the media and youth media programs

Read Full Post »

Salim Mvurya, Plan Kwale's District Area Manager

Plan’s Kwale District office in Kenya has been very successful in building innovative community-led programming that incorporates new ICTs. I had the opportunity to interview Salim Mvurya, the Area Manager, last week, and was really struck by his insights on how to effectively incorporate ICTs into community-led processes to reach development goals and improve child rights, child protection and governance.

In this video, Salim gives some background on how Plan Kwale has been using ICTs in their programs since 2003 (1:11). He shares ideas about the potential of new ICTs (3:42) and some key lessons learned since 2003 (5:03).

Watch the video to get the advice straight from Salim. Or, if your internet connection is slow, or you’re like me and prefer to skim through an article rather than sit still and watch a video, the transcript is below.

In a second video, Salim gives really astute advice to the tech community (0:15), corporations (1:19), and development organizations (1:57) on how to successfully integrate ICTs to enable good development processes. He also mentions the importance of moving with the times (4:43). Read the transcript here.

Transcript for Part 1:

ICTs and development Part 1: ICT tools for child rights, child protection and social accountability

My name is Salim Mvurya. I’m the Area Manager for Plan in the Kwale District. My core responsibility as Area Manager is to provide leadership to the Kwale team on both program issues and operational issues within the organization. This week we have been here in a workshop focusing mostly on issues of ICT for development, and particularly what we’ve been learning here is mapping. We’ve also learned Ushahidi. We’ve also learned from our colleagues in Kilifi about mGESA (a local application of mGEOS that Plan Kenya, Plan Finland, University of Nairobi and Pajat Mgmt are developing), and basically we have been looking at this workshop as providing opportunities for using ICTs for development, but more particularly, for us in Kwale, for the issues of child protection and youth governance.

How has Plan Kwale been using ICTs for issues of child rights, child protection and child participation?

ICT in Kwale has a bit of a long history, and it’s because of the issues on child rights. Kwale has a number of issues: child marriage, violations of child rights through sexual exploitation, and child poverty. So the efforts to do media started in Kwale in 2003, when we rolled out our first video, done by children at the time, to profile some of the issues of child marriage. But more importantly, in 2005 we began to think hard about how we could bring the voices of children to duty-bearers, and at the time we thought of having a children’s community radio.

Because of lack of experience, we were thinking maybe at the end of that year we could launch the radio station. But then it took longer than we envisioned, because we needed to roll out a participatory process. Around the same time, we had ideas of community-led birth registration, which was being done in one community-based organization. But later we also thought about how ICT could help us move in that direction.

Then we also had this idea of inter-generational dialogue, where children and youth can sit with duty-bearers and discuss critical issues affecting them, so we began using youth and video, children and video, and showing those videos in community meetings where people could then discuss the issues. Around the same time we were partnering with various media houses and rolling out radio programs where people could listen and foster discussions on children.

So it’s been a long journey but I think what we are seeing is that we need now to consolidate the gains, the experiences and efforts so that we can have a more strategic approach to ICT for Development and this workshop basically provides us with an opportunity and a platform to think much more.

What potential do you see for some of the newer ICT tools for your work in Kwale?

I see great potential in some of the tools that we’ve learned here this week, more particularly to get information from the ground at the click of a button. We could use the tools to map out resources in the community, to map zones where there are a lot of child protection issues, areas where we have issues like low birth registration… There is great potential for the tools we’ve learned here to assist us not only in planning projects, but also in issues of social accountability. For example, if you map out the areas where we have Constituency Development Fund projects, you can easily see where projects have been done well, but also where communities will need to discuss much more with duty-bearers to be able to, you know, foster social accountability.

What are your biggest challenges? What mistakes have you made?

One thing that we’ve been learning in the process… well, you know, sometimes we have ideas that we think can work within the next week. With the children’s community radio, for example, we were thinking it could take off in about 2 months. But what we learned is that there are processes involved. Communities have to be prepared well for sustainability. Children have to be trained; there needs to be capacity building. You also have to conform to government procedures and processes.

The same with birth registration. We thought that in 6 months we could send an SMS and get a birth notification [see the sketch after this transcript], but what we have learned is that it takes a process. It takes a while. You have to get government buy-in. You also have to work on the software, with the government having critical input. Because, although it is a pilot, we think that if it works well it will have to be replicated, so it has to conform with the thinking in government. Also, with the issue of youth and media, one thing that has to be very clear is that you have to get youth who are committed, so you start with a bigger group and you end up with those who are passionate.

So I think it’s very critical, when somebody is thinking about ICT for Development, that, one, you look at the context: Is it relevant to that area? What kind of skills are needed? What kind of processes are needed for sustainability? But also the passion: getting people who are passionate to lead the process is a very critical lesson.
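For readers wondering what the SMS birth-notification idea Salim mentions might look like at its simplest, here is a rough, hypothetical Python sketch. The message format, field names and reply text are my own assumptions for illustration; they are not a description of the software Plan Kenya and the government are actually developing.

```python
# Hypothetical sketch of an SMS birth-notification handler.
import uuid

def handle_birth_sms(sender: str, text: str) -> str:
    """Parse 'BIRTH <child name>, <date>, <village>' and return a reply."""
    parts = text.strip().split(" ", 1)
    if len(parts) != 2 or parts[0].upper() != "BIRTH":
        return "Format: BIRTH <child name>, <date>, <village>"
    try:
        name, date, village = [p.strip() for p in parts[1].split(",")]
    except ValueError:
        return "Format: BIRTH <child name>, <date>, <village>"
    ref = uuid.uuid4().hex[:8]  # reference number for follow-up
    record = {"ref": ref, "name": name, "date": date,
              "village": village, "reported_by": sender}
    # In a real system this record would be sent to a government-approved
    # registry; here it is just printed.
    print(record)
    return f"Birth notification received. Reference: {ref}"

print(handle_birth_sms("+254700000000", "BIRTH Amina Juma, 2013-05-01, Kinango"))
```

Even a sketch this small hints at why the real thing takes time: the record format, the reply language, and where the data land all need government input, which is exactly the lesson Salim draws.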

Related posts on Wait… What?

Salim’s ICT4D advice part 2: innovate, but keep it real

Youth mappers: from Kibera to Kinango

A positively brilliant ICT4D workshop in Kwale, Kenya

Is this map better than that map?

Modernizing birth registration with mobile technology

Read Full Post »

Older Posts »