
Archive for the ‘protection’ Category

Debate and thinking around data, ethics, and ICTs have been growing and expanding a lot lately, which makes me very happy!

Coming up on May 22 in NYC, the engine room, Hivos, the Berkman Center for Internet and Society, and Kurante (my newish gig) are organizing the latest in a series of events as part of the Responsible Data Forum.

The event will be hosted at ThoughtWorks and it is in-person only. Space is limited, so if you’d like to join us, let us know soon by filling in this form. 

What’s it all about?

This particular Responsible Data Forum event is an effort to map the ethical, legal, privacy and security challenges surrounding the increased use and sharing of data in development programming. The Forum will aim to explore the ways in which these challenges are experienced in project design and implementation, as well as when project data is shared or published in an effort to strengthen accountability. The event will be a collaborative effort to begin developing concrete tools and strategies to address these challenges, which can be further tested and refined with end users at events in Amsterdam and Budapest.

We will explore the responsible data challenges faced by development practitioners in program design and implementation.

Some of the use cases we’ll consider include:

  • projects collecting data from marginalized populations, aspiring to respect a do no harm principle, but also to identify opportunities for informational empowerment
  • project design staff seeking to understand and manage the lifespan of project data from collection, through maintenance, utilization, and sharing or destruction
  • project staff that are considering data sharing or joint data collection with government agencies or corporate actors
  • project staff who want to better understand how ICT4D will impact communities
  • projects exploring the potential of popular ICT-related mechanisms, such as hackathons, incubation labs or innovation hubs
  • projects wishing to use development data for research purposes, and crafting responsible ways to use personally identifiable data for academic purposes
  • projects working with children under the age of 18, struggling to balance the need for data to improve programming with demands for higher levels of protection for children

By gathering a significant number of development practitioners grappling with these issues, the Forum aims to pose practical and critical questions about the use of data and ICTs in development programming. Through collaborative sessions and group work, the Forum will identify common pressing issues for which there might be practical and feasible solutions. The Forum will focus on prototyping specific tools and strategies to respond to these challenges.

What will be accomplished?

Some outputs from the event may include:

  • Tools and checklists for managing responsible data challenges for specific project modalities, such as SMS surveys, constructing national databases, or social media scraping and engagement
  • Best practices and ethical controls for data sharing agreements with governments, corporate actors, academia or civil society
  • Strategies for responsible program development
  • Guidelines for data-driven projects dealing with communities with limited representation or access to information
  • Heuristics and frameworks for understanding anonymity and re-identification of large development data sets
  • Potential policy interventions to create greater awareness and possibly consider minimum standards
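To make the re-identification point above concrete: even a dataset with names and phone numbers stripped out can expose people if combinations of seemingly harmless fields are rare. One common heuristic is k-anonymity, the size of the smallest group of records sharing the same quasi-identifiers. A rough sketch (the column names and sample records below are invented for illustration, not from any real project):

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the smallest group size (k) when records are grouped by the
    given quasi-identifier fields. A low k means some individuals are easy
    to re-identify, even in a dataset with direct identifiers removed."""
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(groups.values())

# Hypothetical survey records, direct identifiers already removed.
records = [
    {"district": "North", "age_band": "20-29", "sex": "F"},
    {"district": "North", "age_band": "20-29", "sex": "F"},
    {"district": "South", "age_band": "30-39", "sex": "M"},
]

k = k_anonymity(records, ["district", "age_band", "sex"])
print(k)  # the lone Southern record forms a group of 1, so k = 1
```

A real framework would go further (l-diversity, suppression or generalization of rare values), but even this simple check flags datasets where "anonymized" records remain identifiable.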

Hope to see some of you on the 22nd! Sign up here if you’re interested in attending, and read more about the Responsible Data Forum here.

 


Last week’s Technology Salon New York City touched on ethics in technology for democracy initiatives. We heard from lead discussants Malavika Jayaram, Berkman Center for Internet and Society; Ivan Sigal, Global Voices; and Amilcar Priestley, Afrolatin@ Project. Though the topic was catalyzed by the Associated Press’ article on ‘Zunzuneo’ (a.k.a. ‘Cuban Twitter’) and subsequent discussions in the press and elsewhere, we aimed to cover some of the wider ethical issues encountered by people and organizations who implement technology for democracy programs.

Salons are off the record spaces, so no attribution is made in this post, but I’ve summarized the discussion points here:

First up: Zunzuneo

The media misinterpreted much of the Zunzuneo story. Zunzuneo was not a secret mission, according to one Salon participant, as it’s not in the remit of USAID to carry out covert operations. The AP article conflated a number of ideas regarding how USAID works and the contracting mechanisms that were involved in this case, he said. USAID and the Office of Transition Initiatives (OTI) frequently disguise members, organizations, and contractors that work for it on the ground for security reasons. (See USAID’s side of the story here). This may still be an ethical question, but it is not technically “spying.” The project was known within the OTI and development community, but on a ‘need to know’ basis. It was not a ‘fly by night’ operation; it was more a ‘quietly and not very effectively run project.’

There were likely ethical, and possibly legal, breaches in Zunzuneo. It’s not clear whether the data and phone numbers collected from the Cuban public for the project were obtained in a legal or ethical way. Some reports say they were obtained through a mid-level employee (a “Cuban engineer who had gotten the phone list” according to the AP article). (Note: I spoke separately to someone close to the project who told me that user opt-in/opt-out and other standard privacy protocols were in place). It’s also not entirely clear whether, as the AP states, the user information collected was being categorized into segments who were loyal or disloyal to the Cuban government, information which could put users at risk if discovered.

Zunzuneo took place in a broader historical and geo-political context. As one person put it, the project followed Secretary Clinton’s speeches on Internet Freedom. There was a rush to bring technology into the geopolitical space, and ‘the articulation of why technology was important collided with a bureaucratic process in USAID and the State Department (the ‘F process’) that absorbed USAID into the State Department and made development part of the State Department’s broader political agenda.’ This agenda had been in the works for quite some time, and was part of a wider strategy of quietly moving into development spaces and combining development, diplomacy, intelligence and military (defense), the so-called 3 D’s.

Implementers failed to think through good design, ethics and community aspects of the work. In a number of projects of this type, the idea was that if you give people technology, they will somehow create bottom up pressure for political social change. As one person noted, ‘in the Middle East, as a counter example, the tech was there to enable and assist people who had spent 8-10 years building networks. The idea that we can drop tech into a space and an uprising will just happen and it will coincidentally push the US geopolitical agenda is a fantasy.’ Often these kinds of programs start with a strategic communications goal that serves a political end of the US Government. They are designed with the idea that a particular input equals some kind of a specific result down the chain. The problem comes when the people doing the seeding of the ideas and inputs are not familiar with the context they will be operating in. They are injecting inputs into a space that they don’t understand. The bigger ethical question is: Why does this thought process prevail in development? Much of that answer is found in US domestic politics and the ways that initiatives get funded.

Zunzuneo was not a big surprise for Afrolatino organizations. According to one discussant, Afrolatino organizations were not surprised when the Zunzuneo article came out, given the geopolitical history and the ongoing presence of the US in Latin America. Zunzuneo was seen as a 21st Century version of what has been happening for decades. Though it was criticized, it was not seen as particularly detrimental. Furthermore, the Afrolatino community (within the wider Latino community) has had a variety of relationships with the US over time – for example, some Afrolatino groups supported the Contras. Many Afrolatino groups have felt that they were not benefiting overall from the mestizo governments who have held power. In addition, much of Latin America’s younger generation is less tainted by the Cold War mentality, and does not see US involvement in the region as necessarily bad. Programs like Zunzuneo come with a lot of money attached, so often wider concerns about their implications are not in the forefront because organizations need to access funding. Central American and Caribbean countries are only just entering into a phase of deeper analysis of digital citizenship, and views and perceptions on privacy are still being developed.

Perceptions of privacy

There are differences in perception when it comes to privacy and these perceptions are contextual. They vary within and across countries and communities based on age, race, gender, economic levels, comfort with digital devices, political perspective and past history. Some older people, for example, are worried about the privacy violation of having their voice or image recorded, because the voice, image and gaze hold spiritual value and power. These angles of privacy need to be considered as we think through what privacy means in different contexts and adapt our discourse accordingly.

Privacy is hard to explain, as one discussant said: ‘There are not enough dead bodies yet, so it’s hard to get people interested. People get mad when the media gets mad, and until an issue hits the media, it may go unnoticed. It’s very hard to conceptualize the potential harm from lack of privacy. There may be a chilling effect but it’s hard to measure. The digital divide comes in as well, and those with less exposure may have trouble understanding devices and technology. They will then have even greater trouble understanding beyond the device to data doubles, disembodied information and de-anonymization, which are about 7 levels removed from what people can immediately see. Caring a lot about privacy can get you labeled as paranoid or a crazy person in many places.’

Fatalism about privacy can also hamper efforts. In the developing world, many feel that everything is corrupt and inept, and that there is no point in worrying about privacy and security. ‘Nothing ever works anyway, so even if the government wanted to spy on us, they’d screw it up,’ is the feeling. This is often the attitude of human rights workers and others who could be at greatest risk from privacy breaches or data collection, such as that which was reportedly happening within Zunzuneo. Especially among populations and practitioners who have less experience with new technologies and data, this can create large-scale risk.

Intent, action, context and consequences

Good intentions with little attention to privacy vs data collection with a hidden political agenda. Where are the lines when data that are collected for a ‘good cause’ (for example, to improve humanitarian response) might be used for a different purpose that puts vulnerable people at risk? What about data that are collected with less altruistic intentions? What about when the two scenarios overlap? Data might be freely given or collected in an emergency that would be considered a privacy violation in a ‘development’ setting, or the data collection may lead to a privacy violation post-emergency. Often, slapping the ‘obviously good and unarguably positive’ label of ‘Internet freedom’ on something implies that it’s unquestionably positive when it may in fact be part of a political agenda with a misleading label. There is a long history of those with power collecting data that helps them understand and/or control those with less power, as one Salon participant noted, and we need to be cognizant of that when we think about data and privacy.

US Government approaches to political development often take an input/output approach, when, in fact, political development is not the same as health development. ‘In political work, there is no clear and clean epidemiological goal we are trying to reach,’ noted a Salon participant. Political development is often contentious and the targets and approaches are very different than those of health. When a health model and rhetoric is used to work on other development issues, it is misleading. The wholesale adoption of these kinds of disease model approaches leaves people and communities out of the decision making process about their own development. Similarly, the rhetoric of strategic communications and its inclusion into the development agenda came about after the War on Terror, and it is also a poor fit for political development. The rhetoric of ‘opening’ and ‘liberating’ data is similar. These arguments may work well for one kind of issue, but they are not transferable to a political agenda. One Salon participant pointed out the rhetoric of the privatization model also, and explained that a profound yet not often considered implication of the privatization of services is that once a service passes over to the private sector, the Freedom of Information Act (FOIA) does not apply, and citizens and human rights organizations lose FOIA as a tool. Examples included the US prison system and the Blackwater case of several years ago.

It can be confusing for implementers to know what to do, what tools to use, what funding to accept and when it is OK to bring in an outside agenda. Salon participants provided a number of examples where they had to make choices and felt ethics could have been compromised. Is it OK to sign people up on Facebook or Gmail during an ICT and education project, given these companies’ marketing and privacy policies? What about working on aid transparency initiatives in places where human rights work or crime reporting can get people killed or individual philanthropists/donors might be kidnapped or extorted? What about a hackathon where the data and solutions are later given to a government’s civilian-military affairs office? What about telling LGBT youth about a social media site that encourages LGBT youth to connect openly with one another (in light of recent harsh legal penalties against homosexuality)? What about employing a user-centered design approach for a project that will eventually be overlaid on top of a larger platform, system or service that does not pass the privacy litmus test? Is it better to contribute to improving healthcare while knowing that your software system might compromise privacy and autonomy because it sits on top of a biometric system, for example? Participants at the Salon face these ethical dilemmas every day, and as one person noted, ‘I wonder if I am just window dressing something that will look and feel holistic and human-centered, but that will be used to justify decisions down the road that are politically negative or go against my values.’ Participants said they normally rely on their own moral compass, but clearly many Salon participants are wrestling with the potential ethical implications of their actions.

What can we do? Recommendations from Salon participants

Work closely with and listen to local partners, who should be driving the process and decisions. There may be a role for an outside perspective, but the outside perspective should not trump the local one. Inculcate and support local communities to build their own tools, narratives, and projects. Let people set their own agendas. Find ways to facilitate long-term development processes around communities rather than being subject to agendas from the outside.

Consider this to be ICT for Discrimination and think in every instance and every design decision about how to dial down discrimination. Data lead to sorting, and data get lumped into clusters. Find ways during the design process to reduce the discrimination that will come from that sorting and clustering process. The ‘Do no harm’ approach is key. Practitioners and designers should also be wary of the automation of development and the potential for automated decisions to be discriminatory.

Call out hypocrisy. Those of us who sit at Salons or attend global meetings hold tremendous privilege and power as compared to most of the rest of the world. ‘It’s not landless farmers or disenfranchised black youth in Brazil who get to attend global meetings,’ said one Salon attendee. ‘It’s people like us. We need to be cognizant of the advantage we have as holders of power.’ Here in the US, the participant added, we need to be more aware of what private sector US technology companies are doing to take advantage of and maintain their stronghold in the global market and how the US government is working to allow US corporations to benefit disproportionately from the current Internet governance structure.

Use a rights-based approach to data and privacy to help to frame these issues and situations. Disclosure and consent are sometimes considered extraneous, especially in emergency situations. People think ‘this might be the only time I can get into this disaster or conflict zone, so I’m going to Hoover up as much data as possible without worrying about privacy.’ On the other hand, sometimes organizations are paternalistic and make choices for people about their own privacy. Consent and disclosure are not new issues; they are merely manifested in new ways as new technology changes the game and we cannot guarantee anonymity or privacy any more for research subjects. There is also a difference between information a person actively volunteers and information that is passively collected and used without a person’s knowledge. Framing privacy in a human rights context can help place importance on both processes and outcomes that support people’s rights to control their own data and that increase empowerment.

Create a minimum standard for privacy. Though we may not be able to determine a ceiling for privacy, one Salon participant said we should at least consider a floor or a minimum standard. Actors on the ground will always feel that privacy standards are a luxury because they have little know-how and little funding, so creating and working within an ethical standard should be a mandate from donors. The standard could be established as an M&E criterion.

Establish an ethics checklist to decide on funding sources and create policies and processes that help organizations to better understand how a donor or sub-donor would access and/or use data collected as part of a project or program they are funding. This is not always an easy solution, however, especially for cash-strapped local organizations. In India, for example, organizations are legally restricted from receiving certain types of funding based on government concerns that external agencies are trying to bring in Western democracy and Western values. Local organizations have a hard time getting funding for anti-censorship or free speech efforts. As one person at the Salon said, ‘agencies working on the ground are in a bind because they can’t take money from Google because it’s tainted, they can’t take money from the State Department because it’s imperialism and they can’t take money from local donors because there are none.’

Use encryption and other technology solutions. Given the low levels of understanding and awareness of these tools, more needs to be done so that more organizations learn how to use them, and they need to be made simpler, more accessible and user-friendly. ‘Crypto Parties’ can help get organizations familiar with encryption and privacy, but better outreach is needed so that organizations understand the relevance of encryption and feel welcome in tech-heavy environments.

Thanks to participants and lead discussants for the great discussions and to ThoughtWorks for hosting us at their offices!

 If you’d like to attend future Salons, sign up here!


This is a cross post from Heather Leson, Community Engagement Director at the Open Knowledge Foundation. The original post appeared here on the School of Data site.

by Heather Leson

What is the currency of change? What can coders (consumers) do with IATI data? How can suppliers deliver the data sets? Last week I had the honour of participating in the Open Data for Development Codeathon and the International Aid Transparency Initiative Technical Advisory Group meetings. IATI’s goal is to make information about aid spending easier to access, use, and understand. It was great that these events were back-to-back to push a big picture view.

My big takeaways included similar themes that I have learned on my open source journey:

You can talk about open data [insert tech or OS project] all you want, but if you don’t have an interactive community (including mentorship programmes), an education strategy, engagement/feedback loops, a translation/localization plan, and a process for people to learn how to contribute, then you build a double-edged barrier: a barrier to entry and a barrier to impact and contributor outputs.

Currency

About the Open Data for Development Codeathon

At the Codeathon close, Mark Surman, Executive Director of the Mozilla Foundation, gave us a call to action to make the web. Well, in order to create a world of data makers, I think we should run aid and development processes through this mindset. What is the currency of change? I hear many people talking about theory of change and impact, but I’d like to add ‘currency’. This is not only about money, this is about using the best brainpower and best energy sources to solve real world problems in smart ways. I think if we heed Mark’s call to action with a “yes, and”, then we can rethink how we approach complex change. Every single industry is suffering from the same issue: how to deal with the influx of supply and demand in information. We need to change how we approach the problem. Combined events like these give a window into tackling problems in a new format. It is not about the next greatest app, but more about asking: how can we learn from the Webmakers and build with each other in our respective fields and networks?

Ease of Delivery

The IATI community / network is very passionate about moving the ball forward on releasing data. During the sessions, it was clear that the attendees see some gaps and are already working to fill them. The new IATI website is set up to grow with a Community component. The feedback from each of the sessions was distilled by the IATI – TAG and Civil Society Guidance groups to share with the IATI Secretariat.

In the Open Data in Development, Impact of Open Data in Developing Countries, and CSO Guidance sessions, we discussed some key items about sharing, learning, and using IATI data. Farai Matsika, with International HIV/Aids Alliance, was particularly poignant reminding us of IATI’s CSO purpose – we need to share data with those we serve.


One of the biggest themes was data ethics. As we rush to ask NGOs and CSOs to release data, what are some of the data pitfalls? Anahi Ayala Iacucci of Internews and Linda Raftree of Plan International USA both reminded participants that data needs to be anonymized to protect those at risk. Ms. Iacucci asked that we consider the complex nature of sharing both sides of the open data story – successes and failures. As well, she advised: don’t create trust, but think about who people are trusting. Turning this model around is key to rethinking assumptions. I would add to her point: trust and sharing are currency and will add to the success measures of IATI. If people don’t trust the IATI data, they won’t share and use it.
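On the anonymization point, here is a minimal sketch of keyed pseudonymization, one common first step when preparing records for release. The field names and key below are invented for illustration, and swapping identifiers for tokens is not full anonymization on its own: rare combinations of the remaining fields can still re-identify people.

```python
import hashlib
import hmac

# Illustrative only: a real key must be generated randomly and stored
# separately from the published dataset.
SECRET_KEY = b"rotate-and-store-this-separately"

def pseudonymize(value, key=SECRET_KEY):
    """Replace an identifier (a name, a phone number) with a stable keyed
    hash. The same input always yields the same token, so records can still
    be linked across datasets, but the original value cannot be recovered
    without the key."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()[:12]

record = {"name": "A. Example", "phone": "+1-555-0100", "district": "North"}
safe = {
    **record,
    "name": pseudonymize(record["name"]),
    "phone": pseudonymize(record["phone"]),
}
```

Using an HMAC rather than a plain hash matters here: phone numbers have so little entropy that an unkeyed hash can be reversed by brute force.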

Anne Crowe of Privacy International frequently asked attendees to consider the ramifications of opening data. It is clear that the IATI TAG does not curate the data that NGOS and CSOs share. Thus it falls on each of these organizations to learn how to be data makers in order to contribute data to IATI. Perhaps organizations need a lead educator and curator to ensure the future success of the IATI process, including quality data.

I think that School of Data and the Partnership for Open Data have a huge part to play with IATI. My colleague Zara Rahman is collecting user feedback for the Open Development Toolkit, and Katelyn Rogers is leading the Open Development mailing list. We collectively want to help people become data makers and consumers to effectively achieve their development goals using open data. This also means tackling the ongoing questions about data quality and data ethics.


Here are some additional resources shared during the IATI meetings.



Migration has been a part of the human experience since the dawn of time, and populations have always moved in search of resources and better conditions. Today, unaccompanied children and youth are an integral part of national and global migration patterns, often leaving their place of origin due to violence, conflict, abuse, or other rights violations, or simply to seek better opportunities for themselves.

It is estimated that 33 million (or some 16 percent) of the total migrant population today is younger than age 20. Child and adolescent migrants make up a significant proportion of the total population of migrants in Africa (28 percent), Asia (21 percent), Oceania (11 percent), Europe (11 percent), and the Americas (10 percent).

The issue of migration is central to the current political debate as well as to the development discussion, especially in conversations about the “post 2015” agenda. Though many organizations are working to improve children’s well-being in their home communities, prevention work with children and youth is not likely to end migration. Civil society organizations, together with children and youth, government, community members, and other stakeholders can help make migration safer and more productive for those young people who do end up on the move.

As the debate around migration rages, access to and use of ICTs is expanding exponentially around the globe. For this reason Plan International USA and the Oak Foundation felt it was an opportune time to take stock of the ways that ICTs are being used in the child and youth migration process.

Our new report, “Modern Mobility: the role of ICTs in child and youth migration” takes a look at:

  • how children and youth are using ICTs to prepare for migration; to guide and facilitate their journey; to keep in touch with families; to connect with opportunities for support and work; and to cope with integration, forced repatriation or continued movement; and
  • how civil society organizations are using ICTs to facilitate and manage their work; to support children and youth on the move; and to communicate and advocate for the rights of child and youth migrants.

In the Modern Mobility paper, we identify and provide examples of three core ways that child and youth migrants are using new ICTs during the different phases of the migration process:

  1. for communicating and connecting with families and friends
  2. for accessing information
  3. for accessing services

We then outline seven areas where we found CSOs are using ICTs in their work with child and youth migrants, and we offer some examples:

Ways that CSOs are using ICTs in their work with child and youth migrants.


Though we were able to identify some major trends in how children and youth themselves use ICTs and how organizations are experimenting with ICTs in programming, we found little information on the impact that ICTs and ICT-enabled programs and services have on migrating children and youth, whether positive or negative. Most CSO practitioners that we talked with said that they had very little awareness of how other organizations or initiatives similar to their own were using ICTs. Most also said they did not know where to find orientation or guidance on good practice in the use of ICTs in child-centered programming, ICTs in protection work (aside from protecting children from online risks), or use of ICTs in work with children and young people at various stages of migration. Most CSO practitioners we spoke with were interested in learning more, sharing experiences, and improving their capacities to use ICTs in their work.

Based on Plan Finland’s “ICT-Enabled Development Guide” (authored by Hannah Beardon), the Modern Mobility report provides CSOs with a checklist to support thinking around the strategic use of ICTs in general.

ICT-enabled development checklist developed by Hannah Beardon for Plan International.


We also offer a list of key considerations for practitioners who wish to incorporate new technologies into their work, including core questions to ask about access, age, capacity, conflict, connectivity, cost, disability, economic status, electricity, existing information ecosystems, gender, information literacy, language, literacy, power, protection, privacy, sustainability, and user-involvement.

Our recommendation for taking this area forward is to develop greater awareness and capacity among CSOs regarding the potential uses and risks of ICTs in work with children and youth on the move by:

  1. Establishing an active community of practice on ICTs and children and youth on the move.
  2. Mapping and sharing existing projects and programs.
  3. Creating a guide or toolbox on good practice for ICTs in work with children and youth on the move.
  4. Providing further guidance on how ICTs can help “normal” programs to reach out to and include children and youth on the move.
  5. Further documenting and developing an evidence base.
  6. Sharing and distributing this report for discussion and action.

Download the Modern Mobility report here.

We’d love comments and feedback, and information about examples or documentation/evidence that we did not come across while writing the report!


At the November 8th Technology Salon in New York City, we looked at the role of ICTs in communication for development (C4D) initiatives with marginalized adolescent girls. Lead discussants Kerida McDonald and Katarzyna Pawelczyk discussed recent UNICEF reports related to the topic, and John Zoltner spoke about FHI360’s C4D work in practice.

To begin, it was pointed out that C4D is not donor communications or marketing. It is the use of communication approaches and methodologies to achieve influence at various levels — e.g., family, institutional and policy — to change behavior and social norms. C4D is one approach that is being used to address the root causes of gender inequality and exclusion.

As the UNICEF report on ICTs and C4D* notes, girls may face a number of situations that contribute to and/or are caused by their marginalization: early pregnancy, female genital cutting, early marriage, high rates of HIV/AIDS, low levels of education, lack of control over resources. ICTs alone cannot resolve these, because there is a deep and broad set of root causes. However, ICTs can be integrated systematically into the set of C4D tools and approaches that contribute to positive change.

Issues like bandwidth, censorship and electricity need to be considered when integrating ICTs into C4D work, and approaches that fit the context need to be developed. Practitioners should use tools that are in the hands of girls and their communities now, yet be aware of advances in access and new technologies, as these change rapidly.

Key points:

Interactivity is more empowering than one-way messaging: Many of the ICT solutions being promoted today focus on sending messages out via mobile phones. However, C4D approaches aim for interactive, multi-channel, multi-directional communication, which has proven more empowering.

Content: Traditional media normally goes through a rigorous editorial process and it is possible to infuse it with a gender balance. Social media does not have the same type of filters, and it can easily be used to reinforce stereotypes about girls. This is something to watch and be aware of.

Purpose: It’s common with ICT-related approaches to start with the technology rather than starting with the goals. As one Salon participant asked “What are the results we want to see for ourselves? What are the results that girls want to see? What are the root causes of discrimination and how are we trying to address them? What does success look like for girls? For organizations? Is there a role for ICTs in helping achieve success? If so, what is it?” These questions need to be the starting point, rather than the technology.

Participation: One Salon participant mentioned a 2-year project that is working together with girls to define their needs and their vision of success. The process is one of co-design, aimed at understanding what girls want. Many girls expressed a feeling of isolation and a desire for connection, and so the project is looking at how ICTs can help them connect. As the process developed, the diversity of needs became very clear, and plans have changed dramatically based on input from a range of girls from different contexts. Implementers need to be prepared to change, adapt and respond to what girls say they want and to local realities.

****

A second study commissioned by UNICEF explores how young people use social media. The researchers encountered some challenges in terms of a strong gender approach for the study. Though a gender lens was used for analysis, there is little available data disaggregated by sex. The study does not focus on the most marginalized, because it looks at the use of social media, which normally requires a data connection or Internet access, which the most marginalized youth usually do not have.

The authors of the report found that youth most commonly used the Internet and social media for socializing and communicating with friends. Youth connected less often for schoolwork. One reason for this may be that in the countries/contexts where the research took place, there is no real integration of ICTs into the school system. It was emphasized that the  findings in the report are not comparable or nationally representative, and blanket statements such as “this means x for the whole developing world” should be avoided.

Key points:

Self-reporting biases. Boys tend to have higher levels of confidence and self-report greater ICT proficiency than girls do. This may skew results and make it seem that boys have higher skill levels.

Do girls really have less access? We often hear that girls have less access than boys. The evidence gathered for this particular report suggests the answer is "yes and no." In some places, when researchers asked "Do you have access to a mobile?", there was not a huge difference between urban and rural or between boys and girls. When they dug deeper, however, it became more complex. In the case of Zambia, access and ownership were similar for boys and girls, but fewer girls than boys were connecting to the Internet at all. Understanding connectivity and use was quite complicated.

What are girls vs. boys doing online? This is an important factor when thinking about what solutions are applicable to which situation(s). Differences came up here in the study. In Argentina, girls were doing certain activities more frequently, such as chatting and looking for information, but they were not gaming. In Zambia, girls were doing some things less often than boys; for example, fewer girls than boys were looking for health information, although the number was still significant. A notable finding was that both girls and boys were accessing general health information more often than they were accessing sensitive information, such as sexual health or mental health.

What are the risks in the online world? A qualitative portion of the study in Kenya used focus groups with girls and boys, and asked about their uses of social media and experiences of risk. Many out-of-school girls aged 15-17 reported that they used social media as a way to meet a potential partner to help them out of their financial situation. They reported riskier behavior, contact with older men, and relationships more often than girls who were in school. Girls in general were more likely than boys to report unpleasant online encounters, for example, requests for self-exposure photos.

Hiding social media use. Most of the young people that researchers spoke with in Kenya were hiding social media use from their parents, who disapproved of it. This is an important point to note in C4D efforts that plan on using social media, and program designers will want to take parental attitudes about different media and communication channels into consideration as they design C4D programs.

****

When implementing programs, it is noteworthy how boys and girls tend to use ICT and media tools. Gender issues often manifest themselves right away. “The boys grab the cameras, the boys sit down first at the computers.” If practitioners don’t create special rules and a safe space for girls to participate, girls may be marginalized. In practical ICT and media work, it’s common for boys and girls to take on certain roles. “Some girls like to go on camera, but more often they tend to facilitate what is being done rather than star in it.” The gender gap in ICT access and use, where it exists, is a reflection of the power gaps of society in general.

In the most rural areas, even when people have access, they usually don't have the resources and skills to use ICTs. Very simple challenges can affect girls' ability to participate in projects; for example, oftentimes a project will hold training at times when it's difficult for girls to attend. Unless someone systematically applies a gender lens to a program, organizations often don't notice the challenges girls may face in participating. It's not enough to do gender training or measure gender once a year; gendered approaches need to be built into program design.

Long-term interventions are needed if the goal is to emancipate girls, help them learn better, graduate, postpone pregnancy, and get a job. This cannot be done in a year with a simple project that has only one focus, because girls are dealing with education, healthcare, and a whole series of very entrenched social issues. What's needed is to follow a cohort of girls and to provide information and support across all these sectors over the long term.

Key points:

Engaging boys and men: Negative reactions from men are a concern if and when girls and women start to feel more empowered or to access resources. For example, some mobile money and cash transfer programs direct funds to girls and women, and some studies have found that violence against women increases when women start to have more money and more freedom. Another study, however, of a small-scale effort that provides unconditional cash transfers to girls ages 18-19 in rural Kenya, is demonstrating just the opposite: girls have been able to say where money is spent and the gender dynamics have improved. This raises the question of whether program methodologies need to be oriented towards engaging boys and men and involving them in changing gender dynamics, and whether engaging boys and men can help avoid an increase in violence. Working with boys to become “girl champions” was cited as a way to help to bring boys into the process as advocates and role models.

Girls as producers, not just consumers. ICTs are not only tools for sending content to girls. Some programs are working to help girls produce content and create digital stories in their own languages. Sometimes these stories are used to advocate to decision makers for change in favor of girls and their agendas. Digital stories are being used as part of research processes and to support monitoring, evaluation and accountability work through ‘real-time’ data.

ICTs and social accountability. Digital tools are helping young people address accountability issues and inform local and national development processes. In some cases, youth are able to use simple, narrow bandwidth tools to keep up to date on actions of government officials or to respond to surveys to voice their priorities. Online tools can also lead to offline, face-to-face engagement. One issue, however, is that in some countries, youth are able to establish communication with national government ministers (because there is national-level capacity and infrastructure) but at local level there is very little chance or capability for engagement with elected officials, who are unprepared to respond and engage with youth or via social media. Youth therefore tend to bypass local government and communicate with national government. There is a need for capacity building at local level and decentralized policies and practices so that response capacity is strengthened.

Do ICTs marginalize girls? Some Salon participants worried that as conversations and information increasingly move to a digital environment, ICTs are magnifying the information and communication divide and further marginalizing some girls. Others felt that the fact that we are able to reach the majority of the world’s population now is very significant, and the inability to reach absolutely everyone doesn’t mean we should stop using ICTs. For this very reason – because sharing of information is increasingly digital – we should continue working to get more girls online and strengthen their confidence and abilities to use ICTs.

Many thanks to UNICEF for hosting the Salon!

(Salons operate under Chatham House Rule, thus no attribution has been given in the above summary. Sign up here if you’d like to attend Salons in the future!)

*Disclosure: I co-authored this report with Keshet Bachan.

Read Full Post »

This is a guest post from Anna Crowe, Research Officer on the Privacy in the Developing World Project, and Carly Nyst, Head of International Advocacy at Privacy International, a London-based NGO working on issues related to technology and human rights, with a focus on privacy and data protection. Privacy International's new report, Aiding Surveillance, which covers this topic in greater depth, was released this week.

by Anna Crowe and Carly Nyst

New technologies hold great potential for the developing world, and countless development scholars and practitioners have sung the praises of technology in accelerating development, reducing poverty, spurring innovation and improving accountability and transparency.

Worryingly, however, privacy is presented as a luxury that creates barriers to development, rather than a key aspect to sustainable development. This perspective needs to change.

Privacy is not a luxury, but a fundamental human right

New technologies are being incorporated into development initiatives and programmes relating to everything from education to health and elections, and in humanitarian initiatives, including crisis response, food delivery and refugee management. But many of the same technologies being deployed in the developing world with lofty claims and high price tags have been extremely controversial in the developed world. Expansive registration systems, identity schemes and databases that collect biometric information including fingerprints, facial scans, iris information and even DNA, have been proposed, resisted, and sometimes rejected in various countries.

The deployment of surveillance technologies by development actors, foreign aid donors and humanitarian organisations, however, is often conducted in the complete absence of the type of public debate or deliberation that has occurred in developed countries. Development actors rarely consider target populations' opinions when approving aid programmes. Important strategy documents such as the UN Office for the Coordination of Humanitarian Affairs' Humanitarianism in a Networked Age and the UN High-Level Panel on the Post-2015 Development Agenda's A New Global Partnership: Eradicate Poverty and Transform Economies through Sustainable Development give little space to the possible impact that adopting new technologies or data analysis techniques could have on individuals' privacy.

Some of this trend can be attributed to development actors’ systematic failure to recognise the risks to privacy that development initiatives present. However, it also reflects an often unspoken view that the right to privacy must necessarily be sacrificed at the altar of development – that privacy and development are conflicting, mutually exclusive goals.

The assumptions underpinning this view are as follows:

  • that privacy is not important to people in developing countries;
  • that the privacy implications of new technologies are not significant enough to warrant special attention;
  • and that respecting privacy comes at a high cost, endangering the success of development initiatives and creating unnecessary work for development actors.

These assumptions are deeply flawed. While it should go without saying, privacy is a universal right, enshrined in numerous international human rights treaties, and matters to all individuals, including those living in the developing world. The vast majority of developing countries have explicit constitutional requirements to ensure that their policies and practices do not unnecessarily interfere with privacy. The right to privacy guarantees individuals a personal sphere, free from state interference, and the ability to determine who has information about them and how it is used. Privacy is also an “essential requirement for the realization of the right to freedom of expression”. It is not an “optional” right that only those living in the developed world deserve to see protected. To presume otherwise ignores the humanity of individuals living in various parts of the world.

Technologies undoubtedly have the potential to dramatically improve the provision of development and humanitarian aid and to empower populations. However, the privacy implications of many new technologies are significant and are not well understood by many development actors. The expectations that are placed on technologies to solve problems need to be significantly circumscribed, and the potential negative implications of technologies must be assessed before their deployment. Biometric identification systems, for example, may assist in aid disbursement, but if they also wrongly exclude whole categories of people, then the objectives of the original development intervention have not been achieved. Similarly, border surveillance and communications surveillance systems may help a government improve national security, but may also enable the surveillance of human rights defenders, political activists, immigrants and other groups.

Asking humanitarian actors to protect and respect privacy rights must not be distorted as requiring inflexible and impossibly high standards that would derail development initiatives if put into practice. Privacy is not an absolute right and may be limited, but only where limitation is necessary, proportionate and in accordance with law. The crucial aspect is to actually undertake an analysis of the technology and its privacy implications and to do so in a thoughtful and considered manner. For example, if an intervention requires collecting personal data from those receiving aid, the first step should be to ask what information is necessary to collect, rather than just applying a standard approach to each programme. In some cases, this may mean additional work. But this work should be considered in light of the contribution that upholding human rights and the rule of law makes to development and to producing sustainable outcomes. And in some cases, respecting privacy can also mean saving lives, as information falling into the wrong hands could spell tragedy.

A new framing

While there is an increasing recognition among development actors that more attention needs to be paid to privacy, it is not enough to merely ensure that a programme or initiative does not actively harm the right to privacy; instead, development actors should aim to promote rights, including the right to privacy, as an integral part of achieving sustainable development outcomes. Development is not just, or even mostly, about accelerating economic growth. The core of development is building capacity and infrastructure, advancing equality, and supporting democratic societies that protect, respect and fulfill human rights.

The benefits of development and humanitarian assistance can be delivered without unnecessary and disproportionate limitations on the right to privacy. The challenge is to improve access to and understanding of technologies, ensure that policymakers and the laws they adopt respond to the challenges and possibilities of technology, and generate greater public debate to ensure that rights and freedoms are negotiated at a societal level.

Technologies can be built to satisfy both development and privacy.

Download the Aiding Surveillance report.

Read Full Post »

This post was originally published on the Open Knowledge Foundation blog

A core theme that the Open Development track covered at September’s Open Knowledge Conference was Ethics and Risk in Open Development. There were more questions than answers in the discussions, summarized below, and the Open Development working group plans to further examine these issues over the coming year.

Informed consent and opting in or out

Ethics around ‘opt in’ and ‘opt out’ when working with people in communities with fewer resources, lower connectivity, and/or less of an understanding about privacy and data are tricky. Yet project implementers have a responsibility to work to the best of their ability to ensure that participants understand what will happen with their data in general, and what might happen if it is shared openly.

There are some concerns around how these decisions are currently being made and by whom. Can an NGO make the decision to share or open data from/about program participants? Is it OK for an NGO to share ‘beneficiary’ data with the private sector in return for funding to help make a program ‘sustainable’? What liabilities might donors or program implementers face in the future as these issues develop?

Issues related to private vs. public good need further discussion, and there is no one right answer because concepts and definitions of ‘private’ and ‘public’ data change according to context and geography.

Informed participation, informed risk-taking

The ‘do no harm’ principle is applicable in emergency and conflict situations, but is it realistic to apply it to activism? There is concern that organizations implementing programs that rely on newer ICTs and open data are not ensuring that activists have enough information to make an informed choice about their involvement. At the same time, assuming that activists don’t know enough to decide for themselves can come across as paternalistic.

As one participant at OK Con commented, “human rights and accountability work are about changing power relations. Those threatened by power shifts are likely to respond with violence and intimidation. If you are trying to avoid all harm, you will probably not have any impact.” There is also the concept of transformative change: “things get worse before they get better. How do you include that in your prediction of what risks may be involved? There also may be a perception gap in terms of what different people consider harm to be. Whose opinion counts and are we listening? Are the right people involved in the conversations about this?”

A key point is that whoever assumes the risk needs to be involved in assessing that potential risk and deciding what the actions should be — but people also need to be fully informed. With new tools coming into play all the time, can people be truly ‘informed’ and are outsiders who come in with new technologies doing a good enough job of facilitating discussions about possible implications and risk with those who will face the consequences? Are community members and activists themselves included in risk analysis, assumption testing, threat modeling and risk mitigation work? Is there a way to predict the likelihood of harm? For example, can we determine whether releasing ‘x’ data will likely lead to ‘y’ harm happening? How can participants, practitioners and program designers get better at identifying and mitigating risks?

When things get scary…

Even when risk analysis is conducted, it is impossible to predict or foresee every possible way that a program can go wrong during implementation. Then the question becomes what to do when you are in the middle of something that is putting people at risk or leading to extremely negative unintended consequences. Who can you call for help? What do you do when there is no mitigation possible and you need to pull the plug on an effort? Who decides that you’ve reached that point? This is not an issue that exclusively affects programs that use open data, but open data may create new risks with which practitioners, participants and activists have less experience, thus the need to examine it more closely.

Participants felt that there is not enough honest discussion on this aspect. There is a pop culture of ‘admitting failure’ but admitting harm is different because there is a higher sense of liability and distress. “When I’m really scared shitless about what is happening in a project, what do I do?” asked one participant at the OK Con discussion sessions. “When I realize that opening data up has generated a huge potential risk to people who are already vulnerable, where do I go for help?” We tend to share our “cute” failures, not our really dismal ones.

Academia has done some work around research ethics, informed consent, human subject research and the use of Institutional Review Boards (IRBs). What aspects of this can or should be applied to mobile data gathering, crowdsourcing, open data work and the like? What about when citizens are their own source of information and they voluntarily share data without a clear understanding of what happens to the data, or what the possible implications are?

Do we need to think about updating and modernizing the concept of IRBs? A major issue is that many people who are conducting these kinds of data collection and sharing activities using new ICTs are unaware of research ethics and IRBs and don’t consider what they are doing to be ‘research’. How can we broaden this discussion and engage those who may not be aware of the need to integrate informed consent, risk analysis and privacy awareness into their approaches?

The elephant in the room

Despite our good intentions to do better planning and risk management, one big problem is donors, according to some of the OK Con participants. Do donors require enough risk assessment and mitigation planning in their program proposal designs? Do they allow organizations enough time to develop a well-thought-out and participatory Theory of Change along with a rigorous risk assessment together with program participants? Are funding recipients required to report back on risks and how they played out? As one person put it, “talk about failure is currently more like a ‘cult of failure’ and there is no real learning from it. Systematically we have to report up the chain on money and results and all the good things happening, and no one up at the top really wants to know about the bad things. The most interesting learning doesn’t get back to the donors or permeate across practitioners. We never talk about all the work-arounds and backdoor negotiations that make development work happen. This is a serious systemic issue.”

Greater transparency can actually be a deterrent to talking about some of these complexities, because “the last thing donors want is more complexity as it raises difficult questions.”

Reporting upward to government representatives in Parliament or Congress leads to a continued aversion to failures or ‘bad news’. Though funding recipients are urged to be innovative, they still need to hit numeric targets so that the international aid budget can be defended in government spaces. Thus, the message is mixed: “Make sure you are learning and recognizing failure, but please don’t put anything too serious in the final report.” There is awareness that rigid program planning doesn’t work and that we need to be adaptive, yet we are asked to “put it all into a log frame and make sure the government aid person can defend it to their superiors.”

Where to from here?

It was suggested that monitoring and evaluation (M&E) could be used as a tool for examining some of these issues, but M&E needs to be seen as a learning component, not only an accountability one. M&E needs to feed into the choices people are making along the way and linking it in well during program design may be one way to include a more adaptive and iterative approach. M&E should force practitioners to ask themselves the right questions as they design programs and as they assess them throughout implementation. Theory of Change might help, and an ethics-based approach could be introduced as well to raise these questions about risk and privacy and ensure that they are addressed from the start of an initiative.

Practitioners have also expressed the need for additional resources to help them predict and manage possible risk: case studies, a safe space for sharing concerns during implementation, people who can help when things go pear-shaped, a menu of methodologies, a set of principles or questions to ask during program design, or even an ICT4D Implementation Hotline or a forum for questions and discussion.

These ethical issues around privacy and risk are not exclusive to Open Development. Similar issues were raised last week at the Open Government Partnership Summit sessions on whistle blowing, privacy, and safeguarding civic space, especially in light of the Snowden case. They were also raised at last year’s Technology Salon on Participatory Mapping.

A number of groups are looking more deeply into this area, including the Capture the Ocean Project, The Engine Room, IDRC’s research network, The Open Technology Institute, Privacy International, GSMA, those working on “Big Data,” those in the Internet of Things space, and others.

I’m looking forward to further discussion with the Open Development working group on all of this in the coming months, and will also be putting a little time into mapping out existing initiatives and identifying gaps when it comes to these cross-cutting ethics, power, privacy and risk issues in open development and other ICT-enabled data-heavy initiatives.

Please do share information, projects, research, opinion pieces and more if you have them!

Read Full Post »

A paper that Keshet Bachan and I authored for UNICEF is now available for your reading pleasure!

Here’s a summary of what we talk about in the paper:

Social, cultural, economic and political traditions and systems that prevent girls, especially the most marginalized, from fully achieving their rights present a formidable challenge to development organizations. The integration of new Information and Communication Technologies (ICTs) into the Communication for Development (C4D) toolbox offers an additional means for challenging unequal power relations and increasing the participation of marginalized girls in social transformation.

We examine ways that ICTs can strengthen C4D programming by:

  • enhancing girls’ connections, engagement and agency;
  • helping girls access knowledge; and
  • supporting improved governance and service delivery efforts.

We reflect and build on the views of adolescent girls from 13 developing countries who participated in a unique discussion for this paper, and we then provide recommendations to support the integration of ICTs in C4D work with marginalized adolescent girls, including:

  • Girls as active participants in program design. Practitioners should understand local context and ensure that programs use communication channels that are accessible to girls. This will often require multi-channel and multiple-platform approaches that reach more marginalized girls who may not have access to or use of ICTs. Programs should be community driven, and real-time feedback from girls should be incorporated to adjust programs to their needs and preferences. Mentoring is a key component of programming with girls, and holistic programs designed together with girls tend to be more successful.
  • Privacy and protection. Every program should conduct a thorough risk analysis of proposed approaches to ensure that girls are not placed at risk by participating, sharing and consuming information, or publicly holding others to account. Girls should also be supported to make their own informed choices about their online presence and use of ICT devices and platforms. A broader set of stakeholders should be engaged and influenced to help mitigate systemic and structural risks to girls.
  • Research and documentation. The evidence base for use of ICTs in C4D programming with marginalized adolescent girls is quite scarce. Better documentation would improve understanding of what programs are the most effective, and what the real added value of ICTs are in these efforts.
  • Capacity building. Because the integration of ICTs into C4D work is a relatively new area that lacks a consistent methodological framework, organizations should support a comprehensive training process for staff to cover areas such as program design, effective use of new ICT tools in combination with existing tools and methods, and close attention to privacy and risk mitigation.
  • Policy. Programs should use free and open source software. In addition, child protection policies, measures and guidelines should be updated to reflect changes in technology, platforms and information sharing.

The paper was first shared at the 12th Inter-Agency Roundtable on Communication for Development in November 2011. It was then reviewed and updated in August 2012, and released in August 2013 under the title “Integrating Information and Communication Technologies into Communication for Development Strategies to Support and Empower Marginalized Adolescent Girls.”

Download it here!

Read Full Post »

Last week, 600 exceptional youth activists from 80 countries arrived in New York City for a UN Takeover, where they called for urgent action by member states to meet Millennium Development Goal 2 on education by 2015. The youth’s inputs will feed into setting the agenda for global education priorities post-2015. One of the highlights of the week was this inspiring talk by Malala Yousafzai, who made her first public address to the UN on July 12th, her 16th birthday.

Seven of the youth participating in the UN Takeover with the support of Plan joined us as lead discussants for our July 10th Technology Salon. Agung, Dina, and Nurul from Indonesia; Kamanda and Fatmata from Sierra Leone; Tova from Sweden; and Frank from Uganda told us about ICT access and use in their communities and countries. We also heard about their work as youth activists on issues of child marriage, school violence, good governance, and education, and whether ICTs are effective outreach tools for campaigning in their contexts.

The realities of access

In both Sierra Leone and Uganda, Internet access is quite difficult. Traveling to Internet cafés in urban areas is too expensive for rural youth to do regularly, and it is unsafe for young women to travel in the evenings. There is not enough equipment in schools and universities, and youth have trouble affording and finding regular access. The majority of primary and secondary schools do not have ICTs, and non-governmental organizations are unable to reach everyone with their programs to supply equipment and training. Although funds are often given to governments to build computer labs, these tend to benefit urban areas, and in some cases projects and funds are used for political gain and personal favors. Even at the university level, student access might be limited to 1-2 hours per week at a computer lab, meaning students end up doing almost everything on paper.

Lack of ICT access affects job prospects for youth: jobs exist, but employers seek people who know how to operate a computer, and many job applications must be submitted online, putting these jobs out of reach of rural youth. Basic infrastructure also remains a problem in rural areas. Although telecommunication lines have been laid, electricity for charging mobile phones is scarce and often depends on a solar panel or a generator, making it difficult to run a computer lab or telecenter.

ICTs are heightening the development divide, noted one Salon participant. In schools near urban areas, parents pay more in tuition and school fees and their children have better ICT access than rural children. This creates inequality. “Students going to these schools have access and they will even study computer science. But when you go to a rural village you might only see one small room where children can access a computer, if anything at all. Teachers themselves don’t know how to use computers.” In cities, parents know ICTs are important. In the rural villages, however, many people are skeptical of technologies. This inequality of access and education means that youth in rural areas and the poor are not able to meet requirements for jobs that use ICTs.

One discussant noted, “It is possible to access Internet through mobile phones. You can use some phones to access Internet, Facebook, etc. In the villages, however, you find that you can only receive calls and make calls. There is no Internet. When I went to Nairobi and saw everyone with smart phones, I wondered, ‘What is wrong with Uganda?’ We don’t have many smart phones.” Another discussant commented that her university has a wide area network, but it is only available to lecturers, not to students.

Most of the youth discussants considered that, among their peer groups, more girls than boys had mobile phones, and more girls were active on the Internet and Facebook.

Access brings concerns

In Indonesia, it was noted, the Internet is widely available, except on the more remote islands. In Java, commented one discussant, “every young person has a smart phone. They use Facebook and Twitter and can get all kinds of information, and those without smart phones can use Internet cafés.” Internet access, however, is creating new problems. “Parents are proud that their kids are going to the Internet shop to get information, but they also worry about increased access to pornography.” The Internet is also believed to contribute to an increase in child marriages. The youth discussants said they would like more guidance on how to filter information, know what is true and what is not, use the Internet safely, and avoid exposure to offensive content. One discussant from Indonesia mentioned that parents in her community worried that if girls went to Internet cafés or browsed online, they would be exposed to inappropriate materials or to prostitution through Facebook.

In Sweden, access to the Internet and smart phones is universal, and parents may buy children a smart phone even if they cannot really afford it. Although many children learn English early because they can easily access the Internet, many also do not learn how to write properly because they only use computers.

When phones are available but there is no capacity to purchase them, additional problems also arise. According to one discussant, “Some girls want to have big things before their time.” This can lead to young women offering sex to older men in return for money, fancy phones and airtime.

ICTs in formal education

Youth discussants all said that they are increasingly expected to have access to the Internet and computers in order to complete their school assignments, and they felt this was not a realistic expectation. In one of the youth’s schools in Indonesia, computer class is offered for 4 hours per week and a computer lab is available with 30 desktop computers. In another school in Jakarta, however, every child is expected to have their own laptop. “Our problem is different than in the remote areas. Every teacher in Jakarta thinks that a smart phone or computer is ‘the world in our hands.’ They think we don’t need education about the computer itself. They think we can learn from the Internet how to use computers, and so we have to search and learn this all by ourselves with little guidance.” In Sweden, “if you don’t have Internet access, it will be very difficult to pass a course.”

Effective ways to reach and engage youth in campaigns

Discussants were asked about the communication channels that are most effective for campaigning or engaging youth and communities. In rural Sierra Leone and Uganda, face-to-face was considered the most effective outreach channel for reaching youth and communities, given low levels of access to computers, radios and mobile phones. “Most times our campaigns are face-to-face. We move to communities, we use local language to be sure everyone gets the message,” said one youth discussant. In Jakarta, however, “it’s easy to use online means, it never sleeps. Young people in Jakarta are too lazy to attend workshops. They don’t like to listen to speakers. So we share by social media, like Facebook and Twitter.”

Digital media is only useful in urban areas, said one youth discussant from Sierra Leone. “We mostly use radio to do advocacy and sensitization campaigns. We also do it face-to-face. For secondary schools, we do talks. We tell them about documents signed by government or NGOs, what is in place, what is not in place. We give advice. We talk straight about health, about sex education. You just wait for the light in their eyeball to see if they are understanding. We also do dramas, and we paste up wall bills. We do all of this in our local languages.” Youth groups and youth networks are also useful channels for passing along messages and building support.

Radio is effective in theory, but one discussant noted that in his district, there are only two radio stations. “You take your information or announcement there, and they say they will pass it, but you stay waiting… it’s a challenge.”

Campaigns must also involve engaging local decision makers, a participant noted. Often chiefs do not understand the issues, and they may be the very ones who violate the rights of girls. Youth noted the need to be diplomatic, however, or they risk being seen as impolite or as troublemakers. “You have to really risk yourself to do rights work in the community,” noted one discussant. Another commented that having support and buy-in from local leaders is critical in order to be taken seriously. “You need a ‘big voice’ to back you and to convince people to listen to you.”

INGO staff can help legitimize youth work in some cases, but there are also issues. “Local leaders always ask for money,” noted one discussant. “When they hear Plan, UNICEF, Care, Save the Children, they think these organizations gave us money and we’ve taken it for ourselves.” Youth often resort to using external INGO staff as their legitimizing force because “we don’t have other role models, everybody wants money. The politicians say they will help us but then they are always too busy. We have to take the lead ourselves.”

Conflicting information and messages can also be a problem, commented a Salon participant. “One year, it’s the ABC Campaign for HIV prevention, the next it’s condoms, and then it’s prevention. Sometimes youth don’t know who to believe. The NGO says something, the government says something, and local leaders say something else. We need consistency.” In addition, he noted, “INGOs come in with their big Range Rovers, so of course local leaders and communities think that there is money involved. INGOs need to think more carefully and avoid these conflicting messages.”

What would youth like to see?

Going forward, the youth would like more access, more ICT education, more transparency and accountability in how governments spend funds directed to ICT programs, and more guidance on filtering information and verifying its veracity so that children are not taken advantage of.

*****

Thanks to the Population Council for hosting us for the Salon! Join us for our next Salon on July 25th: How can we scale Mobiles for Development initiatives? 

The Technology Salon methodology was used for the session, including Chatham House Rule, therefore no attribution has been made in this summary post. Sign up here to receive notifications about upcoming Salons in New York, Nairobi, San Francisco, London and Washington, DC. 


The February 5 Technology Salon in New York City asked “What are the ethics in participatory digital mapping?” Judging by the packed Salon and long waiting list, many of us are struggling with these questions in our work.

Some of the key ethical points raised at the Salon related to the benefits of open data vs privacy and the desire to do no harm. Others were about whether digital maps are an effective tool in participatory community development or if they are mostly an innovation showcase for donors or a backdrop for individual egos to assert their ‘personal coolness’. The absence of research and ethics protocols for some of these new kinds of data gathering and sharing was also an issue of concern for participants.

During the Salon we were only able to scratch the surface, and we hope to get together soon for a more in-depth session (or maybe 2 or 3 sessions – stay tuned!) to further unpack the ethical issues around participatory digital community mapping.

The points raised by discussants and participants included:

1) Showcasing innovation

Is digital mapping really about communities, or are we really just using communities as a backdrop to showcase our own innovation and coolness or that of our donors?

2) Can you do justice to both process and product?

Maps should be less an “in-out tool” and more part of a broader program, with external agents supporting communities to articulate what they want to do with maps and to be full partners in the saying, doing, and knowing. Digital mapping may not be better than hand-drawn maps if we consider that the process of mapping is just as important as, or more important than, the final product. Hand-drawn maps allow important discussions to happen while people draw. This seems to happen much less with the digital mapping process, which is more technical, and even less when outside agents do the mapping. A hand-drawn map can be imbued with meaning through the size, color or placement of objects or borders, and that meaning may be lost when hand-drawn maps are replaced with digital ones.

Digital maps, however, can be printed, further enhanced with comments and drawings, and discussed in the community, as some noted. Digital maps can also lend a sense of professionalism to community members’ work and help them make a stronger case to authorities and decision makers. Some participants raised concerns about power relations during mapping processes and worried that using digital tools could reinforce them.

3) The ethics of wasting people’s time

Community mapping is difficult. The goal of external agents should be to train local people so that they can be owners of the process and sustain it in the long term. This takes time. Often, however, mapping experts are flown in for a week or two to train community members. They leave people with some knowledge, but not enough to fully manage the mapping process and tools. If people end up only half-trained and without local options to continue training, their time has essentially been wasted. In addition, if young people see the training as a pathway to a highly demanded skill set yet are left partially trained and without access to tools and equipment, they will also feel they have wasted their time.

4) Data extraction

When agencies, academics and mappers come in with their clipboards or their GPS units and conduct the same surveys and studies over and over with the same populations, people’s time is also wasted. Open digital community mapping starts from the viewpoint that an open map and open data are one way to ensure that data taken from or created by communities are made available to those communities for their own use, and can be accessed by others so that the same data are not collected repeatedly. Though there are privacy concerns around opening data, there is a counterbalancing ethical dilemma: how much time is wasted by keeping data closed.

5) The (missing) link between data and action

Related to the issue of time wasting is the common issue of a missing link between data collected and/or mapped, action and results. Making a map identifying issues is certainly no guarantee that the government will come and take care of those issues. Maps are a means to an end, but often the end is not clear. What do we really hope the data leads to? What does the community hope for? Mapping can be a flashy technology that brings people to the table, but that is no guarantee that something will happen to resolve the issues the map is aimed at solving.

6) Intermediaries are important

One way to ensure that there is a link between data and action is to identify stakeholders who have the ability to use, understand and re-interpret the data. One case was mentioned in which health workers collected data and then wanted to know “What do we do now? How does this affect the work that we do? How do we present this information to community health workers in a way that it is useful to our work?” It’s important to simplify the data and make them understandable to the base population, and to present them in a way that is useful to people working at local institutions. Each audience will need the data visualized or shared in a different, contextually appropriate way if they are going to use the data for decision-making. It’s possible to provide the same data in different ways across different platforms, from paper to high tech. The challenge of keeping all the data and the different sharing platforms updated, however, can’t be overlooked.

7) What does informed consent actually mean in today’s world?

There is a viewpoint that data must be open and that locking up data is unethical. On the other hand, there are questions about research ethics and protocols when doing mapping projects and sharing or opening data. Are those who do mapping getting informed consent from people to use or open their data? This is the cornerstone of ethics when doing research with human beings. One must be able to explain and be clear about the risks of this data collection, or it is impossible to get truly informed consent. What consent do community mappers need from other community members if they are opening data or information? What about when people are volunteering their information and self-reporting? What does informed consent mean in those cases? And what needs to be done to ensure that consent is truly informed? How can open data and mapping be explained to those who have not used the Internet before? How can we have informed consent if we cannot promise anyone that their data are really secure? Do we have ethics review boards for these new technological ways of gathering data?

8) Not having community data also has ethical implications

It may seem like time wasting, and there may be privacy and protection questions, but there are also ethical implications of not having community data. When tools like satellite remote sensing are used to do slum mapping, for example, the data are very dehumanized and can lead to sterile decision-making. Data that come from a community itself can make these maps more human and these decisions more humane. But there is a balance between the human/humanizing side and the need to protect. Standards are needed for bringing in community and/or human data in an anonymized way, because there are ethical implications on both ends.
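As a rough illustration of what “bringing in community data in an anonymized way” can mean in practice, here is a minimal sketch (not drawn from the Salon itself; the field names and values are hypothetical) of two common steps: dropping direct identifiers and snapping GPS coordinates to a coarse grid before a dataset is opened:

```python
# Hypothetical sketch: anonymize community-collected point data before
# publishing it as open data. Field names ("lat", "lon", "issue", "name")
# are illustrative, not from any real project.

def anonymize_point(record, grid=0.01):
    """Return a publishable copy of `record`.

    grid=0.01 degrees is roughly 1 km at the equator; a coarser grid
    gives more privacy at the cost of spatial precision.
    """
    def snap(coord):
        # Round the coordinate to the nearest grid cell center.
        return round(coord / grid) * grid

    return {
        "lat": snap(record["lat"]),
        "lon": snap(record["lon"]),
        "issue": record["issue"],  # keep the substantive attribute
        # name, phone number, household ID etc. are deliberately dropped
    }

points = [
    {"lat": -1.28642, "lon": 36.81711, "issue": "broken water point", "name": "A. Example"},
    {"lat": -1.29055, "lon": 36.82193, "issue": "no street lighting", "name": "B. Example"},
]
open_data = [anonymize_point(p) for p in points]
```

Coarsening coordinates trades spatial precision for privacy, and the right grid size depends on population density and on how sensitive the mapped attribute is — which is precisely the kind of judgment that shared standards would need to codify.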

9) The problem with donors….

Big donors are not asking the tough questions, according to some participants. There is a lack of understanding around the meaning, use and value of the data being collected and the utility of maps. “If the data is crap, you’ll have crap GIS and a crap map. If you are just doing a map to do a map, there’s an issue.” There is great incentive from the donor side to show maps and to demonstrate value, because maps are a great photo op, a great visual. But how to go a level down to make a map really useful? Are the M&E folks raising the bar and asking these hard questions? Often from the funder’s perspective, mapping is seen as something that can be done quickly. “Get the map up and the project is done. Voila! And if you can do it in 3 weeks, even better!”

Some participants felt the need for greater donor awareness of these ethical questions because many of them are directly related to funding issues. As one participant noted, whether you coordinate, whether it’s participatory, whether you communicate and share back the information, whether you can do the right thing with the privacy issue — these all depend on what you can convince a donor to fund. Often it’s faster to reinvent the wheel because doing it the right way – coordinating, learning from past efforts, involving the community — takes more time and money. That’s often the hard constraint on these questions of ethics.

Check this link for some resources on the topic, and add yours to the list.

Many thanks to our lead discussants, Robert Banick from the American Red Cross and Erica Hagen from Ground Truth, and to Population Council for hosting us for this month’s Salon!

The next Technology Salon NYC will be coming up in March. Stay tuned for more information, and if you’d like to receive notifications about future salons, sign up for the mailing list!

