
This post is co-authored by Emily Tomkys, Oxfam GB; Danna Ingleton, Amnesty International; and me (Linda Raftree, Independent)

At the MERL Tech conference in DC this month, we ran a breakout session on rethinking consent in the digital age. Most INGOs have not updated their consent forms and policies for many years, yet the growing use of technology in our work, for many different purposes, raises questions and insecurities that are difficult to address. Our old ways of requesting and managing consent need to be modernized to meet the new realities of digital data and its changing nature. Is informed consent even possible when data is digital and/or opened? Do we have any way of controlling what happens with that data once it is digital? How often are organizations violating national and global data privacy laws? Can technology be part of the answer?

Let’s take a moment to clarify what kind of consent we are talking about in this post. Being clear on this point is important because there are many parallel conversations on consent in relation to technology. For example, there are people exploring the use of consent frameworks or rhetoric in ICT user agreements – asking whether signing such user agreements can really be considered consent. There are others exploring the issue of consent for content distribution online, in particular personal or sensitive content such as private videos and photographs. And while these (and other) consent debates are related and important to this post, what we are specifically talking about is how we, our organizations and projects, address the issue of consent when we are collecting and using data from those who participate in programs or in the monitoring, evaluation, research and learning (MERL) that we are implementing.

No matter how someone is engaging with the data, how they do so and the decisions they make will affect what is disclosed to the data subject.

This is as timely as ever because introducing new technologies and kinds of data means we need to change how we build consent into project planning and implementation. In fact, it gives us an amazing opportunity to build consent into our projects in ways that our organizations may not have considered in the past. While it used to be that informed consent was the domain of frontline research staff, the reality is that getting informed consent – where there is disclosure, voluntariness, comprehension and competence of the data subject –  is the responsibility of anyone ‘touching’ the data.

Here we share examples from two organizations who have been exploring consent issues in their tech work.

Over the past two years, Girl Effect has been incorporating a number of mobile and digital tools into its programs. These include both the Girl Effect Mobile (GEM) and the Technology Enabled Girl Ambassadors (TEGA) programs.

Girl Effect Mobile is a global digital platform that is active in 49 countries and 26 languages. It is being developed in partnership with Facebook’s Free Basics initiative. GEM aims to provide a platform that connects girls to vital information, entertaining content and to each other. Girl Effect’s digital privacy, safety and security policy directs the organization to review and revise its terms and conditions to ensure that they are ‘girl-friendly’ and respond to local context and realities, and that in addition to protecting the organization (as many T&Cs are designed to do), they also protect girls and their rights. The GEM terms and conditions were initially a standard T&C. They were too long to expect girls to look at them on a mobile, the language was legalese, and they seemed one-sided. So the organization developed a new T&C with simplified language and removed some of the legal clauses that were irrelevant to the various contexts in which GEM operates. Consent language was added to cover polls and surveys, since Girl Effect uses the platform to conduct research and for its monitoring, evaluation and learning work. In addition, summary points are highlighted in a shorter version of the T&Cs with a link to the full T&Cs. Girl Effect also develops short articles about online safety, privacy and consent as part of the GEM content as a way of engaging girls with these ideas as well.

TEGA is a girl-operated, mobile-enabled research tool currently operating in Northern Nigeria. It uses data-collection techniques and mobile technology to teach girls aged 18-24 how to collect meaningful, honest data about their world in real time. TEGA provides Girl Effect and partners with authentic peer-to-peer insights to inform their work. Because Girl Effect was concerned that girls being interviewed might not understand the consent they were providing during the research process, they used the mobile platform to expand on the consent process. They added a feature where the TEGA girl researchers play an audio clip that explains the consent process. Afterwards, girls who are being interviewed answer multiple-choice follow-up questions to show whether they have understood what they have agreed to. (Note: The TEGA team report that they have incorporated additional consent features into TEGA based on examples and questions shared in our session.)
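
A rough sketch of that comprehension-check pattern – play an audio explanation, then only record consent once multiple-choice answers show understanding – might look like the following. This is an illustration of the idea, not TEGA’s actual implementation; the questions and the pass rule are invented:

```python
# Minimal sketch of a consent comprehension check: after an audio explanation,
# ask multiple-choice questions and only record consent if the respondent
# demonstrates understanding. Questions and the pass rule are invented examples.

QUESTIONS = [
    {"prompt": "Can you stop the interview at any time?", "options": ["Yes", "No"], "answer": "Yes"},
    {"prompt": "Will your name be shared publicly?", "options": ["Yes", "No"], "answer": "No"},
]

def comprehension_check(ask):
    """`ask` presents a prompt and options (e.g. on the phone screen) and returns the choice."""
    correct = sum(ask(q["prompt"], q["options"]) == q["answer"] for q in QUESTIONS)
    return correct == len(QUESTIONS)  # require every answer correct before recording consent

def record_consent(ask, store):
    passed = comprehension_check(ask)
    store({"consented": passed, "comprehension_check_passed": passed})
```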

Oxfam, in addition to developing its Responsible Program Data Policy, has been exploring ways in which technology can help address contemporary consent challenges. The organization had doubts about how much its informed consent statement (which explains who the organization is, what the research is about and why Oxfam is collecting data, and asks whether the participant is willing to be interviewed) was understood, and whether informed consent is really possible in the digital age. All the same, the organization wanted to be sure that the consent information was being read out in full by enumerators (the interviewers). There were questions about what the variation might be on this between enumerators as well as in different contexts and countries of operation. To explore whether communities were hearing the consent statement fully, Oxfam is using mobile data collection with audio recordings in the local language, and using speed-violation checks to see whether the time spent on the consent page is at least as long as the audio file that should be played. This is by no means foolproof, but what Oxfam has found so far is that the audio file is often not played in full, or not at all.
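
A rough sketch of that kind of speed-violation check: compare the time the consent screen was open against the length of the consent audio and flag submissions where the recording could not have been played in full. The field names, audio length and tolerance below are assumptions for illustration, not Oxfam’s actual configuration:

```python
# Flag submissions where the consent screen was open for less time than the
# consent audio lasts -- a signal the recording was probably not played in full.
# Field names, the audio length and the 0.9 tolerance are illustrative assumptions.

AUDIO_LENGTH_SECONDS = 95  # length of the recorded consent statement

def flag_speed_violations(submissions, tolerance=0.9):
    flagged = []
    for s in submissions:
        if s["consent_screen_seconds"] < AUDIO_LENGTH_SECONDS * tolerance:
            flagged.append(s["submission_id"])
    return flagged

submissions = [
    {"submission_id": "A01", "consent_screen_seconds": 20},   # audio clearly cut short
    {"submission_id": "A02", "consent_screen_seconds": 110},  # enough time for full playback
]
print(flag_speed_violations(submissions))  # ['A01']
```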

Efforts like these are only the beginning, but they help to develop a resource base and stimulate more conversations that can help organizations and specific projects think through consent in the digital age.

Additional resources include this framework for Consent Policies developed at a Responsible Data Forum gathering.

Because of how quickly technology and data use is changing, one idea that was shared was that rather than using informed consent frameworks, organizations may want to consider defining and meeting a ‘duty of care’ around the use of the data they collect. This can be somewhat accomplished through the creation of organizational-level Responsible Data Policies. There are also interesting initiatives exploring new ways of enabling communities to define consent themselves – like this data licenses prototype.


The development and humanitarian sectors really need to take notice, adapt and update their thinking constantly to keep up with technology shifts. We should also be doing more sharing about these experiences. By working together on these types of wicked challenges, we can advance without duplicating our efforts.


Our March 18th Technology Salon NYC covered the Internet of Things and Global Development with three experienced discussants: John Garrity, Global Technology Policy Advisor at CISCO and co-author of Harnessing the Internet of Things for Global Development; Sylvia Cadena, Community Partnerships Specialist, Asia Pacific Network Information Centre (APNIC) and the Asia Information Society Innovation Fund (ISIF); and Andy McWilliams, Creative Technologist at ThoughtWorks and founder and director of Art-A-Hack and Hardware Hack Lab.

Image by Wilgengebroed on Flickr [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

What is the Internet of Things?

One key task at the Salon was clarifying what exactly is the “Internet of Things.” According to Wikipedia:

The Internet of Things (IoT) is the network of physical objects—devices, vehicles, buildings and other items—embedded with electronics, software, sensors, and network connectivity that enables these objects to collect and exchange data.[1] The IoT allows objects to be sensed and controlled remotely across existing network infrastructure,[2] creating opportunities for more direct integration of the physical world into computer-based systems, and resulting in improved efficiency, accuracy and economic benefit;[3][4][5][6][7][8] when IoT is augmented with sensors and actuators, the technology becomes an instance of the more general class of cyber-physical systems, which also encompasses technologies such as smart grids, smart homes, intelligent transportation and smart cities. Each thing is uniquely identifiable through its embedded computing system but is able to interoperate within the existing Internet infrastructure. Experts estimate that the IoT will consist of almost 50 billion objects by 2020.[9]

As one discussant explained, the IoT involves three categories of entities: sensors, actuators and computing devices. Sensors read data in from the world for computing devices to process via decision logic, which then generates some type of action back out to the world (motors that turn doors, control systems that operate water pumps, actions happening through a touch screen, etc.). Sensors can be anything from video cameras to thermometers or humidity sensors. They can be consumer items (like a garage door opener or a wearable device) or industrial grade (like those that keep giant machinery running in an oil field). Sensors are common in mobile phones, but more and more we see them being de-coupled from cell phones and integrated into or attached to all manner of other everyday things. The boom in the IoT means that whereas in the past a person may have had one IP address for their desktop computer, now they might be occupying several: through their phone, their iPad, their laptop, their Fitbit and a number of other ‘things.’
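
That sense → decide → act loop can be sketched in a few lines. Here the soil-moisture sensor, threshold and pump are invented stand-ins used only to illustrate the pattern:

```python
# Minimal sense -> decide -> act loop: a soil-moisture reading drives an
# irrigation pump. The fake sensor, threshold and actuator are illustrative only.

import random

def read_soil_moisture():
    return random.uniform(0, 100)   # stand-in for a real sensor reading (percent)

def set_pump(on):
    print("pump", "ON" if on else "OFF")   # stand-in for driving a relay or controller

def control_step(moisture_pct, dry_threshold=30):
    """Decision logic: run the pump only when the soil is drier than the threshold."""
    set_pump(moisture_pct < dry_threshold)

control_step(read_soil_moisture())
```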

Why does IoT matter for Global Development?

Price points for sensors are going down very quickly and wireless networks are steadily expanding — not just wifi but macro cellular technologies. According to one lead discussant, 95% of the world is covered by 2G and two-thirds by 3G networks. Alongside that is a plethora of connectivity technology that is wide-range and low-tech. This means that all kinds of data, all over the world, are going to be available in massive quantities through the IoT. Some are excited about this because of how data can be used to track global development indicators, for example, the type of data being sought to measure the Sustainable Development Goals (SDGs). Others are concerned about the impact of data collected via the IoT on privacy.

What are some examples of the IoT in Global Development?

Discussants and others gave many examples of how the IoT is making its way into development initiatives, including:

  • Flow meters and water sensors to track whether hand pumps are working
  • Protecting the vaccine cold chain – with a 2G thermometer, an individual can monitor the cold chain for local use and the information also goes directly to health ministries and to donors
  • Monitoring the environment and tracking animals or endangered species
  • Monitoring traffic routes to manage traffic systems
  • Managing micro-irrigation of small shareholder plots from a distance through a feature phone
  • As a complement to traditional monitoring and evaluation (M&E) — a sensor on a cook stove can track how often a stove is actually used (versus information an individual might provide using recall), helping to corroborate and reduce bias (a toy comparison is sketched after this list)
  • Verifying whether a teacher is teaching or has shown up to school using a video camera
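
As a toy illustration of the cook-stove example above (all household IDs and numbers are invented), sensor-logged usage can be set against what households recall in a survey to see how large the gap is:

```python
# Compare sensor-logged stove use with self-reported recall for the same week.
# Household IDs and counts are invented purely to illustrate the comparison.

sensor_sessions = {"hh_01": 11, "hh_02": 4, "hh_03": 9}     # cooking events logged by the sensor
reported_sessions = {"hh_01": 14, "hh_02": 10, "hh_03": 9}  # events recalled in a survey

for hh, logged in sensor_sessions.items():
    recalled = reported_sessions[hh]
    print(f"{hh}: reported {recalled}, logged {logged}, recall gap {recalled - logged:+d}")
```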

The CISCO publication on the IoT and Global Development provides many more examples and an overview of where the area is now and where it’s heading.

How advanced is the IoT in the development space?

Currently, IoT in global development is very much a hacker space, according to one discussant. There are very few off the shelf solutions that development or humanitarian organizations can purchase and readily implement. Some social enterprises are ramping up activity, but there is no larger ecosystem of opportunities for off the shelf products.

Because the IoT in global development is at an early phase, challenges abound. Technical issues, power requirements, reliability and upkeep of sensors (which need to be calibrated), IP issues, security and privacy, technical capacity, and policy questions all need to be worked out. One discussant noted that these challenges carry on from the mobile for development (m4d) and information and communication technologies for development (ICT4D) work of the past.

Participants agreed that challenges are currently huge. For example, devices are homogeneous, making it very easy to hack them and affect a lot of devices at once. No one has completely gotten their head around the privacy and consent issues, which are very different from those of using Facebook. There are lots of interoperability issues also. As one person highlighted — there are over 100 different communication protocols being used today. It is more complicated than the old “Betamax vs. VHS” question – we have no idea at this point what the standard will be for the IoT.

For those who see the IoT as a follow-on from ICT4D and m4d, the big question is how to make sure we are applying what we’ve learned and avoiding the same mistakes and pitfalls. “We need to be sure we’re not committing the error of just seeing the next big thing, the next shiny device, and forgetting what we already know,” said one discussant. There is plenty of material and documentation on how to avoid repeating past mistakes, he noted. “Read ICTworks. Avoid pilotitis. Don’t be tech-led. Use open source and so on…. Look at the digital principles and apply them to the IoT.”

A higher level question, as one person commented, is around the “inconvenient truth” that although ICTs drive economic growth at the macro level, they also drive income inequality. No one knows how the IoT will contribute or create harm on that front.

Are there any existing standards for the IoT? Should there be?

Because there is so much going on with the IoT – new interventions, different sectors, all kinds of devices, a huge variety in levels of use, from hacker spaces up to industrial applications — there is a huge range of standards and protocols out there, said one discussant. “We don’t really want to see governments picking winners or saying ‘we’re going to use this or that.’ We want to see the market play out and the better protocols bubble up to the surface. What’s working best where? What’s cost effective? What open protocols might be most useful?”

Another discussant pointed out that there is a legacy predating the IoT: machine-to-machine (M2M) communication, which has not always been Internet based. “This legacy is still there. How can we move things forward with regard to standardization and interoperability, yet also avoid leaving out those who are using M2M?”

What’s up with IPv4 and IPv6 and the IoT? (And why haven’t I heard about this?)

Another crucial technical point raised is that of IPv4 and IPv6, something that not many Salon participants had heard of, but that will greatly affect how the IoT rolls out and expands, and just who will be left out of this new digital divide. (Note: I found this video to be helpful for explaining IPv4 vs IPv6.)

“Remember when we used Netscape and we understood how an IP number translated into an IP address…?” asked one discussant. “Many people never get that lovely experience these days, but it’s important! There is a finite number of IPv4 addresses and they are running out. Only Africa and Latin America have addresses left,” she noted.

IPv6 has been around for 20 years but there has not been a serious effort to switch over. Yet in order to connect the next billion and the multiple devices that they may bring online, we need more addresses. “Your laptop, your mobile, your coffee pot, your fridge, your TV – for many of us these are all now connected devices. One person might be using 10 IP addresses. Multiply that by millions of people, and the only thing that makes sense is switching over to IPv6,” she said.
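
The arithmetic behind her point is easy to check: IPv4’s 32-bit space tops out at roughly 4.3 billion addresses, while IPv6’s 128-bit space is astronomically larger. A quick back-of-the-envelope calculation (the population and devices-per-person figures are rough assumptions):

```python
# IPv4 vs IPv6 address space, and a rough sense of the per-person pressure on IPv4.
ipv4_total = 2 ** 32    # 4,294,967,296 addresses
ipv6_total = 2 ** 128   # about 3.4e38 addresses

world_population = 7_400_000_000   # rough mid-2010s figure
devices_per_person = 10            # the "laptop, mobile, coffee pot, fridge, TV..." scenario

print(f"IPv4 addresses: {ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:.2e}")
print(f"Needed at {devices_per_person} devices/person: {world_population * devices_per_person:,}",
      "-- far more than IPv4 can provide")
```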

Making that transition happen requires both technical skills and political decisions, and both are a problem. For much of the world, the IoT will not happen very smoothly, and entire regions may be left out of the IoT revolution if high-level decision makers don’t decide to move ahead with IPv6.

What are some of the other challenges with global roll-out of IoT?

In addition to the IPv4 – IPv6 transition, there are all kinds of other challenges with the IoT, noted one discussant. The technical skills required to make the transition that would enable the IoT in some regions, for example the Asia Pacific, are in short supply. Engineers will need to understand how to make this shift happen, and in some places that is going to be a big challenge. “Things have always been connected to the Internet. There are just going to be lots more, different things connected to the Internet now.”

One major challenge is that there are huge ethical questions along with security and connectivity holes (as I will outline later in this summary post, and as discussed in last year’s salon on Wearable Technologies). In addition, noted one discussant, if we are designing networks that are going to collect data for diseases, for vaccines, for all kinds of normal businesses, and put the data in the cloud, developing countries need to have the ability to secure the data, the computing capacity to deal with it, and the skills to do their own data analysis.

“By pushing the IoT onto countries and not supporting the capacity to manage it, instead of helping with development, you are again creating a giant gap. There will be all kinds of data collected on climate change in the Pacific Island Countries, for example, but the countries don’t have capacity to deal with this data. So once more it will be a bunch of outsiders coming in to tell the Pacific Islands how to manage it, all based on conclusions that outsiders are making based on sensor data with no context,” alerted one discussant. “Instead, we should be counseling our people, our countries to figure out what they want to do with these sensors and with this data and asking them what they need to strengthen their own capacities.”

“This is not for the SDGs and ticking off boxes,” she noted. “We need to get people on the ground involved. We need to decentralize this so that people can make their own decisions and manage their own knowledge. This is where the real empowerment is – where local people and country leaders know how to collect data and use it to make their own decisions. The thing here is ownership — deploying your own infrastructure and knowing what to do with it.”

How can we balance the shiny devices with the necessary capacities?

Although the critical need to invest in and support country-level capacity to manage the IoT has been raised, this type of back-end work is always much less ‘sexy’ and less interesting for donors than measuring some development programming with a flashy sensor. “No one wants to fund this capacity strengthening,” said one discussant. “Everyone just wants to fund the shiny sensors. This chase after innovation is really damaging the impact that technology can actually have. No one just lets things sit and develop — to rest and brew — instead we see everyone rushing onto the next big thing. This is not a good thing for a small country that doesn’t have the capacity to jump right into it.”

All kinds of things can go wrong if people are not trained on how to manage the IoT. Devices can be hacked and they may be collecting and sharing data without an individual’s knowledge (see Geoff Huston on The Internet of Stupid Things). Electrical shorts and outages, common in places with poor electricity infrastructure, can also cause big problems. In addition, the Internet is affected by legacy systems – so we need interoperability that goes backwards, said one discussant. “If we don’t make at least a small effort to respect those legacy systems, we’re basically saying ‘if you don’t have the funding to update your system, you’re out.’ This then reinforces a power dynamic where countries need the international community to give them equipment, or they need to buy this or buy that, and to bring in international experts from the outside…. The pressure on poor countries to make things work, to do new kinds of M&E, to provide evidence is huge. With that pressure comes a higher risk of falling behind very quickly. We are also seeing pilot projects that were working just fine without fancy tech being replaced by newfangled tech-type programs instead of being supported over the longer term,” she said.

Others agreed that the development sector’s fascination with shiny and new is detrimental. “There is very little concern for the long-term, the legacy system, future upgrades,” said one participant. “Once the blog post goes up about the cool project, the sensors go bad or stop working and no one even knows because people have moved on.” Another agreed, citing that when visiting numerous clinics for a health monitoring program in one country, the running joke among the M&E staff was “OK, now let’s go and find the broken solar panel.” “When I think of the IoT,” she said, “I think of a lot of broken devices in 5 years.” The aspect of eWaste and the IoT has not even begun to be examined or quantified, noted another.

It is increasingly important for governments to understand how the Internet works, because they are making policy about it. Manufacturers need to better understand how the tech works on the ground, especially in different contexts that they are not accustomed to working in. Users need a better understanding of all of this because their privacy is at risk. Legal frameworks around data and national laws need more attention as well. “When you are working with restrictive governments, your organization’s or start-up’s idea might actually be illegal or close to a sedition law and you may end up in jail,” noted one discussant.

What choices will organizations need to make regarding the IoT?

When it comes to actually making decisions on how involved an organization should and can be in supporting or using the IoT, one critical choice will be related to the suite of devices, said our third discussant. Will it be a cloud device? A local computing device? A computer?

Organizations will need to decide if they want a vendor that gives them a package, or if they want a modular, interoperable approach of units. They will need to think about aspects like whether to go with proprietary or open source, and whether it will be plug and play.

There are trade-offs here and key technical infrastructure choices will need to be made based on a certain level of expertise and experience. If organizations are not sure what they need, they may wish to get some advice before setting up a system or investing heavily.

As one discussant put it, “When I talk about the IoT, I often say to think about what the Internet was in the 90s. Think about that hazy idea we had of what the Internet was going to be. We couldn’t have predicted in the 90s what today’s Internet would look like, and we’re in the same place with the IoT,” he said. “There will be seismic change. The state of the whole sector is immature now. There are very hard choices to make.”

Another aspect that’s representative of the IoT’s early stage, he noted, is that the discussion is all focusing on HTTP and the Internet. “The IoT doesn’t necessarily even have to involve the Internet,” he said.

Most vendors are offering a solution with sensors to deploy, actuators to control and a cloud service where you log in to find your data. The default model is that the decision logic takes place there in the cloud, where data is stored. In this model, the cloud is in the middle, and the devices are around it, he said, but the model does not have to be that way.

Other models can offer more privacy to users, he said. “When you think of privacy and security – the healthcare maxim is ‘do no harm.’ However this current, familiar model for the IoT might actually be malicious.” The reason that the central node in the commercial model is the cloud is because companies can get more and more detailed information on what people are doing. IoT vendors and IoT companies are interested in extending their profiles of people. Data on what people do in their virtual life can now be combined with what they do in their private lives, and this has huge commercial value.

One option to look at, he shared, is a model that has a local connectivity component. This can be something like a Bluetooth mesh, for example. In this way, the connectivity doesn’t have to go to the cloud or the Internet at all. This kind of set-up may make more sense with local data, and it can also help with local ownership, he said. Everything that happens in the cloud in the commercial model can actually happen on a local hub or device that opens just for the community of users. In this case, you don’t have to share the data with the world. Although this type of a model requires greater local tech capacity and can have the drawback that it is more difficult to push out software updates, it’s an option that may help to enhance local ownership and privacy.
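
A minimal sketch of the contrast he describes, with every name invented: in the default commercial model the raw reading is pushed to a vendor cloud where the decision logic runs, while in a local-hub model the logic runs on a device the community controls and only explicitly chosen aggregates ever leave it:

```python
# Two toy data paths for the same sensor reading. All names and thresholds are
# invented; the point is where the decision logic runs and what leaves the site.

def cloud_model(reading, upload):
    # Default commercial pattern: raw data leaves the site; decisions happen remotely.
    upload(reading)

def local_hub_model(reading, act, share=None):
    # Local-first pattern: decision logic runs on a hub the community controls.
    pump_needed = reading["water_level_cm"] < 20   # decision stays local
    if pump_needed:
        act("start_pump")
    if share is not None:                          # share only chosen aggregates, if at all
        share({"date": reading["date"], "pump_ran": pump_needed})

reading = {"date": "2016-03-18", "water_level_cm": 12}
local_hub_model(reading, act=print)   # decision happens locally; nothing is uploaded
```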

This requires a ‘person first’ concept of design. “When you are designing IoT systems,” he said, “start with the value you are trying to create for individuals or organizations on the ground. And then implement the local part that you need to give local value. Then, only if needed, do you add on additional layers of the onion of connectivity, depending on the project.” The first priority here is the value the technology design will achieve for an individual client or community, not the commercial use of people’s data.

Another point that this discussant highlighted was the need to conduct threat modeling and to think about unintended consequences. “If someone hacked this data – what could go wrong?” He suggested working backwards and thinking: “What should I take offline? How do I protect it better? How do I anonymize it better?”
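
Those questions translate into a lightweight threat-modeling pass over whatever data a project holds. A toy sketch (the assets, harm ratings and suggested actions are invented examples, not a standard):

```python
# A lightweight "what could go wrong if this leaked?" pass over project data.
# Assets, harm ratings and suggested actions are invented for illustration.

assets = [
    {"name": "raw GPS traces of respondents", "harm_if_leaked": "high"},
    {"name": "aggregated village-level statistics", "harm_if_leaked": "low"},
    {"name": "phone numbers of field staff", "harm_if_leaked": "medium"},
]

ACTIONS = {
    "high":   "take offline / do not collect, or anonymize aggressively",
    "medium": "encrypt at rest and restrict access",
    "low":    "standard protections",
}

for a in assets:
    print(f'{a["name"]}: {ACTIONS[a["harm_if_leaked"]]}')
```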

In conclusion….

It’s critical to understand the purpose of an IoT project or initiative, discussants agreed, to understand if and why scale is needed, and to be clear about the drivers of a project. In some cases, the cloud is desirable for quicker, easier set up and updates to software. At the same time, if an initiative is going to be sustainable, then community and/or country capacity to run it, sustain it, keep it protected and private, and benefit from it needs to be built in. A big part of that capacity includes the ability to understand the different layers that surround the IoT and to make grounded decisions on the various trade-offs that will come to a head in the process of design and implementation. These skills and capacities need to be developed and supported within communities, countries and organizations if the IoT is to contribute ethically and robustly to global development.

Thanks to APNIC for sponsoring and supporting this Salon and to our friends at ThoughtWorks for hosting! If you’d like to join discussions like this one in cities around the world, sign up at Technology Salon.

Salons are held under Chatham House Rule, therefore no attribution has been made in this post.


Our December 2015 Technology Salon discussion in NYC focused on approaches to girls’ digital privacy, safety and security. By extension, the discussion included ways to reduce risk for other vulnerable populations. Our lead discussants were Ximena Benavente, Girl Effect Mobile (GEM), and Jonathan McKay, Praekelt Foundation. I also shared a draft Girls’ Digital Privacy, Safety and Security Policy and Toolkit I’ve been working on with both organizations over the past year.

Girls’ digital privacy, safety and security risks

Our first discussant highlighted why it’s important to think specifically about girls and digital security. In part, this is because different factors and vulnerabilities combine, exacerbating girls’ levels of risk. For example, girls living on less than $2 per day likely only have access to basic mobile phones, which are often borrowed from parents or siblings. The organization she works with always starts with deep research on aspects like ownership vs. borrowing, and whether girls’ mobile usage is free/unlimited and unsupervised or controlled by gatekeepers such as parents, brothers, or other relatives. This helps to design better tools, services and platforms and to design for safety and security, she said. “Gatekeepers are very restrictive in many cases, but parental oversight is not necessarily a bad thing. We always work with parents and other gatekeepers as well as with girls themselves when we design and test.” When girls are living in more traditional or conservative societies, she said, we also need to think about how content might affect girls both online and offline. For example, “is content sufficiently progressive in terms of girls’ rights, yet safe for girls to read, comment on or discuss with friends and family without severe retaliation?”

Research suggests that girls who are more vulnerable offline (due to poverty or other forms of marginalization) are likely also more vulnerable to certain risks online, so we design with that in mind, she said. “When we started off on this project, our team members were experts in digital, but we had less experience with the safety and privacy aspects when it comes to girls living under $2/day or who were otherwise vulnerable. Having additional guidance and developing a policy on this aspect has helped immensely – but it has also slowed our processes down and sometimes made them more expensive,” she noted. “We had to go back to everything and add additional layers of security to make it as safe as possible for girls. We have also made sure to work very closely with our local partners to be sure that everyone involved in the project is aware of girls’ safety and security.”

Social media sites: Open, Closed, Private, Anonymous?

One issue that came up was safety for children and youth on social media networks. A Salon participant said his organization had thought about developing this type of network several years back but decided in the end that the security risks outweighed the advantages. Participants discussed whether social media networks can ever be safe. One school of thought is that the more open a platform, the safer it is, as “there is no interaction in private spaces that cannot be constantly monitored or moderated.” Others worry about open sites, however, and instead set up smaller, closed, private groups that are closely monitored. “We work with victims of violence to share their stories and coping mechanisms, so, for us, private groups are a better option.”

Some suggested that anonymity on a social media site can protect girls and other vulnerable groups, however there is also research showing that Internet anonymity contributes to an increase in activities such as bullying and harassment. Some Salon participants felt that it was better to leverage existing platforms and try to use them safely. Others felt that there are no existing social media platforms that have enough security for girls or other vulnerable groups to use with appropriate levels of risk. “We sometimes recruit participants via existing social media platforms,” said one discussant, “but we move people off of those sites to our own more secure sites as soon as we can.”

Moderation and education on safety

Salon participants working with vulnerable populations said that they moderate their sites very closely and remove comments if users share personal information or use offensive language. “Some project budgets allow us to have a moderator check every 2 hours. For others, we sweep accounts once a day and remove offensive content within 24 hours.” One discussant uses moderation to educate the community. “We always post an explanation about why a comment was removed in order to educate the larger user base about appropriate ways to use the social network,” he said.
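
A very rough sketch of what an automated first pass of such a sweep might look for – comments containing contact details or blocklisted terms. The patterns and word list below are placeholders; real moderation is far more nuanced and still needs human review:

```python
import re

# Toy moderation sweep: flag comments containing phone numbers, email addresses
# or words from a (tiny, placeholder) blocklist. Human review still decides.

PHONE = re.compile(r"\+?\d[\d\s\-]{7,}\d")
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
BLOCKLIST = {"exampleslur1", "exampleslur2"}   # placeholder terms only

def flag(comment):
    reasons = []
    if PHONE.search(comment) or EMAIL.search(comment):
        reasons.append("personal contact details")
    if any(word in comment.lower() for word in BLOCKLIST):
        reasons.append("offensive language")
    return reasons

print(flag("call me on +234 801 234 5678"))   # ['personal contact details']
```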

Close moderation becomes difficult and costly, however, as the user base grows and a platform scales. This means individual comments cannot be screened and pre-approved, because that would take too long and defeat the purpose of an engaging platform. “We need to acknowledge the very real tension between building a successful and engaging community and maintaining privacy and security,” said one Salon participant. “The more you lock it down and the more secure it is, the harder you find it is to create a real and active community.”

Another participant noted that they use their safe, closed youth platform to educate and reinforce messaging about what is safe and positive use of social media in hopes that young people will practice safe behaviors when they use other platforms. “We know that education and awareness raising can only go so far, however,” she said, “and we are not blind to that fact.” She expressed concern about risk for youth who speak out about political issues, because more and more governments are passing laws that punish critics and censor information. The organization, however, does not want to encourage youth to stop voicing opinions or participating politically.

Data breaches and project close-out

One Salon participant asked if organizations had examples of actual data breaches, and how they had handled them. Though no one shared examples, it was recommended that every organization have a contingency plan in place for accidental data leaks or a data breach or data hack. “You need to assume that you will get hacked,” said one person, “and develop your systems with that as a given.”

In addition to the day-to-day security issues, we need to think about project close-out, said one person. “Most development interventions are funded for a short, specific period of time. When a project finishes, you get a report, you do your M&E, and you move on. However, the data lives on, and the effects of the data live on. We really need to think more about budgeting for proper project wind-down and ensure that we are accountable beyond the lifetime of a project.”

Data security, anonymization, consent

Another question was related to using and keeping girls’ (and others’) data safe. “Consent to collect and use data on a website or via a mobile platform can be tricky, especially if we don’t know how to explain what we might do with the data,” said one Salon participant. Others suggested it would be better not to collect any data at all. “Why do we even need to collect this data? Who is it for?” asked one. Others countered that this data is often the only way to understand what people are doing on the site, to make adjustments and to measure impact.

One scenario was shared where several partner organizations discussed opening up a country’s cell phone data records to help contain a massive public health epidemic, but the privacy and security risks were too great, so the idea was scrapped. “Some said we could anonymize the data, but you can never really and truly anonymize data. It would have been useful to have a policy or a rubric that would have guided us in making that decision.”
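
That caution is worth unpacking with a small example. Replacing phone numbers with salted hashes (a pseudonymization sketch, not a recommendation) hides the number itself, but the location or call history attached to each pseudonym can still re-identify a person:

```python
import hashlib, os

# Pseudonymization sketch: replace phone numbers with salted hashes. This hides
# the number, but each pseudonym still carries that person's full movement or
# call history, which is often enough to re-identify them. Not true anonymity.

SALT = os.urandom(16)   # must be kept secret and never released with the data

def pseudonym(phone_number):
    return hashlib.sha256(SALT + phone_number.encode()).hexdigest()[:12]

record = {"phone": "+254700000001", "cell_tower": "KSM-114", "timestamp": "2015-09-01T06:42"}
safe_record = {**record, "phone": pseudonym(record["phone"])}
print(safe_record)  # the number is hidden, but the location trail remains linkable
```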

Policy and Guidelines on Girls Privacy, Security and Safety

Policy guidelines on responsible data for NGOs, data security, privacy and other aspects of digital security in general do exist. (Here are some that we compiled along with some other resources). Most IT departments also have strict guidelines when it comes to donor data (in the case of credit card and account information, for example). This does not always cross over to program-level ICT or M&E efforts that involve the populations that NGOs are serving through their programming.

General awareness around digital security is increasing, in part due to recent major corporate data hacks (e.g., Target, Sony) and the Edward Snowden revelations from a few years back, but much more needs to be done to educate NGO staff and management on the type of privacy and security measures that need to be taken to protect the data and mitigate risk for those who participate in their programs.  There is an argument that NGOs should have specific digital privacy, safety and security policies that are tailored to their programming and that specifically focus on the types of digital risks that girls, women, children or other vulnerable people face when they are involved in humanitarian or development programs.

One such policy (focusing on vulnerable girls) and its accompanying toolkit (principles and values, guidelines, checklists and a risk matrix template) were shared at the Salon. (Disclosure: this policy toolkit is one that I am working on. It should be ready to share in early 2016.) The policy and toolkit take program implementers through a series of issues and questions to help them assess potential risks and tradeoffs in a particular context, and to document decisions and improve accountability. The toolkit covers:

  1. Data privacy and security – using approaches like Privacy by Design, setting limits on the data that is collected, and achieving meaningful consent.
  2. Platform content and design – ensuring that content produced for girls, or that girls produce or volunteer, does not put girls at risk.
  3. Partnerships – vetting and managing partners who may be providing online/offline services or who may partner on an initiative and want access to data, and considering the monetization of girls’ data.
  4. Monitoring, evaluation, research and learning (MERL) – how program implementers will gather and store digital data when collecting it directly or through third parties for organizational MERL purposes.

Privacy, Security and Safety Implications

Our final discussant spoke about the implications of implementing the above-mentioned girls’ privacy, safety and security policy. He started out by saying that the policy opens with a manifesto: We will not compromise a girl in any way, nor will we opt for solutions that cut corners in terms of cost, process or time at the expense of her safety. “I love having this as part of our project manifesto,” he said. “It’s really inspiring! On the flip side, however, it makes everything I do more difficult, time consuming and expensive!”

To demonstrate some of the trade-offs and decisions required when working with vulnerable girls, he gave examples of how the current project (implemented with girls’ privacy and security as a core principle) differed from that of a commercial social media platform and advertising campaign he had previously worked on (where the main concern was the reputation of the corporation, not that of the users of the platform and the potential risks they might put themselves in by using the platform).

Moderation

On the private sector platform, said the discussant, “we didn’t have the option of pre-moderating comments because of the budget and because we had 800,000 users. To meet the campaign goals, it was more important for users to be engaged than to ensure content was safe. We focused on removing pornographic photos within 24 hours, using algorithms based on how much skin tone was in the photo.” In the fields of marketing and social media, it’s a fairly well-known issue that heavy-handed moderation kills platform engagement. “The more we educated and informed users about comment moderation, or removed comments, the deader the community became. The more draconian the moderation, the lower the engagement.”
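
A crude sketch of the kind of skin-tone heuristic he describes (the RGB rule and the 30% threshold below are invented for illustration; heuristics like this are well known to be unreliable and biased across skin tones, which is one reason flagged content still needs human review):

```python
from PIL import Image
import numpy as np

# Crude skin-tone heuristic: estimate the fraction of pixels falling in a rough
# "skin" RGB range and flag images above a threshold. The rule and threshold are
# illustrative only; such heuristics are notoriously unreliable and biased.

def skin_fraction(path):
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    skin = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)
    return float(skin.mean())

def needs_review(path, threshold=0.30):
    return skin_fraction(path) > threshold
```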

The discussant had also worked on a platform for youth to discuss and learn about sexual health and practices, where he said that users responded angrily to moderators and comments that restricted their participation. “We did expose our participants to certain dangers, but we also knew that social digital platforms are more successful when they provide their users with a sense of ownership and control. So we identified users that exhibited desirable behaviors and created a different tier of users who could take ownership (super users) to police the platform, flag comments as inappropriate, or temporarily ban users.” This allowed a 25% decrease in moderation. The organization discovered, however, that they had to be careful about how much power these super users had. “They ended up creating certain factions on the platform, and we then had to develop safeguards and additional mechanisms by which we moderated our super users!”

Direct Messages among users

In the private sector project example, engagement was measured by the number of direct or private messages sent between platform users. In the current scenario, however, said the discussant, “we have not allowed any direct messages between platform users because of the potential risks to girls of having places on the site that are hidden from moderators. So as you can see, we are removing some of our metrics by disallowing features because of risk. These activities are all things that would make the platform more engaging but there is a big fear that they could put girls at risk.”

Adopting a privacy, security, and safety policy

One discussant highlighted the importance of having privacy, safety and security policies before a project or program begins. “If you start thinking about it later on, you may have to go back and rebuild things from scratch because your security holes are in the design….” The way a database is set up to capture user data can make it difficult to query in the future or for users to have any control of what information is or is not being shared about them. “If you don’t set up the database with security and privacy in mind from the beginning, it might be impossible to make the platform safe for girls without starting from scratch all over again,” he said.
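
One way to make “security and privacy in mind from the beginning” concrete is to bake data minimization and user control into the schema itself. A minimal sketch, with invented table and column names:

```python
import sqlite3

# Sketch of privacy-by-design at the database level: store the minimum needed,
# keep consent and sharing preferences as first-class columns, and prefer coarse
# values over precise identifiers. All names here are invented for illustration.

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        user_id             TEXT PRIMARY KEY,   -- random ID, not a phone number or name
        age_band            TEXT,               -- '15-17', '18-24', ... instead of a birth date
        district            TEXT,               -- coarse location instead of a GPS point
        terms_version       TEXT,               -- which terms the user actually consented to
        consented_at        TEXT,
        share_with_partners INTEGER DEFAULT 0   -- off unless the user explicitly opts in
    )
""")
```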

He also cautioned that when making more secure choices from the start, platform and tool development generally takes longer and costs more. It can be harder to budget because designers may not have experience with costing and developing the more secure options.

“A valuable lesson is that you have to make sure that what you’re trying to do in the first place is worth it if it’s going to be that expensive. Is it worth a girl’s while to use a platform if she first has to wade through five pages of terms and conditions on a small mobile phone screen? Are those terms and conditions even relevant to her personally or within her local context? Every click you ask a user to make will reduce their interest in reaching the platform. And if we don’t imagine that a girl will want to click through five screens of terms and conditions, the whole effort might not be worth it.” Clearly, aspects such as terms and conditions and consent processes need to be designed specifically to fit new contexts and new kinds of users.

Making responsible tradeoffs

The Girls Privacy, Security and Safety policy and toolkit shared at the Salon includes a risk matrix where project implementers rank the intensity and probability of risks as high, medium and low. Based on how a situation, feature or other potential aspect is ranked and the possibility to mitigate serious risks, decisions are made to proceed or not. There will always be areas with a certain level of risk to the user. The key is in making decisions and trade-offs that balance the level of risk with the potential benefits or rewards of the tool, service, or platform. The toolkit can also help project designers to imagine potential unintended consequences and mitigate risk related to them. The policy also offers a way to systematically and pro-actively consider potential risks, decide how to handle them, and document decisions so that organizations and project implementers are accountable to girls, peers and partners, and organizational leadership.
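
A toy version of that risk-matrix logic (the scores, thresholds and decision rule here are invented for illustration, not the actual toolkit’s):

```python
# Toy risk matrix: rank probability and intensity as low/medium/high, combine
# them, and suggest a course of action. Scores and rules are illustrative only.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def assess(risk, probability, intensity, mitigable):
    score = LEVELS[probability] * LEVELS[intensity]
    if score >= 6 and not mitigable:
        decision = "do not proceed with this feature"
    elif score >= 4:
        decision = "proceed only with documented mitigation"
    else:
        decision = "proceed and monitor"
    return f"{risk}: {decision} (score {score})"

print(assess("direct messages between users", "medium", "high", mitigable=False))
```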

“We’ve started to change how we talk about user data in our organization,” said one discussant. “We have stopped thinking about it as something WE create and own, but more as something GIRLS own. Banks don’t own people’s money – they borrow it for a short time. We are trying to think about data that way in the conversations we’re having about data, funding, business models, proposals and partnerships. You don’t get to own your users’ data, and we’re not going to share de-anonymized data with you. We’re seeing legislation in some of the countries where we work going that way also, so it’s good to be thinking about this now and getting prepared.”

Take a look at our list of resources on the topic and add anything we may have missed!

 

Thanks to our friends at ThoughtWorks for hosting this Salon! If you’d like to join discussions like this one, sign up at Technology Salon. Salons are held under Chatham House Rule, therefore no attribution has been made in this post.


By Mala Kumar and Linda Raftree

Our April 21st NYC Technology Salon focused on issues related to the LGBT ICT4D community, including how LGBTQI issues are addressed in the context of stakeholders and ICT4D staff. We examined specific concerns that ICT4D practitioners who identify as LGBTQI have, as well as how LGBTQI stakeholders are (or are not) incorporated into ICT4D projects, programs and policies. Among the many issues covered in the Salon, the role of the Internet and mobile devices for both community building and surveillance/security concerns played a central part in much of the discussion.

To frame the discussion, participants were asked to think about how LGBTQI issues within ICT4D (and more broadly, development) are akin to gender. Mainstreaming gender in development starts with how organizations treat their own staff. Implementing programs, projects and policies with a focus on gender cannot happen if the implementers do not first understand how to treat staff, colleagues and those closest to them (i.e. family, friends). Likewise, without a proper understanding of LGBTQI colleagues and staff, programs that address LGBTQI stakeholders will be ineffective.

The lead discussants of the Salon were Mala Kumar, writer and former UN ICT4D staff, Tania Lee, current IRC ICT4D Program Officer, and Robert Valadéz, current UN ICT4D staff. Linda Raftree moderated the discussion.

Unpacking LGBTQI

The first discussant pointed out how we as ICT4D/development practitioners think of the acronym LGBTQI, particularly the T and I – transgender and intersex. Often, development work focuses on the sexual orientation portion of the acronym (the LGBQ), and not on what is considered in Western countries as transgenderism.

As one participant said, the very label of “transgender” is hard to convey in many countries where concepts of “third gender” and “two-spirit gender” exist. These disagreements in terminology have – in Bangladesh and Nepal for example – resulted in conflict and divisions of interest within LGBTQI communities. In other countries, such as Thailand and parts of the Middle East, “transgenderism” can be considered more “normal” or societally acceptable than homosexuality. Across Africa, Latin America, North America and Europe, homosexuality is a better understood – albeit sometimes severely criminalized and socially rejected – concept than transgenderism.

One participant noted from her previous first-hand work on services for lesbian, gay and bisexual people that, in North America, transgender communities are often given lower priority in LGBTQI services. In many cases she saw in San Francisco, homeless youth would identify as anything in order to gain access to needed services. Only after the services were provided did the beneficiaries realize the consequences of self-reporting or incorrectly self-reporting.

Security concerns within Unpacking LGBTQI

For many people, the very notion of self-identifying as LGBTQI poses severe security risks. From a data collection standpoint, this results in large problems in accurate representation of populations. It also results in privacy concerns. As one discussant mentioned, development and ICT4D teams often do not have the technical capacity (i.e. statisticians, software engineers) to properly anonymize data and/or keep data on servers safe from hackers. On the other hand, the biggest threat to security may just be “your dad finding your phone and reading a text message,” as one person noted.

Being an LGBTQI staff in ICT4D

 Our second lead discussant spoke about being (and being perceived as) an LGBTQI staff member in ICT4D. She noted that many of the ICT4D hubs, labs, centers, etc. are in countries that are notoriously homophobic. Examples include Uganda (Kampala), Kenya (Nairobi), Nigeria (Abuja, Lagos), Kosovo and Ethiopia (Addis). This puts people who are interested in technology for development and are queer at a distinct disadvantage.

Some of the challenges she highlighted include that ICT4D attracts colleagues from around the world who are the most likely to be adept at computers and Internet usage, and therefore more likely to seek out and find information about other staff/colleagues online. If those who are searching are homophobic, finding “evidence” against colleagues can be both easy and easy to disseminate. Along those lines, ICT4D practitioners are encouraged (and sometimes necessitated) to blog, use social media, and keep an online presence. In fact, many people in ICT4D find posts and contracts this way. However, keeping online professional and personal presences completely separate is incredibly challenging. Since ICT4D practitioners are working with colleagues most likely to actually find colleagues online, queer ICT4D practitioners are presented with a unique dilemma.

ICT4D practitioners are arguably the set of people within development that are the best fitted to utilize technology and programmatic knowledge to self-advocate as LGBT staff and for LGBT stakeholder inclusion. However, how are queer ICT4D staff supposed to balance safety concerns and professional advancement limitations when dealing with homophobic staff? This issue is further compounded (especially in the UN, as one participant noted) by being awarded the commonly used project-based contracts, which give staff little to no job security, bargaining power or general protection when working overseas.

Security concerns within being an LGBTQI staff in ICT4D

A participant who works in North America for a Kenyan-based company said that none of her colleagues ever mentioned her orientation, even though they must have found her publicly viewable blog on gender and she is not able to easily disguise her orientation. She talked about always finding and connecting to the local queer community wherever she goes, often through the Internet, and tries to support local organizations working on LGBT issues. Still, she and several other participants and discussants emphasized their need to segment online personal and professional lives to remain safe.

Another participant mentioned his time working in Ethiopia. The staff from the center he worked with made openly hostile remarks about gays, which reinforced his need to stay closeted. He noticed that the ICT staff of the organization made a concerted effort to research people online, and that Facebook made it difficult, if not impossible, to keep personal and private lives separate.

Another person reiterated this point by saying that as a gay Latino man, and the first person in his family to go to university, grad school and work in a professional job, he is a role model to many people in his community. He wants to offer guidance and support, and used to do so with a public online presence. However, at his current internationally-focused job he feels the need to self-censor and has effectively limited talking about his public online presence, because he often interacts with high level officials who are hostile towards the LGBTQI community.

One discussant also echoed this idea, saying that she is becoming a voice for the queer South Asian community, which is important because much of LGBT media is very white. The tradeoff for becoming this voice is compromising her career in the field because she cannot accept a lot of posts because they do not offer adequate support and security.

Intersectionality

Several participants and discussants offered their own experiences of the various levels of hostility and danger involved in even being suspected of being gay. One (female) participant began a relationship with a woman while working in a very conservative country, and recalled being terrified of being killed over the relationship. Local colleagues began to suspect, and eventually physically intervened by showing up at her house. This participant cited her “light-skinned privilege” as one reason that she did not suffer serious consequences from her actions.

Another participant recounted his time with the US Peace Corps. After a year, he started coming out and dating people in the host country. When one relationship went awry and he was turned in to the police for being gay, nothing came of the charges. Meanwhile, he saw local gay men being thrown into – and sometimes dying in – jail on the same charges. He and some other participants noted their relative privilege in these situations because they are white. This participant said that as a white male, he felt a sense of invincibility.

In contrast, a participant from an African country described his experience growing up and using ICTs as an escape because any physical indication he was gay would have landed him in jail, or worse. He had to learn how to change his mannerisms to be more masculine, had to learn how to disengage from social situations in real life, and live in the shadows.

One of the discussants echoed these concerns, saying that as a queer woman of color, everything is compounded. She was recruited for a position at a UN Agency in Kenya, but turned the post down because of the hostility towards gays and lesbians there. However, she noted that some queer people she has met – all white men from the States or Europe – have had overall positive experiences being gay with the UN.

Perceived as predators

One person brought up the “predator” stereotype often associated with gay men. He and his partner have had to turn down media opportunities where they could have served as role models for the gay community, especially poor queer men of color (one of the most difficult socioeconomic groups to reach), out of fear that this stereotype might affect their being hired to work in organizations that serve children.

Monitoring and baiting by the government

One participant who grew up in Cameroon mentioned that queer communities in his country use the Internet cautiously, even though it’s the best resource to find other queer people. The reason for the caution is that government officials have been known to pose as queer people to bait real users for illegal gay activity.

Several other participants cited this same phenomenon in different forms. A recent article talked about Egypt using new online surveillance tactics to find LGBTQI people. Some believe that this type of surveillance will also happen in Nigeria, a country notoriously hostile towards LGBTQI persons, and in other places.

There was also discussion about which technology is the safest for LGBTQI people. While the Internet can be monitored and traced back to a specific user, being able to connect from multiple access points and with varying levels of security creates a sense of anonymity that phones cannot provide. A person also generally carries their phone with them, so if the government intercepts a message on either the originating or receiving device, the implications are immediate unless a user can convince the government the device was stolen or used by someone else. In contrast, phones are more easily disposable and in several countries do not require registration (or a registered SIM card) tied to a specific person.

In Ethiopia, the government controls the phone networks and can in theory monitor messages for LGBTQI activity. This poses a particular threat since there is already legal precedent for convictions based on text messages. In some countries, major telecom carriers are owned by the national government; in others, they are national subsidiaries of an international company.

Another major concern raised relates back to privacy. Many major international development organizations lack the capacity to recruit and retain the software engineers, ICT architects, system operators, statisticians and other technology specialists needed to properly guard against hacking and surveillance. In some cases this work is prohibited by national government policy, and thus also requires legal advocacy. The mere collection of data and information can therefore pose a security threat to staff and stakeholders, LGBTQI and allies alike.

The “queer divide”

One discussant asked the group for data or anecdotal information related to a “queer divide.” A commonly understood problem in ICT4D work is the existence of divides: between genders, between urban and rural populations, between rich and poor, and between the socially accepted and the socially marginalized. Studies have also clearly demonstrated that people who are naturally extroverted and not shy benefit more from any given program or project. He wondered, then, whether there is any data to support a “queer divide” between those who are LGBTQI and those who are not. As demonstrated in the sections above, many queer people are forced to disengage socially and retreat from “normal” society to stay safe.

Success stories, key organizations and resources

Participants mentioned organizations and examples of more progressive policies for LGBTQI staff and stakeholders (this list is not comprehensive, nor does it suggest these organizations’ policies are foolproof), including:

We also compiled a much more extensive list of resources on the topic here as background reading, including organizations, articles and research. (Feel free to add to it!)

What can we do moving forward?

  • Engage relevant organizations, such as Out in Tech and Lesbians Who Tech, with specific solutions, such as coding privacy protocols for online communities and helping grassroots organizations target ads to relevant stakeholders.
  • Lobby smartphone manufacturers to increase privacy protections on mobile devices.
  • Lobby the US and other national governments to introduce “right to be forgotten” laws, which allow Internet users to wipe all records of themselves and their personal activity.
  • Support organizations and services that offer legal counsel to those in need.
  • Demand better and more comprehensive protection for LGBTQI staff, consultants and interns in international organizations.

Key questions to work on…

  • In some countries, a government owns the telecom companies. In others, telecom companies are national subsidiaries of international corporations. In countries where the government is actively surveilling networks for LGBTQI activity, or planning to, how does the type of telecom company factor in?
  • What datasets do we need on LGBTQI people for better programming?
  • How do we properly anonymize the data we collect? What are the standards of best practice? (See the pseudonymization sketch after this list.)
  • What policies need to be in place to better protect LGBTQI staff, consultants and interns? What kind of sensitizing activities, trainings and programming need to be done for local staff and less LGBTQI sensitive international staff in ICT4D organizations?
  • How much capacity have ICT4D/international organizations lost as a result of their policies for LGBTQI staff and stakeholders?
  • What are the roles and obligations of ICT4D/international organizations to their LGBTQI staff, now and in the future?
  • What are the ICT4D and international development programmatic links with LGBT stakeholders and staff? How do LGBT stakeholders intersect with water? Public health? Nutrition? Food security? Governance and transparency? Human rights? Humanitarian crises? How do LGBT staff intersect with capacity building? Trainings? Programming?
  • How do we safely and responsibly increase the visibility of LGBTQI people around the world?
  • How do we engage tech companies that are pro-LGBTQI, including Google, to do more for those who cannot or do not engage with their services?
  • What are the economic costs of homophobia, and does this provide a compelling enough case for countries to stop systemic LGBTQI-phobic behavior?
  • How do we mainstream LGBTQI issues in bigger development conferences and discussions?
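On the anonymization question above, there is no single agreed standard, but a common first step is pseudonymization: replacing direct identifiers such as names and phone numbers with keyed hashes, so that records can still be linked for analysis without exposing who is who. The minimal Python sketch below is illustrative only; the field names, sample records and key handling are assumptions, and keyed hashing on its own does not protect against re-identification through indirect identifiers such as location or age.

```python
import hashlib
import hmac
import secrets

# Hypothetical project key: generate once, store it separately from the data
# (ideally on a different device), and never publish it.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (name, phone number) with a keyed hash.

    Using HMAC rather than a plain hash means that someone without the key
    cannot re-identify people simply by hashing guessed names or numbers.
    """
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical raw records as they might come off a data collection device.
records = [
    {"name": "Asha K.", "phone": "+254700000001", "district": "Kisumu"},
    {"name": "Brian O.", "phone": "+254700000002", "district": "Kisumu"},
]

# Strip direct identifiers before the data leaves the collection device.
safe_records = [
    {"respondent_id": pseudonymize(r["phone"]), "district": r["district"]}
    for r in records
]
print(safe_records)
```

Whoever holds the key can reverse the mapping by hashing known identifiers, so the key deserves the same protection as the raw data, and aggregation or suppression of small groups is still needed before any public release.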

Thanks to the great folks at ThoughtWorks for hosting and providing a lovely breakfast to us! Technology Salons are carried out under Chatham House Rule, so no attribution has been made. If you’d like to join us for Technology Salons in future, sign up here!

Read Full Post »

It’s been two weeks since we closed out the M&E Tech Conference in DC and the Deep Dive in NYC. For those of you who missed it or who want to see a quick summary of what happened, here are some of the best tweets from the sessions.

We’re compiling blog posts and related documentation and will be sharing more detailed summaries soon. In the meantime, enjoy a snapshot!

https://twitter.com/neuguy/status/515134807672909826

https://twitter.com/dalgoso/status/515136050793291776

https://twitter.com/neuguy/status/515166952378343425

https://twitter.com/neuguy/status/515184242595487744

https://twitter.com/schmutzie/status/515215243388014592

https://twitter.com/prefontaine/status/515222154670252032

https://twitter.com/richmanmax/status/515576201084411904

https://twitter.com/sandhya_c_rao/status/516343304448131072

https://twitter.com/dalgoso/status/519879358370955264

Read Full Post »

Today as we jump into the M&E Tech conference in DC (we’ll also have a Deep Dive on the same topic in NYC next week), I’m excited to share a report I’ve been working on for the past year or so with Michael Bamberger: Emerging Opportunities in a Tech-Enabled World.

The past few years have seen dramatic advances in the use of hand-held devices (phones and tablets) for program monitoring and for survey data collection. Progress has been slower with respect to the application of ICT-enabled devices for program evaluation, but this is clearly the next frontier.

In the paper, we review how ICT-enabled technologies are already being applied in program monitoring and in survey research. We also review areas where ICTs are starting to be applied in program evaluation and identify new areas in which new technologies can potentially be applied. The technologies discussed include hand-held devices for quantitative and qualitative data collection and analysis, data quality control, GPS and mapping devices, environmental monitoring, satellite imaging and big data.

While the technological advances and the rapidly falling costs of data collection and analysis are opening up exciting new opportunities for monitoring and evaluation, the paper also cautions that more attention should be paid to basic quality control questions that evaluators normally ask about representativity of data and selection bias, data quality and construct validity. The ability to use techniques such as crowd sourcing to generate information and feedback from tens of thousands of respondents has so fascinated researchers that concerns about the representativity or quality of the responses have received less attention than is the case with conventional instruments for data collection and analysis.

Some of the challenges include: the potential for selection bias and weak sample design; M&E processes driven by the requirements of the technology and over-reliance on simple quantitative data; low institutional capacity to introduce ICTs and resistance to change; and issues of privacy.

None of this is intended to discourage the introduction of these technologies, as the authors fully recognize their huge potential. One of the most exciting areas concerns the promotion of a more equitable society through simple and cost-effective monitoring and evaluation systems that give voice to previously excluded sectors of the target populations; and that offer opportunities for promoting gender equality in access to information. The application of these technologies however needs to be on a sound methodological footing.

The last section of the paper offers some tips and ideas on how to integrate ICTs into M&E practice and potential pitfalls to avoid. Many of these were drawn from Salons and discussions with practitioners, given that there is little solid documentation or evidence related to the use of ICTs for M&E.

Download the full paper here! 

Read Full Post »

Debate and thinking around data, ethics and ICTs have been growing and expanding a lot lately, which makes me very happy!

Coming up on May 22 in NYC, the engine room, Hivos, the Berkman Center for Internet and Society, and Kurante (my newish gig) are organizing the latest in a series of events as part of the Responsible Data Forum.

The event will be hosted at ThoughtWorks and it is in-person only. Space is limited, so if you’d like to join us, let us know soon by filling in this form. 

What’s it all about?

This particular Responsible Data Forum event is an effort to map the ethical, legal, privacy and security challenges surrounding the increased use and sharing of data in development programming. The Forum will aim to explore the ways in which these challenges are experienced in project design and implementation, as well as when project data is shared or published in an effort to strengthen accountability. The event will be a collaborative effort to begin developing concrete tools and strategies to address these challenges, which can be further tested and refined with end users at events in Amsterdam and Budapest.

We will explore the responsible data challenges faced by development practitioners in program design and implementation.

Some of the use cases we’ll consider include:

  • projects collecting data from marginalized populations, aspiring to respect a do no harm principle, but also to identify opportunities for informational empowerment
  • project design staff seeking to understand and manage the lifespan of project data from collection, through maintenance, utilization, and sharing or destruction
  • project staff that are considering data sharing or joint data collection with government agencies or corporate actors
  • project staff who want to better understand how ICT4D will impact communities
  • projects exploring the potential of popular ICT-related mechanisms, such as hackathons, incubation labs or innovation hubs
  • projects wishing to use development data for research purposes, and crafting responsible ways to use personally identifiable data for academic purposes
  • projects working with children under the age of 18, struggling to balance the need for data to improve programming approaches with the demand for higher levels of protection for children

By gathering a significant number of development practitioners grappling with these issues, the Forum aims to pose practical and critical questions to the use of data and ICTs in development programming. Through collaborative sessions and group work, the Forum will identify common pressing issues for which there might be practical and feasible solutions. The Forum will focus on prototyping specific tools and strategies to respond to these challenges.

What will be accomplished?

Some outputs from the event may include:

  • Tools and checklists for managing responsible data challenges for specific project modalities, such as SMS surveys, constructing national databases, or social media scraping and engagement.
  • Best practices and ethical controls for data sharing agreements with governments, corporate actors, academia or civil society
  • Strategies for responsible program development
  • Guidelines for data-driven projects dealing with communities with limited representation or access to information
  • Heuristics and frameworks for understanding anonymity and re-identification of large development data sets (a simple illustration follows this list)
  • Potential policy interventions to create greater awareness and possibly consider minimum standards
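On the anonymity and re-identification point above, one simple heuristic such a framework might include is k-anonymity: a dataset offers k-anonymity if every combination of quasi-identifiers (district, age band, gender and so on) is shared by at least k records. The sketch below, with hypothetical column names and sample records, shows how cheaply a dataset's k can be checked before it is shared or published; it is a screening heuristic only, not a guarantee against re-identification.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the smallest group size when records are grouped by the
    given quasi-identifier columns (i.e. the dataset's k)."""
    groups = Counter(tuple(r[col] for col in quasi_identifiers) for r in records)
    return min(groups.values())

# Hypothetical survey extract: no names, but the combination of district,
# age band and gender may still single people out.
records = [
    {"district": "Kisumu", "age_band": "15-19", "gender": "F", "answer": "yes"},
    {"district": "Kisumu", "age_band": "15-19", "gender": "F", "answer": "no"},
    {"district": "Kisumu", "age_band": "20-24", "gender": "M", "answer": "yes"},
]

k = k_anonymity(records, ["district", "age_band", "gender"])
print(f"k = {k}")  # k = 1 here: at least one respondent is uniquely identifiable
```

A k of 1 means at least one respondent can be singled out from the quasi-identifiers alone; common responses include widening categories (broader age bands or regions) or suppressing the smallest groups before release.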

Hope to see some of you on the 22nd! Sign up here if you’re interested in attending, and read more about the Responsible Data Forum here.

 

Read Full Post »

Last week’s Technology Salon New York City touched on ethics in technology for democracy initiatives. We heard from lead discussants Malavika Jayaram, Berkman Center for Internet and Society; Ivan Sigal, Global Voices; and Amilcar Priestley, Afrolatin@ Project. Though the topic was catalyzed by the Associated Press’ article on ‘Zunzuneo’ (a.k.a. ‘Cuban Twitter’) and subsequent discussions in the press and elsewhere, we aimed to cover some of the wider ethical issues encountered by people and organizations who implement technology for democracy programs.

Salons are off the record spaces, so no attribution is made in this post, but I’ve summarized the discussion points here:

First up: Zunzuneo

The media misinterpreted much of the Zunzuneo story. Zunzuneo was not a secret mission, according to one Salon participant, as it’s not in the remit of USAID to carry out covert operations. The AP article conflated a number of ideas regarding how USAID works and the contracting mechanisms that were involved in this case, he said. USAID and the Office of Transition Initiatives (OTI) frequently disguise members, organizations, and contractors that work for them on the ground for security reasons. (See USAID’s side of the story here). This may still be an ethical question, but it is not technically “spying.” The project was known within the OTI and development community, but on a ‘need to know’ basis. It was not a ‘fly by night’ operation; it was more a ‘quietly and not very effectively run project.’

There were likely ethics breaches in Zunzuneo, from a legal standpoint. It’s not clear whether the data and phone numbers collected from the Cuban public for the project were obtained in a legal or ethical way. Some reports say they were obtained through a mid-level employee (a “Cuban engineer who had gotten the phone list” according to the AP article). (Note: I spoke separately to someone close to the project who told me that user opt-in/opt-out and other standard privacy protocols were in place). It’s also not entirely clear whether, as the AP states, the user information collected was being categorized into segments who were loyal or disloyal to the Cuban government, information which could put users at risk if found out.

Zunzuneo took place in a broader historical and geo-political context. As one person put it, the project followed Secretary Clinton’s speeches on Internet Freedom. There was a rush to bring technology into the geopolitical space, and ‘the articulation of why technology was important collided with a bureaucratic process in USAID and the State Department (the ‘F process’) that absorbed USAID into the State Department and made development part of the State Department’s broader political agenda.’ This agenda had been in the works for quite some time, and was part of a wider strategy of quietly moving into development spaces and combining development, diplomacy, intelligence and military (defense), the so-called 3 D’s.

Implementers failed to think through good design, ethics and community aspects of the work. In a number of projects of this type, the idea was that if you give people technology, they will somehow create bottom up pressure for political social change. As one person noted, ‘in the Middle East, as a counter example, the tech was there to enable and assist people who had spent 8-10 years building networks. The idea that we can drop tech into a space and an uprising will just happen and it will coincidentally push the US geopolitical agenda is a fantasy.’ Often these kinds of programs start with a strategic communications goal that serves a political end of the US Government. They are designed with the idea that a particular input equals some kind of a specific result down the chain. The problem comes when the people doing the seeding of the ideas and inputs are not familiar with the context they will be operating in. They are injecting inputs into a space that they don’t understand. The bigger ethical question is: Why does this thought process prevail in development? Much of that answer is found in US domestic politics and the ways that initiatives get funded.

Zunzuneo was not a big surprise for Afrolatino organizations. According to one discussant, Afrolatino organizations were not surprised when the Zunzuneo article came out, given the geopolitical history and the ongoing presence of the US in Latin America. Zunzuneo was seen as a 21st Century version of what has been happening for decades. Though it was criticized, it was not seen as particularly detrimental. Furthermore, the Afrolatino community (within the wider Latino community) has had a variety of relationships with the US over time – for example, some Afrolatino groups supported the Contras. Many Afrolatino groups have felt that they were not benefiting overall from the mestizo governments who have held power. In addition, much of Latin America’s younger generation is less tainted by the Cold War mentality, and does not see US involvement in the region as necessarily bad. Programs like Zunzuneo come with a lot of money attached, so often wider concerns about their implications are not in the forefront because organizations need to access funding. Central American and Caribbean countries are only just entering into a phase of deeper analysis of digital citizenship, and views and perceptions on privacy are still being developed.

Perceptions of privacy

There are differences in perception when it comes to privacy and these perceptions are contextual. They vary within and across countries and communities based on age, race, gender, economic levels, comfort with digital devices, political perspective and past history. Some older people, for example, are worried about the privacy violation of having their voice or image recorded, because the voice, image and gaze hold spiritual value and power. These angles of privacy need to be considered as we think through what privacy means in different contexts and adapt our discourse accordingly.

Privacy is hard to explain, as one discussant said: ‘There are not enough dead bodies yet, so it’s hard to get people interested. People get mad when the media gets mad, and until an issue hits the media, it may go unnoticed. It’s very hard to conceptualize the potential harm from lack of privacy. There may be a chilling effect but it’s hard to measure. The digital divide comes in as well, and those with less exposure may have trouble understanding devices and technology. They will then have even greater trouble understanding beyond the device to data doubles, disembodied information and de-anonymization, which are about 7 levels removed from what people can immediately see. Caring a lot about privacy can get you labeled as paranoid or a crazy person in many places.’

Fatalism about privacy can also hamper efforts. In the developing world, many feel that everything is corrupt and inept, and that there is no point in worrying about privacy and security. ‘Nothing ever works anyway, so even if the government wanted to spy on us, they’d screw it up,’ is the feeling. This is often the attitude of human rights workers and others who could be at greatest risk from privacy breaches or data collection, such as that which was reportedly happening within Zunzuneo. Especially among populations and practitioners who have less experience with new technologies and data, this can create large-scale risk.

Intent, action, context and consequences

Good intentions with little attention to privacy vs data collection with a hidden political agenda. Where are the lines when data that are collected for a ‘good cause’ (for example, to improve humanitarian response) might be used for a different purpose that puts vulnerable people at risk? What about data that are collected with less altruistic intentions? What about when the two scenarios overlap? Data might be freely given or collected in an emergency that would be considered a privacy violation in a ‘development’ setting, or the data collection may lead to a privacy violation post-emergency. Often, slapping the ‘obviously good and unarguably positive’ label of ‘Internet freedom’ on something implies that it’s unquestionably positive when it may in fact be part of a political agenda with a misleading label. There is a long history of those with power collecting data that helps them understand and/or control those with less power, as one Salon participant noted, and we need to be cognizant of that when we think about data and privacy.

US Government approaches to political development often take an input/output approach, when, in fact, political development is not the same as health development. ‘In political work, there is no clear and clean epidemiological goal we are trying to reach,’ noted a Salon participant. Political development is often contentious, and the targets and approaches are very different from those of health. When a health model and its rhetoric are used to work on other development issues, it is misleading. The wholesale adoption of these kinds of disease model approaches leaves people and communities out of the decision making process about their own development. Similarly, the rhetoric of strategic communications and its inclusion in the development agenda came about after the War on Terror, and it is also a poor fit for political development. The rhetoric of ‘opening’ and ‘liberating’ data is similar. These arguments may work well for one kind of issue, but they are not transferable to a political agenda. One Salon participant also pointed to the rhetoric of the privatization model, explaining that a profound yet not often considered implication of the privatization of services is that once a service passes over to the private sector, the Freedom of Information Act (FOIA) does not apply, and citizens and human rights organizations lose FOIA as a tool. Examples included the US prison system and the Blackwater case of several years ago.

It can be confusing for implementers to know what to do, what tools to use, what funding to accept and when it is OK to bring in an outside agenda. Salon participants provided a number of examples where they had to make choices and felt ethics could have been compromised. Is it OK to sign people up on Facebook or Gmail during an ICT and education project, given these companies’ marketing and privacy policies? What about working on aid transparency initiatives in places where human rights work or crime reporting can get people killed or individual philanthropists/donors might be kidnapped or extorted? What about a hackathon where the data and solutions are later given to a government’s civilian-military affairs office? What about telling LGBT youth about a social media site that encourages LGBT youth to connect openly with one another (in light of recent harsh legal penalties against homosexuality)? What about employing a user-centered design approach for a project that will eventually be overlaid on top of a larger platform, system or service that does not pass the privacy litmus test? Is it better to contribute to improving healthcare while knowing that your software system might compromise privacy and autonomy because it sits on top of a biometric system, for example? Participants at the Salon face these ethical dilemmas every day, and as one person noted, ‘I wonder if I am just window dressing something that will look and feel holistic and human-centered, but that will be used to justify decisions down the road that are politically negative or go against my values.’ Participants said they normally rely on their own moral compass, but clearly many Salon participants are wrestling with the potential ethical implications of their actions.

What we can do? Recommendations from Salon participants

Work closely with and listen to local partners, who should be driving the process and decisions. There may be a role for an outside perspective, but the outside perspective should not trump the local one. Inculcate and support local communities to build their own tools, narratives, and projects. Let people set their own agendas. Find ways to facilitate long-term development processes around communities rather than being subject to agendas from the outside.

Consider this to be ICT for Discrimination and think in every instance and every design decision about how to dial down discrimination. Data lead to sorting, and data get lumped into clusters. Find ways during the design process to reduce the discrimination that will come from that sorting and clustering process. The ‘Do no harm’ approach is key. Practitioners and designers should also be wary of the automation of development and the potential for automated decisions to be discriminatory.

Call out hypocrisy. Those of us who sit at Salons or attend global meetings hold tremendous privilege and power as compared to most of the rest of the world. ‘It’s not landless farmers or disenfranchised young black youth in Brazil who get to attend global meetings,’ said one Salon attendee. ‘It’s people like us. We need to be cognizant of the advantage we have as holders of power.’ Here in the US, the participant added, we need to be more aware of what private sector US technology companies are doing to take advantage of and maintain their stronghold in the global market and how the US government is working to allow US corporations to benefit disproportionately from the current Internet governance structure.

Use a rights-based approach to data and privacy to help to frame these issues and situations. Disclosure and consent are sometimes considered extraneous, especially in emergency situations. People think ‘this might be the only time I can get into this disaster or conflict zone, so I’m going to Hoover up as much data as possible without worrying about privacy.’ On the other hand, sometimes organizations are paternalistic and make choices for people about their own privacy. Consent and disclosure are not new issues; they are merely manifested in new ways as new technology changes the game and we cannot guarantee anonymity or privacy any more for research subjects. There is also a difference between information a person actively volunteers and information that is passively collected and used without a person’s knowledge. Framing privacy in a human rights context can help place importance on both processes and outcomes that support people’s rights to control their own data and that increase empowerment.

Create a minimum standard for privacy. Though we may not be able to determine a ceiling for privacy, one Salon participant said we should at least consider a floor or a minimum standard. Actors on the ground will always feel that privacy standards are a luxury because they have little know-how and little funding, so creating and working within an ethical standard should be a mandate from donors. The standard could be established as an M&E criterion.

Establish an ethics checklist to decide on funding sources and create policies and processes that help organizations to better understand how a donor or sub-donor would access and/or use data collected as part of a project or program they are funding. This is not always an easy solution, however, especially for cash-strapped local organizations. In India, for example, organizations are legally restricted from receiving certain types of funding based on government concerns that external agencies are trying to bring in Western democracy and Western values. Local organizations have a hard time getting funding for anti-censorship or free speech efforts. As one person at the Salon said, ‘agencies working on the ground are in a bind because they can’t take money from Google because it’s tainted, they can’t take money from the State Department because it’s imperialism and they can’t take money from local donors because there are none.’

Use encryption and other technology solutions. Given the low levels of understanding and awareness of these tools, more needs to be done so that more organizations learn how to use them, and they need to be made simpler, more accessible and user-friendly. ‘Crypto Parties’ can help get organizations familiar with encryption and privacy, but better outreach is needed so that organizations understand the relevance of encryption and feel welcome in tech-heavy environments.
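As one concrete illustration of how low the technical barrier can be, the minimal sketch below uses the widely available Python cryptography library to encrypt a file of survey responses at rest before it is synced, emailed or backed up to the cloud. The filenames are hypothetical; the harder problem that trainings like Crypto Parties need to address is key management: where the key lives, who holds it, and what happens if a device is seized.

```python
from cryptography.fernet import Fernet

# Generate a key once and store it separately from the data,
# for example on a different device or with a trusted partner organization.
key = Fernet.generate_key()
fernet = Fernet(key)

# Hypothetical file of sensitive survey responses.
with open("responses.csv", "rb") as f:
    plaintext = f.read()

# Encrypt before the file is synced, emailed or stored in the cloud.
with open("responses.csv.enc", "wb") as f:
    f.write(fernet.encrypt(plaintext))

# Only someone holding the key can recover the original data.
with open("responses.csv.enc", "rb") as f:
    recovered = fernet.decrypt(f.read())
assert recovered == plaintext
```

Encrypting files at rest complements, rather than replaces, transport encryption (HTTPS) and full-disk encryption on the devices themselves.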

Thanks to participants and lead discussants for the great discussions and to ThoughtWorks for hosting us at their offices!

 If you’d like to attend future Salons, sign up here!

Read Full Post »

This is a cross post from Heather Leson, Community Engagement Director at the Open Knowledge Foundation. The original post appeared here on the School of Data site.

by Heather Leson

What is the currency of change? What can coders (consumers) do with IATI data? How can suppliers deliver the data sets? Last week I had the honour of participating in the Open Data for Development Codeathon and the International Aid Transparency Initiative Technical Advisory Group meetings. IATI’s goal is to make information about aid spending easier to access, use, and understand. It was great that these events were back-to-back to push a big picture view.

My big takeaways included similar themes that I have learned on my open source journey:

You can talk about open data [insert tech or OS project] all you want, but if you don’t have an interactive community (including mentorship programmes), an education strategy, an engagement/feedback loop plan, a translation/localization plan and a process for people to learn how to contribute, then you build a double-edged barrier: a barrier to entry and a barrier to impact and contributor outputs.

Currency

About the Open Data in Development Codeathon

At the close of the Codeathon, Mark Surman, Executive Director of the Mozilla Foundation, gave us a call to action to make the web. Well, in order to create a world of data makers, I think we should run aid and development processes through this mindset. What is the currency of change? I hear many people talking about theory of change and impact, but I’d like to add ‘currency’. This is not only about money; it is about using the best brainpower and best energy sources to solve real world problems in smart ways. I think if we heed Mark’s call to action with a “yes, and”, then we can rethink how we approach complex change. Every single industry is suffering from the same issue: how to deal with the influx of supply and demand in information. We need to change how we approach the problem. Combined events like these give a window into tackling problems in a new format. It is not about the next greatest app, but more about asking: how can we learn from the Webmakers and build with each other in our respective fields and networks?

Ease of Delivery

The IATI community / network is very passionate about moving the ball forward on releasing data. During the sessions, it was clear that the attendees see some gaps and are already working to fill them. The new IATI website is set up to grow with a Community component. The feedback from each of the sessions was distilled by the IATI – TAG and Civil Society Guidance groups to share with the IATI Secretariat.

In the Open Data in Development, Impact of Open Data in Developing Countries, and CSO Guidance sessions, we discussed some key items about sharing, learning, and using IATI data. Farai Matsika, of the International HIV/AIDS Alliance, was particularly poignant in reminding us of IATI’s CSO purpose: we need to share data with those we serve.


One of the biggest themes was data ethics. As we rush to ask NGOs and CSOs to release data, what are some of the data pitfalls? Anahi Ayala Iaccuci of Internews and Linda Raftree of Plan International USA both reminded participants that data needs to be anonymized to protect those at risk. Ms. Iaccuci asked that we consider the complex nature of sharing both sides of the open data story – successes and failures. As well, she advised: don’t create trust, but think about who people are trusting. Turning this model around is key to rethinking assumptions. I would add to her point: trust and sharing are currency and will add to the success measures of IATI. If people don’t trust the IATI data, they won’t share and use it.

Anne Crowe of Privacy International frequently asked attendees to consider the ramifications of opening data. It is clear that the IATI TAG does not curate the data that NGOs and CSOs share. Thus it falls on each of these organizations to learn how to be data makers in order to contribute data to IATI. Perhaps organizations need a lead educator and curator to ensure the future success of the IATI process, including quality data.

I think that School of Data and the Partnership for Open Data have a huge part to play with IATI. My colleague Zara Rahman is collecting user feedback for the Open Development Toolkit, and Katelyn Rogers is leading the Open Development mailing list. We collectively want to help people become data makers and consumers to effectively achieve their development goals using open data. This also means also tackling the ongoing questions about data quality and data ethics.


Here are some additional resources shared during the IATI meetings.

Read Full Post »


Migration has been a part of the human experience since the dawn of time, and populations have always moved in search of resources and better conditions. Today, unaccompanied children and youth are an integral part of national and global migration patterns, often leaving their place of origin due to violence, conflict, abuse, or other rights violations, or simply to seek better opportunities for themselves.

It is estimated that 33 million (or some 16 percent) of the total migrant population today is younger than age 20. Child and adolescent migrants make up a significant proportion of the total population of migrants in Africa (28 percent), Asia (21 percent), Oceania (11 percent), Europe (11 percent), and the Americas (10 percent).

The issue of migration is central to the current political debate as well as to the development discussion, especially in conversations about the “post 2015” agenda. Though many organizations are working to improve children’s well-being in their home communities, prevention work with children and youth is not likely to end migration. Civil society organizations, together with children and youth, government, community members, and other stakeholders can help make migration safer and more productive for those young people who do end up on the move.

As the debate around migration rages, access to and use of ICTs is expanding exponentially around the globe. For this reason Plan International USA and the Oak Foundation felt it was an opportune time to take stock of the ways that ICTs are being used in the child and youth migration process.

Our new report, “Modern Mobility: the role of ICTs in child and youth migration” takes a look at:

  • how children and youth are using ICTs to prepare for migration; to guide and facilitate their journey; to keep in touch with families; to connect with opportunities for support and work; and to cope with integration, forced repatriation or continued movement; and
  • how civil society organizations are using ICTs to facilitate and manage their work; to support children and youth on the move; and to communicate and advocate for the rights of child and youth migrants.

In the Modern Mobility paper, we identify and provide examples of three core ways that child and youth migrants are using new ICTs during the different phases of the migration process:

  1. for communicating and connecting with families and friends
  2. for accessing information
  3. for accessing services

We then outline seven areas where we found CSOs are using ICTs in their work with child and youth migrants, and we offer some examples:

Ways that CSOs are using ICTs in their work with child and youth migrants.

Though we were able to identify some major trends in how children and youth themselves use ICTs and how organizations are experimenting with ICTs in programming, we found little information on the impact that ICTs and ICT-enabled programs and services have on migrating children and youth, whether positive or negative. Most CSO practitioners that we talked with said that they had very little awareness of how other organizations or initiatives similar to their own were using ICTs. Most also said they did not know where to find orientation or guidance on good practice in the use of ICTs in child-centered programming, ICTs in protection work (aside from protecting children from online risks), or use of ICTs in work with children and young people at various stages of migration. Most CSO practitioners we spoke with were interested in learning more, sharing experiences, and improving their capacities to use ICTs in their work.

Based on Plan Finland’s “ICT-Enabled Development Guide” (authored by Hannah Beardon), the Modern Mobility report provides CSOs with a checklist to support thinking around the strategic use of ICTs in general.

ICT-enabled development checklist developed by Hannah Beardon for Plan International.

We also offer a list of key considerations for practitioners who wish to incorporate new technologies into their work, including core questions to ask about access, age, capacity, conflict, connectivity, cost, disability, economic status, electricity, existing information ecosystems, gender, information literacy, language, literacy, power, protection, privacy, sustainability, and user-involvement.

Our recommendation for taking this area forward is to develop greater awareness and capacity among CSOs regarding the potential uses and risks of ICTs in work with children and youth on the move by:

  1. Establishing an active community of practice on ICTs and children and youth on the move.
  2. Mapping and sharing existing projects and programs.
  3. Creating a guide or toolbox on good practice for ICTs in work with children and youth on the move.
  4. Further providing guidance on how ICTs can help “normal” programs to reach out to and include children and youth on the move.
  5. Further documentation and development of an evidence base.
  6. Sharing and distributing this report for discussion and action.

Download the Modern Mobility report here.

We’d love comments and feedback, and information about examples or documentation/evidence that we did not come across while writing the report!

Read Full Post »
