
Posts Tagged ‘privacy’

Crowdsourcing our Responsible Data questions, challenges and lessons. (Photo by Amy O’Donnell).

At Catholic Relief Services’ ICT4D Conference in May 2016, I worked with Amy O’Donnell (Oxfam GB) and Paul Perrin (CRS) to facilitate a participatory session that explored notions of Digital Privacy, Security and Safety. We had a full room, with a widely varied set of experiences and expertise.

The session kicked off with stories of privacy and security breaches. One person told of having personal data stolen when a federal government clearance database was compromised. We also shared how a researcher in Denmark scraped very personal data from the OkCupid online dating site and opened it up to the public.

A comparison was made between the OkCupid data situation and the work that we do as development professionals. When we collect very personal information from program participants, they may not expect that their household-level income, health data or personal habits would be ‘opened’ at some point.

Our first task was to explore and compare the meaning of the terms: Privacy, Security and Safety as they relate to “digital” and “development.”

What do we mean by privacy?

The “privacy” group talked quite a bit about contextuality of data ownership. They noted that there are aspects of privacy that cut across different groups of people in different societies, and that some aspects of privacy may be culturally specific. Privacy is concerned with ownership of data and protection of one’s information, they said. It’s about who owns data and who collects and protects it and notions of to whom it belongs. Private information is that which may be known by some but not by all. Privacy is a temporal notion — private information should be protected indefinitely over time. In addition, privacy is constantly changing. Because we are using data on our mobile phones, said one person, “Safaricom knows we are all in this same space, but we don’t know that they know.”

Another said that in today’s world, “You assume others can’t know something about you, but things are actually known about you that you don’t even know that others can know. There are some facts about you that you don’t think anyone should know or be able to know, but they do.” The group mentioned website terms and conditions, corporate ownership of personal data and a lack of control of privacy now. Some felt that we are unable to maintain our privacy today, whereas others felt that one could opt out of social media and other technologies to remain in control of one’s own privacy. The group noted that “privacy is about the appropriate use of data for its intended purpose. If that purpose shifts and I haven’t consented, then it’s a violation of privacy.”

What do we mean by security?

The Security group considered security to relate to an individual’s information. “It’s your information, and security of it means that what you’re doing is protected, confidential, and access is only for authorized users.” Security was also related to the location of where a person’s information is hosted and the legal parameters. Other aspects were related to “a barrier – an anti-virus program or some kind of encryption software, something that protects you from harm…. It’s about setting roles and permissions on software and installing firewalls, role-based permissions for accessing data, and cloud security of individuals’ data.” A broader aspect of security was linked to the effects of hacking that lead to offline vulnerability, to a lack of emotional security or feeling intimidated in an online space. Lastly, the group noted that “we, not the systems, are the weakest link in security – what we click on, what we view, what we’ve done. We are our own worst enemies in terms of keeping ourselves and our data secure.”

What do we mean by safety?

The Safety group noted that it’s difficult to know the difference between safety and security. “Safety evokes something highly personal. Like privacy… it’s related to being free from harm personally, physically and emotionally.” The group raised examples of protecting children from harmful online content or from people seeking to harm vulnerable users of online tools. Keeping your online financial information safe, and feeling confident that a service was ‘safe’ to use, were also raised. Safety was considered to be linked to the concept of risk. “Safety engenders a level of trust, which is at the heart of safety online,” said one person.

In the context of data collection for communities we work with – safety was connected to data minimization concepts and linked with vulnerability, and a compounded vulnerability when it comes to online risk and safety. “If one person’s data is not safely maintained it puts others at risk,” noted the group. “And pieces of information that are innocuous on their own may become harmful when combined.” Lastly, the notion of safety as related to offline risk or risk to an individual due to a specific online behavior or data breach was raised.

It was noted that in all of these terms: privacy, security and safety, there is an element of power, and that in this type of work, a power relations analysis is critical.

The Digital Data Life Cycle

After unpacking the above terms, Amy took the group through an analysis of the data life cycle (courtesy of the Engine Room’s Responsible Data website) in order to highlight the different moments where the three concepts (privacy, security and safety) come into play.

  • Plan/Design
  • Collect/Find/Acquire
  • Store
  • Transmit
  • Access
  • Share
  • Analyze/use
  • Retention
  • Disposal
  • Afterlife

Participants added additional stages in the data life cycle that they passed through in their work (coordinate, monitor the process, monitor compliance with data privacy and security policies). We placed the points of the data life cycle on the wall, and invited participants to:

  • Place a pink sticky note under the stage in the data life cycle that resonates or interests them most and think about why.
  • Place a green sticky note under the stage that is the most challenging or troublesome for them or their organizations and think about why.
  • Place a blue sticky note under the stage where they have the most experience, and to share a particular experience or tip that might help others to better manage their data life cycle in a private, secure and safe way.

Challenges, concerns and lessons

Design as well as policy are important!

  • Design drives everything else. We often start from the point of collection, when really it’s at the design stage that we should think about the burden of data collection and define the minimum we can ask of people. How we design – even how we get consent – can inform how the whole process happens.
  • When we get part-way through the data life cycle, we often wish we had thought of the whole cycle at the beginning, during the design phase.
  • In addition to good design, coordination of data collection needs to be thought about early in the process so that duplication can be reduced. This can also reduce fatigue for people who are asked over and over for their data.
  • Informed consent is such a critical issue that it needs to be linked with the entire design process for the whole data life cycle. How do you explain to people that you will be giving their data away, anonymizing it, separating it out, encrypting it? There are often flow-down clauses in contracts that shift responsibilities for data protection and security, and it’s not always clear who is responsible for those data processes. How can you be sure that they are doing it properly and in a painstaking way?
  • Anonymization is also an issue. It’s hard to know to what level to anonymize things like call data records — to the individual? Township? District level? And for how long will anonymization actually hold up? (See the sketch after this list for a simple illustration of the aggregation trade-off.)
  • The lack of good design and policy contributes to overlapping efforts and poor coordination of data collection efforts across agencies. We often collect too much data in poorly designed databases.
  • Policy is not enough – we need to do a much better job of monitoring compliance with policy.
  • Institutional Review Boards (IRBs) and compliance aspects need to be updated to the new digital data reality. At the same time, sometimes IRBs are not the right instrument for what we are aiming to achieve.
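
To make the anonymization bullet above more concrete, here is a minimal sketch in Python of rolling call records up to a chosen geographic level and withholding any group smaller than a minimum size (a crude, k-anonymity-style threshold). The field names, record values and threshold are all hypothetical; this is an illustration of the trade-off, not a real anonymization pipeline.

    from collections import Counter

    MIN_GROUP_SIZE = 5  # hypothetical threshold: groups smaller than this are withheld

    def aggregate_counts(records, level="district", k=MIN_GROUP_SIZE):
        """Count records per geographic unit, dropping units with fewer than k records."""
        counts = Counter(r[level] for r in records)
        return {unit: n for unit, n in counts.items() if n >= k}

    # Toy records with made-up identifiers and place codes.
    records = [
        {"caller": "a1", "township": "T-01", "district": "D-1"},
        {"caller": "a2", "township": "T-01", "district": "D-1"},
        {"caller": "a3", "township": "T-01", "district": "D-1"},
        {"caller": "a4", "township": "T-01", "district": "D-1"},
        {"caller": "a5", "township": "T-01", "district": "D-1"},
        {"caller": "a6", "township": "T-02", "district": "D-1"},
        {"caller": "a7", "township": "T-09", "district": "D-2"},
    ]

    print(aggregate_counts(records, level="district"))   # {'D-1': 6}  (D-2 withheld)
    print(aggregate_counts(records, level="township"))   # {'T-01': 5} (T-02, T-09 withheld)

Even this toy version shows why the level and duration of anonymization are policy questions as much as technical ones: the finer the geographic level, the more groups have to be suppressed to avoid re-identification, and the protection erodes as other data sets become available over time.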

Data collection needs more attention.

  • Data collection is the easy part – where institutions struggle is with analyzing and doing something with the data we collect.
  • Organizations often don’t have a well-structured or systematic process for data collection.
  • We need to be clearer about what type of information we are collecting and why.
  • We need to update our data protection policy.

Reasons for data sharing are not always clear.

  • How can we share data securely and efficiently without building duplicative systems? We should be thinking more during the design and collection phase about whether the data is going to be interoperable and who needs to access it.
  • How can we get the right balance in terms of data sharing? Some donors really push for information that can put people in real danger – like details of people who have participated in particular programs that would put them at risk with their home governments. Organizations really need to push back against this. It’s an education thing with donors. Middle management and intermediaries are often the ones that push for this type of data because they don’t really have a handle on the risk it represents. They are the weak points because of the demands they are putting on people. This is a challenge for open data policies – leaving risk assessment open to individuals invites the laziest possible job of thinking about the potential risks for that data.
  • There are legal aspects of sharing too – such as the USAID open data policy where those collecting data have to share with the government. But we don’t have a clear understanding of what the international laws are about data sharing.
  • There are so many pressures to share data but they are not all fully thought through!

Data analysis and use of data are key weak spots for organizations.

  • We are just beginning to think through capturing lots of data.
  • Data is collected but not always used. Too often it’s extractive data collection. We don’t have the feedback loops in place, and when there are feedback loops we often don’t use the feedback to make changes.
  • We often forget to go back to the people who have provided us with data to share it back with them. It’s not often that we hold a consultation with the community to really involve them in how the data can be used.

Secure storage is a challenge.

  • We have hundreds of databases across the agency in various formats, hard drives and states of security, privacy and safety. Are we able to keep these secure?
  • We need to think more carefully about where we hold our data and who has access to it. Sometimes our data is held by external consultants. How should we be addressing that?

Disposing of data properly in a global context is hard!

  • It’s difficult to dispose of data when there are multiple versions of it and a data footprint.
  • Disposal is an issue. We’re doing a lot of server upgrades and many of these are in remote locations. How do we ensure that the right disposal process is going on globally, short of physically seeing that hard drives are smashed up?
  • We need to do a better job of disposal on personal laptops. I’ve done a lot of data collection on my personal laptop – no one has ever followed up to see if I’ve deleted it. How are we handling data handover? How do you really dispose of data?
  • Our organization hasn’t even thought about this yet!

Tips and recommendations from participants

  • Organizations should be using different tools. They should be using Pretty Good Privacy techniques rather than relying on free or commercial tools like Google or Skype.
  • People can be your weakest link if they are not aware or they don’t care about privacy and security. We send an email out to all staff on a weekly basis that talks about taking adequate measures. We share tips and stories. That helps to keep privacy and security front and center.
  • Even if you have a policy, the hard part is enforcement, accountability, and policy reform. If our organizations are not setting policy around best practices in this area, then it’s on us to be sure we understand what best practice is, and to advocate for it. Let’s do what we can before the policy catches up.
  • The Responsible Data Forum and Tactical Tech have a great set of resources.
  • Oxfam has a Responsible Data Policy and Girl Effect has developed a Girls’ Digital Privacy, Security and Safety Toolkit that can also offer some guidance.

In conclusion, participants agreed that development agencies and NGOs need to take privacy, security and safety seriously. They can no longer afford to implement security at a lower level than corporations. “Times are changing and hackers are no longer just interested in financial information. People’s data is very valuable. We need to change and take security as seriously as corporates do!” as one person said.

 

 


At our April 5th Salon in Washington, DC we had the opportunity to take a closer look at open data and privacy and discuss the intersection of the two in the framework of ‘responsible data’. Our lead discussants were Amy O’Donnell, Oxfam GB; Rob Baker, World Bank; Sean McDonald, FrontlineSMS. I had the pleasure of guest moderating.

What is Responsible Data?

We started out by defining ‘responsible data’ and some of the challenges when thinking about open data in a framework of responsible data.

The Engine Room defines ‘responsible data’ as

the duty to ensure people’s rights to consent, privacy, security and ownership around the information processes of collection, analysis, storage, presentation and reuse of data, while respecting the values of transparency and openness.

Responsible Data can be like walking a tightrope, noted our first discussant, and you need to find the right balance between opening data and sharing it, all the while being ethical and responsible. “Data is inherently related to power – it can create power, redistribute it, make the powerful more powerful or further marginalize the marginalized. Getting the right balance involves asking some key questions throughout the data lifecycle, from design of the data gathering all the way through to disposal of the data.”

How can organizations be more responsible?

If an organization wants to be responsible about data throughout the data life cycle, some questions to ask include:

  • In whose interest is it to collect the data? Is it extractive or empowering? Is there informed consent?
  • What and how much do you really need to know? Is the burden of collecting and the liability of storing the data worth it when balanced with the data’s ability to represent people and allow them to be counted and served? Do we know what we’ll actually be doing with the data?
  • How will the data be collected and treated? What are the new opportunities and risks of collecting and storing and using it?
  • Why are you collecting it in the first place? What will it be used for? Will it be shared or opened? Is there a data sharing MOU and has the right kind of consent been secured? Who are we opening the data for and who will be able to access and use it?
  • What is the sensitivity of the data and what needs to be stripped out in order to protect those who provided the data?

Oxfam has developed a data deposit framework to help assess the above questions and make decisions about when and whether data can be open or shared.

(The Engine Room’s Responsible Development Data handbook offers additional guidelines and things to consider)

(See: https://wiki.responsibledata.io/Data_in_the_project_lifecycle for more about the data lifecycle)

Is ‘responsible open data’ an oxymoron?

Responsible Data policies and practices don’t work against open data, our discussant noted. Responsible Data is about developing a framework so that data can be opened and used safely. It’s about respecting the time and privacy of those who have provided us with data and reducing the risk of that data being hacked. As more data is collected digitally and donors are beginning to require organizations to hand over data that has been collected with their funding, it’s critical to have practical resources and help staff to be more responsible about data.

Some disagreed that consent could be truly informed and that open data could ever be responsible, since once data is open, all control over the data is lost. “If you can’t control the way the data is used, you can’t have informed people. It’s like saying ‘you gave us permission to open your data, so if something bad happens to you, oh well….’” Informed consent is also difficult nowadays because data sets are being used together and in ways that were not possible when informed consent was initially obtained.

Others noted that standard informed consent practices are unhelpful, as people don’t understand what might be done with their data, especially when they have low data literacy. Involving local communities and individuals in defining what data they would like to have and use could make the process more manageable and useful for those whose data we are collecting, using and storing, they suggested.

One person said that if consent to open data was not secured initially, the data cannot be opened, say, 10 years later. Another felt that it was one thing to open data for a purpose and something entirely different to say “we’re going to open your data so people can do fun things with it, to play around with it.”

But just what data are we talking about?

USAID was questioned for requiring grantees to share data sets and for leaning towards de-identification rather than raising the standard to data anonymity. One person noted that at one point the agency had proposed a 22-step process for releasing data and even that was insufficient for protecting program participants in a risky geography because “it’s very easy to figure out who in a small community recently received 8 camels.” For this reason, exclusions are an important part of open data processes, he said.

It’s not black or white, said another. Responsible open data is possible, but openness happens along a spectrum. You have financial data on the one end, which should be very open as the public has a right to know how its tax dollars are being spent. Human subjects research is on the other end, and it should not be totally open. (Author’s note: The Open Knowledge Foundation definition of open data says: “A key point is that when opening up data, the focus is on non-personal data, that is, data which does not contain information about specific individuals.” The distinction between personal data, such as that in household level surveys, and financial data on agency or government activities seems to be blurred or blurring in current debates around open data and privacy.) “Open data will blow up in your face if it’s not done responsibly,” he noted. “But some of the open data published via IATI (the International Aid Transparency Initiative) has led to change.”

A participant followed this comment up by sharing information from a research project conducted on stakeholders’ use of IATI data in 3 countries. When people knew that the open data sets existed they were very excited, she said. “These are countries where there is no Freedom of Information Act (FOIA), and where people cannot access data because no one will give it to them. They trusted the US Government’s data more than their own government data, and there was a huge demand for IATI data. People were very interested in who was getting what funding. They wanted information for planning, coordination, line ministries and other logistical purposes. So let’s not underestimate open data. If having open data sets means that governments, health agencies or humanitarian organizations can do a better job of serving people, that may make for a different kind of analysis or decision.”

‘Open by default’ or ‘open by demand’?

Though there are plenty of good intentions and rationales for open data, said one discussant, ‘open by default’ is a mistake. We may have quick wins with a reduction in duplication of data collection, but our experiences thus far do not merit ‘open by default’. We have not earned it. Instead, he felt that ‘open by demand’ is a better idea. “We can put out a public list of the data that’s available and see what demand for data comes in. If we are proactive on what is available and what can be made available, and we monitor requests, we can avoid putting out information that no one is interested in. This would lower the overhead on what we are releasing. It would also allow us to have a conversation about who needs this data and for what.”

One participant agreed, positing that often the only reason that we collect data is to provide proof and evidence that we’re doing our job, spending the money given to us, and tracking back. “We tend to think that the only way to provide this evidence is to collect data: do a survey, talk to people, look at website usage. But is anyone actually using this data, this evidence to make decisions?”

Is the open data honeymoon over?

“We need to do a better job of understanding the impact at a wider level,” said another participant, “and I think it’s pretty light. Talking about open data is too general. We need to be more service oriented and problem driven. The conversation is very different when you are using data to solve a particular problem and you can focus on something tangible like service delivery or efficiency. Open data is expensive and not sustainable in the current setup. We need to figure this out.”

Another person shared results from an informal study on the use of open data portals around the world. He found around 2,500 open data portals, and only 3.8% of them use https (the secure version of http). Most have very few visitors, possibly due to poor Internet access in the countries whose open data they are serving up, he said. Several exist in countries with a poor Freedom House ranking and/or in countries at the bottom end of the World Bank’s Digital Dividends report. “In other words, the portals have been built for people who can’t even use them. How responsible is this?” he asked, “And what is the purpose of putting all that data out there if people don’t have the means to access it and we continue to launch more and more portals? Where’s all this going?”

Are we conflating legal terms?

Legal frameworks around data ownership were debated. Some said that the data belonged to the person or agency that collected it or paid for the cost of collecting in terms of copyright and IP. Others said that the data belonged to the individual who provided it. (Author’s note: Participants may have been referring to different categories of data, e.g., financial data from government vs human subjects data.) The question was raised of whether informed consent for open data in the humanitarian space is basically a ‘contract of adhesion’ (a term for a legally binding agreement between two parties wherein one side has all the bargaining power and uses it to its advantage). Asking a person to hand over data in an emergency situation in order to enroll in a humanitarian aid program is akin to holding a gun to a person’s head in order to get them to sign a contract, said one person.

There’s a world of difference between ‘published data’ and ‘openly licensed data,’ commented our third discussant. “An open license is a complete lack of control, and you can’t be responsible with something you can’t control. There are ways to be responsible about the way you open something, but once it’s open, your responsibility has left the port.” ‘Use-based licensing’ is something else, and most IP is governed by how it’s used. For example, educational institutions get free access to data because they are educational institutions; others pay, and this subsidizes their use of the data, he explained.

One person suggested that we could move from the idea of ‘open data’ to sub-categories related to how accessible the data would be and to whom and for what purposes. “We could think about categories like: completely open, licensed, for a fee, free, closed except for specific uses, etc.; and we could also specify for whom, whose data and for what purposes. If we use the term ‘accessible’ rather than ‘open’ perhaps we can attach some restrictions to it,” she said.

Is data an asset or a liability?

Our current framing is wrong, said one discussant. We should think of data as a toxic asset, since as soon as it’s in our books and systems, it creates proactive costs and proactive risks. Threat modeling is a good approach, he noted. Data can cause a lot of harm to an organization – it’s a liability, and if it’s not used or stored according to local laws, an agency could be sued. “We’re far under the bar. We are not compliant with ‘safe harbor’ or ECOWAS regulations. There are libel questions and property laws that our sector is ignorant of. Our good intentions mislead us in terms of how we are doing things.” There is plenty of room to build good practice here, he noted, for example through Civic Trusts. Another participant noted that insurance underwriters are already moving into this field, meaning that they see growing liability in this space.

How can we better engage communities and the grassroots?

Some participants shared examples of how they and their organizations have worked closely at the grassroots level to engage people and communities in protecting their own privacy and using open data for their own purposes. Threat modeling is an approach that helps improve data privacy and security, said one. “When we do threat modeling, we treat the data that we plan to collect as a potential asset. At each step of collection, storage, sharing process – we ask, ‘how will we protect those assets? What happens if we don’t share that data? If we don’t collect it? If we don’t delete it?’”

In one case, she worked with very vulnerable women working on human rights issues and together the group put together an action plan to protect its data from adversaries. The threats that they had predicted actually happened and the plan was put into action. Threat modeling also helps to “weed the garden once you plant it,” she said, meaning that it helps organizations and individuals keep an eye on their data, think about when to delete data, pay attention to what happens after data’s opened and dedicate some time for maintenance rather than putting all their attention on releasing and opening data.

More funding needs to be made available for data literacy for those whose data has been collected and/or opened. We need to help people think about what data is of use to them also. One person recalled hearing people involved in the creation of the Kenya Open Government Data portal say that the entire process was a waste of time because of low levels of use of any of the data. There are examples, however, of people using open data and verifying it at community level. For example, high school students in one instance found the data on all the so-called grocery stores in their community and went one-by-one checking into them, and identifying that some of these were actually liquor stores selling potato chips, not actual grocery stores. Having this information and engaging with it can be powerful for local communities’ advocacy work.

Are we the failure here? What are we going to do about it?

One discussant felt that ‘data’ and ‘information’ are often and easily conflated. “Data alone is not power. Information is data that is contextualized into something that is useful.” This brings into question the value of having so many data portals, and so much risk, when so little is being done to turn data into information that is useful to the people our sector says it wants to support and empower.

He gave the example of the Weather Channel, a business built around open data sets that are packaged and broadcast, which just got purchased for $2 billion. Channels like radio that would have provided information to the poor were not purchased, only the web assets, meaning that those who benefit are not the disenfranchised. “Our organizations are actually just like the Weather Channel – we are intermediaries who are interested in taking and using open data for public good.”

As intermediaries, we can add value in the dissemination of this open data, he said. If we have the skills, the intention and the knowledge to use it responsibly, we have a huge opportunity here. “However our enlightened intent has not yet turned this data into information and knowledge that communities can use to improve their lives, so are we the failure here? And if so, what are we doing about it? We could immediately begin engaging communities and seeing what is useful to them.” (See this article for more discussion on how ‘open’ may disenfranchise the poor.)

Where to from here?

Some points raised that merit further discussion and attention include:

  • There is little demand or use of open data (such as government data and finances) and preparing and maintaining data sets is costly – ‘open by demand’ may be a more appropriate approach than ‘open by default.’
  • There is a good deal of disagreement about whether data can be opened responsibly. Some of this disagreement may stem from a lack of clarity about what kind of data we are talking about when we talk about open data.
  • Personal data and human subjects data that was never foreseen to be part of “open data” is potentially being opened, bringing with it risks for those who share it as well as for those who store it.
  • Informed consent for personal/human subject data is a tricky concept and it’s not clear whether it is even possible in the current scenario of personal data being ‘opened’ and the lack of control over how it may be used now or in the future, and the increasing ease of data re-identification.
  • We may want to look at data as a toxic asset rather than a beneficial one, because of the liabilities it brings.
  • Rather than a blanket “open” categorization, sub-categorizations that restrict data sets in different ways might be a possibility.
  • The sector needs to improve its understanding of the legal frameworks around data and data collection, storage and use or it may start to see lawsuits in the near future.
  • Work on data literacy and community involvement in defining what data is of interest and is collected, as well as threat modeling together with community groups is a way to reduce risk and improve data quality, demand and use; but it’s a high-touch activity that may not be possible for every kind of organization.
  • As data intermediaries, we need to do a much better job as a sector to see what we are doing with open data and how we are using it to provide services and contextualized information to the poor and disenfranchised. This is a huge opportunity and we have not done nearly enough here.

The Technology Salon is conducted under Chatham House Rule so attribution has not been made in this post. If you’d like to attend future Salons, sign up here

 


Our March 18th Technology Salon NYC covered the Internet of Things and Global Development with three experienced discussants: John Garrity, Global Technology Policy Advisor at Cisco and co-author of Harnessing the Internet of Things for Global Development; Sylvia Cadena, Community Partnerships Specialist, Asia Pacific Network Information Centre (APNIC) and the Asia Information Society Innovation Fund (ISIF); and Andy McWilliams, Creative Technologist at ThoughtWorks and founder and director of Art-A-Hack and Hardware Hack Lab.

By Wilgengebroed on Flickr [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

What is the Internet of Things?

One key task at the Salon was clarifying what exactly is the “Internet of Things.” According to Wikipedia:

The Internet of Things (IoT) is the network of physical objects—devices, vehicles, buildings and other items—embedded with electronics, software, sensors, and network connectivity that enables these objects to collect and exchange data.[1] The IoT allows objects to be sensed and controlled remotely across existing network infrastructure,[2] creating opportunities for more direct integration of the physical world into computer-based systems, and resulting in improved efficiency, accuracy and economic benefit;[3][4][5][6][7][8] when IoT is augmented with sensors and actuators, the technology becomes an instance of the more general class of cyber-physical systems, which also encompasses technologies such as smart grids, smart homes, intelligent transportation and smart cities. Each thing is uniquely identifiable through its embedded computing system but is able to interoperate within the existing Internet infrastructure. Experts estimate that the IoT will consist of almost 50 billion objects by 2020.[9]

As one discussant explained, the IoT involves three categories of entities: sensors, actuators and computing devices. Sensors read data in from the world for computing devices to process via decision logic, which then generates some type of action back out to the world (motors that turn doors, control systems that operate water pumps, actions happening through a touch screen, etc.). Sensors can be anything from video cameras to thermometers or humidity sensors. They can be consumer items (like a garage door opener or a wearable device) or industrial grade (like those that keep giant machinery running in an oil field). Sensors are common in mobile phones, but more and more we see them being de-coupled from cell phones and integrated into or attached to all manner of other everyday things. The boom in the IoT means that whereas in the past a person may have had one IP address for their desktop computer, now they might be occupying several: through their phone, their iPad, their laptop, their Fitbit and a number of other ‘things.’
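
As a rough sketch of that sensor, decision-logic and actuator loop, the following Python snippet uses stand-in functions in place of real device drivers (the sensor is a random number and the actuator is a print statement, both purely illustrative):

    import random
    import time

    THRESHOLD_C = 30.0  # decision logic: run the pump above this temperature

    def read_temperature_c():
        """Pretend sensor: returns a temperature reading in Celsius."""
        return random.uniform(15.0, 45.0)

    def set_pump(on):
        """Pretend actuator: a real system would toggle a relay or motor here."""
        print("pump", "ON" if on else "OFF")

    def control_loop(iterations=5, interval_s=1.0):
        for _ in range(iterations):
            reading = read_temperature_c()     # the sensor reads the world
            set_pump(reading > THRESHOLD_C)    # the computing device decides, the actuator acts
            print(f"temperature={reading:.1f}C")
            time.sleep(interval_s)

    if __name__ == "__main__":
        control_loop()

Whether that decision logic runs on the device itself, on a local hub, or in a vendor’s cloud is one of the design choices that comes up again later in this post.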

Why does IoT matter for Global Development?

Price points for sensors are going down very quickly and wireless networks are steadily expanding — not just wifi but macro cellular technologies. According to one lead discussant, 95% of the world is covered by 2G and two-thirds by 3G networks. Alongside that is a plethora of technology that is wide range and low tech. This means that all kinds of data, all over the world, are going to be available in massive quantities through the IoT. Some are excited about this because of how data can be used to track global development indicators, for example, the type of data being sought to measure the Sustainable Development Goals (SDGs). Others are concerned about the impact of data collected via the IoT on privacy.

What are some examples of the IoT in Global Development?

Discussants and others gave many examples of how the IoT is making its way into development initiatives, including:

  • Flow meters and water sensors to track whether hand pumps are working
  • Protecting the vaccine cold chain – with a 2G thermometer, an individual can monitor the cold chain for local use and the information also goes directly to health ministries and to donors
  • Monitoring the environment and tracking animals or endangered species
  • Monitoring traffic routes to manage traffic systems
  • Managing micro-irrigation of smallholder plots from a distance through a feature phone
  • As a complement to traditional monitoring and evaluation (M&E) — a sensor on a cook stove can track how often a stove is actually used (versus information an individual might provide using recall), helping to corroborate and reduce bias
  • Verifying whether a teacher is teaching or has shown up to school using a video camera

The Cisco publication on the IoT and Global Development provides many more examples and an overview of where the area is now and where it’s heading.

How advanced is the IoT in the development space?

Currently, IoT in global development is very much a hacker space, according to one discussant. There are very few off-the-shelf solutions that development or humanitarian organizations can purchase and readily implement. Some social enterprises are ramping up activity, but there is no larger ecosystem of opportunities for off-the-shelf products.

Because the IoT in global development is at an early phase, challenges abound. Technical issues, power requirements, reliability and upkeep of sensors (which need to be calibrated), IP issues, security and privacy, technical capacity, and policy questions all need to be worked out. One discussant noted that these challenges carry on from the mobile for development (m4d) and information and communication technologies for development (ICT4D) work of the past.

Participants agreed that challenges are currently huge. For example, devices are homogeneous, making them very easy to hack and to affect a lot of devices at once. No one has completely gotten their head around the privacy and consent issues, which are very different than those of using Facebook. There are lots of interoperability issues also. As one person highlighted — there are over 100 different communication protocols being used today. It is more complicated than the old “Betamax vs. VHS” question – we have no idea at this point what the standard will be for IoT.

For those who see the IoT as a follow-on from ICT4D and m4d, the big question is how to make sure we are applying what we’ve learned and avoiding the same mistakes and pitfalls. “We need to be sure we’re not committing the error of just seeing the next big thing, the next shiny device, and forgetting what we already know,” said one discussant. There is plenty of material and documentation on how to avoid repeating past mistakes, he noted. “Read ICTworks. Avoid pilotitis. Don’t be tech-led. Use open source and so on…. Look at the digital principles and apply them to the IoT.”

A higher level question, as one person commented, is around the “inconvenient truth” that although ICTs drive economic growth at the macro level, they also drive income inequality. No one knows how the IoT will contribute or create harm on that front.

Are there any existing standards for the IoT? Should there be?

Because there is so much going on with the IoT – new interventions, different sectors, all kinds of devices, a huge variety in levels of use, from hacker spaces up to industrial applications — there are a huge range of standards and protocols out there, said one discussant. “We don’t really want to see governments picking winners or saying ‘we’re going to use this or that.’ We want to see the market play out and the better protocols bubble up to the surface. What’s working best where? What’s cost effective? What open protocols might be most useful?”

Another discussant pointed out that there is a legacy predating the IoT: machine-to-machine (M2M) communication, which has not always been Internet based. “This legacy is still there. How can we move things forward with regard to standardization and interoperability, yet also avoid leaving out those who are using M2M?”

What’s up with IPv4 and IPv6 and the IoT? (And why haven’t I heard about this?)

Another crucial technical point raised is that of IPv4 and IPv6, something that not many Salon participants had heard of, but that will greatly impact how the IoT rolls out and expands, and just who will be left out of this new digital divide. (Note: I found this video to be helpful for explaining IPv4 vs IPv6.)

“Remember when we used Netscape and we understood how an IP number translated into an IP address…?” asked one discussant. “Many people never get that lovely experience these days, but it’s important! There is a finite number of IP4 addresses and they are running out. Only Africa and Latin America have addresses left,” she noted.

IPv6 has been around for 20 years but there has not been a serious effort to switch over. Yet in order to connect the next billion and the multiple devices that they may bring online, we need more addresses. “Your laptop, your mobile, your coffee pot, your fridge, your TV – for many of us these are all now connected devices. One person might be using 10 IP addresses. Multiply that by millions of people, and the only thing that makes sense is switching over to IPv6,” she said.
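
The arithmetic behind her point is easy to check with Python’s standard ipaddress module; the devices-per-person and population figures below are just rough, illustrative numbers, not measurements:

    import ipaddress

    ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses   # 2**32
    ipv6_total = ipaddress.ip_network("::/0").num_addresses        # 2**128

    print(f"IPv4 address space: {ipv4_total:,}")    # 4,294,967,296 (~4.3 billion)
    print(f"IPv6 address space: {ipv6_total:.2e}")  # ~3.40e+38

    world_population = 8_000_000_000    # rough figure, for illustration only
    devices_per_person = 10             # the back-of-the-envelope number from the discussion
    print(ipv4_total / world_population)                          # ~0.54 IPv4 addresses per person
    print(ipv6_total / (world_population * devices_per_person))   # ~4.3e27 IPv6 addresses per device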

There is a problem with the technical skills and the political decisions needed to make that transition happen. For much of the world, the IoT will not happen very smoothly and entire regions may be left out of the IoT revolution if high level decision makers don’t decide to move ahead with IPv6.

What are some of the other challenges with global roll-out of IoT?

In addition to the IPv4 – IPv6 transition, there are all kinds of other challenges with the IoT, noted one discussant. The technical skills required to make the transition that would enable IoT in some regions, for example Asia Pacific, are sorely needed. Engineers will need to understand how to make this shift happen, and in some places that is going to be a big challenge. “Things have always been connected to the Internet. There are just going to be lots more, different things connected to the Internet now.”

One major challenge is that there are huge ethical questions along with security and connectivity holes (as I will outline later in this summary post, and as discussed in last year’s salon on Wearable Technologies). In addition, noted one discussant, if we are designing networks that are going to collect data for diseases, for vaccines, for all kinds of normal businesses, and put the data in the cloud, developing countries need to have the ability to secure the data, the computing capacity to deal with it, and the skills to do their own data analysis.

“By pushing the IoT onto countries and not supporting the capacity to manage it, instead of helping with development, you are again creating a giant gap. There will be all kinds of data collected on climate change in the Pacific Island Countries, for example, but the countries don’t have capacity to deal with this data. So once more it will be a bunch of outsiders coming in to tell the Pacific Islands how to manage it, all based on conclusions that outsiders are making based on sensor data with no context,” alerted one discussant. “Instead, we should be counseling our people, our countries to figure out what they want to do with these sensors and with this data and asking them what they need to strengthen their own capacities.”

“This is not for the SDGs and ticking off boxes,” she noted. “We need to get people on the ground involved. We need to decentralize this so that people can make their own decisions and manage their own knowledge. This is where the real empowerment is – where local people and country leaders know how to collect data and use it to make their own decisions. The thing here is ownership — deploying your own infrastructure and knowing what to do with it.”

How can we balance the shiny devices with the necessary capacities?

Although the critical need to invest in and support country-level capacity to manage the IoT has been raised, this type of back-end work is always much less ‘sexy’ and less interesting for donors than measuring some development programming with a flashy sensor. “No one wants to fund this capacity strengthening,” said one discussant. “Everyone just wants to fund the shiny sensors. This chase after innovation is really damaging the impact that technology can actually have. No one just lets things sit and develop — to rest and brew — instead we see everyone rushing onto the next big thing. This is not a good thing for a small country that doesn’t have the capacity to jump right into it.”

All kinds of things can go wrong if people are not trained on how to manage the IoT. Devices can be hacked and they may be collecting and sharing data without an individual’s knowledge (see Geoff Huston on The Internet of Stupid Things). Electrical short-outs, common in places with poor electricity ecosystems, can also cause big problems. In addition, the Internet is affected by legacy systems – so we need interoperability that goes backwards, said one discussant. “If we don’t make at least a small effort to respect those legacy systems, we’re basically saying ‘if you don’t have the funding to update your system, you’re out.’ This then reinforces a power dynamic where countries need the international community to give them equipment, or they need to buy this or buy that, and to bring in international experts from the outside…. The pressure on poor countries to make things work, to do new kinds of M&E, to provide evidence is huge. With that pressure comes a higher risk of falling behind very quickly. We are also seeing pilot projects that were working just fine without fancy tech being replaced by newfangled tech-type programs instead of being supported over the longer term,” she said.

Others agreed that the development sector’s fascination with shiny and new is detrimental. “There is very little concern for the long-term, the legacy system, future upgrades,” said one participant. “Once the blog post goes up about the cool project, the sensors go bad or stop working and no one even knows because people have moved on.” Another agreed, citing that when visiting numerous clinics for a health monitoring program in one country, the running joke among the M&E staff was “OK, now let’s go and find the broken solar panel.” “When I think of the IoT,” she said, “I think of a lot of broken devices in 5 years.” The aspect of eWaste and the IoT has not even begun to be examined or quantified, noted another.

It is increasingly important for governments to understand how the Internet works, because they are making policy about it. Manufacturers need to better understand how the tech works on the ground, especially in different contexts that they are not accustomed to working in. Users need a better understanding of all of this because their privacy is at risk. Legal frameworks around data and national laws need more attention as well. “When you are working with restrictive governments, your organization’s or start-up’s idea might actually be illegal or close to a sedition law and you may end up in jail,” noted one discussant.

What choices will organizations need to make regarding the IoT?

When it comes to actually making decisions on how involved an organization should and can be in supporting or using the IoT, one critical choice will be related to the suite of devices, said our third discussant. Will it be a cloud device? A local computing device? A computer?

Organizations will need to decide if they want a vendor that gives them a package, or if they want a modular, interoperable approach built from units. They will need to think about aspects like whether to go with proprietary or open source, and whether the system will be plug and play.

There are trade-offs here and key technical infrastructure choices will need to be made based on a certain level of expertise and experience. If organizations are not sure what they need, they may wish to get some advice before setting up a system or investing heavily.

As one discussant put it, “When I talk about the IoT, I often say to think about what the Internet was in the 90s. Think about that hazy idea we had of what the Internet was going to be. We couldn’t have predicted in the 90s what today’s Internet would look like, and we’re in the same place with the IoT,” he said. “There will be seismic change. The state of the whole sector is immature now. There are very hard choices to make.”

Another aspect that’s representative of the IoT’s early stage, he noted, is that the discussion is all focusing on http and the Internet. “The IoT doesn’t necessarily even have to involve the Internet,” he said.

Most vendors are offering a solution with sensors to deploy, actuators to control and a cloud service where you log in to find your data. The default model is that the decision logic takes place there in the cloud, where data is stored. In this model, the cloud is in the middle, and the devices are around it, he said, but the model does not have to be that way.

Other models can offer more privacy to users, he said. “When you think of privacy and security – the healthcare maxim is ‘do no harm.’ However this current, familiar model for the IoT might actually be malicious.” The reason that the central node in the commercial model is the cloud is because companies can get more and more detailed information on what people are doing. IoT vendors and IoT companies are interested in extending their profiles of people. Data on what people do in their virtual life can now be combined with what they do in their private lives, and this has huge commercial value.

One option to look at, he shared, is a model that has a local connectivity component. This can be something like bluetooth mesh, for example. In this way, the connectivity doesn’t have to go to the cloud or the Internet at all. This kind of set-up may make more sense with local data, and it can also help with local ownership, he said. Everything that happens in the cloud in the commercial model can actually happen on a local hub or device that opens just for the community of users. In this case, you don’t have to share the data with the world. Although this type of a model requires greater local tech capacity and can have the drawback that it is more difficult to push out software updates, it’s an option that may help to enhance local ownership and privacy.
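
A minimal sketch of that local-hub idea, with hypothetical class and method names rather than any real IoT library, might look like the following: raw readings and the decision logic stay on the hub, and at most a coarse aggregate is shared upstream, and only if sharing is switched on.

    from statistics import mean

    class LocalHub:
        """Illustrative 'decision logic at the edge': data and decisions stay local."""

        def __init__(self, share_aggregates=False):
            self.readings = []
            self.share_aggregates = share_aggregates  # sharing upstream is opt-in, not the default

        def ingest(self, reading):
            """Raw readings stay on hardware the community controls."""
            self.readings.append(reading)

        def decide(self, threshold):
            """Decision logic runs locally, with no round trip to a vendor's cloud."""
            return bool(self.readings) and mean(self.readings) > threshold

        def publish_summary(self):
            """Only a coarse aggregate ever leaves the hub, and only if enabled."""
            if self.share_aggregates and self.readings:
                return {"count": len(self.readings), "mean": round(mean(self.readings), 1)}
            return None

    hub = LocalHub(share_aggregates=False)
    for reading in (21.5, 22.0, 35.2):
        hub.ingest(reading)
    print(hub.decide(threshold=25.0))   # True: act locally (e.g. raise an alert)
    print(hub.publish_summary())        # None: nothing is sent upstream

The trade-off the discussant flagged still applies: keeping everything local means updates and maintenance also have to happen locally, which demands more technical capacity on the ground.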

This requires a ‘person first’ concept of design. “When you are designing IoT systems,” he said, “start with the value you are trying to create for individuals or organizations on the ground. And then implement the local part that you need to give local value. Then, only if needed, do you add on additional layers of the onion of connectivity, depending on the project.” The first priority here is the set of goals that the technology design will achieve for individual value, for an individual client or community, not for commercial use of people’s data.

Another point that this discussant highlighted was the need to conduct threat modeling and to think about unintended consequences. “If someone hacked this data – what could go wrong?” He suggested working backwards and thinking: “What should I take offline? How do I protect it better? How do I anonymize it better?”

In conclusion….

It’s critical to understand the purpose of an IoT project or initiative, discussants agreed, to understand if and why scale is needed, and to be clear about the drivers of a project. In some cases, the cloud is desirable for quicker, easier set up and updates to software. At the same time, if an initiative is going to be sustainable, then community and/or country capacity to run it, sustain it, keep it protected and private, and benefit from it needs to be built in. A big part of that capacity includes the ability to understand the different layers that surround the IoT and to make grounded decisions on the various trade-offs that will come to a head in the process of design and implementation. These skills and capacities need to be developed and supported within communities, countries and organizations if the IoT is to contribute ethically and robustly to global development.

Thanks to APNIC for sponsoring and supporting this Salon and to our friends at ThoughtWorks for hosting! If you’d like to join discussions like this one in cities around the world, sign up at Technology Salon

Salons are held under Chatham House Rule, therefore no attribution has been made in this post.


Our December 2015 Technology Salon discussion in NYC focused on approaches to girls’ digital privacy, safety and security. By extension, the discussion included ways to reduce risk for other vulnerable populations. Our lead discussants were Ximena Benavente, Girl Effect Mobile (GEM), and Jonathan McKay, Praekelt Foundation. I also shared a draft Girls’ Digital Privacy, Safety and Security Policy and Toolkit I’ve been working on with both organizations over the past year.

Girls’ digital privacy, safety and security risks

Our first discussant highlighted why it’s important to think specifically about girls and digital security. In part, this is because different factors and vulnerabilities combine, exacerbating girls’ levels of risk. For example, girls living on less than $2 per day likely only have access to basic mobile phones, which are often borrowed from parents or siblings. The organization she works with always starts with deep research on aspects like ownership vs. borrowship and whether girls’ mobile usage is free/unlimited and un-supervised or controlled by gatekeepers such as parents, brothers, or other relatives. This helps to design better tools, services and platforms and to design for safety and security, she said. “Gatekeepers are very restrictive in many cases, but parental oversight is not necessarily a bad thing. We always work with parents and other gatekeepers as well as with girls themselves when we design and test.” When girls are living in more traditional or conservative societies, she said, we also need to think about how content might affect girls both online and offline. For example, “is content sufficiently progressive in terms of girls’ rights, yet safe for girls to read, comment on or discuss with friends and family without severe retaliation?”

Research suggests that girls who are more vulnerable offline (due to poverty or other forms of marginalization) are likely also more vulnerable to certain risks online, so we design with that in mind, she said. “When we started off on this project, our team members were experts in digital, but we had less experience with the safety and privacy aspects when it comes to girls living under $2/day or who were otherwise vulnerable. Having additional guidance and developing a policy on this aspect has helped immensely – but it has also slowed our processes down and sometimes made them more expensive,” she noted. “We had to go back to everything and add additional layers of security to make it as safe as possible for girls. We have also made sure to work very closely with our local partners to be sure that everyone involved in the project is aware of girls’ safety and security.”

Social media sites: Open, Closed, Private, Anonymous?

One issue that came up was safety for children and youth on social media networks. A Salon participant said his organization had thought about developing this type of a network several years back but decided in the end that the security risks outweighed the advantages. Participants discussed whether social media networks can ever be safe. One school of thought is that the more open a platform, the safer it is, as “there is no interaction in private spaces that cannot be constantly monitored or moderated.” Some worry about open sites, however, and set up smaller, closed, private groups that were closely monitored. “We work with victims of violence to share their stories and coping mechanisms, so, for us, private groups are a better option.”

Some suggested that anonymity on a social media site can protect girls and other vulnerable groups; however, there is also research showing that Internet anonymity contributes to an increase in activities such as bullying and harassment. Some Salon participants felt that it was better to leverage existing platforms and try to use them safely. Others felt that there are no existing social media platforms that have enough security for girls or other vulnerable groups to use with appropriate levels of risk. “We sometimes recruit participants via existing social media platforms,” said one discussant, “but we move people off of those sites to our own more secure sites as soon as we can.”

Moderation and education on safety

Salon participants working with vulnerable populations said that they moderate their sites very closely and remove comments if users share personal information or use offensive language. “Some project budgets allow us to have a moderator check every 2 hours. For others, we sweep accounts once a day and remove offensive content within 24 hours.” One discussant uses moderation to educate the community. “We always post an explanation about why a comment was removed in order to educate the larger user base about appropriate ways to use the social network,” he said.

Close moderation becomes difficult and costly, however, as the user base grows and a platform scales. This means individual comments cannot be screened and pre-approved, because that would take too long and defeat the purpose of an engaging platform. “We need to acknowledge the very real tension between building a successful and engaging community and maintaining privacy and security,” said one Salon participant. “The more you lock it down and the more secure it is, the harder it is to create a real and active community.”

Another participant noted that they use their safe, closed youth platform to educate and reinforce messaging about what is safe and positive use of social media in hopes that young people will practice safe behaviors when they use other platforms. “We know that education and awareness raising can only go so far, however,” she said, “and we are not blind to that fact.” She expressed concern about risk for youth who speak out about political issues, because more and more governments are passing laws that punish critics and censor information. The organization, however, does not want to encourage youth to stop voicing opinions or participating politically.

Data breaches and project close-out

One Salon participant asked if organizations had examples of actual data breaches, and how they had handled them. Though no one shared examples, it was recommended that every organization have a contingency plan in place for accidental data leaks or a data breach or data hack. “You need to assume that you will get hacked,” said one person, “and develop your systems with that as a given.”

In addition to the day-to-day security issues, we need to think about project close-out, said one person. “Most development interventions are funded for a short, specific period of time. When a project finishes, you get a report, you do your M&E, and you move on. However, the data lives on, and the effects of the data live on. We really need to think more about budgeting for proper project wind-down and ensure that we are accountable beyond the lifetime of a project.”

Data security, anonymization, consent

Another question related to using and keeping girls’ (and others’) data safe. “Consent to collect and use data on a website or via a mobile platform can be tricky, especially if we don’t know how to explain what we might do with the data,” said one Salon participant. Another suggested it would be better not to collect any data at all: “Why do we even need to collect this data? Who is it for?” he asked. Others countered that this data is often the only way to understand what people are doing on the site, to make adjustments and to measure impact.

One scenario was shared where several partner organizations discussed opening up a country’s cell phone data records to help contain a massive public health epidemic, but the privacy and security risks were too great, so the idea was scrapped. “Some said we could anonymize the data, but you can never really and truly anonymize data. It would have been useful to have a policy or a rubric that would have guided us in making that decision.”
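The point that data can rarely be fully anonymized is worth making concrete. Below is a minimal sketch (the field names and records are hypothetical, not from the scenario described) showing why simply hashing phone numbers in call records is pseudonymization rather than anonymization: the remaining location and timing columns can still single out an individual.

```python
import hashlib

# Hypothetical call-detail records: (phone_number, cell_tower, timestamp)
records = [
    ("+254700000001", "tower_17", "2016-05-02T08:01"),
    ("+254700000001", "tower_03", "2016-05-02T18:45"),
    ("+254700000002", "tower_17", "2016-05-02T08:03"),
]

def pseudonymize(number: str) -> str:
    """Replace the phone number with a hash. This hides the raw number,
    but the same person still maps to the same stable identifier."""
    return hashlib.sha256(number.encode()).hexdigest()[:12]

pseudo_records = [(pseudonymize(n), tower, ts) for n, tower, ts in records]

# The quasi-identifiers (tower + time) remain. Anyone who knows roughly where
# a person lives and works can often re-identify them from a few points,
# which is why "anonymized" mobility data is still high-risk.
for row in pseudo_records:
    print(row)
```

Even a handful of tower/time points is often enough to re-identify someone whose movements are partly known, which is exactly why a policy or rubric for such decisions is more useful than a blanket assumption that “anonymized” data is safe.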

Policy and Guidelines on Girls’ Privacy, Security and Safety

Policy guidelines on responsible data for NGOs, data security, privacy and other aspects of digital security do exist. (Here are some that we compiled, along with some other resources.) Most IT departments also have strict guidelines when it comes to donor data (in the case of credit card and account information, for example), but this does not always cross over to program-level ICT or M&E efforts that involve the populations NGOs serve through their programming.

General awareness around digital security is increasing, in part due to recent major corporate data hacks (e.g., Target, Sony) and the Edward Snowden revelations from a few years back, but much more needs to be done to educate NGO staff and management on the type of privacy and security measures that need to be taken to protect the data and mitigate risk for those who participate in their programs.  There is an argument that NGOs should have specific digital privacy, safety and security policies that are tailored to their programming and that specifically focus on the types of digital risks that girls, women, children or other vulnerable people face when they are involved in humanitarian or development programs.

One such policy (focusing on vulnerable girls) and its accompanying toolkit (principles and values, guidelines, checklists and a risk matrix template) were shared at the Salon. (Disclosure: this policy toolkit is one that I am working on. It should be ready to share in early 2016.) The policy and toolkit take program implementers through a series of issues and questions to help them assess potential risks and tradeoffs in a particular context, and to document decisions and improve accountability. The toolkit covers:

  1. Data privacy and security – using approaches like Privacy by Design, setting limits on the data that is collected, and achieving meaningful consent (a minimal illustration follows this list).
  2. Platform content and design – ensuring that content produced for girls, or that girls themselves produce or volunteer, does not put girls at risk.
  3. Partnerships – vetting and managing partners who may provide online/offline services or who may partner on an initiative and want access to data, and the monetization of girls’ data.
  4. Monitoring, evaluation, research and learning (MERL) – how program implementers will gather and store digital data when collecting it directly or through third parties for organizational MERL purposes.
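As a minimal illustration of item 1 above (data minimization and consent tied to a stated purpose), the sketch below keeps only a whitelist of fields and records the consent given for that purpose. The field names and purpose string are hypothetical, not drawn from the toolkit itself.

```python
from datetime import datetime, timezone

# Hypothetical: the only fields this program purpose actually needs.
ALLOWED_FIELDS = {"age_band", "district", "preferred_language"}

def register_participant(submitted: dict, consent_given: bool) -> dict:
    """Keep only whitelisted fields and attach a consent record tied to a
    specific, named purpose; anything else submitted is dropped."""
    if not consent_given:
        raise ValueError("No consent recorded; nothing is stored.")
    record = {k: v for k, v in submitted.items() if k in ALLOWED_FIELDS}
    record["consent"] = {
        "purpose": "programme monitoring only",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return record

# Extra fields (e.g. full name, phone number) are silently discarded.
print(register_participant(
    {"age_band": "15-19", "district": "Kisumu", "full_name": "..."},
    consent_given=True,
))
```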

Privacy, Security and Safety Implications

Our final discussant spoke about the implications of implementing the above-mentioned girls’ privacy, safety and security policy. He started out saying that the policy opens with a manifesto: We will not compromise a girl in any way, nor will we opt for solutions that cut corners in terms of cost, process or time at the expense of her safety. “I love having this as part of our project manifesto,” he said. “It’s really inspiring! On the flip side, however, it makes everything I do more difficult, time consuming and expensive!”

To demonstrate some of the trade-offs and decisions required when working with vulnerable girls, he gave examples of how the current project (implemented with girls’ privacy and security as a core principle) differed from a commercial social media platform and advertising campaign he had previously worked on, where the main concern was the reputation of the corporation rather than the platform’s users and the risks those users might expose themselves to by using it.

Moderation

On the private sector platform, said the discussant, “we didn’t have the option of pre-moderating comments because of the budget and because we had 800,000 users. To meet the campaign goals, it was more important for users to be engaged than to ensure content was safe. We focused on removing pornographic photos within 24 hours, using algorithms based on how much skin tone was in the photo.” In the fields of marketing and social media, it is fairly well known that heavy-handed moderation kills platform engagement. “The more we educated and informed users about comment moderation, or removed comments, the deader the community became. The more draconian the moderation, the lower the engagement.”
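For readers curious what a crude skin-tone heuristic of the kind described might look like, here is a hedged sketch (not the discussant’s actual system; the colour rule and threshold are illustrative only) that flags an image for human review when a large share of its pixels fall within a rough skin-colour range.

```python
from PIL import Image

def skin_pixel(r: int, g: int, b: int) -> bool:
    """Very rough RGB rule for 'skin-like' pixels; illustrative only.
    Real systems use better colour spaces and learned models."""
    return r > 95 and g > 40 and b > 20 and r > g and r > b and abs(r - g) > 15

def flag_for_review(path: str, threshold: float = 0.4) -> bool:
    """Flag an image for moderator review if more than `threshold`
    of its pixels look like skin."""
    img = Image.open(path).convert("RGB").resize((64, 64))
    pixels = list(img.getdata())
    skin = sum(1 for r, g, b in pixels if skin_pixel(r, g, b))
    return skin / len(pixels) > threshold

# Usage idea: queue flagged uploads for a human moderator rather than auto-deleting.
# if flag_for_review("upload.jpg"): send_to_moderation_queue("upload.jpg")
```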

The discussant had also worked on a platform for youth to discuss and learn about sexual health and practices, where he said that users responded angrily to moderators and to comment removals that restricted their participation. “We did expose our participants to certain dangers, but we also knew that social digital platforms are more successful when they provide their users with a sense of ownership and control. So we identified users who exhibited desirable behaviors and created a different tier of users (super users) who could take ownership, policing the platform by flagging comments as inappropriate or temporarily banning users.” This allowed a 25% decrease in moderation workload. The organization discovered, however, that it had to be careful about how much power these super users had. “They ended up creating certain factions on the platform, and we then had to develop safeguards and additional mechanisms by which we moderated our super users!”
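A minimal sketch of the tiered “super user” flagging idea described above (roles, weights and thresholds are hypothetical): flags from trusted users carry more weight, while super users remain subject to review so that factions cannot silently censor one another.

```python
from collections import defaultdict

FLAG_WEIGHT = {"member": 1, "super_user": 3}
HIDE_THRESHOLD = 5  # combined flag weight needed to hide a comment pending staff review

flags = defaultdict(list)  # comment_id -> list of (user_id, role)

def flag_comment(comment_id: str, user_id: str, role: str) -> bool:
    """Record a flag; return True if the comment should be hidden
    and routed to a staff moderator."""
    flags[comment_id].append((user_id, role))
    weight = sum(FLAG_WEIGHT[r] for _, r in flags[comment_id])
    return weight >= HIDE_THRESHOLD

def audit_super_users(overturn_rates: dict) -> list:
    """Periodic safeguard: super users whose flags are mostly overturned
    by staff moderators (here, more than half) lose the elevated role."""
    return [user for user, rate in overturn_rates.items() if rate > 0.5]
```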

Direct Messages among users

In the private sector project example, engagement was measured by the number of direct or private messages sent between platform users. In the current scenario, however, said the discussant, “we have not allowed any direct messages between platform users because of the potential risks to girls of having places on the site that are hidden from moderators. So as you can see, we are removing some of our metrics by disallowing features because of risk. These activities are all things that would make the platform more engaging but there is a big fear that they could put girls at risk.”

Adopting a privacy, security, and safety policy

One discussant highlighted the importance of having privacy, safety and security policies before a project or program begins. “If you start thinking about it later on, you may have to go back and rebuild things from scratch because your security holes are in the design….” The way a database is set up to capture user data can make it difficult to query in the future or for users to have any control of what information is or is not being shared about them. “If you don’t set up the database with security and privacy in mind from the beginning, it might be impossible to make the platform safe for girls without starting from scratch all over again,” he said.
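One concrete way to “set up the database with security and privacy in mind from the beginning” is to separate identifying details from activity data at the schema level, so that the identifying table can be access-restricted or deleted without losing program data. A minimal sqlite sketch (the tables and columns are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Identifying details live in one tightly controlled table...
CREATE TABLE participant_identity (
    participant_id TEXT PRIMARY KEY,   -- random ID, not a name or phone number
    contact_info   TEXT                -- encrypted at the application layer
);

-- ...while activity data references only the opaque ID.
CREATE TABLE platform_activity (
    participant_id TEXT REFERENCES participant_identity(participant_id),
    action         TEXT,
    occurred_at    TEXT
);
""")

-- At project close-out, the identity table can be dropped (or handed back to
-- participants) while aggregate activity data remains usable.
""" if False else None
conn.execute("DROP TABLE participant_identity")
```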

He also cautioned that when making more secure choices from the start, platform and tool development generally takes longer and costs more. It can be harder to budget because designers may not have experience with costing and developing the more secure options.

“A valuable lesson is that you have to make sure that what you’re trying to do in the first place is worth it if it’s going to be that expensive. Is it worth a girl’s while to use a platform if she first has to wade through a 5-page terms and conditions document on a small mobile phone screen? Are those terms and conditions even relevant to her personally or within her local context? Every click you ask a user to make will reduce their interest in reaching the platform. And if we don’t imagine that a girl will want to click through 5 screens of terms and conditions, the whole effort might not be worth it.” Clearly, aspects such as terms and conditions and consent processes need to be designed specifically to fit new contexts and new kinds of users.

Making responsible tradeoffs

The Girls’ Privacy, Security and Safety policy and toolkit shared at the Salon includes a risk matrix in which project implementers rank the intensity and probability of risks as high, medium or low. Based on how a situation, feature or other potential aspect is ranked, and on whether serious risks can be mitigated, decisions are made about whether to proceed. There will always be areas with a certain level of risk to the user; the key is in making decisions and trade-offs that balance the level of risk against the potential benefits or rewards of the tool, service, or platform. The toolkit can also help project designers to imagine potential unintended consequences and mitigate the risks related to them. The policy also offers a way to systematically and proactively consider potential risks, decide how to handle them, and document decisions so that organizations and project implementers are accountable to girls, peers, partners, and organizational leadership.
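To make the risk matrix idea concrete, here is a minimal sketch (the scores, thresholds and example entries are invented for illustration, not taken from the toolkit): each risk gets a likelihood and an impact ranking, and the combination, together with whether the risk can be mitigated, drives a documented proceed / mitigate / stop decision.

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def assess(risk: str, likelihood: str, impact: str, mitigable: bool) -> str:
    """Combine likelihood and impact into a documented decision."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6 and not mitigable:
        decision = "do not proceed"
    elif score >= 4:
        decision = "proceed only with documented mitigation"
    else:
        decision = "proceed; monitor"
    return f"{risk}: {likelihood}/{impact} -> {decision}"

# Example entries (hypothetical):
print(assess("direct messages expose girls to strangers", "high", "high", mitigable=False))
print(assess("moderator misses offensive comment overnight", "medium", "medium", mitigable=True))
```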

“We’ve started to change how we talk about user data in our organization,” said one discussant. “We have stopped thinking about it as something WE create and own, and more as something GIRLS own. Banks don’t own people’s money – they borrow it for a short time. We are trying to think about data that way in the conversations we’re having about data, funding, business models, proposals and partnerships: you don’t get to own your users’ data, and we’re not going to share de-anonymized data with you. We’re seeing data legislation in some of the countries where we work moving in that direction as well, so it’s good to be thinking about this now and getting prepared.”

Take a look at our list of resources on the topic and add anything we may have missed!

 

Thanks to our friends at ThoughtWorks for hosting this Salon! If you’d like to join discussions like this one, sign up at Technology Salon. Salons are held under Chatham House Rule, therefore no attribution has been made in this post.

Read Full Post »

by Hila Mehr and Linda Raftree

On March 31, 2015, nearly 40 participants, joined by lead discussants Robert Fabricant, Dalberg Design Team; Despina Papadopoulos, Principled Design; and Roop Pal, PicoSatellite eXploration Lab, came together for Technology Salon New York City, where we discussed the future of wearables in international development. What follows is a summary of our discussion.

While the future of wearables is uncertain, major international development stakeholders are already incorporating wearables into their programs. UNICEF Kid Power is introducing wearables into the fight against malnutrition, and is launching a Global Wearables Challenge. The MUAC (mid-upper arm circumference) band already exists in international health. Other participants present were working on startups using wearables to tackle global health and climate change.

As Kentaro Toyama often says, “technology is an amplifier of human intent,” and the Tech Salon discussion certainly resonated with that sentiment. The future of wearables in international development is one that we–the stakeholders as consumers, makers, and planners–will create. It’s important to recognize the history of technology interventions in international development, and that while wearables enable a new future, technology interventions are not new; there is a documented history of failures and successes to learn from. Key takeaways from the Salon, described below, include reframing our concept of wearables, envisioning what’s possible, tackling behavior change, designing for context, and recognizing the tension between data and privacy.

Reframing our Concept of Wearables

Our first discussant shared historical and current examples of wearables, some from as far back as the Middle Ages, and encouraged participants to rethink the concept of wearables by moving beyond the Apple Watch and existing, primarily health-related, use cases. Intel, Arm, and Apple want to put chips on and in our bodies, and we tend to think of these as the first wearables, but glasses have always been wearable, and watches are wearables that changed our notions of time and space. In short, technology has always been wearable. If we stay focused on existing, primarily luxury, use cases like FitBit and Apple Watch, we lose our creativity in imagining new use cases for varying scenarios, he said.

In many cases of technology introduction into a ‘developing world’ context, the technology adds a burden rather than contributing ease. We should be thinking about how wearables can capture data without requiring input, for example. There is also an intimacy with wearables that could eliminate or reframe some of the ingrained paradigms with existing technologies, he noted.

In the most common use cases of wearables and other technology in international development, data is gathered and sent up the chain. Participants should rethink this model and use of wearables and ensure that any data collected benefits people in the moment. This, said the discussant, can help justify the act of wearing something on the body. The information gathered must be better incorporated into a personal-level feedback loop. “The more intimate technology becomes, the greater responsibility you have for how you use it,” he concluded. 

In the discussion of reframing our notion of wearables, our second discussant offered a suggestion as to why people are so fascinated with wearables. “It’s about the human body connected to the human mind,” she explained. “What is it to be human? That’s why we’re so fascinated with wearables. They enlarge the notion of technology, and the relationship between machine, human, and animal.”

Envisioning What’s Possible

In discussing the prominent use of wearables for data collection, one participant asked, “What is possible to collect from the body? Are we tracking steps because that is what we want to track or because that is what’s possible? What are those indicators that we’ve chosen and why?”

We need to approach problems by thinking about both our priorities and what’s possible with wearable technology, was one reply. “As consumers, designers, and strategists, we need to push more on what we want to see happen. We have a 7-year window to create technology that we want to take root,” noted our lead discussant.

She then shared Google Glass as an example of makers forgetting what it is to be human. While Google Glass is a great use case for doctors in remote areas or operators of complex machinery, Google Glass at dinner parties and in other social interactions quickly became problematic, requiring Google to publish guidelines for social use cases. “It’s great that it’s out there as a blatant failure to teach other designers to take care of this space,” she said.

Another discussant felt that the greatest opportunity lies in the hybrid space between the specialized and the generalized. The specialized use cases for wearables have high medical value; then there are the generalized cases. New, expensive technology becomes cheaper and more accessible as it finds hybrid use cases between the specialized and the generalized that justify its cost and sophistication. Developing far-out and futuristic ideas, such as one lead discussant’s idea for a mind-controlled satellite, can also offer opportunities for those working with and studying technology to unpack and ‘de-scaffold’ the layers between the wearable technology itself and the data and future it may bring with it.

Tackling Behavior Change

One of the common assumptions with wearables is that our brains work in a mechanical way, and that if we see a trend in our data, we will change our behavior. But wearables have proven that is not the case. 

The challenge with wearables in the international development context is making sure that the data collected serves a market and consumer need — what people want to know about themselves — and that wearables are not only focused on what development organizations and researchers want to know. Additionally, the data needs to be valuable and useful to individuals. For example, if a wearable tracks iron levels but the individual doesn’t understand the intricacies of nutrition, their fluctuations in iron levels will be of no use.

Nike Plus and its FuelBand have been among the most successful activity trackers to date, argued one discussant, because of the online community created around the device. “It wasn’t the wearable device that created behavior change, but the community sharing that went with it.” One participant trained in behavioral economics noted the huge potential for academic research and behavioral economists in the data collected from wearables. A program she had worked on looked closely at test-taking behaviors of boys versus those of girls, and wearables were able to track and detect specific behaviors that were later analyzed and compared.

Designing for Context

Mainstream wearables are currently tailored for the consumer profile of the 35-year-old male fitness buff. But how do we think about the broader population, on the individual and community level? How might wearables serve the needs of those in emergency, low resource, or conflict settings? And what are some of the concerns with wearables?

One participant urged the group to think more creatively. “I’m having trouble envisioning this in the humanitarian space. 5-10 years out, what are concrete examples of someone in Mali, Chad, or Syria with a wearable? How is it valuable? And is there an opportunity to leapfrog with this technology?”

Humanitarian disaster contexts often face massive chaos, low literacy rates, and unreliable Internet connectivity, if Internet access exists at all. How can wearables be useful in these cases? One participant suggested they could be used for better ways of coordinating and organizing — such as a warning siren signal wearable for individuals in warzones, or a water delivery signal wearable for when water arrives — while keeping in mind real restrictions. For example, there are fears today about vaccines and other development agency interventions, and these may escalate with wearable or edible tracking devices.

No amount of creativity, however, replaces the realistic and sustainable value of developing technology that addresses real needs in local contexts. That’s where human-centered design and participatory processes play a vital role. Wearable products cannot be built in isolation without users, as various participants highlighted.

As one lead discussant said, we too often look at technology as a magic bullet and we need to avoid doing this again when it comes to wearables. We can only know if wearable technology is an appropriate use case by analyzing the environment and understanding the human body. In Afghanistan, she noted, everyone has an iPhone now, and that’s powerful. But not everyone will have a FitBit, because there is no compelling use case.

Appropriate use cases can be discovered by involving the community of practice from day one, making no assumptions, and showing and sharing methodology and processes. Makers and planners should also be wary of importing resources and materials and thereby creating an entirely new ecosystem: if a foreign product breaks and there is no local access to materials and training, it won’t be fixed or sustained. Designing for context also means designing with local resources, tailored to what the community currently has access to. At the same time, international development efforts and wearable technology should be about empowering people, not infantilizing them.

The value of interdisciplinary teams and systems maps cannot be overlooked, participants added. Wearables highlight our individual-centric nature, while systems thinking and mapping shows how we relate with ourselves, our community, and the world. Thinking about all of these levels will be important if wearables are to contribute to development in a positive way.

Tensions around Privacy, Data, and Unethical Uses

Wearables exist in tension with identity, intimacy, and privacy. As consumers, users, makers, and planners of wearables, we have to think critically and deeply about how we want our data to be shared. One discussant emphasized that we need to involve VCs, industry, and politicians in discussions around the ethical implications of wearable technology products. The political implications and erosion of trust may be even more complex in developing world contexts, making consortia and standards even more necessary.

One participant noted the risks of medical wearable technology and the lack of HIPAA privacy requirements in other countries. The lack of HIPAA should not mean that privacy concerns are glossed over. The ethics of testing apply no matter the environment, and testing completely inappropriate technology in a developing context just for the captive audience is ethically questionable.

Likewise, other participants raised the issue of wearables and other types of technology being used for torture, mind control and other nefarious purposes, especially as the science of ‘mind hacking’ and the development of wearables and devices inserted under the skin becomes more sophisticated.

Participants noted the value in projects like the EU’s Ethics Inside and the pressure for a UN Representative on privacy rights. But there is still much headway to be made as data privacy and ethical concerns only grow.

The Future We Wear

The rapid evolution of technology urges us to think about how technology affects our relationships with our body, family, community, and society. What do we want those relationships to look like in the future? We have an opportunity, as consumers, makers and planners of wearables for the international context to view ourselves as stakeholders in building the future opportunities of this space. Wearables today are where the Internet was during its first five mainstream years. Now is the perfect time to put our stake in the ground and create the future we wish to exist in.

***

Our Wearables and Development background reading list is available here. Please add articles or other relevant resources or links.

Other posts about the Salon, from Eugenia Lee and Hila Mehr.

Many thanks to our lead discussants and participants for joining us, and a special thank you to ThoughtWorks for hosting us and providing breakfast!

Technology Salons run under Chatham House Rule, therefore no attribution has been made in this summary post. If you’d like to join future Salons to discuss these and related issues at the intersection of technology and development, sign up at Technology Salon.

Read Full Post »

This is a cross post from Heather Leson, Community Engagement Director at the Open Knowledge Foundation. The original post appeared here on the School of Data site.

by Heather Leson

What is the currency of change? What can coders (consumers) do with IATI data? How can suppliers deliver the data sets? Last week I had the honour of participating in the Open Data for Development Codeathon and the International Aid Transparency Initiative Technical Advisory Group meetings. IATI’s goal is to make information about aid spending easier to access, use, and understand. It was great that these events were held back-to-back, which pushed a big-picture view.
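As one concrete example of what a coder/consumer can do with IATI data, the sketch below parses an IATI 2.x activities XML file (downloaded from the IATI Registry; the file name here is hypothetical) and counts activities by recipient country.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Assumes an IATI 2.x activities file downloaded from the IATI Registry.
tree = ET.parse("activities.xml")
by_country = defaultdict(list)

for activity in tree.getroot().findall("iati-activity"):
    title_el = activity.find("title/narrative")
    title = title_el.text if title_el is not None else "(untitled)"
    for country in activity.findall("recipient-country"):
        by_country[country.get("code")].append(title)

for code, titles in sorted(by_country.items()):
    print(code, len(titles), "activities")
```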

My big takeaways included similar themes that I have learned on my open source journey:

You can talk about open data [insert tech or OS project] all you want, but if you don’t have an interactive community (including mentorship programmes), an education strategy, an engagement/feedback-loop plan, a translation/localization plan, and a process for people to learn how to contribute, then you build a double-edged barrier: a barrier to entry and a barrier to impact and contributor outputs.

Currency

About the Open Data in Development Codeathon

At the Codeathon close, Mark Surman, Executive Director of the Mozilla Foundation, gave us a call to action to make the web. Well, in order to create a world of data makers, I think we should run aid and development processes through this mindset. What is the currency of change? I hear many people talking about theory of change and impact, but I’d like to add ‘currency’. This is not only about money; it is about using the best brainpower and best energy sources to solve real-world problems in smart ways. I think if we heed Mark’s call to action with a “yes, and”, then we can rethink how we approach complex change. Every single industry is suffering from the same issue: how to deal with the influx of supply and demand in information. We need to change how we approach the problem. Combined events like these give a window into tackling problems in a new format. It is not about the next greatest app, but more about asking: how can we learn from the Webmakers and build with each other in our respective fields and networks?

Ease of Delivery

The IATI community/network is very passionate about moving the ball forward on releasing data. During the sessions, it was clear that the attendees see some gaps and are already working to fill them. The new IATI website is set up to grow with a Community component. The feedback from each of the sessions was distilled by the IATI TAG and Civil Society Guidance groups to share with the IATI Secretariat.

In the Open Data in Development, Impact of Open Data in Developing Countries, and CSO Guidance sessions, we discussed some key items about sharing, learning, and using IATI data. Farai Matsika, with the International HIV/AIDS Alliance, was particularly poignant in reminding us of IATI’s CSO purpose – we need to share data with those we serve.

Country edits IATI

One of the biggest themes was data ethics. As we rush to ask NGOs and CSOs to release data, what are some of the data pitfalls? Anahi Ayala Iaccuci of Internews and Linda Raftree of Plan International USA both reminded participants that data needs to be anonymized to protect those at risk. Ms. Iaccuci asked that we consider the complex nature of sharing both sides of the open data story – successes and failures. As well, she advised: don’t create trust, but think about who people are trusting. Turning this model around is key to rethinking assumptions. I would add to her point: trust and sharing are currency and will add to the success measures of IATI. If people don’t trust the IATI data, they won’t share and use it.

Anne Crowe of Privacy International frequently asked attendees to consider the ramifications of opening data. It is clear that the IATI TAG does not curate the data that NGOs and CSOs share. Thus it falls on each of these organizations to learn how to be data makers in order to contribute data to IATI. Perhaps organizations need a lead educator and curator to ensure the future success of the IATI process, including quality data.

I think that School of Data and the Partnership for Open Data have a huge part to play with IATI. My colleague Zara Rahman is collecting user feedback for the Open Development Toolkit, and Katelyn Rogers is leading the Open Development mailing list. We collectively want to help people become data makers and consumers who can effectively achieve their development goals using open data. This also means tackling the ongoing questions about data quality and data ethics.


Here are some additional resources shared during the IATI meetings.

Read Full Post »

This is a guest post from Anna Crowe, Research Officer on the Privacy in the Developing World Project, and  Carly Nyst, Head of International Advocacy at Privacy International, a London-based NGO working on issues related to technology and human rights, with a focus on privacy and data protection. Privacy International’s new report, Aiding Surveillance, which covers this topic in greater depth was released this week.

by Anna Crowe and Carly Nyst


New technologies hold great potential for the developing world, and countless development scholars and practitioners have sung the praises of technology in accelerating development, reducing poverty, spurring innovation and improving accountability and transparency.

Worryingly, however, privacy is presented as a luxury that creates barriers to development, rather than a key aspect of sustainable development. This perspective needs to change.

Privacy is not a luxury, but a fundamental human right

New technologies are being incorporated into development initiatives and programmes relating to everything from education to health and elections, and in humanitarian initiatives, including crisis response, food delivery and refugee management. But many of the same technologies being deployed in the developing world with lofty claims and high price tags have been extremely controversial in the developed world. Expansive registration systems, identity schemes and databases that collect biometric information including fingerprints, facial scans, iris information and even DNA, have been proposed, resisted, and sometimes rejected in various countries.

The deployment of surveillance technologies by development actors, foreign aid donors and humanitarian organisations, however, is often conducted in the complete absence of the type of public debate or deliberation that has occurred in developed countries. Development actors rarely consider target populations’ opinions when approving aid programmes. Important strategy documents such as the UN Office for the Coordination of Humanitarian Affairs’ Humanitarianism in a Networked Age and the UN High-Level Panel on the Post-2015 Development Agenda’s A New Global Partnership: Eradicate Poverty and Transform Economies through Sustainable Development give little space to the possible impact that adopting new technologies or data analysis techniques could have on individuals’ privacy.

Some of this trend can be attributed to development actors’ systematic failure to recognise the risks to privacy that development initiatives present. However, it also reflects an often unspoken view that the right to privacy must necessarily be sacrificed at the altar of development – that privacy and development are conflicting, mutually exclusive goals.

The assumptions underpinning this view are as follows:

  • that privacy is not important to people in developing countries;
  • that the privacy implications of new technologies are not significant enough to warrant special attention;
  • and that respecting privacy comes at a high cost, endangering the success of development initiatives and creating unnecessary work for development actors.

These assumptions are deeply flawed. While it should go without saying, privacy is a universal right, enshrined in numerous international human rights treaties, and matters to all individuals, including those living in the developing world. The vast majority of developing countries have explicit constitutional requirements to ensure that their policies and practices do not unnecessarily interfere with privacy. The right to privacy guarantees individuals a personal sphere, free from state interference, and the ability to determine who has information about them and how it is used. Privacy is also an “essential requirement for the realization of the right to freedom of expression”. It is not an “optional” right that only those living in the developed world deserve to see protected. To presume otherwise ignores the humanity of individuals living in various parts of the world.

Technologies undoubtedly have the potential to dramatically improve the provision of development and humanitarian aid and to empower populations. However, the privacy implications of many new technologies are significant and are not well understood by many development actors. The expectations that are placed on technologies to solve problems need to be significantly circumscribed, and the potential negative implications of technologies must be assessed before their deployment. Biometric identification systems, for example, may assist in aid disbursement, but if they also wrongly exclude whole categories of people, then the objectives of the original development intervention have not been achieved. Similarly, border surveillance and communications surveillance systems may help a government improve national security, but may also enable the surveillance of human rights defenders, political activists, immigrants and other groups.

Asking humanitarian actors to protect and respect privacy rights must not be distorted into requiring inflexible and impossibly high standards that would derail development initiatives if put into practice. Privacy is not an absolute right and may be limited, but only where limitation is necessary, proportionate and in accordance with law. The crucial point is to actually undertake an analysis of the technology and its privacy implications, and to do so in a thoughtful and considered manner. For example, if an intervention requires collecting personal data from those receiving aid, the first step should be to ask what information is necessary to collect, rather than just applying a standard approach to each programme. In some cases, this may mean additional work. But this work should be considered in light of the contribution that upholding human rights and the rule of law makes to development and to producing sustainable outcomes. And in some cases, respecting privacy can also mean saving lives, as information falling into the wrong hands could spell tragedy.

A new framing

While there is an increasing recognition among development actors that more attention needs to be paid to privacy, it is not enough to merely ensure that a programme or initiative does not actively harm the right to privacy; instead, development actors should aim to promote rights, including the right to privacy, as an integral part of achieving sustainable development outcomes. Development is not just, or even mostly, about accelerating economic growth. The core of development is building capacity and infrastructure, advancing equality, and supporting democratic societies that protect, respect and fulfill human rights.

The benefits of development and humanitarian assistance can be delivered without unnecessary and disproportionate limitations on the right to privacy. The challenge is to improve access to and understanding of technologies, ensure that policymakers and the laws they adopt respond to the challenges and possibilities of technology, and generate greater public debate to ensure that rights and freedoms are negotiated at a societal level.

Technologies can be built to satisfy both development and privacy.

Download the Aiding Surveillance report.

Read Full Post »

Older Posts »