
Archive for the ‘technology salon’ Category

Western perspectives on technology tend to dominate the media, despite the fact that technology’s impacts on people’s lives are nuanced, diverse, and contextually specific. At our March 8 Technology Salon NYC (hosted by Thoughtworks), we discussed how structural issues in journalism and technology lead to narrowed perspectives and reduced nuance in technology reporting.

Joining the discussion were folks from for-profit and non-profit US-based media houses with global reporting remits, including: Nabiha Syed, CEO, The Markup; Tekendra Parmar, Tech Features Editor, Business Insider; Andrew Deck, Reporter, Rest of World; and Vittoria Elliott, Reporter, WIRED. Salon participants working for other media outlets and in adjacent fields contributed to our discussion as well.

Power dynamics are at the center. English language technology media establishments tend to report as if tech stories begin and end in Silicon Valley. This affects who media talks and listens to, what stories are found and who is doing the finding, which angles and perspectives are centered, and who decides what is published. As one Salon participant said, “we came to the Salon for a conversation about tech journalism, but bigger issues are coming up. This is telling, because, no matter what type of journalism you’re doing, you’re reckoning with wider systemic issues in journalism… [like] how we pay for it, who the audiences are, how we shift the sense of who we’re reporting for, and all the existential questions in journalism.”

Some media outlets are making an intentional effort to better ground stories in place, cultural context, political context, and non-Western markets in order to challenge certain assumptions and biases in Silicon Valley. Their work aims to bring non-US-centric stories to a wider general audience in the US and abroad and to enter the media diet of Silicon Valley itself to change perspectives and expand world views using narrative, character, and storytelling that is not laced with US biases.

Challenges remain with building global audiences, however. Most publications have only a handful of people focusing on stories outside of their headquarters country. Yet “in addition to getting the stories – you also have to build global and local networks so that the stories get distributed,” as one person said. US media outlets don’t often invest in building relationships with local influencers and policy makers who could help to spread a story, react to it, or act on it. This can mean there is little impact and low readership, leading to decision makers at media outlets saying “see, we didn’t have good metrics, those kinds of stories don’t perform well.” This is not only the case for journalism in the US. An Indian reader may not be interested in reading about the Philippines and vice versa. So almost every story needs a different conceptualization of audience, which is difficult for publications to afford and achieve.

Ad-revenue business models are part of the problem. While the vision of a global audience with wide perspectives and nuance is lofty, the practicalities of implementation make it difficult. Business models based on ad revenue (clicks, likes, time spent on a page) tend to reinforce status quo content at the cost of excluding non-Western voices and other marginalized users of technology. Moving to alternative ways to measure impact can be hard for editors who have been working in the for-profit industry for several years. Even in non-profit media, “there is a shadow cast from these old metrics…. Donors will say, ‘okay, great, wonderful story, super glad that there was a regulatory change… but how many people saw it?’ And so there’s a lot of education that needs to happen.”

Identifying new approaches and metrics. Some Salon participants are looking at how to get beyond clicks to measure impact and journalism’s contribution to change without committing the sin of centering the story on the journalist. Some teams are testing “impact meetings,” with the reporting team looking at “who has power – Consumers? Regulators? Legislators? Civil society? Mapping that out, and figuring out what form the information needs to be in to get into audiences’ hands and heads… Cartoons? Instagram? An academic conversation? We identify who in the room has some power, get something into their hands, and then they do all the work.”

Another person talked about creating Listening Circles to develop participatory and grounded narratives that will have greater impact. In this case, journalists convene groups of experts and people with lived experiences on a particular topic to learn who are the power brokers, what key topics need to be raised, what is the media covering too much or too little of, and what stories or perspectives are missing from this coverage. This is similar to how a journalist normally works — talking with sources — except that the sources are in a group together and can sharpen each other’s ideas. In this sense, media works as a convener to better understand the issue and themes. It makes space for smaller more grounded organizations to join the conversations. It also helps media outlets identify key influencers and involve them from the start so that they are more interested in sharing the story when it’s ready to go. This can help catalyze ongoing movement on the theme or topic among these organizations.

These approaches will look familiar to those working on advocacy, community development, communication for development, and social and behavior change communication in the development sector, since they include an entryway, a plan for inclusion from the start, an off-ramp and handover, and an understanding that the media agency is not the center of the story but can feed extra energy into a topic to help it move forward.

The difference between journalism and advocacy has emerged as a concern as traditional approaches to reporting change. Participatory work is often viewed as being less “objective” and more like advocacy. “Should journalists be advocates or not?” is a key question. Yet, as noted during the Salon discussion, journalists have always interrogated the actions of powerful people – e.g., the Elon Musks of the world. “If we’re going to interrogate power, then it’s not a huge jump to say we want to inform people about the power they already have, and all we’re doing is being intentional about getting this information to where it needs to go,” one person commented.

Another Salon participant agreed. “If you break a story about a corrupt politician, you expect that corrupt politician to be hauled before whatever institutions exist or for them to lose their job. No one is hand-wringing there about whether we’ve done our jobs well, right? It is when we start to take an active interest in areas that are considered outside of traditional media, when you move from politics and the economy to technology or gender or any of these other areas considered ‘softer,’ that there is a sense that you have shifted into activism and are less focused on hard-hitting journalism.” Another participant said, “there’s a real discomfort when activist organizations like our work… Even though the idea is that you’re supposed to be creating impact, you’re not supposed to want that activist label.”

Identity and objectivity came up in the discussion as well. “The people who are most precious about whether we are objective tend to be a cohort at the intersection of gender, race, and class. Upper middle class white guys are the ones who can go anywhere in the world and report any story and are still ‘objective’. But if you try and think about other communities reporting on themselves or working in different ways, the question is always, ‘wait, how can that be done objectively?’”

A Pew Research poll in 2022 found that, overall, 76% of journalists in the US are white and 51% are male. Among beat reporters, 60% of political reporters and 58% of tech journalists are men, and 77% of science and tech reporters are white, 7% Asian, 3% Black, and 3% Hispanic. Some Salon participants pointed out that this is a human resources and hiring problem that derives from structural issues both in journalism and the wider world. In tech reporting and the media space in general, those who tend to be hired are English-speaking, highly educated, upper or upper-middle-class people from a major metropolitan area in their country. Very few media outlets bring in other perspectives.

Salon participants pointed to these statistics and noted that white, US-born journalists are considered able to “objectively” report on any story in any part of the world. They can “parachute in and cover anything they want.” Yet non-white and/or non-US-born and queer journalists are either shoehorned into being experts on their own race, gender, sexual orientation, ethnicity, or national identity, or seen as unable to be objective because of their identities. “If you’re an English speaking, educated person from the motherland, [it’s assumed that] your responsibility is to tell the story of your people.”

In addition, the US flattens nuance in racism, classism, and other equity issues. Because the US is in an era of diversity, said one Salon participant, media outlets think it’s enough to find a Brown person and put them in leadership. They don’t often look at other issues like race, class, caste or colorism or how those play out within communities of color. “You also have to ask the question of, okay, which people from this place have the resources, the access to get the kind of education that makes them the people that institutions rely on to tell the stories of an entire country or region. How does the system reinforce, again, that internal class dynamic or that broader class and racial dynamic, even as it’s counting for ‘diversity’ on the internal side.”

Waiting for harm to happen. Another challenge raised with tech reporting is the tendency to wait until something terrible happens before a story or issue is covered. News outlets wait until a problem is acute and then write an article saying “look over here, this is happening, isn’t that awful, someone should do something,” as one Salon participant put it. The mandate tends to be to “wait until harm is bad enough to be visible before reporting” rather than to reduce or mitigate harm. “With technology, the speed of change is so rapid – there needs to be something beyond the horse-race journalism of ‘here’s some investment, here’s a new technology, here’s a hot take and here’s why that matters.’ There needs to be something more meaningful than that.”

Newsworthiness is sometimes weaponized to kill reporting on marginalized communities, said one person. Pitches are informed by the subjectivity and lived experiences of senior editors, who may not have a nuanced understanding of how technologies and related issues affect queer communities and/or people of color. Reporters often have to find an additional “hook” to get approval to run a story about these groups or populations because the story itself is not considered newsworthy enough. The hook will often be something that ties it back to Silicon Valley — for example, a story deemed “not newsworthy” might suddenly become important when it can be linked to something that a powerful person in tech does. Reporters have to be creative to get buy-in for international stories whose importance is not fully grasped by editors; for example, by pitching how a story will bring in subscriptions, traffic, or an award, or by running a US-focused story that does well and then pitching the international version of the story.

Reporting on structural challenges in tech. Media absolutely helps bring issues to the forefront, said one Salon participant, and there are lots of great examples recently of dynamic investigative reporting and layered, nuanced storytelling. It remains difficult, however, to report on structural issues or infrastructure. Many of the harms that happen due to technology need to be resolved at the policy, regulatory, or structural level. “This is the ‘boring’ part of the story, but it’s where everything is getting cemented in terms of what technology can do and what harms will result.”

One media outlet tackled this by conducting research to show structural barriers to equity in technology access. A project measured broadband speeds in different parts of cities across the US during COVID to show how inequalities in bandwidth affected people’s access to jobs, income and services. The team joined up with other media groups and shared the data so that it could reach different audiences through a variety of story lines, some national and some local.
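As an aside, the core measurement behind such a story is simple to sketch. Here is a minimal, hypothetical illustration of estimating throughput by timing a download; the URL is a placeholder, and the real project’s methodology was far more rigorous, repeating measurements across neighborhoods, providers, and times of day:

```python
# Estimate download throughput by timing a fetch. Placeholder URL;
# a real study would use controlled test endpoints and many samples.
import time
import urllib.request

TEST_URL = "https://example.com/"  # hypothetical test endpoint

def measure_mbps(url: str = TEST_URL) -> float:
    start = time.monotonic()
    data = urllib.request.urlopen(url, timeout=30).read()
    elapsed = time.monotonic() - start
    return (len(data) * 8 / 1_000_000) / elapsed  # megabits per second

print(f"{measure_mbps():.2f} Mbps")
```

Aggregating measurements like these by neighborhood is what made the inequality visible and shareable across newsrooms.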

The field is shifting, as one Salon participant concluded, and it’s all about owning the moment. “You must own the choices that you’re making…. I do not care if this thing called journalism and these people called journalists continue to exist in the way that they do now… We must rediscover the role of the storyteller who keeps us alive and gives meaning to our societies. This model [of journalism] was not built for someone like me to engage in it fully, to see myself reflected in it fully. Institutional journalism was not made for many of the people in this room. It was not made for us to imagine that we are leaders in it, bearers of it, creators of it, or anything other than just its subjects in some sort of ‘National Geographic’ way. And that means owning the moment that we’re in and the opportunities it’s bringing us.”

Technology Salons run under Chatham House Rule, so no attribution has been made in this post. If you’d like to join us for a Salon, sign up here. If you’d like to suggest a topic or provide funding support to Salons in NYC please get in touch!


On Thursday, September 19, we gathered at the OSF offices for the Technology Salon on “Automated Decision Making in Aid: What could possibly go wrong?” with lead discussants Jon Truong and Elyse Voegeli, two of the creators of Automating NYC; and Genevieve Fried and Varoon Mathur, Fellows at the AI Now Institute at NYU.

To start off, we asked participants whether they were optimistic or skeptical about the role of Automated Decision-making Systems (ADS) in the aid space. The response was mixed: about half skeptics and half optimists, most of whom qualified their optimism as “cautious optimism” or “it depends on who I’m talking to” or “it depends on the day and the headlines” or “if we can get the data, governance, and device standards in place.”

What are ADS?

Our next task was to define ADS. (One reason that the New York City ADS task force was unable to advance is that its members could not agree on the definition of an ADS.)

One discussant explained that NYC’s provisional definition was something akin to:

  • Any system that uses data, algorithms, or computer programs to replace or assist a human decision-making process.

This may seem straightforward, yet, as she explained, “if you go too broad you might include something like ‘spellcheck’ which feels like overkill. On the other hand, spellcheck is a good case for considering how complex things can get. What if spellcheck only recognized Western names? That would be an example of encoding bias into the ADS. However, the degree of harm that could come from spellcheck as compared to using ADS for predictive policing is very different. Defining ADS is complex.”

Another element of the definition is that an ADS involves the computational implementation of an algorithm. An algorithm is essentially a clear set of instructions or criteria followed in order to make a decision, and algorithms can be manual; ADS add the power of computation, noted another discussant. Perhaps a complex system should be part of the definition as well, along with a decision point or cutoff; for example, an algorithm that determines who gets a loan. Statistical modeling and forecasting, which allow for prediction, are also important to consider.
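To make the idea of criteria plus a decision cutoff concrete, here is a minimal sketch of a rule-based loan-eligibility ADS. The criteria, weights, and threshold are invented for illustration; they are not anyone’s actual lending rules:

```python
# A minimal rule-based ADS: explicit criteria plus a decision cutoff.
# All criteria and thresholds here are hypothetical illustrations.

def loan_decision(applicant: dict) -> bool:
    """Return True if the applicant qualifies for a loan."""
    score = 0
    if applicant["monthly_income"] >= 2000:  # income criterion
        score += 2
    if applicant["existing_debt"] < 5000:    # debt criterion
        score += 1
    if applicant["years_employed"] >= 2:     # employment criterion
        score += 1
    return score >= 3                        # the decision cutoff

print(loan_decision(
    {"monthly_income": 2500, "existing_debt": 1000, "years_employed": 3}
))  # True
```

Every bias issue discussed below lives in exactly these choices: which criteria are included, how they are weighted, and where the cutoff sits.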

Using data and criteria for making decisions is nothing new, and it’s often done without specific systems or computers. People make plenty of very bad decisions without computers, and the addition of computers and algorithms is sometimes considered a more objective approach, because instructions can be set and run by a computer.

Why are there issues with ADS?

In practice, things are not as clear-cut as they might seem, explained one of our discussants. We live in a world where people are treated differently because of their demographic identity, and curation of data can over-represent some populations and misrepresent others because of how they have been treated historically. These current and historic biases make their way into algorithms, which are created by humans, and this encodes human biases into an ADS. When we feed existing data into a computer so that it can learn, we bring our historical biases into decision-making. The data we feed into an ADS may not reflect changing demographics or shifts in the data, and algorithms may not reflect ongoing institutional policy changes.

As another person said, “systems are touted as being neutral, but they are subject to human fallacies. We live in a world that is full of injustice, and that is reflected in a data set or in an algorithm. The speed of the system, once it’s computerized, replicates injustices more quickly and at greater scale.” When people or institutions believe that the involvement of a computer means the system is neutral, we have a problem. “We need to take ADS with a grain of salt, similar to how we tell children not to believe everything they see on the Internet.”

Many people are unaware of how an algorithm works. Yet over time, we tend to rely on algorithms and believe in them as unbiased truth. When ADS are not monitored, tested, and updated, this becomes problematic. ADS can begin to make decisions for people rather than supporting people in making decisions, and this can go very wrong, for example when decisions are unquestioningly made based on statistical forecasting models.

Are there ways to curb these issues with ADS?

Consistent monitoring. ADS should be monitored constantly over time by humans. One Salon participant suggested setting up checkpoints in the decision-making process to alert humans that something is amiss. Another suggested that research and proof of concept are critical. For example, running the existing human-only system alongside the ADS and comparing the decisions over time helps to flag differences that can then be examined to see which of the processes is working better, and to adjust or discontinue the ADS if it is incorrect. (In some cases, this process may actually flag biases in the human system.) Random checks can be set up, as can control situations where some decisions are made without using an ADS so that results can be compared between the two.
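A minimal sketch of such a parallel run (the data structures and decision functions are hypothetical) might look like this: run both processes on the same cases and surface every disagreement for human review:

```python
# Run the human-only process and the ADS side by side on the same
# cases, then flag disagreements for examination. Hypothetical fields.

def compare_decisions(cases, human_decide, ads_decide):
    disagreements = []
    for case in cases:
        h, a = human_decide(case), ads_decide(case)
        if h != a:
            disagreements.append({"case": case, "human": h, "ads": a})
    return len(disagreements) / len(cases), disagreements

# Toy usage: the two processes apply slightly different cutoffs.
cases = [{"id": i, "score": i * 10} for i in range(1, 11)]
rate, flagged = compare_decisions(
    cases,
    human_decide=lambda c: c["score"] >= 40,
    ads_decide=lambda c: c["score"] >= 50,
)
print(f"{rate:.0%} of decisions diverged")  # e.g. "10% of decisions diverged"
```

Each flagged case can then be examined to decide whether the human process, the ADS, or both are in error.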

Recourse and redress. There should be simple and accessible ways for people affected by ADS to raise issues and make complaints. All ADS can make mistakes – there can be false positives (where an error points falsely to a match or the presence of a condition) and false negatives (where an error points to the absence of a match or a condition when indeed it is present). So there needs to be recourse for people affected by errors or in cases where biased data is leading to further discrimination or harm. Anyone creating an ADS needs to build in a way for mistakes to be managed and corrected.
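In code, those two error types are straightforward to tally once ground truth is known, which is one reason parallel runs and audits matter. A sketch with invented labels:

```python
# Tally false positives (ADS said yes, truth was no) and false
# negatives (ADS said no, truth was yes). Labels invented for illustration.

ads_said = [True, True, False, False, True, False]
truth    = [True, False, False, True, True, False]

fp = sum(a and not t for a, t in zip(ads_said, truth))
fn = sum(t and not a for a, t in zip(ads_said, truth))
print(f"false positives: {fp}, false negatives: {fn}")  # 1 and 1
```

Recourse mechanisms need to handle both directions: people wrongly flagged, and people wrongly passed over.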

Education and awareness. A person may not be aware that an ADS has affected them, and they likely won’t understand how an ADS works. Even people using ADS for decisions about others often forget that it’s an ADS deciding. This is similar to how people forget that their newsfeed on Facebook is based on their historical choices in content and their ‘likes’ and is not a neutral serving of objective content.

Improving the underlying data. Algorithms will only get better when there are constant feedback loops and new data that help the computer learn, said one Salon participant. Currently most algorithms are trained on highly biased samples that do not reflect marginalized groups and communities. For example, there is very little data about many of the people participating in or eligible for aid and development programs.

So we need proper data sets that are continually updated if we are to use ADS in aid work. This is a problem, however, if the data that is continually fed into the ADS remains biased. One person shared this example: if some communities are policed more because of race, economic status, etc., there will continually be more data showing that people in those communities are committing crimes. In whiter or wealthier communities, where there is less policing, fewer people are arrested. If we update our data continually without changing the fact that some communities are policed more than others (and thus will appear to have higher crime rates), we are simply creating a feedback loop that confirms our existing biases.
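The dynamic is easy to demonstrate with a toy simulation (all numbers invented): two neighborhoods with identical true crime rates, one patrolled twice as heavily, so each round of “new data” appears to confirm the difference:

```python
# Two neighborhoods with the same true crime rate; A is patrolled 2x
# as much, so arrest data makes A look twice as criminal. Numbers invented.
import random

random.seed(0)
TRUE_CRIME_RATE = 0.05            # identical in both neighborhoods
patrols = {"A": 100, "B": 50}     # A gets twice the policing
arrests = {"A": 0, "B": 0}

for _ in range(10):               # each round, arrests scale with patrols
    for hood, n_patrols in patrols.items():
        arrests[hood] += sum(
            random.random() < TRUE_CRIME_RATE for _ in range(n_patrols)
        )

print(arrests)  # A records roughly 2x the "crime" of B
# An ADS retrained on these counts would send still more patrols to A,
# generating still more arrests there: the feedback loop in action.
```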

Privacy concerns also enter the picture. We may want to avoid collecting data on race, gender, ethnicity, or economic status so that we don’t expose people to discrimination, stigma, or harm. For example, in the case of humanitarian work or conflict zones, sensitive data can make people or groups a target for governments or unfriendly actors. However, it’s hard to make decisions that benefit people if their data is missing. It ends up being a catch-22.

Transparency is another way to improve ADS. “In the aid sector, we never tell people how decisions are made, regardless of whether those are human or machine-made decisions,” said one Salon participant. When the underlying algorithm is obscured, it cannot be reviewed for value judgments. Some compared this to some of the current non-algorithmic decision-making processes in the aid system (which are also not transparent) and suggested that aid systems could get more intelligent if they began to surface their own specific biases.

The objectives of the ADS can be reviewed. Is the system used to further marginalize or discriminate against certain populations, or can this be turned on its head? asked one discussant. ADS could be used to try to determine which police officers might commit violence against civilians rather than to predict which people might commit a crime. (See the Algorithmic Justice League’s work). 

ADS in the aid system – limited to the powerful few?

Because of the underlying challenges with data in the aid sector (quality, standards, sheer availability), ADS remain difficult to build there. One area where data is available and where ADS are being built and used is supply chain management, for example at massive UN agencies like the World Food Program.

Some questioned whether this exacerbates concentration of power in these large agencies, running counter to agreed-upon sector goals to decentralize power and control to smaller, local organizations who are ‘on the ground’ and working directly in communities. Does ADS then bring even more hierarchy, bias, and exclusion into an already problematic system of power and privilege? Could there be ways of using ADS differently in the aid system that would not replicate existing power structures? Could ADS itself be used to help people see their own biases? “Could we build that into an ADS? Could we have a read out of decisions we came to and then see what possible biases were?” asked one person.

How can we improve trust in ADS?

Most aid workers, national organizations, and affected communities have a limited understanding of ADS, leading to lower levels of trust in ADS and the decisions they produce. Part of the issue is the lack of participation and involvement in the design, implementation, validation, and vetting of ADS. On the other hand, one Salon participant pointed out that given all the issues with bias and exclusion, “maybe they would trust an ADS even less if they understood how an ADS works.”

Involving both users of an ADS and the people affected by ADS decisions is crucial. This needs to happen early in the process, said one person. It shouldn’t be limited to having people complain or report once the ADS has wronged them. They need to be at the table when the system is being developed and trialed.

If trust is to be built, the explainability of an algorithm needs consideration. “How can you explain the algorithm to people who are affected by it? Humanitarian workers cannot describe an ADS if they don’t understand it. We need to find ways to explain ADS to a non-technical audience so that they can be involved,” said one person. “We’ve shown sophisticated models to leaders, and they defaulted to spreadsheets.”

This brought up the need for change management if ADS are introduced. Involving and engaging decision-makers in the design and creation of ADS systems is a critical step for their adoption. This means understanding how decisions are made currently and based on what factors. Technology and data teams need to be in the room to understand the open and hidden nature of decision-making.

Isn’t decision making without ADS also highly biased and obscured?

People are often resistant to talking about or sharing how decisions have been made in the past, however, because those decisions may have been biased or inconsistent, based on faulty data, or made for political or other reasons.

As one person pointed out, both government and the aid system are deeply politicized and suffer from local biases, corruption and elite capture. A spatial analysis of food distribution in two countries, for example, showed extreme biases along local political leader lines. A related analysis of the road network and aid distribution allowed a clear view into the unfairness of food distribution and efficiency losses.

Aid agencies themselves make highly biased decisions all the time, it was noted. Decisions are often political, situational, or made to enhance the reputation of an individual or agency. These decisions are usually not fully documented. Is this any less transparent than the ‘black box’ of an algorithm? Not to mention that agencies have countless dashboards that are aimed at helping them make efficient, unbiased decisions, yet recommendations based on the data may run counter to what is needed politically or for other reasons in a given moment.

Could (should) the humanitarian sector assume greater leadership on ADS?

Most ADS are built by private sector partners. When they are sold to the public or INGO sector, these companies indemnify themselves against liability and keep their trade secrets, making it impossible to hold them to account for any harm produced. One person asked whether the humanitarian sector could lead by bringing in different incentives: transparency, multi-stakeholder design, participation, and a focus on wellbeing. Could we try this, learn from it, and develop and document processes whereby it could be done at scale? Could the aid sector open source how ADS are designed and created so that data scientists and others could improve them?

Some were skeptical about whether the aid sector would be capable of this. “Theoretically we could do this,” said one person, “but it would then likely be concentrated in the hands of these few large agencies. In order to have economies of scale, it will have to be them, because automation requires large scale. If that is to happen, then the smaller organizations will have to trust the big ones, but currently the small organizations don’t trust the big ones to manage or protect data.” And what about the involvement of governments, asked another person; we would need to consider the role of the public sector.

“I like the idea of the humanitarian sector leading,” added one person, “but aid agencies don’t have the greatest track record for putting their constituencies in the driving seat. That’s not how it works. A lot of people are trying to correct that, but aid sector employees are not the people who will be affected by these systems in the end. We could think about working with organizations who have the outreach capacity to do work with these groups, but again, these organizations are not made up of the affected people. We have to remember that.”

How can we address governance and accountability?

When you bring in government, private sector, aid agencies, software developers, data, and the like, said another person, you will have issues of intellectual property, ownership, and governance. What are the local laws related to data transmission and storage? Is it enough to open source just the code or ADS framework without any data in it? If you work with local developers and force them to open source the algorithm, what does that mean for them and their own sustainability as local businesses?

Legal agreements? Another person suggested that we focus on open sourcing legal agreements rather than algorithms. “There are always risks, duties, and liabilities listed in contracts and legal agreements. The private sector in particular will always play the indemnity card. And that means there is no commercial incentive to fix the tools that are being used. What if we pivoted this conversation to commercial liability? If a model is developed in Manhattan, it won’t work in Malawi — a company has a commercial duty to flag and recognize that. This type of issue is hidden if we focus the conversation on open software or open models. It’s rare that all the technology will be open and transparent. What we should push for is open contracting, and that could help a lot with governance.”

Certification? Others suggested that we adapt existing audit systems like the LEED certification (which allows engineers and architects to audit whether buildings are actually environmentally sustainable) or the IRB process (external boards that review research to flag ethical issues). “What if there were a team of data scientists and others who could audit ADS and determine the flaws and biases?” suggested one person. “That way the entire thing wouldn’t need to be open, but it could still be audited independently.” This was questioned, however, in that a stamp of approval on a single system could lead people to believe that every system designed by a particular group would pass the test.

Ethical frameworks could be a tool, yet which framework? A recent article cited 84 different ethical frameworks for Artificial Intelligence.

Regulation? Self-regulation has failed, said one person. Why aren’t we talking about actual regulation? The General Data Protection Regulation (GDPR) in Europe has a specific article (Article 22) about ADS, which gives people the right to know when ADS are used to make decisions that affect them, the right to contest decisions made by ADS, and the right to request that humans review ADS decisions.

SPHERE Standards / Core Humanitarian Standard? Because of the legal complexities of working across multiple countries and with different entities in different jurisdictions (including some like the UN who are exempt from the law), an add-on to the SPHERE standards might be considered, said one person. Or something linked to the Core Humanitarian Standard (CHS), which includes a certification process. Donors will often ask whether an agency is CHS certified.

So, is there any good to come from ADS?

We tend to judge ADS with higher standards than we judge humans, said one Salon participant. Loan officers have been making biased decisions for years. How can we apply the standards of impartiality and transparency to both ADS and human decision making? ADS may be able to fix some of our current faulty and biased decisions. This may be useful for large systems, where we can’t afford to deploy humans at scale. Let’s find some potential bright spots for ADS.

Some positive examples shared by participants included:

  • Human rights organizations are using satellite imagery to identify areas that have been burned or otherwise destroyed during conflict. This application of automated decision making doesn’t deal directly with people or the allocation of resources; rather, it supports human rights research.
  • In California, ADS has been used to expunge the records of people convicted for marijuana-related violations now that marijuana has been legalized. This example supports justice and fairness.
  • During Hurricane Irma, an organization in the Virgin Islands used an Excel spreadsheet to track whether people met the criteria for assistance. Aid workers would interview people, and the sheet would calculate automatically whether they were eligible. This was not high-tech or sexy, but it was automated and fast. The government created the criteria, and these were openly and transparently communicated to people ahead of time, so that if someone didn’t receive benefits, they were clear about why. (A minimal sketch of this kind of transparent eligibility check appears after this list.)
  • Flood management is an area where there is a lot of data and forecasting. Governments have been using ADS to evacuate people before it’s too late. This sector can gain in efficiency with ADS, which could be expanded to other weather-based hazards. Because it is a straightforward use case that involves satellites and less personal data it may be a less political space, making deployment easier.
  • Drones also use ADS to stitch together hundreds of thousands of photos to create large images of geographical areas. Though drone data still needs to be ground truthed, it is less of an ethical minefield than when personal or household level data is collected, said one participant. Other participants, however, had issues with the portrayal of drones as less of an ethical minefield, citing surveillance, privacy, and challenges with the ownership and governance of the final knowledge product, the data for which was likely collected without people’s consent.
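The Hurricane Irma spreadsheet boils down to published criteria plus automatic evaluation. Here is a minimal sketch of that pattern; the criteria below are invented for illustration, whereas the real rules were set by the government and shared in advance:

```python
# Transparent, rule-based eligibility: the criteria are published ahead
# of time, so anyone can verify why they did or did not qualify.
# These particular criteria are invented for illustration.

CRITERIA = {
    "roof_damaged":        lambda h: h["roof_damage"],
    "household_displaced": lambda h: h["displaced_days"] >= 7,
    "low_income":          lambda h: h["monthly_income"] < 1500,
}

def eligibility(household: dict):
    results = {name: rule(household) for name, rule in CRITERIA.items()}
    return sum(results.values()) >= 2, results  # meet any 2 of 3 criteria

eligible, why = eligibility(
    {"roof_damage": True, "displaced_days": 10, "monthly_income": 2000}
)
print(eligible, why)  # True, with a per-criterion breakdown
```

Returning the per-criterion breakdown alongside the decision is what makes the “why” communicable to applicants.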

How can the humanitarian sector prepare for ADS?

In conclusion, one participant summed up that decision making has always been around. As ADS are explored more in depth by groups like the one at this Salon, and as we delve into the ethics and improve on them, there is great potential. ADS will probably never totally replace humans, but they can supplement humans to make better decisions.

How are we in the humanitarian sector preparing people at all levels of the system to engage with these systems, design them ethically, reduce harm, and make them more transparent? How are we working to build capacities at the local level to understand and use ADS? How are we figuring out ways to ensure that the populations who will be affected by ADS are aware of what is happening? How are we ensuring recourse and redress in the case of bad decisions or bias? What jobs might be created (rather than eliminated) with the introduction of more ADS?

ADS are not going to go away, and the humanitarian sector doesn’t have to wait until they are perfected to get involved in shaping and improving them so that they support our work in ethical and useful ways rather than in harmful or unethical ways.

Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

 


Karen Palmer is a digital filmmaker and storyteller from London who’s doing a dual residency at ThoughtWorks in Manhattan and TED New York to further develop a project called RIOT, described as an ‘emotionally responsive, live-action film with 3D sound.’ The film uses artificial intelligence, machine learning, various biometric readings, and facial recognition to take a person through a personalized journey during a dangerous riot.

Karen Palmer, the future of immersive filmmaking, Future of Storytelling (FoST) 

Karen describes RIOT as ‘bespoke film that reflects your reality.’ As you watch the film, the film is also watching you and adapting to your experience of viewing it. Using a series of biometric readings (the team is experimenting with eye tracking, facial recognition, gait analysis, infrared to capture body temperature, and an emerging technology that tracks heart rate by monitoring the capillaries under a person’s eyes) the film shifts and changes. The biometrics and AI create a “choose your own adventure” type of immersive film experience, except that the choice is made by your body’s reactions to different scenarios. A unique aspect of Karen’s work is that the viewer doesn’t need to wear any type of gear for the experience. The idea is to make RIOT as seamless and immersive as possible. Read more about Karen’s ideas and how the film is shaping up in this Fast Company article and follow along with the project on the RIOT project blog.
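Mechanically, a “choose your own adventure” driven by biometrics can be pictured as a state machine whose transitions are selected by the viewer’s inferred emotional state rather than by an explicit menu choice. A highly simplified sketch (the scene names, emotion labels, and fake “reading” are invented; the actual RIOT system is far richer):

```python
# A toy branching-narrative engine: each scene maps a detected emotion
# to the next scene. Scenes, emotions, and the fake biometric frame are
# invented for illustration.

STORY = {
    "street":   {"calm": "dialogue", "fear": "run", "anger": "confront"},
    "dialogue": {"calm": "resolve",  "fear": "run", "anger": "confront"},
    "run": {}, "confront": {}, "resolve": {},  # terminal scenes
}

def detect_emotion(frame) -> str:
    # Stand-in for the real pipeline (facial analysis, eye tracking,
    # heart rate, etc.); here we just read a label off a fake frame.
    return frame["dominant_emotion"]

scene = "street"
for frame in [{"dominant_emotion": "fear"}]:  # simulated camera input
    scene = STORY[scene].get(detect_emotion(frame), scene)
print(scene)  # "run": the film branched on the viewer's reaction
```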

When we talked about her project, the first thing I thought of was “The Feelies” in Aldous Huxley’s 1932 classic ‘Brave New World.’ Yet the feelies were pure escapism, whereas Karen’s work aims to draw people into a challenging experience where they face their own emotions.

On Friday, December 15, I had the opportunity to facilitate a Salon discussion with a number of people from related disciplines who are intrigued by RIOT and the various boundaries it tests and explores. We had perspectives from people working in the areas of digital storytelling and narrative, surveillance and activism, media and entertainment, emotional intelligence, digital and immersive theater, brand experience, 3D sound and immersive audio, agency and representation, conflict mediation and non-state actors, film, artificial intelligence, and interactive design.

Karen has been busy over the past month as interest in the project begins to swell. In mid-November, at Montreal’s Phi Centre’s Lucid Realities exhibit, she spoke about how digital storytelling is involving more and more of our senses, bringing an extra layer of power to the experience. This means that artists and creatives have an added layer of responsibility. (Research suggests, for example, that the brain has trouble deciphering between virtual reality [VR] and actual reality, and children under the age of 8 have had problems differentiating between a VR experience and actual memory.)

At a recent TED Talk, Karen described the essence of her work as creating experiences where participants become aware of how their emotions affect the narrative of the film while they are in it, which in turn helps them see how their emotions affect the narrative of their lives. Can this help to create new neural pathways in the brain, she asks. Can it help a person see not only how their own emotions affect them, but also how others read their emotions and react to them in real life?

Race and sexuality are at the forefront in the US – and the Trump election further heightened tensions. Karen believes it’s ever more important to explore different perspectives and fears in the current context, where the potential for unrest is growing. Karen hopes that RIOT can be ‘your own personal riot training tool – a way to become aware of your own reactions and of moving through your fear.’

Core themes that we discussed on Friday include:

How can we harness the power of emotion? Despite our lives being emotionally hyper-charged, (especially right now in the US), we keep using facts and data to try to change hearts and minds. This approach is ineffective. In addition, people are less trusting of third-party sources because of the onslaught of misinformation, disinformation and false information. Can we use storytelling to help us get through this period? Can immersive storytelling and creative use of 3D sound help us to trust more, to engage and to witness? Can it help us to think about how we might react during certain events, like police violence? (See Tahera Aziz’ project [re]locate about the murder of Stephen Lawrence in South London in 1993). Can it help us to better understand various perspectives? The final version of RIOT aims to bring in footage from several angles, such as CCTV from a looted store, a police body cam, and someone’s mobile phone footage shot as they ran past, in an effort to show an array of perspectives that would help viewers see things in different lights.

How do we catch the questions that RIOT stirs up in people’s minds? As someone experiences RIOT, they will have all sorts of emotions and thoughts, and these will depend on their identity and lived experiences. At one showing of RIOT, a young white boy said he learned that if he’s feeling scared he should try to stay calm. He also said that when the cop yelled at him in the film, he assumed that he must have done something wrong. A black teenager might have had an entirely different reaction to the police. RIOT is bringing in scent, haze, 3D sound, and other elements, which have started to affect people more profoundly. Some have been moved to tears or said that the film triggered anger and other strong emotions for them.

Does the artist have a responsibility to accompany people through the full emotional experience? In traditional VR experiences, a person waits in line, puts on a VR headset, experiences something profound (and potentially something triggering), then takes off the headset and is rushed out so that the next person can try it. Creators of these new and immersive media experiences are just now becoming fully aware of how to manage the emotional side of the experiences and they don’t yet have a good handle on what their responsibilities are toward those who are going through them. How do we debrief people afterwards? How do we give them space to process what has been triggered? How do we bring people into the co-creation process so that we better understand what it means to tell or experience these stories? The Columbia Digital Storytelling Lab is working on gaining a better understanding of all this and the impact it can have on people.

How do we create the grammar and frameworks for talking about this? The technologies and tactics for this type of digital immersive storytelling are entirely new and untested. Creators are only now becoming more aware of the consequences of the experiences that they are creating: ‘What am I making? Why? How will people go through it? How will they leave? What are the structures, and how do I make it safe for them?’ The artist can open someone up to an intense experience, but then they are often just ushered out, reeling, and someone else is rushed in. It’s critical to build time for debriefing into the experience and to have some capacity for managing the emotions and reactions that could be triggered.

SAFE Lab, for example, works with students and the community in Chicago, Harlem, and Brooklyn on youth-driven solutions to de-escalation of violence. The project development starts with the human experience and the tech comes in later. Youth are part of the solution space, but along the way they learn hard and soft skills related to emerging tech. The Lab is testing a debriefing process also. The challenge is that this is a new space for everyone; and creation, testing and documentation are happening simultaneously. Rather than just thinking about a ‘user journey,’ creators need to think about the emotionality of the full experience. This means that as opposed to just doing an immersive film – neuroscience, sociology, behavioral psychology, and lots of other fields and research are included in the dialogue. It’s a convergence of industries and sectors.

What about algorithmic bias? It’s not possible to create an unbiased algorithm, because humans all have bias. Even if you could create an unbiased algorithm, as soon as you started inputting human information into it, it would become biased. Also, as algorithms become more complex, it becomes more and more difficult to understand how they arrive at decisions. This results in black boxes that put out decisions that even the humans who build them can’t understand. The RIOT team is working with Dr. Hongying Meng of Brunel University London, an expert in the creation of facial and emotion detection algorithms, to develop an open source algorithm for RIOT. Even if the algorithm itself isn’t neutral, the process by which it computes will be transparent.

Most algorithms are not open. Because the majority of private companies have financial goals rather than social goals in using or creating algorithms, they have little incentive for being transparent about how an algorithm works or what biases are inherent. Ad agencies want to track how a customer reacts to a product. Facebook wants to generate more ad revenue so it adjusts what news you see on your feed. The justice system wants to save money and time by using sentencing algorithms. Yet the biases in their algorithms can cause serious harm in multiple ways. (See this 2016 report from ProPublica). The problem with these commercial algorithms is that they are opaque and the biases in them are not shared. This lack of transparency is considered by some to be more problematic than the bias itself.

Should there be a greater push for regulation of algorithms? People who work in surveillance are often ignored because they are perceived as paranoid. Yet fears that AI will be totally controlled by the military, the private sector and tech companies in ways that are hidden and opaque are real and it’s imperative to find ways to bring the actual dangers home to people. This could be partly accomplished through narrative and stories. (See John Oliver’s interview with Edward Snowden) Could artists create projects that drive conversations around algorithmic bias, help the public see the risks, and push for greater regulation? (Also of note: the New York City government recently announced that it will start a task force to look more deeply into algorithmic bias).

How is the RIOT team developing its emotion recognition algorithm? The RIOT team is collecting data to feed into the algorithm by capturing facial emotions and labeling them. The challenge is that one person may think someone looks calm, scared, or angry and another person may read it a different way. They are also testing self-reported emotions to reduce bias. The purpose of the RIOT facial detection algorithm is to measure what the person is actually feeling and how others perceive that the person is feeling. For example, how would a police officer read your face? How would a fellow protester see you? The team is developing the algorithm with the specific bias that is needed for the narrative itself. The process will be documented in a peer-reviewed research paper that considers these issues from the angle of state control of citizens. Other angles to explore would be how algorithms and biometrics are used by societies of control and/or by non-state actors such as militias in the Middle East or by right-wing and/or white supremacist groups in the US. (See this article on facial recognition tools being used to identify sexual orientation.)
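One way to quantify the labeling problem the team describes (one observer reads a face as calm, another as angry, and the person may self-report something else entirely) is a simple agreement rate over labeled samples. A hedged sketch with invented labels:

```python
# Compare observer-assigned emotion labels against self-reported ones
# to surface labeling bias. All labels invented for illustration.
from collections import Counter

self_reported = ["calm", "scared", "angry", "calm",   "scared"]
observer      = ["calm", "angry",  "angry", "scared", "scared"]

pairs = list(zip(self_reported, observer))
agreement = sum(s == o for s, o in pairs) / len(pairs)
confusions = Counter((s, o) for s, o in pairs if s != o)

print(f"agreement: {agreement:.0%}")  # 60%
print(confusions)  # e.g. self-reported 'scared' read as 'angry'
```

Low agreement on particular pairs (say, ‘scared’ consistently read as ‘angry’) is precisely the kind of perception gap the RIOT narrative is designed to expose.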

Stay tuned to hear more…. We’ll be meeting again in the new year to go more in-depth on topics such as responsibly guiding people through VR experiences; exploring potential unintended consequences of these technologies and experiences, especially for certain racial groups; commercial applications for sensory storytelling and elements of scale; global applications of these technologies; practical development and testing of algorithms; prototyping, ideation and foundational knowledge for algorithm development.

Garry Haywood of Kinicho also wrote up his thoughts from the day.


On November 14, Technology Salon NYC met to discuss issues related to the role of film and video in development and humanitarian work. Our lead discussants were Ambika Samarthya from Praekelt.org; Lina Srivastava of CIEL; and Rebekah Stutzman from Digital Green’s DC office.

How does film support aid and development work?

Lina proposed that there are three main reasons for using video, film, and/or immersive media (such as virtual reality or augmented reality) in humanitarian and development work:

  • Raising awareness about an issue or a brand and serving as an entry point or a way to frame further actions.
  • Community-led discussion/participatory media, where people take agency and ownership and express themselves through media.
  • Catalyzing movements themselves, where film, video, and other visual arts are used to feed social movements.

Each of the above is aimed at a different audience. “Raising awareness” often only scratches the surface of an issue and can have limited impact if done on its own without additional actions. Community-led efforts tend to go deeper and focus on the learning and impact of the process (rather than the quality of the end product), but they usually reach fewer people (and thus have a higher cost per person and less scale). When using video for catalyzing movements, the goal is normally to bring people into a longer-term advocacy effort.

In all three instances, there are issues with who controls access to tools, platforms, and distribution channels. Though social media has changed this to an extent, there are still gatekeepers who impact who gets to be involved and whose voice/whose story is highlighted, funders who determine which work happens, and algorithms that dictate who will see the end products.

Participants suggested additional ways that video and film are used, including:

  • Social-emotional learning, where video is shown and then discussed to expand on new ideas and habits or to encourage behavior change.
  • Personal transformation through engaging with video.

Becky shared Digital Green’s approach, which is participatory and supports community members to use video to help themselves and those around them. Community members film videos about their agricultural practices, and these are then taken to nearby communities to share and discuss. (More on Digital Green here.) Video doesn’t solve anyone’s development problem all by itself, Becky emphasized. If an agricultural extensionist is no good, having a video as part of their training materials won’t solve that. “If they have a top-down attitude, don’t engage, don’t answer questions, etc., or if people are not open to changing practices, video or no video, it won’t work.”

How can we improve impact measurement?

Questions arose from Salon participants around how to measure the impact of film in a project or wider effort. Overall, impact measurement in the world of film for development is weak, noted one discussant, because change takes a long time and is hard to track. We are often encouraged to focus on the wrong things, like “vanity measurements” such as “likes” and “clicks,” but these don’t speak to the longer-term and deeper impact of a film, and they are often inappropriate in terms of who the audience for the actual films is (e.g., are we interested in the impact on the local audience affected by the problem, or on the external audience being encouraged to care about it?).

Digital Green measures behavior change based on uptake of new agriculture practices. “After the agriculture extension worker shows a video to a group, they collect data on everyone that’s there. They record the questions that people ask, the feedback about why they can’t implement a particular practice, and in that way they know who is interested in trying a new practice.” The organization sets indicators for implementing the practice. “The extension worker returns to the community to see if the family has implemented a, b, c and if not, we try to find out why. So we have iterative improvement based on feedback from the video.” The organization does post their videos on YouTube but doesn’t know if the content there is having an impact. “We don’t even try to follow it up as we feel online video is much less relevant to our audience.” An organization that is working with social-emotional learning suggested that RCTs could be done to measure which videos are more effective. Others who work on a more individual or artistic level said that the immediate feedback and reactions from viewers were a way to gauge impact.

Donors often have different understandings of useful metrics. “What is a valuable metric? How can we gather it? How much do you want us to spend gathering it?” commented one person. Larger, longer-term partners who are not one-off donors will have a better sense of how to measure impact in reasonable ways. One person who formerly worked at a large public television station noted that it was common to have long conversations about measurement, goals, and aligning to the mission. “But we didn’t go by numbers, we focused on qualitative measurement.” She highlighted the importance of having these conversations with donors and asking them “why are you partnering with us?” Being able to say no to donors is important, she said. “If you are not sharing goals and objectives, you shouldn’t be working together. Is gathering these stories a benefit to the community? If you can’t communicate your actual intent, it’s very complicated.”

The goal of participatory video is less about engaging external (international) audiences or branding and advocacy. Rather it focuses on building skills and capacities through the process of video making. Here, the impact measurement is more related to individual, and often self-reported, skills such as confidence, finding your voice, public speaking, teamwork, leadership skills, critical thinking and media literacy. The quality of video production in these cases may be low, and videos unsuitable for widespread circulation, however the process and product can be catalysts for local-level change and locally-led advocacy on themes and topics that are important to the video-makers.

Participatory video suffers from low funding levels because it doesn’t reach the kind of scale that is desired by funders, though it can often contribute to deep, personal and community-level change. Some felt that even if community-created videos were of high production quality and translated to many languages, large-scale distribution is not always feasible because they are developed in and speak to/for hyper-local contexts, thus their relevance can be limited to smaller geographic areas. Expectation management with donors can go a long way towards shifting perspectives and understanding of what constitutes “impact.”

Should we re-think compensation?

Ambika noted that there are often challenges related to incentives and compensation when filming with communities for organizational purposes (such as branding or fundraising). Organizations are usually willing to pay people for their time in places such as New York City, and less inclined to do so when working with a rural community that is perceived to benefit from an organization’s services and projects. Perceptions by community members that a filmmaker is financially benefiting from video work can be hard to overcome, and this means that conflict may arise during non-profit filmmaking aimed at fundraising or building a brand. Even when individuals and communities are aware that they will not be compensated directly, there is still often some type of financial expectation, noted one Salon participant, such as the purchase of local goods and products.

Working closely with gatekeepers and community leaders can help to ease these tensions. When filmmaking takes several hours or days, however, participants may be visibly stressed or concerned about household or economic chores that are falling to the side during filming, and this can be challenging to navigate, noted one media professional. Filming in virtual reality can exacerbate this problem, since VR filming is normally over-programmed and repetitive in an effort to appear realistic.

One person suggested a change in how we approach incentives. “We spent about two years in a community filming a documentary about migration. This was part of a longer research project. We were not able to compensate the community, but we were able to invest directly in some of the local businesses and to raise funds for some community projects.” It’s difficult to understand why we would not compensate people for their time and their stories, she said. “This is basically their intellectual property, and we’re stealing it. We need a sector rethink.” Another person agreed, “in the US everyone gets paid and we have rules and standards for how that happens. We should be developing these for our work elsewhere.”

Participatory video tends to have less of a challenge with compensation. “People see the videos, the videos are for their neighbors. They are sharing good agricultural or nutrition approaches with people that they already know. They sometimes love being in the videos and that is partly its own reward. Helping people around them is also an incentive,” said one person.

There were several other rabbit holes to explore in relation to film and development, so look for more Salons in 2018!

To close out the year right, join us for ICT4Drinks on December 14th at Flatiron Hall from 7-9pm. If you’re signed up for Technology Salon emails, you’ll find the invitation in your inbox!

Salons run under Chatham House Rule so no attribution has been made in this post. If you’d like to attend a future Salon discussion, join the list at Technology Salon.

 

Read Full Post »

For our Tuesday, July 27th Salon, we discussed partnerships and interoperability in global health systems. The room held a wide range of perspectives: small and large non-governmental organizations, donors and funders, software developers, designers, healthcare professionals and students. Our lead discussants were Josh Nesbit, CEO at Medic Mobile; Jonathan McKay, Global Head of Partnerships and Director of the US Office of Praekelt.org; and Tiffany Lentz, Managing Director, Office of Social Change Initiatives at ThoughtWorks.

We started by hearing from our discussants on why they had decided to tackle issues in the area of health. Reasons were primarily because health systems were excluding people from care and organizations wanted to find a way to make healthcare inclusive. As one discussant put it, “utilitarianism has infected global health. A lack of moral imagination is the top problem we’re facing.”

Other challenges include requests for small-scale pilots and customization/bespoke applications, lack of funding, extensive requirements for grant applications, and a disconnect between what is needed on the ground and what donors want to fund. “The amount of documentation to get a grant is ridiculous, and then the system that is requested to be built is not even the system that needs to be made,” commented one person. Another challenge is that everyone is under constant pressure to demonstrate that they are being innovative. [Sidenote: I’m reminded of this post from 2010….] “They want things that are not necessarily in the best interest of the project, but that are seen to be innovations. Funders are often dragged along by that,” noted another person.

The conversation most often touched on the unfulfilled potential of having a working ecosystem and a common infrastructure for health data as well as the problems and challenges that will most probably arise when trying to develop these.

“There are so many uncoordinated pilot projects in different districts, all doing different things,” said one person. “Governments are doing what they can, but they don’t have the funds,” added another, “and that’s why there are so many small pilots happening everywhere.” One company noted that it had started developing a platform for SMS but abandoned it in favor of working with an existing platform instead. “Can we create standards and protocols to tie some of this work together? There isn’t a common infrastructure that we can build on,” was the complaint. “We seem to always start from scratch. I hope donors and organizations get smart about applying pressure in the right areas. We need an infrastructure that allows us to build on it and do the work!” On the other hand, someone warned of the risks of pushing everyone to “jump on a mediocre software or platform just because we are told to by a large agency or donor.”

The benefits of collaboration and partnership are apparent: increased access to important information, more cooperation, less duplication, the ability to build on existing knowledge, and so on. However, though desirable, partnerships and interoperability are not easy to establish. “Is it too early for meaningful partnerships in mobile health? I was wondering if I could say that…” said one person. “I’m not even sure I’m actually comfortable saying it…. But if you’re providing essential basic services, collecting sensitive medical data from patients, there should be some kind of infrastructure apart from private sector services, shouldn’t there?” The question is who should own this type of mediator platform: governments? MNOs?

Beyond this, there are several issues related to control and ownership. Who would own the data? Is there a way to get to a point where the data would be owned by the patients and demonetized? If the common system is run by the private sector, there should be protections surrounding the patients’ sensitive information. Perhaps this should be a government-run system. Should it be open source?

Open source has its own challenges. “Well… yes. We’ve practiced ‘hopensource’,” said one person (to widespread chuckles).

Another explained that the way we’ve designed information systems has held back shifts in health systems. “When we’re comparing notes and how we are designing products, we need to be out ahead of the health systems and financing shifts. We need to focus on people-centered care. We need to gather information about a person over time and place. About the teams who are caring for them. Many governments we’re working with are powerless and moneyless. But even small organizations can do something. When we show up and treat a government as a systems owner that is responsible to deliver health care to their citizens, then we start to think about them as a partner, and they begin to think about how they could support their health systems.”

One potential model is to design a platform or system such that it can eventually be handed off to a government. This, of course, isn’t a simple idea in execution. Governments can be limited by their internal expertise. The personnel that a government has at the time of the handoff won’t necessarily be there years or months later. So while the handoff itself may be successful in the short term, there’s no firm guarantee that the system will be continually operational in the future. Additionally, governments may not be equipped with the knowledge to make the best decisions about software systems they purchase. Governments’ negotiating capacity must be expanded if they are to successfully run an interoperable system. “But if we can bring in a snazzy system that’s already interoperable, it may be more successful,” said one person.

Having a common data infrastructure is crucial. However, we must also spend some time thinking about what the data itself should look like. Can it be standardized? How can we ensure that it is legible to anyone with access to it?

These are only some of the relevant political issues, and at a more material level, one cannot ignore the technical challenges of maintaining a national scale system. For example, “just getting a successful outbound dialing rate is hard!” said one person. “If you are running servers in Nigeria it just won’t always be up! I think human centered design is important. But there is also a huge problem simply with making these things work at scale. The hardcore technical challenges are real. We can help governments to filter through some of the potential options. Like, can a system demonstrate that it can really operate at massive scale?” Another person highlighted that “it’s often non-profits who are helping to strengthen the capacity of governments to make better decisions. They don’t have money for large-scale systems and often don’t know how to judge what’s good or to be a strong negotiator. They are really in a bind.”

This is not to mention that “the computers have plastic over them half the time. Electricity, computers, literacy, there are all these issues. And the TelCo infrastructure! We have layers of capacity gaps to address,” said one person.

There are also donors to consider. They may come into a project with unrealistic expectations of what is normal and what can be accomplished, and there is a delicate balance to be struck between inspiring donors to take up the project and managing expectations so that they are not disappointed. One strategy is to “start hopeful and steadily temper expectations.” This is true with other kinds of partnerships as well. “Building trust with organizations so that when things do go bad, you can try to manage it is crucial. Often it seems like you don’t want to be too real in the first conversation. I think, ‘if I lay this on them at the start it can be too real and feel overwhelming…’” Others recommended setting expectations about how everyone is performing together: “It’s more like, ‘together we are going to be looking at this, and we’ll be seeing together how we are going to work and perform together.’”

Creating an interoperable data system is costly and time-consuming, oftentimes more so than donors and other stakeholders imagine, but there are real benefits. Any step in the direction of interoperability must deal with challenges like those considered in this discussion. Problems abound. Solutions will be harder to come by, but not impossible.

So, what would practitioners like to see? “I would like to see one country that provides an incredible case study showing what good partnership and collaboration looks like with different partners working at different levels and having a massive impact and improved outcomes. Maybe in Uganda,” said one person. “I hope we see more of us rally around supporting and helping governments to be the system owners. We could focus on a metric or shared cause – I hope in the near future we have a view into the equity measure and not just the vast numbers. I’d love to see us use health equity as the rallying point,” added another. From a different angle, one person felt that “from a for-profit, we could see it differently. We could take on a country, a clinic or something as our own project. What if we could sponsor a government’s health care system?”

A participant summed the Salon up nicely: “I’d like to make a flip-side comment. I want to express gratitude to all the folks here as discussants. This is one of the most unforgiving and difficult environments to work in. It’s SO difficult. You have to be an organization superhero. We’re among peers and it feels normal to talk about challenges, but you’re really all contributing so much!”

Salons are run under Chatham House Rule so no attribution has been made in this post. If you’d like to attend a future Salon discussion, join the list at Technology Salon.

 

Read Full Post »

Our latest Technology Salon, at the African Evaluation Association (AfrEA) Conference in Uganda on March 29th, focused on how mobile and social media platforms are being used in monitoring and evaluation processes. Our lead discussants were Jamie Arkin from Human Network International (soon to be merging with VotoMobile) who spoke about interactive voice response (IVR); John Njovu, an independent consultant working with the Ministry of National Development Planning of the Zambian government, who shared experiences with technology tools for citizen feedback to monitor budgets and support transparency and accountability; and Noel Verrinder from Genesis who talked about using WhatsApp in a youth financial education program.

Using IVR for surveys

Jamie shared how HNI deploys IVR surveys to obtain information about different initiatives or interventions from a wide public, or to understand the public’s beliefs about a particular topic. These surveys come in three formats: random dialing of telephone numbers until someone picks up; asking people to call in, for example, on a radio show; or using an existing list of phone numbers. “If there is an 80% phone penetration or higher, it is equal to a normal household level survey,” she said. The organization has lists of thousands of phone numbers and can segment these to create a sample. “IVR really amplifies people’s voices. We record in local language. We can ask whether the respondent is a man or a woman. People use their keypads to reply or we can record their voices providing an open response to the question.” The voice responses are later digitized into text for analysis. In order to avoid too many free voice responses, the HNI system can cut the recording off after 30 seconds or limit voice responses to the first 100 calls. Often keypad responses are most effective, as people are not used to leaving voice mails.

IVR is useful in areas where there is low literacy. “In Rwanda, 80% of women cannot read a full sentence, so SMS is not a silver bullet,” Jamie noted. “Smartphones are coming, and people want them, but 95% of people in Uganda have a simple feature phone, so we cannot reach them by Facebook or WhatsApp. If you are going with those tools, you will only reach the wealthiest 5% of the population.”

In order to reduce response bias, the survey question order can be randomized. Response rates tend to be ten times higher on IVR than on SMS surveys, Jamie said, in part because IVR is cheaper for respondents. The HNI system can provide auto-analysis for certain categories such as most popular response. CSV files can also be exported for further analysis. Additionally, the system tracks length of session, language, time of day and other metadata about the survey exercise.
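To make these mechanics concrete, here is a minimal sketch in Python of how question-order randomization and CSV export might work. This is an illustration only, not HNI’s actual system; the question wording, field names and the `get_keypress` helper are all hypothetical.

```python
import csv
import random
from datetime import datetime

# Hypothetical questions; in a real IVR system each would map to a
# pre-recorded audio prompt in the local language.
QUESTIONS = [
    ("q_treat_water", "Press 1 if your household treats drinking water, 2 if not."),
    ("q_heard_show", "Press 1 if you heard our radio program this week, 2 if not."),
]

def run_survey(phone_number, get_keypress):
    """Ask the questions in a random order to reduce order bias,
    capturing one keypad digit per question."""
    record = {"phone": phone_number, "started": datetime.utcnow().isoformat()}
    for key, prompt in random.sample(QUESTIONS, k=len(QUESTIONS)):
        record[key] = get_keypress(prompt)  # e.g. '1', '2', or None on hang-up
    record["ended"] = datetime.utcnow().isoformat()
    return record

def export_csv(records, path="ivr_responses.csv"):
    """Write completed sessions out to CSV for further analysis."""
    fields = ["phone", "started", "ended"] + [key for key, _ in QUESTIONS]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows(records)
```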

In most countries, the regulatory and privacy implications of IVR are unclear, and currently there are few legal restrictions against calling people for surveys. “There are opt-outs for SMS but not for IVR; if you don’t want to participate you just hang up.” In some cases, however, like Rwanda, certain numbers are on “do not disturb” lists and these need to be avoided, she said.

Citizen-led budget monitoring through Facebook

John shared results of a program where citizens were encouraged to visit government infrastructure projects to track whether budget allocations had been properly spent. Citizens would visit a health center or a school to inquire about these projects and then fill out a form on Facebook to share their findings. A first issue with the project was that voters were interested in the availability and quality of service delivery, not in budget spending. “I might ask what money you got, did you buy what you said, was it delivered and is it here. Yes. Fine. But the bigger question is: Are you using it? The clinic is supposed to have 1 doctor, 3 nurses and 3 lab technicians. Are they all there? Yes. But are they doing their jobs? How are they treating patients?”

Quantity and budget spend were being captured, but quality of service was not addressed, which was problematic. Another challenge with the program was that people did not have a good sense of what a dollar can buy, so it was difficult for them to assess whether the budget had been spent appropriately. Additionally, in Zambia it is not customary for citizens to question elected officials. The idea that the government owes the people something, or that citizens can walk into a government office to ask questions about the budget, is not a traditional one. “So people were not confident in asking questions or pushing government for a response.”

The addition of technology to the program did not resolve any of these underlying issues, and on top of this, there was an apparent mismatch with the idea of using mobile phones to conduct feedback. “In Zambia it was said that everyone has a phone, so that’s why we thought we’d put in mobiles. But the thing is that the number of SIMs doesn’t equal the number of phone owners. The modern woman may have a good phone or two, but as you go down to people in the compound they don’t have even basic types of phones. In rural areas it’s even worse,” said John, “so this assumption was incorrect.” When the program began running in Zambia, there was surprise that no one was reporting. It was then realized that the actual mobile ownership statistics were not so clear.

Additionally, in Zambia only 11% of women can read a full sentence, and so there are massive literacy issues. And language is also an issue. In this case, it was assumed that Zambians all speak English, but often English is quite limited among rural populations. “You have accountability language that is related to budget tracking and people don’t understand it. Unless you are really out there working directly with people you will miss all of this.”

As a result of the evaluation of the program, the Government of Zambia is rethinking ways to assess the quality of services rather than the quantity of items delivered according to budget.

Gathering qualitative input through WhatsApp 

Genesis’ approach to incorporating WhatsApp into their monitoring and evaluation was more emergent. “We didn’t plan for it, it just happened,” said Noel Verrinder. Genesis was running a program to support technical and vocational training colleges in peri-urban and rural areas in the Northwest part of South Africa. The young people in the program are “impoverished in our context, but they have smartphones, WhatsApp and Facebook.”

Genesis had set up a WhatsApp account to communicate about program logistics, but it morphed into a space for the trainers to provide other kinds of information and respond to questions. “We started to see patterns, and we could track how engaged the different youth were based on how often they engaged on WhatsApp.” In addition to the content itself, it was possible to gain insights into which participants were more engaged based on the timing and frequency of their responses on WhatsApp.
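As a rough illustration of this kind of engagement tracking, the sketch below counts messages per participant in an exported WhatsApp group chat. The line format assumed in the regular expression is just one common variant (WhatsApp’s export format differs by locale and app version), and this is not necessarily how Genesis did it.

```python
import re
from collections import Counter

# Assumed export line format, e.g.:
# "12/31/17, 9:45 PM - Thandi: Here is my diary photo"
# WhatsApp's actual format varies by locale and app version.
LINE = re.compile(r"^\d{1,2}/\d{1,2}/\d{2,4}, [^-]+ - ([^:]+): ")

def engagement_counts(export_path):
    """Count messages per participant in an exported chat,
    skipping system messages and multi-line continuations."""
    counts = Counter()
    with open(export_path, encoding="utf-8") as f:
        for line in f:
            match = LINE.match(line)
            if match:
                counts[match.group(1)] += 1
    return counts

# Example: list participants from most to least active.
for name, n in engagement_counts("group_chat.txt").most_common():
    print(f"{name}: {n} messages")
```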

Genesis had asked the youth to create diaries about their experiences, and eventually asked them to photograph their diaries and submit them by WhatsApp, given that it made for much easier logistics as compared to driving around to various neighborhoods to track down the diaries. “We could just ask them to provide us with all of their feedback by WhatsApp, actually, and dispense with the diaries at some point,” noted Noel.

In future, Genesis plans to incorporate WhatsApp into its monitoring efforts in a more formal way and to consider some of the privacy and consent aspects of using the application for M&E. One challenge with using WhatsApp is that the type of language used in texting is short and less expressive, so the organization will have to figure out how to understand emoticons. Additionally, it will need to ask for consent from program participants so that WhatsApp engagement can be ethically used for M&E purposes.

Read Full Post »

Our Tech Salon on Thursday March 9th focused on the potential of Microwork to support youth economic empowerment. Joining us as lead discussants were Lis Meyers, Banyan Global; Saul Miller, Samasource; and Elena Matsui, The Rockefeller Foundation. Banyan Global recently completed a report on “The Nexus of Microwork and Impact Sourcing: Implications for Youth Employment,” supported by the Global Center for Youth Employment and RTI, who also sponsored this Salon. (Disclosure: I worked on the report with the team at Banyan)

Definitions: To frame the discussion, we provided some core definitions and an explanation of the premise of microwork and its role within impact sourcing.

  • Business Process Outsourcing (BPO): the practice of reducing business costs by transferring portions of work to outside suppliers rather than completing it internally.
  • Online Outsourcing: contracting a third-party provider (often in a different country) to supply products or services that are delivered and paid for via the Internet. The third party is normally an individual (e-lancing), an online community (crowdsourcing) or a firm.
  • Microwork: a segment of online outsourcing where projects or complex tasks are broken into simple tasks that can be completed in seconds or minutes. Workers require numeracy, advanced literacy, and an understanding of internet and computer technology, and are usually paid small amounts of money for each completed task.
  • Impact sourcing (also known as socially responsible outsourcing): a business practice in which companies outsource to suppliers that employ individuals from the lowest economic segments of the population.

The premise: It is believed that if microwork is done within an impact sourcing framework, it has the potential to create jobs for disadvantaged youth and disconnected, vulnerable populations and to provide them with income opportunities to support themselves and their families. Proponents of microwork believe it can equip workers with skills and experience that can enable them to enhance their employability regardless of gender, age, socio-economic status, previous levels of employment, or physical ability. Microwork is not always intentionally aimed at vulnerable populations, however. It is only when impact sourcing is adopted as the business strategy that microwork directly benefits the most disadvantaged.

The ecosystem: The microwork industry includes a variety of stakeholders, including: clients (looking to outsource work), service providers (who facilitate the outsourcing by liaising with these clients, breaking tasks down into micro tasks, employing and managing micro workers, and providing overall management and quality control), workers (individual freelancers, groups of people, direct employees, or contractors working through a service provider on assigned micro tasks), donors/investors, government, and communities.

Models of Microwork: The report identifies three main models for microwork: micro-distribution (e.g., Amazon Mechanical Turk or CrowdFlower); the direct model (e.g., Digital Divide Data or iMerit); and the indirect model (e.g., Samasource or Rural Shores).

 

Implementer Case Study. With the framework settled, we moved on to hear from our first discussant, from Samasource, who provided the “implementer” point of view. Samasource has been operating since 2008. Their goal is to connect marginalized women and/or youth with dignified work through the Internet. The organization sees itself as an intermediary or a bridge, and believes that work offers the best solution to the complex problem of poverty. The organization works through 3 key programs: SamaSchools, Microwork and SamaHub. At the Samaschool, potential micro workers are trained on the end-to-end process.

The organization puts potential micro workers through an assessment process (former employment history, level of education, context) to predict and select which of the potential workers will offer the highest impact. Most of Samasource’s workers were underemployed or unemployed before coming to Samasource. At Samaschool they learn digital literacy, soft skills, and the technical skills that will enable them to succeed on the job and build their resumes. Research indicates that after 4 years with Samasource, these workers show a 4-fold increase in income.

The organization has evolved over the past couple of years, opening its own delivery center in Nairobi with 650 agents (micro workers). They will also launch in Mumbai, as they’ve learned that the hands-on delivery center approach works well. Samasource considers that their model (as opposed to the micro-distribution model) offers more control over recruitment and training, quality control, worker preparation, and feedback loops to help workers improve their own performance. This model also offers workers wrap-around programs and benefits like full-time employment with financial literacy training, mentorship, pensions and healthcare.

In closing, it was highlighted that impact measurement has been a top priority for Samasource. The organization was recently audited, receiving 8 out of 9 stars for quality of impact, evidence and M&E systems. Pending is an RCT that will aim to address the counterfactual (what would happen if Samasource were not operating here?). The organization is experiencing substantial growth, doubling its revenue last year and projecting to grow another 50%. It achieved financial sustainability for the first time in the last quarter of 2016. Growth in the industries that require data processing and cleaning, and the expansion of AI, have driven this growth.

Questions on sustainability. One participant asked why the organization took 8 years to become sustainable. Samasource explained that they had been heavily subsidized by donors, and part of the journey has been to reduce subsidies and increase paid clients. A challenge is keeping costs down and competing with other service providers while still offering workers dignified work. As one of our other discussants noted, this is a point of contention with some local service providers who are less well-known to donors. Because they are not heavily subsidized, they have not been able to focus as much on the “impact” part.

For Digital Divide Data (DDD), who was also present at the Salon, the goal was not to get to profit quickly; rather, the initial objective was social. Now that the organization is maturing, it has begun thinking more about profitability and sustainability. It remains a non-profit organization, however.

Retention and scale. Both Samasource and DDD noted that workers are staying with them for longer periods of time (up to 4 years). This works well for individual employees (who then have stable work with benefits). It also works well for clients, because employees learn the work, meaning it will be of higher quality; and because the BPO industry has a lot of turnover, stable micro workers benefit the BPO as well. This, however, is less useful for achieving scale, because workers don’t move through the program quickly, opening up space for new recruits. For Samasource, the goal would be for workers to move on within 2 years. At DDD, workers complete university while working for DDD, so 4 years is the norm. Some stay for 6 years, which also impacts scaling potential. DDD is looking at a new option for workers to be credentialed and certified, potentially through a 6-month or 1-year program.

The client perspective. One perspective highlighted in the Banyan report is that of clients. Some loved microwork and impact sourcing; others said it was challenging. Many are interested in partnering with microwork service providers like iMerit and Daiprom because it offers more data security (you can sign an NDA with a service provider, whereas you can’t with individual workers who come in through micro-distribution and crowdsourcing). Working with a service provider also means that you have an entity that is responsible for quality control. Experiences with service providers have varied, however, and some providers had signed on to jobs that they were unprepared to train workers for, which resulted in missed deadlines and poor quality work. Clients were clear that their top priority was business – they cared first about quality, cost, and timeliness. “Impact was the cherry on top,” as one discussant noted.

The worker perspective. An aspect missing from the study and the research is that of worker experiences. (As Banyan noted, this would require additional resources for a proper in-depth study.) Do workers really seek career growth? Or are they simply looking for something flexible that can help them generate some income in a pinch or supplement their incomes during hard times? In Venezuela, for example, the number of micro workers on CrowdFlower has jumped astronomically during the current political and economic crisis, demonstrating that these types of platforms may serve as supplemental income for those in the most desperate situations. What is the difference in what different workers need?

One small study of micro workers in Kenya noted that when trying to work on their own through the micro-distribution model, they faced major challenges: they were not able to collect electronic payments; they got shut out of the system because several youth were using the same IP address and it was flagged as fraud; language and time zones affected the work that was available to them; some companies only wanted workers from certain countries whom they trusted or felt could align culturally; and young women were wary of scams and sexual harassment if accessing work online, as this had been their experience with work offline. Some participants wondered what the career path was for a micro worker. Did they go back to school? Did they move ahead to a higher-level, higher-paying job? Samasource and DDD have some evidence that micro workers in their programs do go on to more dignified, higher-paying, more formal jobs; however, much of this is due to the wraparound programming that they offer.

The role of government was questioned by Salon participants. Is there a perfect blend of private sector, government and an impact sourcing intermediary? Should government be using micro workers and purposefully thinking about impact sourcing? Could government help to scale microwork and impact sourcing? To date the role of government has been small, noted one discussant. Others wondered if there would be touch points through existing government employment or vocational programs, but it was pointed out that most of the current micro workers are those that have already fallen through the cracks on education and vocational training programming.

A participant outlined her previous experience with a local municipality in India that wanted to create local employment. The contracting process excluded impact sourcing providers for inexplicable reasons: there were restrictions such as having been in operation for at least 3 years, having a certain minimum level of turnover, a set number of employees in the system, etc. “So while the government talked about work that needed to be digitized and wanted rural employees, and we went on a three-year journey with them to make it inclusive of impact sourcers, it didn’t really work.”

What about social safeguards? One Salon participant raised concerns about the social services and legal protections in place for micro workers. In the absence of regulations, are these issues being swept under the carpet, she wondered. Another noted that minimum standards would be a positive development, but that this will be a long process, as currently there is not even a standard definition of impact sourcing, and it’s unclear what is meant by ‘impact’ and how it’s measured.

This is one area where government could and should play a role. In the past, for example, government has pushed procurement from women-owned or minority owned businesses. Something similar could happen with impact sourcing, but we need standards in order for it to happen. Not all clients who use micro workers are doing it within a framework of impact sourcing and social impact goals. For example, some clients said they were doing “impact sourcing” simply because they were sourcing work from a developing country. In reality, they were simply working with a normal BPO, and so the risk of “impact washing” is real.

Perhaps, noted another participant, the focus should be on drumming up quality clients who actually want to have an impact. “A mandated standard will mean that you lose the private sector.” Some suggested there could be some type of ‘certified organic’ or ‘good housekeeping’ seal of approval from a respected entity. Some felt that businesses were not interested and government would never move something like this forward. Others disagreed, saying that some large corporations really want to be perceived as ethical players.

Definitions proved a major challenge – for example at what point does an ‘impact worker’ cease being an impact worker and how do you count them? Should someone be labeled for life as an impact worker? There was disagreement in the room on this point.

A race to the bottom? Some wondered if microwork was just a re-hashing of the ‘gig economy’ debate. Would it drive down prices and create extremely unstable work for the most disadvantaged populations? Were there ways that workers could organize if they were working via the micro-distribution model, didn’t even know where to find each other, and were set up by the system to bid against each other? It was noted that one platform had been identified that aimed to support workers on Amazon Mechanical Turk, and that workers there helped each other with tips on how to get contracts. However, as with Uber and other gig economy players, it appeared that all the costs of learning and training were being pushed onto the workers themselves.

Working through the direct or indirect models can help to protect individual workers in this respect, as Samasource, for example, offers workers contracts and benefits and has a termination policy. The organization is also in a position to negotiate contracts that may be more beneficial to workers, such as extending a 3-week contract with lots of workers over a longer period of time with fewer workers so that income is steadier. Additionally, evaluations have shown that these jobs are pulling in workers who have never had formal jobs before, and that there is an increase in income over time for Samasource workers.

What can donors do? Our third discussant noted that the research is mixed in terms of how different kinds of microwork without any intermediary or wraparound services can actually build a career pathway. Some who are active in the space are still working hard to identify the right partnerships and build support for impact sourcing. It has been difficult to find a “best of breed” or a “gold standard” to date as the work is still evolving. “We’re interested in learning from others what partners need from donors to help scale the work that is effective.” It’s been difficult to evaluate, as she noted, because there has been quite a lot of secrecy involved, as often people do not want to share what is working for fear of losing the competitive edge.

What does the future hold? One Salon participant felt that something very bold was required, given how rapidly economies and technologies are changing. Some of the current microwork will be automated in the near future, he said. The window is closing quickly. Others disagreed, saying that the change in technology was opening up new growth in the sector and that some major players were even delaying their projections because of these rapid shifts and changes in robotics and automation. The BPO sector is fickle and moves quickly – for example, voice work has shifted rapidly from India to the Philippines. Samasource felt that human components were still required to supplement and train AI, and DDD noted that their workers are actually training machines to take over their current jobs. It was also noted that most of the current micro workers are digital natives, and a career in data entry is not highly enticing. “We need to find something that helps them feel connected to the global economy. We need to keep focused on relevant skills. The data stuff has a timestamp and it’s on its way out.” DDD is working with universities to bring in courses focused on some of the new and emerging skill sets that will be needed.

Conclusions. In short, plenty of critical questions remain in the area of microwork, impact sourcing, and the broader question of the future of youth employment at the global level. How can we stay abreast of the rapid changes in economy, business, and technology? What skill sets are needed? A recent article in India’s Business Standard notes constant efforts at re-skilling IT workers. These questions face not only ‘developing countries’; the US is in a similar crisis. Will online work with no wraparound services be a stopgap solution? Will holistic models be pushed so that young people develop additional life skills that will help them in the longer term? Will we learn how to measure and understand the ‘impact’ in ‘impact sourcing’? Much remains to explore and test!

Thanks to the Global Center for Youth Employment and RTI for supporting this Salon, to our lead discussants and participants, and to ThoughtWorks for hosting us! If you’d like to join us for a future Technology Salon, sign up here!

 

Read Full Post »

At our April 5th Salon in Washington, DC we had the opportunity to take a closer look at open data and privacy and discuss the intersection of the two in the framework of ‘responsible data’. Our lead discussants were Amy O’Donnell, Oxfam GB; Rob Baker, World Bank; Sean McDonald, FrontlineSMS. I had the pleasure of guest moderating.

What is Responsible Data?

We started out by defining ‘responsible data‘ and some of the challenges when thinking about open data in a framework of responsible data.

The Engine Room defines ‘responsible data’ as

the duty to ensure people’s rights to consent, privacy, security and ownership around the information processes of collection, analysis, storage, presentation and reuse of data, while respecting the values of transparency and openness.

Responsible Data can be like walking a tightrope, noted our first discussant, and you need to find the right balance between opening data and sharing it, all the while being ethical and responsible. “Data is inherently related to power – it can create power, redistribute it, make the powerful more powerful or further marginalize the marginalized. Getting the right balance involves asking some key questions throughout the data lifecycle, from design of the data gathering all the way through to disposal of the data.”

How can organizations be more responsible?

If an organization wants to be responsible about data throughout the data life cycle, some questions to ask include:

  • In whose interest is it to collect the data? Is it extractive or empowering? Is there informed consent?
  • What and how much do you really need to know? Is the burden of collecting and the liability of storing the data worth it when balanced with the data’s ability to represent people and allow them to be counted and served? Do we know what we’ll actually be doing with the data?
  • How will the data be collected and treated? What are the new opportunities and risks of collecting and storing and using it?
  • Why are you collecting it in the first place? What will it be used for? Will it be shared or opened? Is there a data sharing MOU and has the right kind of consent been secured? Who are we opening the data for and who will be able to access and use it?
  • What is the sensitivity of the data and what needs to be stripped out in order to protect those who provided the data?

Oxfam has developed a data deposit framework to help assess the above questions and make decisions about when and whether data can be open or shared.

(The Engine Room’s Responsible Development Data handbook offers additional guidelines and things to consider)

(See: https://wiki.responsibledata.io/Data_in_the_project_lifecycle for more about the data lifecycle)

Is ‘responsible open data’ an oxymoron?

Responsible Data policies and practices don’t work against open data, our discussant noted. Responsible Data is about developing a framework so that data can be opened and used safely. It’s about respecting the time and privacy of those who have provided us with data and reducing the risk of that data being hacked. As more data is collected digitally and donors are beginning to require organizations to hand over data that has been collected with their funding, it’s critical to have practical resources and help staff to be more responsible about data.

Some disagreed that consent could be truly informed and that open data could ever be responsible, since once data is open, all control over the data is lost. “If you can’t control the way the data is used, you can’t have informed people. It’s like saying ‘you gave us permission to open your data, so if something bad happens to you, oh well….’” Informed consent is also difficult nowadays because data sets are being used together and in ways that were not possible when informed consent was initially obtained.

Others noted that standard informed consent practices are unhelpful, as people don’t understand what might be done with their data, especially when they have low data literacy. Involving local communities and individuals in defining what data they would like to have and use could make the process more manageable and useful for those whose data we are collecting, using and storing, they suggested.

One person said that if consent to open data was not secured initially, the data cannot be opened, say, 10 years later. Another felt that it was one thing to open data for a purpose and something entirely different to say “we’re going to open your data so people can do fun things with it, to play around with it.”

But just what data are we talking about?

USAID was questioned for requiring grantees to share data sets and for leaning towards de-identification rather than raising the standard to data anonymity. One person noted that at one point the agency had proposed a 22-step process for releasing data and even that was insufficient for protecting program participants in a risky geography because “it’s very easy to figure out who in a small community recently received 8 camels.” For this reason, exclusions are an important part of open data processes, he said.
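The “8 camels” example is essentially a point about k-anonymity: if a combination of released attributes is shared by only one person, de-identification alone does not protect them. Below is a minimal, hypothetical sketch of checking this before release; the records, quasi-identifier fields, and release threshold are all illustrative assumptions, not anyone’s actual process.

```python
from collections import Counter

def k_anonymity(rows, quasi_identifiers):
    """Return the size of the smallest group sharing the same combination
    of quasi-identifier values; k=1 means at least one person in the
    dataset is uniquely identifiable from those fields alone."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(groups.values())

# Hypothetical aid-distribution records.
rows = [
    {"village": "A", "benefit": "8 camels", "gender": "m"},
    {"village": "A", "benefit": "seed kit", "gender": "f"},
    {"village": "A", "benefit": "seed kit", "gender": "f"},
]

k = k_anonymity(rows, ["village", "benefit"])
if k < 5:  # the release threshold is a policy choice, not a magic number
    print(f"k={k}: suppress or generalize these fields before opening the data")
```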

It’s not black or white, said another. Responsible open data is possible, but openness happens along a spectrum. You have financial data on the one end, which should be very open as the public has a right to know how its tax dollars are being spent. Human subjects research is on the other end, and it should not be totally open. (Author’s note: The Open Knowledge Foundation definition of open data says: “A key point is that when opening up data, the focus is on non-personal data, that is, data which does not contain information about specific individuals.” The distinction between personal data, such as that in household level surveys, and financial data on agency or government activities seems to be blurred or blurring in current debates around open data and privacy.) “Open data will blow up in your face if it’s not done responsibly,” he noted. “But some of the open data published via IATI (the International Aid Transparency Initiative) has led to change.”

A participant followed this comment up by sharing information from a research project conducted on stakeholders’ use of IATI data in 3 countries. When people knew that the open data sets existed they were very excited, she said. “These are countries where there is no Freedom of Information Act (FOIA), and where people cannot access data because no one will give it to them. They trusted the US Government’s data more than their own government data, and there was a huge demand for IATI data. People were very interested in who was getting what funding. They wanted information for planning, coordination, line ministries and other logistical purposes. So let’s not underestimate open data. If having open data sets means that governments, health agencies or humanitarian organizations can do a better job of serving people, that may make for a different kind of analysis or decision.”

‘Open by default’ or ‘open by demand’?

Though there are plenty of good intentions and rationales for open data, said one discussant, ‘open by default’ is a mistake. We may have quick wins with a reduction in duplication of data collection, but our experiences thus far do not merit ‘open by default’. We have not earned it. Instead, he felt that ‘open by demand’ is a better idea. “We can put out a public list of the data that’s available and see what demand for data comes in. If we are proactive on what is available and what can be made available, and we monitor requests, we can avoid putting out information that no one is interested in. This would lower the overhead on what we are releasing. It would also allow us to have a conversation about who needs this data and for what.”

One participant agreed, positing that often the only reason that we collect data is to provide proof and evidence that we’re doing our job, spending the money given to us, and tracking back. “We tend to think that the only way to provide this evidence is to collect data: do a survey, talk to people, look at website usage. But is anyone actually using this data, this evidence to make decisions?”

Is the open data honeymoon over?

“We need to do a better job of understanding the impact at a wider level,” said another participant, “and I think it’s pretty light. Talking about open data is too general. We need to be more service oriented and problem driven. The conversation is very different when you are using data to solve a particular problem and you can focus on something tangible like service delivery or efficiency. Open data is expensive and not sustainable in the current setup. We need to figure this out.”

Another person shared results from an informal study on the use of open data portals around the world. He found around 2,500 open data portals, and only 3.8% of them use https (the secure version of http). Most have very few visitors, possibly due to poor Internet access in the countries whose open data they are serving up, he said. Several exist in countries with a poor Freedom House ranking and/or in countries at the bottom end of the World Bank’s Digital Dividends report. “In other words, the portals have been built for people who can’t even use them. How responsible is this?” he asked, “And what is the purpose of putting all that data out there if people don’t have the means to access it and we continue to launch more and more portals? Where’s all this going?”
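For reference, the https check described here is simple to script. The sketch below is not the participant’s actual method, just one plausible way to reproduce the measurement; the `portal_hosts.txt` file of portal hostnames is a hypothetical input.

```python
import requests  # third-party library: pip install requests

def supports_https(host, timeout=10):
    """Return True if the host answers an HTTPS request with a valid
    certificate; connection or certificate errors count as no."""
    try:
        requests.head(f"https://{host}", timeout=timeout, allow_redirects=True)
        return True
    except requests.RequestException:
        return False

# Hypothetical input: one portal hostname per line.
with open("portal_hosts.txt") as f:
    hosts = [line.strip() for line in f if line.strip()]

https_count = sum(supports_https(h) for h in hosts)
print(f"{https_count}/{len(hosts)} portals reachable over https "
      f"({100 * https_count / len(hosts):.1f}%)")
```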

Are we conflating legal terms?

Legal frameworks around data ownership were debated. Some said that the data belonged to the person or agency that collected it or paid the cost of collecting it, in terms of copyright and IP. Others said that the data belonged to the individual who provided it. (Author’s note: Participants may have been referring to different categories of data, e.g., financial data from government vs. human subjects data.) The question was raised of whether informed consent for open data in the humanitarian space is basically a ‘contract of adhesion’ (a term for a legally binding agreement between two parties wherein one side has all the bargaining power and uses it to its advantage). Asking a person to hand over data in an emergency situation in order to enroll in a humanitarian aid program is akin to holding a gun to a person’s head in order to get them to sign a contract, said one person.

There’s a world of difference between ‘published data’ and ‘openly licensed data,’ commented our third discussant. “An open license is a complete lack of control, and you can’t be responsible with something you can’t control. There are ways to be responsible about the way you open something, but once it’s open, your responsibility has left the port.” ‘Use-based licensing’ is something else, and most IP is governed by how it’s used. For example, educational institutions get free access to data because they are educational institutions; others pay, and this subsidizes the educational use of the data, he explained.

One person suggested that we could move from the idea of ‘open data’ to sub-categories related to how accessible the data would be and to whom and for what purposes. “We could think about categories like: completely open, licensed, for a fee, free, closed except for specific uses, etc.; and we could also specify for whom, whose data and for what purposes. If we use the term ‘accessible’ rather than ‘open’ perhaps we can attach some restrictions to it,” she said.

Is data an asset or a liability?

Our current framing is wrong, said one discussant. We should think of data as a toxic asset: as soon as it’s in our books and systems, it creates proactive costs and proactive risks. Threat modeling is a good approach, he noted. Data can cause a lot of harm to an organization – it’s a liability, and if it’s not used or stored according to local laws, an agency could be sued. “We’re far under the bar. We are not compliant with ‘safe harbor’ or ECOWAS regulations. There are libel questions and property laws that our sector is ignorant of. Our good intentions mislead us in terms of how we are doing things.” There is plenty of room to build good practice here, he noted, for example through Civic Trusts. Another participant noted that insurance underwriters are already moving into this field, meaning that they see growing liability in this space.

How can we better engage communities and the grassroots?

Some participants shared examples of how they and their organizations have worked closely at the grassroots level to engage people and communities in protecting their own privacy and using open data for their own purposes. Threat modeling is an approach that helps improve data privacy and security, said one. “When we do threat modeling, we treat the data that we plan to collect as a potential asset. At each step of collection, storage, sharing process – we ask, ‘how will we protect those assets? What happens if we don’t share that data? If we don’t collect it? If we don’t delete it?’”

In one case, she worked with very vulnerable women working on human rights issues and together the group put together an action plan to protect its data from adversaries. The threats that they had predicted actually happened and the plan was put into action. Threat modeling also helps to “weed the garden once you plant it,” she said, meaning that it helps organizations and individuals keep an eye on their data, think about when to delete data, pay attention to what happens after data’s opened and dedicate some time for maintenance rather than putting all their attention on releasing and opening data.

More funding needs to be made available for data literacy for those whose data has been collected and/or opened. We need to help people think about what data is of use to them as well. One person recalled hearing people involved in the creation of the Kenya Open Government Data portal say that the entire process was a waste of time because of low levels of use of any of the data. There are examples, however, of people using open data and verifying it at community level. In one instance, for example, high school students found the data on all the so-called grocery stores in their community and went one by one to check them, identifying that some were actually liquor stores selling potato chips, not real grocery stores. Having this information and engaging with it can be powerful for local communities’ advocacy work.

Are we the failure here? What are we going to do about it?

One discussant felt that ‘data’ and ‘information’ are often and easily conflated. “Data alone is not power. Information is data that is contextualized into something that is useful.” This brings into question the value of having so many data portals, and so much risk, when so little is being done to turn data into information that is useful to the people our sector says it wants to support and empower.

He gave the example of the Weather Channel, a business built around open data sets that are packaged and broadcast, which just got purchased for $2 billion. Channels like radio that would have provided information to the poor were not purchased, only the web assets, meaning that those who benefit are not the disenfranchised. “Our organizations are actually just like the Weather Channel – we are intermediaries who are interested in taking and using open data for public good.”

As intermediaries, we can add value in the dissemination of this open data, he said. If we have the skills, the intention and the knowledge to use it responsibly, we have a huge opportunity here. “However our enlightened intent has not yet turned this data into information and knowledge that communities can use to improve their lives, so are we the failure here? And if so, what are we doing about it? We could immediately begin engaging communities and seeing what is useful to them.” (See this article for more discussion on how ‘open’ may disenfranchise the poor.)

Where to from here?

Some points raised that merit further discussion and attention include:

  • There is little demand or use of open data (such as government data and finances) and preparing and maintaining data sets is costly – ‘open by demand’ may be a more appropriate approach than ‘open by default.’
  • There is a good deal of disagreement about whether data can be opened responsibly. Some of this disagreement may stem from a lack of clarity about what kind of data we are talking about when we talk about open data.
  • Personal data and human subjects data that was never foreseen to be part of “open data” is potentially being opened, bringing with it risks for those who share it as well as for those who store it.
  • Informed consent for personal/human subject data is a tricky concept and it’s not clear whether it is even possible in the current scenario of personal data being ‘opened’ and the lack of control over how it may be used now or in the future, and the increasing ease of data re-identification.
  • We may want to look at data as a toxic asset rather than a beneficial one, because of the liabilities it brings.
  • Rather than a blanket “open” categorization, sub-categorizations that restrict data sets in different ways might be a possibility.
  • The sector needs to improve its understanding of the legal frameworks around data and data collection, storage and use or it may start to see lawsuits in the near future.
  • Work on data literacy and community involvement in defining what data is of interest and is collected, as well as threat modeling together with community groups is a way to reduce risk and improve data quality, demand and use; but it’s a high-touch activity that may not be possible for every kind of organization.
  • As data intermediaries, we need to do a much better job as a sector to see what we are doing with open data and how we are using it to provide services and contextualized information to the poor and disenfranchised. This is a huge opportunity and we have not done nearly enough here.

The Technology Salon is conducted under Chatham House Rule so attribution has not been made in this post. If you’d like to attend future Salons, sign up here.

 

Read Full Post »

Our March 18th Technology Salon NYC covered the Internet of Things and Global Development with three experienced discussants: John Garrity, Global Technology Policy Advisor at CISCO and co-author of Harnessing the Internet of Things for Global Development; Sylvia Cadena, Community Partnerships Specialist, Asia Pacific Network Information Centre (APNIC) and the Asia Information Society Innovation Fund (ISIF); and Andy McWilliams, Creative Technologist at ThoughtWorks and founder and director of Art-A-Hack and Hardware Hack Lab.

By Wilgengebroed on Flickr [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

What is the Internet of Things?

One key task at the Salon was clarifying what exactly is the “Internet of Things.” According to Wikipedia:

The Internet of Things (IoT) is the network of physical objects—devices, vehicles, buildings and other items—embedded with electronics, software, sensors, and network connectivity that enables these objects to collect and exchange data.[1] The IoT allows objects to be sensed and controlled remotely across existing network infrastructure,[2] creating opportunities for more direct integration of the physical world into computer-based systems, and resulting in improved efficiency, accuracy and economic benefit;[3][4][5][6][7][8] when IoT is augmented with sensors and actuators, the technology becomes an instance of the more general class of cyber-physical systems, which also encompasses technologies such as smart grids, smart homes, intelligent transportation and smart cities. Each thing is uniquely identifiable through its embedded computing system but is able to interoperate within the existing Internet infrastructure. Experts estimate that the IoT will consist of almost 50 billion objects by 2020.[9]

As one discussant explained, the IoT involves three categories of entities: sensors, actuators and computing devices. Sensors read data in from the world; computing devices process that data via decision logic; and the logic then generates some type of action back out to the world (motors that open doors, control systems that operate water pumps, actions triggered through a touch screen, etc.). Sensors can be anything from video cameras to thermometers or humidity sensors. They can be consumer items (like a garage door opener or a wearable device) or industrial grade (like those that keep giant machinery running in an oil field). Sensors are common in mobile phones, but more and more we see them being de-coupled from cell phones and integrated into or attached to all manner of other everyday things. The boom in the IoT means that whereas in the past a person may have had one IP address for their desktop computer, now they might be occupying several: through their phone, their iPad, their laptop, their Fitbit and a number of other 'things.'
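
To make the sense-decide-act loop concrete, here is a minimal sketch in Python; the soil-moisture sensor, threshold and irrigation pump are illustrative stand-ins (echoing the micro-irrigation example further down), not any particular product's API:

```python
# A minimal sense -> decide -> act loop. The sensor and actuator are
# simulated; in a real deployment they would be hardware interfaces.
import random
import time

def read_soil_moisture():
    """Sensor: read a value in from the world (simulated here)."""
    return random.uniform(0.0, 1.0)  # 0 = bone dry, 1 = saturated

def set_pump(on):
    """Actuator: push an action back out to the world (simulated here)."""
    print("irrigation pump", "ON" if on else "OFF")

DRY_THRESHOLD = 0.3  # the decision logic lives on the computing device

for _ in range(3):
    moisture = read_soil_moisture()
    set_pump(moisture < DRY_THRESHOLD)  # irrigate only when the soil is dry
    time.sleep(1)
```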

Why does IoT matter for Global Development?

Price points for sensors are dropping quickly and wireless networks are steadily expanding — not just wifi but macro cellular technologies. According to one lead discussant, 95% of the world is covered by 2G and two-thirds by 3G networks. Alongside that is a plethora of wide-range, low-tech connectivity options. This means that all kinds of data, all over the world, are going to be available in massive quantities through the IoT. Some are excited about this because of how the data can be used to track global development indicators, for example, the type of data being sought to measure the Sustainable Development Goals (SDGs). Others are concerned about the impact of data collected via the IoT on privacy.

What are some examples of the IoT in Global Development?

Discussants and others gave many examples of how the IoT is making its way into development initiatives, including:

  • Flow meters and water sensors to track whether hand pumps are working
  • Protecting the vaccine cold chain – with a 2G thermometer, an individual can monitor the cold chain for local use while the information also goes directly to health ministries and to donors (a minimal sketch of this follows the list)
  • Monitoring the environment and tracking animals or endangered species
  • Monitoring traffic routes to manage traffic systems
  • Managing micro-irrigation of small shareholder plots from a distance through a feature phone
  • As a complement to traditional monitoring and evaluation (M&E) — a sensor on a cook stove can track how often a stove is actually used (versus information an individual might provide using recall), helping to corroborate and reduce bias
  • Verifying whether a teacher is teaching or has shown up to school using a video camera
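
As promised above, here is a minimal sketch of the cold-chain example in Python. The 2-8 °C range is the commonly cited guideline for most vaccines; the clinic names are made up, and a real deployment would send the alert over 2G/SMS to health ministries and donors rather than printing it:

```python
# Minimal cold-chain check: flag fridge readings outside the safe range.
SAFE_RANGE_C = (2.0, 8.0)  # commonly cited range for most vaccines

def check_fridge(fridge_id, temp_c):
    low, high = SAFE_RANGE_C
    if low <= temp_c <= high:
        return None
    return "ALERT {}: {} C is outside the {}-{} C range".format(
        fridge_id, temp_c, low, high)

for fridge_id, temp_c in [("clinic-A", 5.4), ("clinic-B", 11.2)]:
    alert = check_fridge(fridge_id, temp_c)
    if alert:
        print(alert)  # in practice: sent via 2G to ministry and donor dashboards
```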

The CISCO publication on the IoT and Global Development provides many more examples and an overview of where the area is now and where it’s heading.

How advanced is the IoT in the development space?

Currently, IoT in global development is very much a hacker space, according to one discussant. There are very few off-the-shelf solutions that development or humanitarian organizations can purchase and readily implement. Some social enterprises are ramping up activity, but there is no larger ecosystem of opportunities for off-the-shelf products.

Because the IoT in global development is at an early phase, challenges abound. Technical issues, power requirements, reliability and upkeep of sensors (which need to be calibrated), IP issues, security and privacy, technical capacity, and policy questions all need to be worked out. One discussant noted that these challenges carry on from the mobile for development (m4d) and information and communication technologies for development (ICT4D) work of the past.

Participants agreed that the challenges are currently huge. For example, devices are homogeneous, making it easy to hack them and affect a lot of devices at once. No one has completely gotten their head around the privacy and consent issues, which are very different from those of using Facebook. There are also lots of interoperability issues. As one person highlighted, there are over 100 different communication protocols being used today. It is more complicated than the old "Betamax vs. VHS" question – we have no idea at this point what the standard will be for the IoT.

For those who see the IoT as a follow-on from ICT4D and m4d, the big question is how to make sure we apply what we've learned and avoid the same mistakes and pitfalls. "We need to be sure we're not committing the error of just seeing the next big thing, the next shiny device, and forgetting what we already know," said one discussant. There is plenty of material and documentation on how to avoid repeating past mistakes, he noted. "Read ICTworks. Avoid pilotitis. Don't be tech-led. Use open source and so on…. Look at the Digital Principles and apply them to the IoT."

A higher level question, as one person commented, is around the “inconvenient truth” that although ICTs drive economic growth at the macro level, they also drive income inequality. No one knows how the IoT will contribute or create harm on that front.

Are there any existing standards for the IoT? Should there be?

Because there is so much going on with the IoT – new interventions, different sectors, all kinds of devices, and a huge variety in levels of use, from hacker spaces up to industrial applications — there is a huge range of standards and protocols out there, said one discussant. "We don't really want to see governments picking winners or saying 'we're going to use this or that.' We want to see the market play out and the better protocols bubble up to the surface. What's working best where? What's cost effective? What open protocols might be most useful?"

Another discussant pointed out that there is a legacy predating the IoT: machine-to-machine (M2M) communication, which has not always been Internet based. "Since this legacy is still there, how can we move things forward with regard to standardization and interoperability yet also avoid leaving out those who are using M2M?"

What’s up with IPv4 and IPv6 and the IoT? (And why haven’t I heard about this?)

Another crucial technical point raised was that of IPv4 and IPv6, something not many Salon participants had heard of, but that will greatly impact how the IoT rolls out and expands, and just who will be left out of this new digital divide. (Note: I found this video to be helpful for explaining IPv4 vs IPv6.)

"Remember when we used Netscape and we understood how a web address translated into an IP number…?" asked one discussant. "Many people never get that lovely experience these days, but it's important! There is a finite number of IPv4 addresses and they are running out. Only Africa and Latin America have addresses left," she noted.

IPv6 has been around for 20 years but there has not been a serious effort to switch over. Yet in order to connect the next billion and the multiple devices that they may bring online, we need more addresses. “Your laptop, your mobile, your coffee pot, your fridge, your TV – for many of us these are all now connected devices. One person might be using 10 IP addresses. Multiply that by millions of people, and the only thing that makes sense is switching over to IPv6,” she said.
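
A quick back-of-the-envelope calculation shows why. In the Python sketch below, the population and devices-per-person figures are round-number assumptions for illustration, not figures from the Salon:

```python
# Back-of-the-envelope comparison of the IPv4 and IPv6 address spaces.
ipv4_total = 2 ** 32   # about 4.3 billion addresses in all
ipv6_total = 2 ** 128  # about 3.4 x 10^38 addresses

population = 8_000_000_000   # round-number assumption
devices_per_person = 10      # the laptop/mobile/coffee-pot/fridge/TV scenario

demand = population * devices_per_person
print("IPv4 covers {:.0%} of that demand".format(ipv4_total / demand))  # ~5%
print("IPv6 addresses per person: {:.1e}".format(ipv6_total / population))
```

Even before subtracting reserved blocks and allocation inefficiencies, IPv4 cannot give every person a single address, let alone ten devices each; IPv6 removes that constraint entirely.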

The problem is that both the technical skills and the political decisions needed to make that transition happen are lacking. For much of the world, the IoT will not roll out smoothly, and entire regions may be left out of the IoT revolution if high-level decision makers don't decide to move ahead with IPv6.

What are some of the other challenges with global roll-out of IoT?

In addition to the IPv4–IPv6 transition, there are all kinds of other challenges with the IoT, noted one discussant. The technical skills required to make the transition that would enable the IoT are sorely lacking in some regions, for example Asia Pacific. Engineers will need to understand how to make this shift happen, and in some places that is going to be a big challenge. "Things have always been connected to the Internet. There are just going to be lots more, different things connected to the Internet now."

One major challenge is that there are huge ethical questions along with security and connectivity holes (as I will outline later in this summary post, and as discussed in last year’s salon on Wearable Technologies). In addition, noted one discussant, if we are designing networks that are going to collect data for diseases, for vaccines, for all kinds of normal businesses, and put the data in the cloud, developing countries need to have the ability to secure the data, the computing capacity to deal with it, and the skills to do their own data analysis.

"By pushing the IoT onto countries and not supporting the capacity to manage it, instead of helping with development, you are again creating a giant gap. There will be all kinds of data collected on climate change in the Pacific Island Countries, for example, but the countries don't have the capacity to deal with this data. So once more it will be a bunch of outsiders coming in to tell the Pacific Islands how to manage it, all based on conclusions that outsiders draw from sensor data with no context," warned one discussant. "Instead, we should be counseling our people and our countries to figure out what they want to do with these sensors and with this data, and asking them what they need to strengthen their own capacities."

“This is not for the SDGs and ticking off boxes,” she noted. “We need to get people on the ground involved. We need to decentralize this so that people can make their own decisions and manage their own knowledge. This is where the real empowerment is – where local people and country leaders know how to collect data and use it to make their own decisions. The thing here is ownership — deploying your own infrastructure and knowing what to do with it.”

How can we balance the shiny devices with the necessary capacities?

Although the critical need to invest in and support country-level capacity to manage the IoT has been raised, this type of back-end work is always much less ‘sexy’ and less interesting for donors than measuring some development programming with a flashy sensor. “No one wants to fund this capacity strengthening,” said one discussant. “Everyone just wants to fund the shiny sensors. This chase after innovation is really damaging the impact that technology can actually have. No one just lets things sit and develop — to rest and brew — instead we see everyone rushing onto the next big thing. This is not a good thing for a small country that doesn’t have the capacity to jump right into it.”

All kinds of things can go wrong if people are not trained on how to manage the IoT. Devices can be hacked and they may be collecting and sharing data without an individual's knowledge (see Geoff Huston on The Internet of Stupid Things). Electrical shorts, common in places with poor electricity infrastructure, can also cause big problems. In addition, the Internet is affected by legacy systems, so we need interoperability that goes backwards, said one discussant. "If we don't make at least a small effort to respect those legacy systems, we're basically saying 'if you don't have the funding to update your system, you're out.' This then reinforces a power dynamic where countries need the international community to give them equipment, or they need to buy this or buy that, and to bring in international experts from the outside…. The pressure on poor countries to make things work, to do new kinds of M&E, to provide evidence is huge. With that pressure comes a higher risk of falling behind very quickly. We are also seeing pilot projects that were working just fine without fancy tech being replaced by newfangled tech-type programs instead of being supported over the longer term," she said.

Others agreed that the development sector's fascination with the shiny and new is detrimental. "There is very little concern for the long term, the legacy system, future upgrades," said one participant. "Once the blog post goes up about the cool project, the sensors go bad or stop working and no one even knows, because people have moved on." Another agreed, recalling that when visiting numerous clinics for a health monitoring program in one country, the running joke among the M&E staff was "OK, now let's go and find the broken solar panel." "When I think of the IoT," she said, "I think of a lot of broken devices in 5 years." The e-waste aspect of the IoT has not even begun to be examined or quantified, noted another.

It is increasingly important for governments to understand how the Internet works, because they are making policy about it. Manufacturers need to better understand how the tech works on the ground, especially in contexts they are not accustomed to working in. Users need a better understanding of all of this because their privacy is at risk. Legal frameworks around data and national laws need more attention as well. "When you are working with restrictive governments, your organization's or start-up's idea might actually be illegal or run afoul of a sedition law, and you may end up in jail," noted one discussant.

What choices will organizations need to make regarding the IoT?

When it comes to actually making decisions on how involved an organization should and can be in supporting or using the IoT, one critical choice will relate to the suite of devices, said our third discussant. Will it be a cloud device? A local computing device? A computer?

Organizations will need to decide whether they want a vendor that gives them a package, or a modular, interoperable approach built from units. They will need to think about aspects like whether to go with proprietary or open source, and whether the system should be plug and play.

There are trade-offs here and key technical infrastructure choices will need to be made based on a certain level of expertise and experience. If organizations are not sure what they need, they may wish to get some advice before setting up a system or investing heavily.

As one discussant put it, "When I talk about the IoT, I often say to think about what the Internet was in the 90s. Think about that hazy idea we had of what the Internet was going to be. We couldn't have predicted in the 90s what today's Internet would look like, and we're in the same place with the IoT," he said. "There will be seismic change. The state of the whole sector is immature now. There are very hard choices to make."

Another aspect that's representative of the IoT's early stage, he noted, is that the discussion all focuses on HTTP and the Internet. "The IoT doesn't necessarily even have to involve the Internet," he said.

Most vendors are offering a solution with sensors to deploy, actuators to control and a cloud service where you log in to find your data. The default model is that the decision logic takes place there in the cloud, where data is stored. In this model, the cloud is in the middle, and the devices are around it, he said, but the model does not have to be that way.

Other models can offer more privacy to users, he said. "When you think of privacy and security – the healthcare maxim is 'do no harm.' However, this current, familiar model for the IoT might actually be malicious." The reason that the central node in the commercial model is the cloud is that companies can get more and more detailed information on what people are doing. IoT vendors and companies are interested in extending their profiles of people. Data on what people do in their virtual lives can now be combined with what they do in their physical lives, and this has huge commercial value.

One option to look at, he shared, is a model that has a local connectivity component. This can be something like Bluetooth mesh, for example. In this way, the connectivity doesn't have to go to the cloud or the Internet at all. This kind of set-up may make more sense with local data, and it can also help with local ownership, he said. Everything that happens in the cloud in the commercial model can actually happen on a local hub or device that opens just for the community of users. In this case, you don't have to share the data with the world. Although this type of model requires greater local tech capacity and has the drawback that it is more difficult to push out software updates, it's an option that may help to enhance local ownership and privacy.
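
As a minimal sketch of what decision logic on a local hub could look like, here is a Python illustration using the hand-pump monitoring example from earlier in this post; the class, threshold and alerting behavior are assumptions for illustration, not any vendor's API:

```python
# Local-first decision logic: readings are processed and stored on a
# community hub; nothing in this sketch is sent to an external cloud.
from statistics import mean

class LocalHub:
    def __init__(self, low_flow_lpm=5.0):
        self.low_flow_lpm = low_flow_lpm
        self.readings = []  # stays on the local device

    def ingest(self, pump_id, flow_lpm):
        """The decision logic runs here, on the hub, not in a vendor cloud."""
        self.readings.append((pump_id, flow_lpm))
        if flow_lpm < self.low_flow_lpm:
            print("Pump {}: low flow ({} L/min), check on site".format(
                pump_id, flow_lpm))

    def summary(self):
        """Only an aggregate ever needs to leave the community, if at all."""
        return {"pumps": len({p for p, _ in self.readings}),
                "mean_flow_lpm": round(mean(f for _, f in self.readings), 1)}

hub = LocalHub()
hub.ingest("pump-01", 12.3)
hub.ingest("pump-02", 2.1)  # triggers a local alert
print(hub.summary())
```

The design choice is simply where the decision logic and storage live: the same ingest-and-alert logic a vendor would run in its cloud runs here on hardware the community controls.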

This requires a 'person first' concept of design. "When you are designing IoT systems," he said, "start with the value you are trying to create for individuals or organizations on the ground. Then implement the local part that you need to give local value. Then, only if needed, add on additional layers of the onion of connectivity, depending on the project." Here, the first priority is the value that the technology design will achieve for an individual client or community, not the commercial use of people's data.

Another point this discussant highlighted was the need to conduct threat modeling and to think about unintended consequences. "If someone hacked this data – what could go wrong?" He suggested working backwards and asking: "What should I take offline? How do I protect it better? How do I anonymize it better?"
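
As one small, concrete instance of that working-backwards exercise, the Python sketch below pseudonymizes identifiers and drops location before a reading leaves the device. The field names and salt handling are illustrative assumptions, and this is only one mitigation; re-identification from rich sensor data can remain possible:

```python
# Pseudonymize before upload, so a breach of stored data does not
# directly expose households.
import hashlib
import hmac
import json

SECRET_SALT = b"keep-this-key-local-never-in-the-cloud"  # stored offline only

def pseudonymize(household_id):
    # HMAC rather than a bare hash, so IDs cannot be brute-forced
    # without the locally held salt.
    digest = hmac.new(SECRET_SALT, household_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

def prepare_for_upload(reading):
    safe = dict(reading)
    safe["household"] = pseudonymize(safe.pop("household"))
    safe.pop("gps", None)  # ask: does the analysis really need location?
    return safe

raw = {"household": "HH-0042", "gps": (-1.29, 36.82), "stove_minutes": 37}
print(json.dumps(prepare_for_upload(raw)))
```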

In conclusion….

It’s critical to understand the purpose of an IoT project or initiative, discussants agreed, to understand if and why scale is needed, and to be clear about the drivers of a project. In some cases, the cloud is desirable for quicker, easier set up and updates to software. At the same time, if an initiative is going to be sustainable, then community and/or country capacity to run it, sustain it, keep it protected and private, and benefit from it needs to be built in. A big part of that capacity includes the ability to understand the different layers that surround the IoT and to make grounded decisions on the various trade-offs that will come to a head in the process of design and implementation. These skills and capacities need to be developed and supported within communities, countries and organizations if the IoT is to contribute ethically and robustly to global development.

Thanks to APNIC for sponsoring and supporting this Salon and to our friends at ThoughtWorks for hosting! If you'd like to join discussions like this one in cities around the world, sign up at Technology Salon.

Salons are held under Chatham House Rule, therefore no attribution has been made in this post.

Read Full Post »
