

As the world became more digital in the wake of COVID-19, the number of mobile applications and online services and support increased exponentially. Many of these apps offer important support to people who live and move in contexts where they are at risk. Digital apps for sensitive services (such as mental health, reproductive health, shelter and support for gender-based violence, and safe spaces for LGBTQI+ communities) can expose people to harm at the family, peer, and wider societal level if not designed carefully. This harm can be severe – for example, detention or death. Though people who habitually face risk have their own coping mechanisms, those designing digital apps and services also have a responsibility to mitigate harm.

At our March 8 Technology Salon NYC (hosted at Thoughtworks), we discussed how to create safe, private digital solutions for sensitive services. Joining were Gerda Binder, UNICEF’s Oky Period Tracker App for Girls; Jonathan McKay, SameSame Collective; Stephanie Mikkelson, United Nations Population Fund; Tania Lee, Trestle; Jane Piercy, Reproductive Equity Now Foundation; and 25 others, making for a rich discussion on this critical topic!

Key Takeaways from the conversation

1. Do constant threat modeling. Threat modeling needs to include a wide range of potential challenges, including mis- and disinformation, hostile family and community members, shifting legal landscapes, and law enforcement tactics. The latter are especially important if you are working in environments where people are being persecuted by the government. Roughly 70 countries, most of them in Sub-Saharan Africa, criminalize consensual same-sex activities and some forms of gender expression. The US is placing ever greater legal restrictions on gender expression and identity and on reproductive rights, and laws differ from state to state, making the legal landscape highly complex. Hate groups are organizing online to perpetrate violence against women, girls, and LGBTQI+ people in many other parts of the world as well. In Egypt, police have used the dating app Grindr to entrap, arrest, and prosecute gay men. Similar tactics were used in the US to identify and ‘out’ gay priests. Since political and social contexts and the tactics of those who want to do harm change rapidly, ongoing threat modeling is critical. Your threat models will look different in each context and for each digital app.
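To make “ongoing threat modeling” concrete, one lightweight practice is to keep the threat model as a living, structured register that gets reviewed whenever laws or tactics shift. A minimal sketch follows; the actors, vectors, and mitigations are invented for illustration, not drawn from any specific app discussed at the Salon.

```python
# Illustrative threat register for a sensitive-services app.
# Entries are hypothetical examples, not a complete model.
from dataclasses import dataclass, field

@dataclass
class Threat:
    actor: str                 # who could cause harm
    vector: str                # how the harm could occur
    impact: str                # severity if it happens
    mitigations: list = field(default_factory=list)

register = [
    Threat("hostile family member", "inspects phone and finds the app",
           "severe", ["disguised app icon", "quick-delete feature"]),
    Threat("law enforcement", "subpoenas server-side data",
           "severe", ["collect no personal data", "short log retention"]),
    Threat("shifting legal landscape", "new law criminalizes the service",
           "severe", []),  # unmitigated: flag for the next review
]

def unmitigated(threats):
    """Return threats that have no documented mitigation yet."""
    return [t for t in threats if not t.mitigations]

for t in unmitigated(register):
    print(f"REVIEW NEEDED: {t.actor} via {t.vector}")
```

Re-running a review like this on a schedule, and again whenever the context changes, is one way to operationalize “constant” threat modeling.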

2. Involve communities and other stakeholders and experts. Co-creation processes are vital for identifying what to design as well as how to design for safety and privacy. By working together with communities, you will have a much better idea of what they need and want, the various challenges they face in accessing and using a digital tool, and the kinds of risks and harms that need to be reduced through design and during implementation. For example, a lot of apps have emergency buttons designed to protect women, one Salon participant explained. These often alert the police; however, that might be exactly the wrong choice. “Women will tell you about their experiences with police as perpetrators of gender-based violence” (GBV). It’s important to hire tech designers who identify with the groups you are designing for/with. Subject matter experts are key stakeholders, too. There are decades of experience working with groups who are at risk, so don’t re-invent the wheel. Standards exist for how to work on themes like GBV, data protection, and other aspects of safe design of apps and digital services – use them!

3. Collect as little data as possible. Despite the value of data in measuring impact and use and helping to adapt interventions to meet the needs of the target population, collection of personal and sensitive data is extremely dangerous for people using these apps and for organizations providing the services. Data collected from individuals who explicitly or implicitly admit to same-sex activities or gender non-conforming behavior could, in theory, be used by their family and community as evidence in their persecution. Similarly, sexual activity and fertility data tracked in a period tracker could be used to ‘prove’ that a girl or woman is/was fertile or infertile, had sex, miscarried, or aborted — all of which can be a risk depending on the family, social, or legal context. Communication on sensitive topics increases the risk of prosecution because email, web searches, social media posts, text messages, voice messages, call logs, and anything that can be found on a phone or computer can be used as evidence. If a digital app or service can function without collecting data, then it should! For example, it’s not necessary to collect a person’s data to provide them with legal advice or to allow them to track their period.
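As an illustration of a service that “can function without collecting data,” a period tracker can keep all history on the device and compute predictions locally, with no account and no server. The sketch below assumes local-only storage; the dates are invented.

```python
# Minimal on-device period prediction: nothing is sent to or stored
# on a server, so there is no sensitive dataset to subpoena or leak.
from datetime import date, timedelta
from statistics import mean

def predict_next_period(start_dates):
    """Predict the next cycle start from locally stored history."""
    if len(start_dates) < 2:
        return start_dates[-1] + timedelta(days=28)  # default cycle length
    gaps = [(b - a).days for a, b in zip(start_dates, start_dates[1:])]
    return start_dates[-1] + timedelta(days=round(mean(gaps)))

history = [date(2023, 1, 3), date(2023, 1, 31), date(2023, 3, 2)]
print(predict_next_period(history))  # computed entirely on the device
```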

4. Be thoughtful about where data is stored. When using third party apps to help manage a digital solution, it’s important to know exactly what data is stored, whether the data can be deleted, and whether it can be subpoenaed. Also consider that if an app or third-party data processor is sold to another company, the data they store will likely be sold along with the app, and the policies related to data might change.

While sometimes it is safer to store data on an individual’s device, in other cases it might be safer for data to live in the cloud and/or in a different country. This will depend on the threat landscape and actors. You’ll also want to review data privacy regulations for the countries where you are based, where the data is stored, and where your target end users live. All of these regulations may need to be complied with, depending on where data is collected, processed, and stored. Some countries have “data sovereignty laws” that dictate that data must reside in the country where it was collected. Some governments have even drafted laws that require government to have access to this data. Others have so-called “hostage” laws that require that digital platforms maintain at least one employee in the country. These employees have been harassed by governments who push them to comply with certain types of censorship or surrender data from their digital platforms. If government is your main threat actor, you might need to decide whether non-compliance with data laws is a risk that you are willing to take.

5. Improve consent processes and transparency. Consent cannot be conceived as a one-time, one-off process, because circumstances change and so does consent. Generally, digital platforms do a terrible job of telling people what happens to their data and informing them of the possible risks to their privacy and safety. It’s complicated to explain where data goes and what happens to it, but we all need to do better with consent and transparency. Engaging the people who will use your app in designing a good process is one way to develop easy-to-understand language and explanations.

6. Help people protect themselves. Add content to your website, app, or bot that helps people learn how to adjust their privacy settings, understand the risks of using your service, and protect themselves while doing so. Salon participants mentioned features that:

  • allow people to disguise the apps they are using;
  • let people quickly delete their data and/or the app itself;
  • mask or ‘forget’ phone numbers, so that the number won’t appear in the contact list and text message content won’t repopulate if the number is used again to send a text;
  • use different phone numbers for the organization’s website and for outreach, so that the numbers are harder to trace back to the organization or a service.
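As a sketch of what one of these features might look like under the hood, here is a hypothetical “quick delete” routine. The file paths are invented, and on real devices (especially flash storage) overwriting is best-effort rather than a guarantee, so a production version would need platform-specific care.

```python
# Hypothetical "quick delete" safety feature: overwrite local app data
# with random bytes, then remove the files, in a single action.
import os

APP_DATA_FILES = ["history.db", "settings.json"]  # illustrative paths

def panic_wipe(paths=APP_DATA_FILES):
    """Best-effort wipe of local app data."""
    for path in paths:
        if not os.path.exists(path):
            continue
        size = os.path.getsize(path)
        with open(path, "r+b") as f:   # overwrite contents before deleting
            f.write(os.urandom(size))
        os.remove(path)

panic_wipe()  # typically wired to a hidden gesture or button
```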

7. Plan for the end of your project and/or funding. It’s important to plan for how you will safely delete all your data and any data held by third parties at the end of your funding cycle if the app or service is discontinued. In addition, you’ll need to think about what happens to the people who relied on your service. Will you leave them high and dry? Some organizations think of this as an “off ramp” and recommend that you plan for the end of the effort from the very beginning.

8. Take care of your staff. Ensure that you have enough staff capacity to respond to any incoming requests or needs of the people your service targets. Additionally, keep staff safe from harm. Countries like Hungary, Russia, and Indonesia have laws that make the provision of educational material related to LGBTQI+ identities challenging, especially to minors. Similarly, some countries and some US states prohibit any type of counseling related to abortion or gender-affirming care. This poses a risk to organizations that establish legal entities and employ people in these countries and states, and to their staff. It’s critical to ensure that you have enough resources to keep staff safe. You will also want to provide support for them to avoid high levels of burnout and to deal with any vicarious trauma. Keeping staff safe and healthy is not only good for them, but also for your service, because better morale will mean higher quality support services.

9. Accept that there will be trade-offs. Password-protected apps are more secure, but they can pose higher barriers to use because they introduce friction. If your app doesn’t collect personal data, it will be safer, but it will be more difficult to offer password reset or recovery options, which is a usability challenge, especially in places where people have lower literacy and less experience using apps and passwords. When data is stored locally, it’s less susceptible to large-scale data mining; however, it might be more at risk of a family member or law enforcement forcing it to be shared, and if a device is lost or broken, the data will be lost.

Large platforms may be more prone to commercial privacy risks, yet in some ways they provide greater data security. As one person said, “We decided to just go with WhatsApp because we could never develop a platform as secure as theirs – we simply don’t have the engineering power that they do.” Another person mentioned that they offer a Signal option (which is encrypted) for private messaging but that many people do not use Signal and prefer to communicate through platforms they already use. These more popular platforms are less secure, so the organization had to find other ways to set protective parameters for people who use them. Some organizations have decided that, despite the legal challenges it might bring, they simply will not hand over data to law enforcement. To prevent this situation from happening, they have only set up legal entities in countries where human rights protections for the populations they serve are strong. You’ll want to carefully discuss all these different privacy and usability choices, including with potential end users, to come to the best decision for each app or service.


Technology Salons run under Chatham House Rule, so no attribution has been made in this post. If you’d like to join us for a Salon, sign up here. If you’d like to suggest a topic or provide funding support to Salons in NYC please get in touch!


On Thursday, September 19, we gathered at the OSF offices for the Technology Salon on “Automated Decision Making in Aid: What could possibly go wrong?” with lead discussants Jon Truong and Elyse Voegeli, two of the creators of Automating NYC; and Genevieve Fried and Varoon Mathur, Fellows at the AI Now Institute at NYU.

To start off, we asked participants whether they were optimistic or skeptical about the role of Automated Decision-making Systems (ADS) in the aid space. The response was mixed: about half skeptics and half optimists, most of whom qualified their optimism as “cautious optimism” or “it depends on who I’m talking to” or “it depends on the day and the headlines” or “if we can get the data, governance, and device standards in place.”

What are ADS?

Our next task was to define ADS. (One reason that the New York City ADS task force was unable to advance was that its members could not agree on the definition of an ADS.)

One discussant explained that NYC’s provisional definition was something akin to:

  • Any system that uses data algorithms or computer programs to replace or assist a human decision-making process.

This may seem straightforward, yet, as she explained, “if you go too broad you might include something like ‘spellcheck’ which feels like overkill. On the other hand, spellcheck is a good case for considering how complex things can get. What if spellcheck only recognized Western names? That would be an example of encoding bias into the ADS. However, the degree of harm that could come from spellcheck as compared to using ADS for predictive policing is very different. Defining ADS is complex.”

Other elements of a definition include the computational implementation of an algorithm. Algorithms are basically clear instructions or criteria followed in order to make a decision, and algorithms can be manual; what distinguishes an ADS is the power of computation, noted another discussant. Perhaps a definition should also include a complex system and a decision-making point or cutoff, for example, an algorithm that determines who gets a loan. It is also important to consider statistical modeling and forecasting, which allow for prediction.
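To illustrate the “decision-making point or cutoff” idea, here is a toy rule-based loan screen. The criteria, weights, and threshold are invented; real systems are far more complex, but the structure (criteria in, yes/no out) is the same.

```python
# Toy ADS: encoded criteria plus a cutoff produce a decision.
def loan_decision(income, debt, on_time_payments):
    score = 0
    score += 2 if income > 40_000 else 0               # encoded criterion
    score -= 1 if debt / max(income, 1) > 0.4 else 0   # debt-to-income rule
    score += 1 if on_time_payments >= 12 else 0        # payment history
    return score >= 2                                  # the cutoff

print(loan_decision(income=45_000, debt=10_000, on_time_payments=24))  # True
```

Note that every bias question raised below lives inside those encoded criteria and the cutoff: someone chose them, based on some data.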

Using data and criteria for making decisions is nothing new, and it’s often done without specific systems or computers. People make plenty of very bad decisions without computers, and the addition of computers and algorithms is sometimes considered a more objective approach, because instructions can be set and run by a computer.

Why are there issues with ADS?

In practice, things are not as clear cut as they might seem, explained one of our discussants. We live in a world where people are treated differently because of their demographic identity, and curation of data can overrepresent some populations or misrepresent others because of how they have been treated historically. These current and historic biases make their way into algorithms, which are created by humans, and this encodes human biases into an ADS. When feeding existing data into a computer so that it can learn, we bring our historical biases into decision-making. The data we feed into an ADS may not reflect changing demographics, and algorithms may not reflect ongoing institutional policy changes.

As another person said, “systems are touted as being neutral, but they are subject to human fallacies. We live in a world that is full of injustice, and that is reflected in a data set or in an algorithm. The speed of the system, once it’s computerized, replicates injustices more quickly and at greater scale.” When people or institutions believe that the involvement of a computer means the system is neutral, we have a problem. “We need to take ADS with a grain of salt, similar to how we tell children not to believe everything they see on the Internet.”

Many people are unaware of how an algorithm works. Yet over time, we tend to rely on algorithms and believe in them as unbiased truth. When ADS are not monitored, tested, and updated, this becomes problematic. ADS can begin to make decisions for people rather than supporting people in making decisions, and this can go very wrong, for example when decisions are unquestioningly made based on statistical forecasting models.

Are there ways to curb these issues with ADS?

Consistent monitoring. ADS should be monitored constantly over time by humans. One Salon participant suggested setting up checkpoints in the decision-making process to alert humans that something is amiss. Another suggested that research and proof of concept are critical: for example, running the existing human-only system alongside the ADS and comparing the decisions over time helps flag differences that can then be examined to see which of the processes is working better, and to adjust or discontinue the ADS if it is incorrect. (In some cases, this process may actually flag biases in the human system.) Random checks can be set up, as can control situations where some decisions are made without using an ADS so that results can be compared between the two.
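A sketch of the “parallel run” check described above: log the human and ADS decisions on the same cases and surface disagreements for review. The field names are hypothetical.

```python
# Compare human-only and ADS decisions on the same cases.
def flag_divergences(cases):
    """cases: dicts with 'id', 'human', and 'ads' decision fields."""
    return [c["id"] for c in cases if c["human"] != c["ads"]]

decision_log = [
    {"id": "A1", "human": "approve", "ads": "approve"},
    {"id": "A2", "human": "approve", "ads": "deny"},   # divergence
    {"id": "A3", "human": "deny",    "ads": "deny"},
]
print(flag_divergences(decision_log))  # ['A2'] -> route to human review
```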

Recourse and redress. There should be simple and accessible ways for people affected by ADS to raise issues and make complaints. All ADS can make mistakes – there can be false positives (where an error points falsely to a match or the presence of a condition) and false negatives (where an error points to the absence of a match or a condition when indeed it is present). So there needs to be recourse for people affected by errors or in cases where biased data is leading to further discrimination or harm. Anyone creating an ADS needs to build in a way for mistakes to be managed and corrected.
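The false positive / false negative distinction, expressed as a quick tally over (actual, predicted) pairs; the data here is invented:

```python
# Count the two error types an ADS can make.
def error_counts(pairs):
    """pairs: (actual, predicted) booleans for each case."""
    fp = sum(1 for actual, predicted in pairs if predicted and not actual)
    fn = sum(1 for actual, predicted in pairs if actual and not predicted)
    return {"false_positives": fp, "false_negatives": fn}

outcomes = [(True, True), (False, True), (True, False), (False, False)]
print(error_counts(outcomes))  # {'false_positives': 1, 'false_negatives': 1}
```

Recourse processes need to account for both error types, since they harm people in different ways.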

Education and awareness. A person may not be aware that an ADS has affected them, and they likely won’t understand how an ADS works. Even people using ADS for decisions about others often forget that it’s an ADS deciding. This is similar to how people forget that their newsfeed on Facebook is based on their historical choices in content and their ‘likes’ and is not a neutral serving of objective content.

Improving the underlying data. Algorithms will only get better when there are constant feedback loops and new data that help the computer learn, said one Salon participant. Currently most algorithms are trained on highly biased samples that do not reflect marginalized groups and communities. For example, there is very little data about many of the people participating in or eligible for aid and development programs.

So we need proper data sets that are continually updated if we are to use ADS in aid work. This is a problem, however, if the data that is continually fed into the ADS remains biased. One person shared this example: if some communities are policed more because of race, economic status, etc., there will continually be more data showing that people in those communities are committing crimes. In whiter or wealthier communities, where there is less policing, fewer people are arrested. If we update our data continually without changing the fact that some communities are policed more than others (and thus will appear to have higher crime rates), we are simply creating a feedback loop that confirms our existing biases.
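That feedback loop can be shown with a toy simulation: two areas with the same underlying crime rate, where patrols are allocated in proportion to past arrests. The numbers are invented; the point is that the initial skew in the data compounds even though nothing about the areas actually differs.

```python
# Toy simulation of a biased policing feedback loop.
import random

random.seed(0)
TRUE_RATE = 0.05                        # identical in both areas
arrests = {"area_a": 10, "area_b": 5}   # historical skew in the data

for year in range(5):
    total = sum(arrests.values())
    for area in arrests:
        patrols = int(100 * arrests[area] / total)  # data-driven allocation
        # More patrols means more observed incidents at the same true rate.
        arrests[area] += sum(random.random() < TRUE_RATE for _ in range(patrols))
    print(year, arrests)   # the gap between the areas keeps widening
```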

Privacy concerns also enter the picture. We may want to avoid collecting data on race, gender, ethnicity, or economic status so that we don’t expose people to discrimination, stigma, or harm. For example, in the case of humanitarian work or conflict zones, sensitive data can make people or groups a target for governments or unfriendly actors. However, it’s hard to make decisions that benefit people if their data is missing. It ends up being a catch-22.

Transparency is another way to improve ADS. “In the aid sector, we never tell people how decisions are made, regardless of whether those are human or machine-made decisions,” said one Salon participant. When the underlying algorithm is obscured, it cannot be reviewed for value judgments. Some compared this to some of the current non-algorithmic decision-making processes in the aid system (which are also not transparent) and suggested that aid systems could get more intelligent if they began to surface their own specific biases.

The objectives of the ADS can be reviewed. Is the system used to further marginalize or discriminate against certain populations, or can this be turned on its head? asked one discussant. ADS could be used to try to determine which police officers might commit violence against civilians rather than to predict which people might commit a crime. (See the Algorithmic Justice League’s work). 

ADS in the aid system – limited to the powerful few?

Because of the underlying challenges with data in the aid sector (quality, standards, and availability), ADS remain difficult to build and use there. One area where data is available and where ADS are being built and used is supply chain management, for example, at massive UN agencies like the World Food Program.

Some questioned whether this exacerbates concentration of power in these large agencies, running counter to agreed-upon sector goals to decentralize power and control to smaller, local organizations who are ‘on the ground’ and working directly in communities. Does ADS then bring even more hierarchy, bias, and exclusion into an already problematic system of power and privilege? Could there be ways of using ADS differently in the aid system that would not replicate existing power structures? Could ADS itself be used to help people see their own biases? “Could we build that into an ADS? Could we have a read out of decisions we came to and then see what possible biases were?” asked one person.

How can we improve trust in ADS?

Most aid workers, national organizations, and affected communities have a limited understanding of ADS, leading to lower levels of trust in ADS and the decisions they produce. Part of the issue is the lack of participation and involvement in the design, implementation, validation, and vetting of ADS. On the other hand, one Salon participant pointed out that given all the issues with bias and exclusion, “maybe they would trust an ADS even less if they understood how an ADS works.”

Involving both users of an ADS and the people affected by ADS decisions is crucial. This needs to happen early in the process, said one person. It shouldn’t be limited to having people complain or report once the ADS has wronged them. They need to be at the table when the system is being developed and trialed.

If trust is to be built, the explainability of an algorithm needs consideration. “How can you explain the algorithm to people who are affected by it? Humanitarian workers cannot describe an ADS if they don’t understand it. We need to find ways to explain ADS to a non-technical audience so that they can be involved,” said one person. “We’ve shown sophisticated models to leaders, and they defaulted to spreadsheets.”

This brought up the need for change management if ADS are introduced. Involving and engaging decision-makers in the design and creation of ADS systems is a critical step for their adoption. This means understanding how decisions are made currently and based on what factors. Technology and data teams need to be in the room to understand the open and hidden nature of decision-making.

Isn’t decision making without ADS also highly biased and obscured?

People are often resistant to talking about or sharing how decisions have been made in the past, however, because those decisions may have been biased or inconsistent, based on faulty data, or made for political or other reasons.

As one person pointed out, both government and the aid system are deeply politicized and suffer from local biases, corruption and elite capture. A spatial analysis of food distribution in two countries, for example, showed extreme biases along local political leader lines. A related analysis of the road network and aid distribution allowed a clear view into the unfairness of food distribution and efficiency losses.

Aid agencies themselves make highly-biased decisions all the time, it was noted. Decisions are often political, situational, or made to enhance the reputation of an individual or agency. These decisions are usually not fully documented. Is this any less transparent than the ‘black box’ of an algorithm? Not to mention that agencies have countless dashboards that are aimed at helping them make efficient, unbiased decisions, yet recommendations based on the data may run counter to what is needed politically or for other reasons in a given moment.

Could (should) the humanitarian sector assume greater leadership on ADS?

Most ADS are built by private sector partners. When they are sold to the public or INGO sector, these companies indemnify themselves against liability and keep their trade secrets. It becomes impossible to hold them to account for any harm produced. One person asked whether the humanitarian sector could lead by bringing in different incentives – transparency, multi-stakeholder design, participation, and a focus on wellbeing. Could we try this, learn from it, and develop and document processes whereby this could be done at scale? Could the aid sector open source how ADS are designed and created so that data scientists and others could improve them?

Some were skeptical about whether the aid sector would be capable of this. “Theoretically we could do this,” said one person, “but it would then likely be concentrated in the hands of these few large agencies. In order to have economies of scale, it will have to be them, because automation requires large scale. If that is to happen, then the smaller organizations will have to trust the big ones, but currently the small organizations don’t trust the big ones to manage or protect data.” And what about the involvement of governments, asked another person; we would need to consider the role of the public sector.

“I like the idea of the humanitarian sector leading,” added one person, “but aid agencies don’t have the greatest track record for putting their constituencies in the driving seat. That’s not how it works. A lot of people are trying to correct that, but aid sector employees are not the people who will be affected by these systems in the end. We could think about working with organizations who have the outreach capacity to do work with these groups, but again, these organizations are not made up of the affected people. We have to remember that.”

How can we address governance and accountability?

When you bring in government, private sector, aid agencies, software developers, data, and the like, said another person, you will have issues of intellectual property, ownership, and governance. What are the local laws related to data transmission and storage? Is it enough to open source just the code or ADS framework without any data in it? If you work with local developers and force them to open source the algorithm, what does that mean for them and their own sustainability as local businesses?

Legal agreements? Another person suggested that we focus on open sourcing legal agreements rather than algorithms. “There are always risks, duties, and liabilities listed in contracts and legal agreements. The private sector in particular will always play the indemnity card. And that means there is no commercial incentive to fix the tools that are being used. What if we pivoted this conversation to commercial liability? If a model is developed in Manhattan, it won’t work in Malawi — a company has a commercial duty to flag and recognize that. This type of issue is hidden if we focus the conversation on open software or open models. It’s rare that all the technology will be open and transparent. What we should push for is open contracting, and that could help a lot with governance.”

Certification? Others suggested that we adapt existing audit systems like the LEED certification (which allows engineers and architects to audit whether buildings are actually environmentally sustainable) or the IRB process (external boards that review research to flag ethical issues). “What if there were a team of data scientists and others who could audit ADS and determine the flaws and biases?” suggested one person. “That way the entire thing wouldn’t need to be open, but it could still be audited independently”. This was questioned, however, in that a stamp of approval on a single system could lead people to believe that every system designed by a particular group would pass the test.

Ethical frameworks could be a tool, yet which framework? A recent article cited 84 different ethical frameworks for Artificial Intelligence.

Regulation? Self-regulation has failed, said one person. Why aren’t we talking about actual regulation? The General Data Protection Regulation (GDPR) in Europe has a specific article (Article 22) about ADS that gives people the right to know when ADS are used to make decisions that affect them, the right to contest decisions made by ADS, and the right to request that humans review ADS decisions.

SPHERE Standards / Core Humanitarian Standard? Because of the legal complexities of working across multiple countries and with different entities in different jurisdictions (including some like the UN who are exempt from the law), an add-on to the SPHERE standards might be considered, said one person. Or something linked to the Core Humanitarian Standard (CHS), which includes a certification process. Donors will often ask whether an agency is CHS certified.

So, is there any good to come from ADS?

We tend to judge ADS with higher standards than we judge humans, said one Salon participant. Loan officers have been making biased decisions for years. How can we apply the standards of impartiality and transparency to both ADS and human decision making? ADS may be able to fix some of our current faulty and biased decisions. This may be useful for large systems, where we can’t afford to deploy humans at scale. Let’s find some potential bright spots for ADS.

Some positive examples shared by participants included:

  • Human rights organizations are using satellite imagery to identify areas that have been burned or otherwise destroyed during conflict. This application of automated decision making doesn’t deal directly with people or the allocation of resources; rather, it supports human rights research.
  • In California, ADS has been used to expunge the records of people convicted for marijuana-related violations now that marijuana has been legalized. This example supports justice and fairness.
  • During Hurricane Irma, an organization in the Virgin Islands used an Excel spreadsheet to track whether people met the criteria for assistance. Aid workers would interview people, and the sheet would calculate automatically whether they were eligible. This was not high-tech or sexy, but it was automated and fast. The government created the criteria, and these were open and transparently communicated to people ahead of time, so that if they didn’t receive benefits, they were clear about why. (See the sketch after this list.)
  • Flood management is an area where there is a lot of data and forecasting. Governments have been using ADS to evacuate people before it’s too late. This sector can gain in efficiency with ADS, which could be expanded to other weather-based hazards. Because it is a straightforward use case that involves satellites and less personal data, it may be a less political space, making deployment easier.
  • Drones also use ADS to stitch together hundreds of thousands of photos to create large images of geographical areas. Though drone data still needs to be ground truthed, it is less of an ethical minefield than when personal or household level data is collected, said one participant. Other participants, however, had issues with the portrayal of drones as less of an ethical minefield, citing surveillance, privacy, and challenges with the ownership and governance of the final knowledge product, the data for which was likely collected without people’s consent.
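The Hurricane Irma example lends itself to a simple illustration: eligibility computed from criteria that are published ahead of time, so a “no” can be explained. The criteria and weights below are invented, not those actually used in the Virgin Islands.

```python
# Transparent, rule-based eligibility check (illustrative criteria).
CRITERIA = {
    "home_damaged": 2,
    "income_below_threshold": 2,
    "has_dependents": 1,
}
CUTOFF = 3  # published ahead of time

def eligibility(answers):
    """Return (eligible, score) from interview answers."""
    score = sum(w for key, w in CRITERIA.items() if answers.get(key))
    return score >= CUTOFF, score

print(eligibility({"home_damaged": True, "has_dependents": True}))  # (True, 3)
```

Because the criteria and cutoff are public, anyone turned down can see exactly which criterion they missed.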

How can the humanitarian sector prepare for ADS?

In conclusion, one participant summed up that decision making has always been around. As ADS are explored more in depth by groups like the one at this Salon, and as we delve into the ethics and improve on ADS, there is great potential. ADS will probably never totally replace humans, but they can supplement humans to make better decisions.

How are we in the humanitarian sector preparing people at all levels of the system to engage with these systems, design them ethically, reduce harm, and make them more transparent? How are we working to build capacities at the local level to understand and use ADS? How are we figuring out ways to ensure that the populations who will be affected by ADS are aware of what is happening? How are we ensuring recourse and redress in the case of bad decisions or bias? What jobs might be created (rather than eliminated) with the introduction of more ADS?

ADS are not going to go away, and the humanitarian sector doesn’t have to wait until they are perfected to get involved in shaping and improving them so that they support our work in ethical and useful ways rather than in harmful or unethical ways.

Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

 


At our April Technology Salon we discussed the evidence and good practice base for blockchain and Distributed Ledger Technologies (DLTs) in the humanitarian sector. Our discussants were Larissa Fast (co-author with Giulio Coppi of the Global Alliance for Humanitarian Innovation/GAHI’s report on Humanitarian Blockchain, Senior Lecturer at HCRI, University of Manchester and Research Associate at the Humanitarian Policy Group) and Ariana Fowler (UNICEF Blockchain Strategist).

Though blockchain fans suggest DLTs can address common problems of humanitarian organizations, the extreme hype cycle has many skeptics who believe that blockchain and DLTs are simply overblown and for the most part useless for the sector. Until recently, evidence on the utility of blockchain/DLTs in the humanitarian sector has been slim to none, with some calling for the sector to step back and establish a measured approach and a learning agenda in order to determine if blockchain is worth spending time on. Others argue that evaluators misunderstand what to evaluate and how.

The GAHI report provides an excellent overview of blockchain and DLTs in the sector along with recommendations at the project, policy and system levels to address the challenges that would need to be overcome before DLTs can be ethically, safely, appropriately and effectively scaled in humanitarian contexts.

What’s blockchain? What’s a DLT?

We started with a basic explanation of DLTs and Blockchain and how they work. (See page 5 of the GAHI report for more detail).

The GAHI report aimed to get beyond the potential of Blockchain and DLTs to actual use cases — however, in the humanitarian sector there is still more potential than evidence. Although there were multiple use cases to choose from, the report authors chose to go in-depth on five, selected to provide a sense of the different ways that blockchain is specifically being used in the sector.

These use cases all currently have limited “nodes” (e.g., places where the data is stored) and only a few “controlling entities” (that determine what information is stored or put on the chain). They are all “private” (as opposed to public) blockchains, meaning they are not taking advantage of DLT potential for dispersed information, and they end up being more like “a very expensive database.”

What’s the deal with private vs public blockchains?

Private versus public blockchains are an ideological sticking point in “deep blockchain culture,” noted one Salon participant. “’Cryptobros’ and blockchain fundamentalists think private blockchains are the Antichrist.” Private blockchains are considered an oxymoron and completely antithetical to the idea of blockchain.

So why are humanitarian organizations creating private blockchains? “They are being cautious about protecting data as they test out blockchain and DLTs. It’s a conscious choice to proceed in a controlled way, because once information is on the blockchain, it’s immutable — it cannot be removed.” When first trying out a DLT or blockchain, “Humanitarians tend to be cautious. They don’t want to play with the permanency of a public blockchain since they are working with vulnerable populations.”
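That immutability is a direct consequence of how a blockchain is built: each block stores a hash of the previous one, so altering any record changes every hash after it and the tampering becomes detectable. Here is a teaching sketch, not any agency’s actual system:

```python
# Minimal hash chain showing why on-chain records are tamper-evident.
import hashlib, json

def make_block(data, prev_hash):
    block = {"data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"data": data, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    return block

def verify(chain):
    for prev, cur in zip(chain, chain[1:]):
        recomputed = hashlib.sha256(
            json.dumps({"data": cur["data"], "prev": cur["prev"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if cur["prev"] != prev["hash"] or cur["hash"] != recomputed:
            return False
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("aid delivered to household 42", chain[-1]["hash"]))
print(verify(chain))          # True
chain[1]["data"] = "edited"   # tamper with a record after the fact...
print(verify(chain))          # False: the chain no longer checks out
```

On a public chain, many independent nodes hold copies and run this kind of verification, which is exactly the dispersal that a single-node private chain gives up.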

Because of the blockchain hype cycle, however, there is some skepticism about organizations using private blockchains. “Are they setting up a private blockchain with one node so that they can say that they’re using blockchain just to get funding?”

An issue with private blockchains is that they are not open and transparent. The code is developed behind closed doors, meaning that it’s difficult to make it interoperable, whereas “with a public chain, you can check the code and interact with it.”

Does the humanitarian sector have the capacity to use blockchain?

As one person pointed out, knowledge and capacity around blockchain in the humanitarian sector is very low. There are currently very few people who understand both humanitarian work and the private sector/technology side of blockchain. “We desperately need intermediaries because people in the two sectors talk past each other. They use the same words to mean very different things, and this leads to misunderstandings.” This is a perpetual issue in the “humanitarian tech” space, and it often leads to applications that are not in the best interest of those on the receiving end of humanitarian work.

Capacity challenges also come up with regard to managing partnerships that involve intellectual property. When cooperating with the private sector, organizations are normally required to sign an MOU that gives rights to the company. Often humanitarian agencies do not fully understand what they are signing up for. This can mean that the company uses the humanitarian collaboration to develop technologies that are later used in ways that the humanitarian agency considers unethical or disturbing. Having technology or blockchain expertise within an organization makes it possible to better negotiate those types of situations, but often only the larger INGOs can afford that type of expertise. Similarly, organizations lack expertise in the legal and regulatory space with regard to blockchain.

How will blockchain become locally owned? Should we wait for a user-friendly version?

Technology moves extremely fast, and organizations need a certain level of capacity to create it and maintain it. “I’m an engineer working in the humanitarian space,” said one Salon participant. “Blockchain is such a complex software solution that I’m very skeptical it will ever be at a stage where it could be locally owned and managed. Even with super basic SMS-based services we have maintenance issues and challenges handing off the tech. If in this room we are struggling to understand blockchain, how will this ever work in lower tech and lower resource areas?” Another participant asked a similar question with regard to handing off a blockchain solution to a local government.

Does the sector need to wait for a simplified and “user friendly” version of blockchain before humanitarians get into the space? Some said yes, but other participants said that the technology is moving quickly, and that it is critical for humanitarians to “get in there” to try to slow it down. “Sometimes blockchain is not the solution. Sometimes a database is just fine. We need people to pump the brakes before things get out of control.”

“How can people learn about blockchain? How could a grassroots organization begin to set one up?” asked one person. There is currently no “Squarespace for blockchain,” and the technology remains complicated, but those with a strong drive could learn, according to one person. But although “coders might be able to teach themselves ‘light blockchain,’ there is definitely a barrier to entry.” This is a challenge with the whole area of blockchain. “It skipped the education step. We need a ‘learning revolution’ if we want people to actually use it.”

Enabling environments for learning to use blockchain don’t exist in conflict zones. The knowledge is held by a few individuals, and this makes long-term support and maintenance of DLT and blockchain systems very difficult. How to localize and own the knowledge? How to ensure sustainability? The sector needs to think about what the “Blockchain 101” is. There needs to be more accompaniment, investment and support for the enabling environment if blockchain is to be useful and sustainable in the sector.

Are there any examples of humanitarian blockchain that are working?

The GAHI report talks about five cases in particular. Disberse was highlighted by one Salon participant as an example that seems to be working. Disberse is a private fin-tech company that uses blockchain, but it was started by former humanitarians. “This example works in part because there is a sense of commitment to the humanitarian sector alongside the technical expertise.”

In general, the place in the humanitarian space where blockchain/DLTs appear to be most effective is back-end use cases. In other words, blockchain is helpful for making behind-the-scenes transactions in humanitarian assistance more efficient. It can eliminate bank transaction fees, and this leads to savings. Agencies can also use blockchain to create efficiencies and benefits for record keeping and auditability. This situation is not unique to blockchain. A recent DIAL baseline study of the global ICT4D ecosystem also found that in the social sector, the main benefits of ICTs were going to organizations, not to vulnerable populations.

“This is all fine,” according to one Salon participant, “but one must be clear that the benefits accrue to the agencies, not the ‘beneficiaries,’ who may not even know that DLTs are being used.” On the one hand, having a seamless backend built on blockchain, where users don’t even know that blockchain is involved, sounds ideal. On the other hand, this can be somewhat problematic. “Are agencies getting meaningful and responsible consent for using blockchain? If executives don’t even understand what the blockchain is, how do you explain that to people more generally?”

Because there is not a simple, accessible way of developing blockchain solutions, and there are not a lot of user-friendly interfaces for the general population, humanitarian applications of blockchain will likely only be useful for back-office operations for at least the next few years. This means that it is up to humanitarian organizations to re-invest any money saved by blockchain into program funding, so that “beneficiaries” are accruing the benefits.

What other “social” use cases are there for blockchain?

In the wider social and development sectors, there are plenty of potential use cases, but again, very little documented evidence of their short- and long-term impacts. (Author’s note: I am not talking about financial and private sector use cases, I’m referring very specifically to social sectors and the international development and humanitarian sector). For example, Oxfam is tracing supply chains of rice; however, this is a one-off pilot and it’s unclear whether it can scale. IBM has a variety of supply chain examples. Land registries and sustainable fishing are also being explored, as are digital ID, birth registration, and civil registries.

According to one Salon participant, “supply chain is the low-hanging fruit of blockchain – just recording something, tracking it, and referencing it. It’s all basically a ledger, a spreadsheet. Even digital ID – it’s a supply chain of movement. Provenance is a good way to use a blockchain solution.” Other areas where blockchain is said to have potential are situations where election transparency is needed, and “smart contracts,” where complex contracts are required and there is a lack of trust amongst the parties. In general, where there is a recurring need for anonymized, disaggregated data, blockchain could be a solution.

The important thing, however, is having a very clear definition of the problem before deciding that blockchain is the solution. “A lot of times people don’t know what their problem is, and the problem is not one that can be fixed with blockchain.” Additionally, accuracy (“garbage in, garbage out”) remains a problem that blockchain on its own cannot solve. “If the off-chain process isn’t accurate, if you’re looking at human rights abuses of migrant workers but everything is being fudged, if your supply chain is blurry, or if the information being put on the blockchain is not verified, then you have a separate problem to figure out before thinking about blockchain.”

What about ethics and consent and the Digital Principles?

Are the Digital Principles being used as a way to guide ethical, responsible, and sustainable blockchain use in the humanitarian space? asked one Salon participant. The general impression in the room was no. “Deep crypto in the private sector is a black hole in the blockchain space,” according to one person, and the gap between the world of blockchain in the private sector and the world of blockchain in the humanitarian sector is huge. (See this write up for a taste of one segment of the crypto-world.) “The majority of private sector blockchain enthusiasts who are working on humanitarian issues have not heard of any principles. They are operating with no principles, and sometimes it’s largely for PR, because the blockchain hype cycle means they will get a lot of good press from it. You get someone who read an article in Vice about a problem in a place they’ve never heard of, and they decide that blockchain is the solution…. They are often re-inventing the wheel, and fire, and also electricity — they think that no one has ever thought about this problem before.”

Most in the room considered that this type of uninformed application of blockchain is irresponsible, and that these parallel worlds and conversations need to come together. “The humanitarian space has decades of experience with things that have been tried and haven’t worked – but people on the tech side think no one has ever tried solving these problems. We need to improve the dialogue and communication. There is a wealth of knowledge to share, and a huge learning curve on both sides.”

Additionally, one Salon participant pointed out the importance of bringing ethics into the discussion. “It’s not about just using a blockchain. It’s about what the problem is that you’re trying to solve, and does blockchain help address that problem? There are a lot of problems that blockchain is not appropriate for. Do you have the technical capacity or an accessible online environment? That’s important.”

On top of that, “it’s important for people to know that their information is being used in a particular way by a particular technology. We need to grapple with that, or we end up experimenting on people who are already marginalized or vulnerable to begin with. How do we do that? It’s like the Facebook moment. That same thing for blockchain – if you don’t know what’s going on and how your information is being used, it’s problematic.”

A third point is the massive environmental disadvantage of public blockchains. Currently, the computing power used to verify and validate transactions that happen on public chains is immense. That is part of the ethical challenge related to blockchain. “You can’t get around the massive environmental aspect. And that makes it ironic for blockchain to be used to track carbon offsets.” (Note: there are blockchain companies who say they are working on reducing the environmental impact of blockchain, with “pilots coming very soon,” but it remains to be seen whether this is true or whether it’s another part of the hype cycle.)

What should donors be doing?

In addition to taking into consideration the ethical, intellectual property, environmental, sustainability, ownership, and consent aspects mentioned above and being guided by the Digital Principles, it was suggested that donors make sure they do their homework and conduct thorough due diligence on potential partners and grantees. “The vetting process needs to be heightened with blockchain because of all the hype around it. Companies come and go. They are here one day and disappear the next.” There was deep suspicion in the room because of the many blockchain outfits that are hyped up and do not actually have the staff to truly do blockchain for humanitarian purposes and use this angle just to get investments.

“Before investing, It would be important to talk with someone like Larissa [our lead discussant] who has done vetting,” said one Salon participant.  “Don’t fall for the marketing. Do a lot of due diligence and demand evidence. Show us the evidence or we’re not funding you. If you’re saying you want to work with a vulnerable or marginalized population, do you have contact with them right now? Do you know them right now? Or did you just read about them in Vice?”

Recommendations outlined in the GAHI report include providing multi-year financing to humanitarian organizations to allow for the possibility of scaling, and asking for interoperability requirements and guidelines around transparency to be met so that there are not multiple silos governing the sector.

So, are we there yet?

Nope. But at least we’re starting to talk about evidence and learning!


Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

 

 

 


On November 14, Technology Salon NYC met to discuss issues related to the role of film and video in development and humanitarian work. Our lead discussants were Ambika Samarthya from Praekelt.org; Lina Srivastava of CIEL; and Rebekah Stutzman from Digital Green’s DC office.

How does film support aid and development work?

Lina proposed that there are three main reasons for using video, film, and/or immersive media (such as virtual reality or augmented reality) in humanitarian and development work:

  • Raising awareness about an issue or a brand and serving as an entry point or a way to frame further actions.
  • Community-led discussion/participatory media, where people take agency and ownership and express themselves through media.
  • Catalyzing movements themselves, where film, video, and other visual arts are used to feed social movements.

Each of the above is aimed at a different audience. “Raising awareness” often only scratches the surface of an issue and can have limited impact if done on its own without additional actions. Community-led efforts tend to go deeper and focus on the learning and impact of the process (rather than the quality of the end product), but they usually reach fewer people (and thus have a higher cost per person and less scale). When using video for catalyzing movements, the goal is normally bringing people into a longer-term advocacy effort.

In all three instances, there are issues with who controls access to tools, platforms, and distribution channels. Though social media has changed this to an extent, there are still gatekeepers who impact who gets to be involved and whose voice/whose story is highlighted, funders who determine which work happens, and algorithms that dictate who will see the end products.

Participants suggested additional ways that video and film are used, including:

  • Social-emotional learning, where video is shown and then discussed to expand on new ideas and habits or to encourage behavior change.
  • Personal transformation through engaging with video.

Becky shared Digital Green’s approach, which is participatory: community members use video to help themselves and those around them. The organization supports community members to film videos about their agricultural practices, and these are then taken to nearby communities to share and discuss. (More on Digital Green here.) Video doesn’t solve anyone’s development problem all by itself, Becky emphasized. If an agricultural extensionist is no good, having a video as part of their training materials won’t solve that. “If they have a top-down attitude, don’t engage, don’t answer questions, etc., or if people are not open to changing practices, video or no video, it won’t work.”

How can we improve impact measurement?

Questions arose from Salon participants around how to measure the impact of film in a project or wider effort. Overall, impact measurement in the world of film for development is weak, noted one discussant, because change takes a long time and is hard to track. We are often encouraged to focus on the wrong things, like “vanity measurements” such as “likes” and “clicks,” but these don’t speak to the longer-term and deeper impact of a film, and they are often inappropriate in terms of who the audience is for the actual films (e.g., are we interested in impact on the local audience who is affected by the problem, or on the external audience who is being encouraged to care about it?).

Digital Green measures behavior change based on uptake of new agriculture practices. “After the agriculture extension worker shows a video to a group, they collect data on everyone that’s there. They record the questions that people ask, the feedback about why they can’t implement a particular practice, and in that way they know who is interested in trying a new practice.” The organization sets indicators for implementing the practice. “The extension worker returns to the community to see if the family has implemented a, b, c and if not, we try to find out why. So we have iterative improvement based on feedback from the video.” The organization does post their videos on YouTube but doesn’t know if the content there is having an impact. “We don’t even try to follow it up as we feel online video is much less relevant to our audience.” An organization that is working with social-emotional learning suggested that RCTs could be done to measure which videos are more effective. Others who work on a more individual or artistic level said that the immediate feedback and reactions from viewers were a way to gauge impact.

Donors often have different understandings of useful metrics. “What is a valuable metric? How can we gather it? How much do you want us to spend gathering it?” commented one person. Larger, longer-term partners who are not one-off donors will have a better sense of how to measure impact in reasonable ways. One person who formerly worked at a large public television station noted that it was common to have long conversations about measurement, goals, and aligning to the mission. “But we didn’t go by numbers, we focused on qualitative measurement.” She highlighted the importance of having these conversations with donors and asking them “why are you partnering with us?” Being able to say no to donors is important, she said. “If you are not sharing goals and objectives you shouldn’t be working together. Is gathering these stories a benefit to the community? If you can’t communicate your actual intent, it’s very complicated.”

The goal of participatory video is less about engaging external (international) audiences or branding and advocacy. Rather it focuses on building skills and capacities through the process of video making. Here, the impact measurement is more related to individual, and often self-reported, skills such as confidence, finding your voice, public speaking, teamwork, leadership skills, critical thinking and media literacy. The quality of video production in these cases may be low, and videos unsuitable for widespread circulation, however the process and product can be catalysts for local-level change and locally-led advocacy on themes and topics that are important to the video-makers.

Participatory video suffers from low funding levels because it doesn’t reach the kind of scale that is desired by funders, though it can often contribute to deep, personal and community-level change. Some felt that even if community-created videos were of high production quality and translated to many languages, large-scale distribution is not always feasible because they are developed in and speak to/for hyper-local contexts, thus their relevance can be limited to smaller geographic areas. Expectation management with donors can go a long way towards shifting perspectives and understanding of what constitutes “impact.”

Should we re-think compensation?

Ambika noted that there are often challenges related to incentives and compensation when filming with communities for organizational purposes (such as branding or fundraising). Organizations are usually willing to pay people for their time in places such as New York City, and less inclined to do so when working with a rural community that is perceived to benefit from an organization’s services and projects. Perceptions by community members that a filmmaker is financially benefiting from video work can be hard to overcome, and this means that conflict may arise during non-profit filmmaking aimed at fundraising or building a brand. Even when individuals and communities are aware that they will not be compensated directly, there is still often some type of financial expectation, noted one Salon participant, such as the purchase of local goods and products.

Working closely with gatekeepers and community leaders can help to ease these tensions. When filmmaking takes several hours or days, however, participants may be visibly stressed or concerned about household or economic chores that are falling to the side during filming, and this can be challenging to navigate, noted one media professional. Filming in virtual reality can exacerbate this problem, since VR filming is normally over-programmed and repetitive in an effort to appear realistic.

One person suggested a change in how we approach incentives. “We spent about two years in a community filming a documentary about migration. This was part of a longer research project. We were not able to compensate the community, but we were able to invest directly in some of the local businesses and to raise funds for some community projects.” It’s difficult to understand why we would not compensate people for their time and their stories, she said. “This is basically their intellectual property, and we’re stealing it. We need a sector rethink.” Another person agreed, “in the US everyone gets paid and we have rules and standards for how that happens. We should be developing these for our work elsewhere.”

Participatory video tends to have less of a challenge with compensation. “People see the videos, the videos are for their neighbors. They are sharing good agricultural or nutrition approaches with people that they already know. They sometimes love being in the videos and that is partly its own reward. Helping people around them is also an incentive,” said one person.

There were several other rabbit holes to explore in relation to film and development, so look for more Salons in 2018!

To close out the year right, join us for ICT4Drinks on December 14th at Flatiron Hall from 7-9pm. If you’re signed up for Technology Salon emails, you’ll find the invitation in your inbox!

Salons run under Chatham House Rule so no attribution has been made in this post. If you’d like to attend a future Salon discussion, join the list at Technology Salon.

 

Read Full Post »

For our Tuesday, July 27th Salon, we discussed partnerships and interoperability in global health systems. The room held a wide range of perspectives: small and large non-governmental organizations, donors and funders, software developers, designers, healthcare professionals, and students. Our lead discussants were Josh Nesbit, CEO at Medic Mobile; Jonathan McKay, Global Head of Partnerships and Director of the US Office of Praekelt.org; and Tiffany Lentz, Managing Director, Office of Social Change Initiatives at ThoughtWorks.

We started by hearing from our discussants on why they had decided to tackle issues in the area of health. Reasons were primarily because health systems were excluding people from care and organizations wanted to find a way to make healthcare inclusive. As one discussant put it, “utilitarianism has infected global health. A lack of moral imagination is the top problem we’re facing.”

Other challenges include requests for small-scale pilots and customization/bespoke applications, lack of funding and extensive requirements for grant applications, and a disconnect between what is needed on the ground and what donors want to fund. “The amount of documentation to get a grant is ridiculous, and then the system that is requested to be built is not even the system that needs to be made,” commented one person. Another challenge is that everyone is under constant pressure to demonstrate that they are being innovative. [Sidenote: I’m reminded of this post from 2010….] “They want things that are not necessarily in the best interest of the project, but that are seen to be innovations. Funders are often dragged along by that,” noted another person.

The conversation most often touched on the unfulfilled potential of having a working ecosystem and a common infrastructure for health data as well as the problems and challenges that will most probably arise when trying to develop these.

“There are so many uncoordinated pilot projects in different districts, all doing different things,” said one person. “Governments are doing what they can, but they don’t have the funds,” added another, “and that’s why there are so many small pilots happening everywhere.” One company noted that it had started developing a platform for SMS but abandoned it in favor of working with an existing platform instead. “Can we create standards and protocols to tie some of this work together? There isn’t a common infrastructure that we can build on,” was the complaint. “We seem to always start from scratch. I hope donors and organizations get smart about applying pressure in the right areas. We need an infrastructure that allows us to build on it and do the work!” On the other hand, someone warned of the risks of pushing everyone to “jump on a mediocre software or platform just because we are told to by a large agency or donor.”

The benefits of collaboration and partnership are apparent: increased access to important information, more cooperation, less duplication, the ability to build on existing knowledge, and so on. However, though desirable, partnerships and interoperability are not easy to establish. “Is it too early for meaningful partnerships in mobile health? I was wondering if I could say that…” said one person. “I’m not even sure I’m actually comfortable saying it…. But if you’re providing essential basic services, collecting sensitive medical data from patients, there should be some kind of infrastructure apart from private sector services, shouldn’t there?” The question is who should own this type of mediator platform: governments? MNOs?

Beyond this, there are several issues related to control and ownership. Who would own the data? Is there a way to get to a point where the data would be owned by the patients and demonetized? If the common system is run by the private sector, there should be protections surrounding the patients’ sensitive information. Perhaps this should be a government-run system. Should it be open source?

Open source has its own challenges. “Well… yes. We’ve practiced ‘hopensource’,” said one person (to widespread chuckles).

Another explained that the way we’ve designed information systems has held back shifts in health systems. “When we’re comparing notes and how we are designing products, we need to be out ahead of the health systems and financing shifts. We need to focus on people-centered care. We need to gather information about a person over time and place. About the teams who are caring for them. Many governments we’re working with are powerless and moneyless. But even small organizations can do something. When we show up and treat a government as a systems owner that is responsible to deliver health care to their citizens, then we start to think about them as a partner, and they begin to think about how they could support their health systems.”

One potential model is to design a platform or system such that it can eventually be handed off to a government. This, of course, isn’t a simple idea in execution. Governments can be limited by their internal expertise. The personnel that a government has at the time of the handoff won’t necessarily be there years or months later. So while the handoff itself may be successful in the short term, there’s no firm guarantee that the system will be continually operational in the future. Additionally, governments may not be equipped with the knowledge to make the best decisions about software systems they purchase. Governments’ negotiating capacity must be expanded if they are to successfully run an interoperable system. “But if we can bring in a snazzy system that’s already interoperable, it may be more successful,” said one person.

Having a common data infrastructure is crucial. However, we must also spend some time thinking about what the data itself should look like. Can it be standardized? How can we ensure that it is legible to anyone with access to it?
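One way to picture what standardization could look like: a record shaped after a published standard such as HL7 FHIR, one widely used open standard for health data. This is my illustration rather than something raised at the Salon, and the field values below are invented. A minimal sketch in Python:

```python
import json

# A minimal record following the general shape of a FHIR Patient resource.
# The identifier and values are invented for illustration only.
patient = {
    "resourceType": "Patient",
    "id": "example-001",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1990-04-12",
}

# Because the structure follows a published standard, any system that
# speaks FHIR can read this record without project-specific integration.
print(json.dumps(patient, indent=2))
```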

These are only some of the relevant political issues, and at a more material level, one cannot ignore the technical challenges of maintaining a national scale system. For example, “just getting a successful outbound dialing rate is hard!” said one person. “If you are running servers in Nigeria it just won’t always be up! I think human centered design is important. But there is also a huge problem simply with making these things work at scale. The hardcore technical challenges are real. We can help governments to filter through some of the potential options. Like, can a system demonstrate that it can really operate at massive scale?” Another person highlighted that “it’s often non-profits who are helping to strengthen the capacity of governments to make better decisions. They don’t have money for large-scale systems and often don’t know how to judge what’s good or to be a strong negotiator. They are really in a bind.”

This is not to mention that “the computers have plastic over them half the time. Electricity, computers, literacy, there are all these issues. And the TelCo infrastructure! We have layers of capacity gaps to address,” said one person.

There are also donors to consider. They may come into a project with unrealistic expectations of what is normal and what can be accomplished. There is a delicate balance to be struck between inspiring donors to take up the project and managing expectations so that they are not disappointed. One strategy is to “start hopeful and steadily temper expectations.” This is true with other kinds of partnerships as well. “Building trust with organizations so that when things do go bad, you can try to manage it is crucial. Often it seems like you don’t want to be too real in the first conversation. I think, ‘if I lay this on them at the start, it can be too real and feel overwhelming.’” Others recommended setting expectations about how everyone together is performing: “It’s more like, ‘together we are going to be looking at this, and we’ll be seeing together how we are going to work and perform together.’”

Creating an interoperable data system is costly and time-consuming, oftentimes more so than donors and other stakeholders imagine, but there are real benefits. Any step in the direction of interoperability must deal with challenges like those considered in this discussion. Problems abound. Solutions will be harder to come by, but not impossible.

So, what would practitioners like to see? “I would like to see one country that provides an incredible case study showing what good partnership and collaboration looks like with different partners working at different levels and having a massive impact and improved outcomes. Maybe in Uganda,” said one person. “I hope we see more of us rally around supporting and helping governments to be the system owners. We could focus on a metric or shared cause – I hope in the near future we have a view into the equity measure and not just the vast numbers. I’d love to see us use health equity as the rallying point,” added another. From a different angle, one person felt that “from a for-profit, we could see it differently. We could take on a country, a clinic or something as our own project. What if we could sponsor a government’s health care system?”

A participant summed the Salon up nicely: “I’d like to make a flip-side comment. I want to express gratitude to all the folks here as discussants. This is one of the most unforgiving and difficult environments to work in. It’s SO difficult. You have to be an organizational superhero. We’re among peers and feel it is normal to talk about challenges, but you’re really all contributing so much!”

Salons are run under Chatham House Rule so no attribution has been made in this post. If you’d like to attend a future Salon discussion, join the list at Technology Salon.

 

Read Full Post »

Our Tech Salon on Thursday March 9th focused on the potential of microwork to support youth economic empowerment. Joining us as lead discussants were Lis Meyers, Banyan Global; Saul Miller, Samasource; and Elena Matsui, The Rockefeller Foundation. Banyan Global recently completed a report on “The Nexus of Microwork and Impact Sourcing: Implications for Youth Employment,” supported by the Global Center for Youth Employment and RTI, who also sponsored this Salon. (Disclosure: I worked on the report with the team at Banyan.)

Definitions: To frame the discussion, we provided some core definitions and an explanation of the premise of microwork and its role within impact sourcing.

  • Business Process Outsourcing (BPO): the practice of reducing business costs by transferring portions of work to outside suppliers rather than completing it internally.
  • Online outsourcing: contracting a third-party provider (often in a different country) to supply products or services that are delivered and paid for via the Internet. The third party is normally an individual (e-lancing), an online community (crowdsourcing), or a firm.
  • Microwork: a segment of online outsourcing where projects or complex tasks are broken into simple tasks that can be completed in seconds or minutes. Workers require numeracy, advanced literacy, and an understanding of internet and computer technology, and are usually paid small amounts of money for each completed task.
  • Impact sourcing (also known as socially responsible outsourcing): a business practice in which companies outsource to suppliers that employ individuals from the lowest economic segments of the population.

The premise: It is believed that if microwork is done within an impact sourcing framework, it has the potential to create jobs for disadvantaged youth and disconnected, vulnerable populations and to provide them with income opportunities to support themselves and their families. Proponents of microwork believe it can equip workers with skills and experience that can enable them to enhance their employability regardless of gender, age, socio-economic status, previous levels of employment, or physical ability. Microwork is not always intentionally aimed at vulnerable populations, however. It is only when impact sourcing is adopted as the business strategy that microwork directly benefits the most disadvantaged.

The ecosystem: The microwork industry includes a variety of stakeholders, including: clients (looking to outsource work), service providers (who facilitate the outsourcing by liaising with these clients, breaking tasks down into micro tasks, employing and managing micro workers, and providing overall management and quality control), workers (individual freelancers, groups of people, direct employees, or contractors working through a service provider on assigned micro tasks), donors/investors, government, and communities.

Models of Microwork: The report identifies three main models for microwork: micro-distribution (e.g., Amazon Mechanical Turk or CrowdFlower), the direct model (e.g., Digital Divide Data or iMerit), and the indirect model (e.g., Samasource or Rural Shores).


Implementer Case Study. With the framework settled, we moved to hearing from our first discussant, from Samasource, who provided the “implementer” point of view. Samasource has been operating since 2008. Its goal is to connect marginalized women and/or youth with dignified work through the Internet. The organization sees itself as an intermediary or a bridge, and believes that work offers the best solution to the complex problem of poverty. It works through three key programs: Samaschool, Microwork and SamaHub. At Samaschool, potential micro workers are trained on the end-to-end process.

The organization puts potential micro workers through an assessment process (former employment history, level of education, context) to predict and select which of the potential workers will offer the highest impact. Most of Samasource’s workers were underemployed or unemployed before coming to Samasource. At Samaschool they learn digital literacy, soft skills, and the technical skills that will enable them to succeed on the job and build their resumes. Research indicates that after 4 years with Samasource, these workers show a 4-fold increase in income.

The organization has evolved over the past couple of years, opening its own delivery center in Nairobi with 650 agents (micro workers), and will also launch in Mumbai. Samasource considers that this hands-on delivery center model (as opposed to the micro-distribution model) offers more control over recruitment and training, quality control, worker preparation, and the feedback loops that help workers improve their own performance. The model also offers workers wrap-around programs and benefits, like full-time employment with financial literacy training, mentorship, pensions and healthcare.

In closing, it was highlighted that impact measurement has been a top priority for Samasource. The organization was recently audited and received 8 out of 9 stars for quality of impact, evidence and M&E systems. Pending is an RCT that will aim to address the counterfactual (what would happen if Samasource were not operating here?). The organization is experiencing substantial growth, doubling its revenue last year and projecting to grow another 50%. It achieved financial sustainability for the first time in the last quarter of 2016, driven by growth in the industries that require data processing and cleaning and by the expansion of AI.

Questions on sustainability. One participant asked why the organization took 8 years to become sustainable. Samasource explained that it had been heavily subsidized by donors, and part of the journey has been to reduce subsidies and increase paid clients. A challenge is keeping costs down and competing with other service providers while still offering workers dignified work. As one of our other discussants noted, this is a point of contention for some local service providers who are less well known to donors: because they are not heavily subsidized, they have not been able to focus as much on the “impact” part.

For Digital Divide Data (DDD), which was also present at the Salon, the goal was not to get to profit quickly; rather, the initial objective was social. Now that the organization is maturing, it has begun thinking more about profitability and sustainability. It remains a non-profit organization, however.

Retention and scale. Both Samasource and DDD noted that workers are staying with them for longer periods of time (up to 4 years). This works well for individual employees, who then have stable work with benefits. It also works well for clients: employees learn the work, so quality is higher, and because the BPO industry has a lot of turnover, stable micro workers benefit the BPO. It is less useful for achieving scale, however, because workers don’t move through the program quickly enough to open up space for new recruits. For Samasource, the goal would be for workers to move on within 2 years. At DDD, workers complete university while working for DDD, so 4 years is the norm. Some stay for 6 years, which also limits scaling potential. DDD is looking at a new option for workers to be credentialed and certified, potentially through a 6-month or 1-year program.

The client perspective. One perspective highlighted in the Banyan report is that of clients. Some loved microwork and impact sourcing; others said it was challenging. Many are interested in partnering with microwork service providers like iMerit and Daiprom because doing so offers more data security (you can sign an NDA with a service provider, whereas you can’t with individual workers who come in through micro-distribution and crowdsourcing). Working with a service provider also means that you have an entity that is responsible for quality control. Experiences with service providers have varied, however, and some had signed on to jobs that they were unprepared to train workers on, resulting in missed deadlines and poor-quality work. Clients were clear that their top priority was business – they cared first about quality, cost, and timeliness. “Impact was the cherry on top,” as one discussant noted.

The worker perspective. An aspect missing from the study and the research is that of worker experiences. (As Banyan noted, this would require additional resources for a proper in-depth study.) Do workers really seek career growth? Or are they simply looking for something flexible that can help them generate some income in a pinch or supplement their incomes during hard times? In Venezuela, for example, the number of micro workers on CrowdFlower has jumped astronomically during the current political and economic crisis, demonstrating that these types of platforms may serve as supplemental income for those in the most desperate situations. What is the difference in what different workers need?

One small study of micro workers in Kenya noted that when trying to work on their own through the micro-distribution model, they faced major challenges: they were not able to collect electronic payments; they got shut out of the system because several youth were using the same IP address and it was flagged as fraud; language and time zones affected the work that was available to them; some companies only wanted workers from certain countries whom they trusted or felt could align culturally; and young women were wary of scams and sexual harassment if accessing work online, as this had been their experience with work offline. Some participants wondered what the career path was for a micro worker. Do they go back to school? Do they move ahead to a higher-level, higher-paying job? Samasource and DDD have some evidence that micro workers in their programs do go on to more dignified, higher-paying, more formal jobs, though much of this is due to the wraparound programming that they offer.

The role of government was questioned by Salon participants. Is there a perfect blend of private sector, government and an impact sourcing intermediary? Should government be using micro workers and purposefully thinking about impact sourcing? Could government help to scale microwork and impact sourcing? To date the role of government has been small, noted one discussant. Others wondered if there would be touch points through existing government employment or vocational programs, but it was pointed out that most of the current micro workers are those that have already fallen through the cracks on education and vocational training programming.

A participant outlined her previous experience with a local municipality in India that wanted to create local employment. The contracting process excluded impact sourcing providers for inexplicable reasons: there were restrictions such as having been in operation for at least 3 years, having a certain minimum level of turnover, a certain number of employees in the system, etc. “So while the government talked about work that needed to be digitized and wanted rural employees, and we went on a three-year journey with them to make it inclusive of impact sourcers, it didn’t really work.”

What about social safeguards? One Salon participant raised concerns about the social services and legal protections in place for micro workers. In the absence of regulations, are these issues being swept under the carpet, she wondered. Another noted that minimum standards would be a positive development, but that this will be a long process, as currently there is not even a standard definition of impact sourcing, and it’s unclear what is meant by ‘impact’ and how it’s measured.

This is one area where government could and should play a role. In the past, for example, government has pushed procurement from women-owned or minority-owned businesses. Something similar could happen with impact sourcing, but we need standards in order for it to happen. Not all clients who use micro workers are doing it within a framework of impact sourcing and social impact goals. For example, some clients said they were doing “impact sourcing” simply because they were sourcing work from a developing country. In reality, they were simply working with a normal BPO, and so the risk of “impact washing” is real.

Perhaps, noted another participant, the focus should be on drumming up quality clients who actually want to have an impact. “A mandated standard will mean that you lose the private sector.” Some suggested there could be some type of ‘certified organic’ or ‘Good Housekeeping’ seal of approval from a respected entity. Some felt that businesses were not interested and that government would never move something like this forward. Others disagreed, saying that some large corporations really do want to be perceived as ethical players.

Definitions proved a major challenge – for example at what point does an ‘impact worker’ cease being an impact worker and how do you count them? Should someone be labeled for life as an impact worker? There was disagreement in the room on this point.

A race to the bottom? Some wondered if microwork was just a re-hashing of the ‘gig economy’ debate. Would it drive down prices and create extremely unstable work for the most disadvantaged populations? Were there ways that workers could organize if they were working via the micro-distribution model, didn’t even know where to find each other, and were in a system set up to make them bid against each other? It was noted that one platform aimed to support workers on Amazon Mechanical Turk, and that workers there helped each other with tips on how to get contracts. However, as with Uber and other gig economy players, it appeared that all the costs of learning and training were being passed on to the workers themselves.

Working through the direct or indirect models can help to protect individual workers in this aspect, as Samasource, for example, does offer workers contracts and benefits and has a termination policy. The organization is also in a position to negotiate contracts that may be more beneficial to workers, such as extending a 3-week contract with lots of workers over a longer period of time with fewer workers so that income is steadier. Additionally, evaluations have shown that these jobs are pulling in workers who have never had formal jobs before, and that there is an increase in income over time for Samasource workers.

What can donors do? Our third discussant noted that the research is mixed in terms of how different kinds of microwork without any intermediary or wraparound services can actually build a career pathway. Some who are active in the space are still working hard to identify the right partnerships and build support for impact sourcing. It has been difficult to find a “best of breed” or a “gold standard” to date as the work is still evolving. “We’re interested in learning from others what partners need from donors to help scale the work that is effective.” It’s been difficult to evaluate, as she noted, because there has been quite a lot of secrecy involved, as often people do not want to share what is working for fear of losing the competitive edge.

What does the future hold? One Salon participant felt that something very bold was required, given how rapidly economies and technologies are changing. Some of the current microwork will be automated in the near future, he said; the window is closing quickly. Others disagreed, saying that the change in technology was opening up new growth in the sector and that some major players were even delaying their projections because of these rapid shifts in robotics and automation. The BPO sector is fickle and moves quickly – voice work, for example, has shifted rapidly from India to the Philippines. Samasource felt that human components were still required to supplement and train AI, and DDD noted that their workers are actually training the machines that will take over their current jobs. It was also noted that most current micro workers are digital natives, and a career in data entry is not highly enticing. “We need to find something that helps them feel connected to the global economy. We need to keep focused on relevant skills. The data stuff has a timestamp and it’s on its way out.” DDD is working with universities to bring in courses that are focused on some of the new and emerging skill sets that will be needed.

Conclusions. In short, there are plenty of critical questions remaining in the area of microwork and impact sourcing, and around the broader question of the future of youth employment at the global level. How do we stay abreast of the rapid changes in economy, business, and technology? What skill sets are needed? A recent article in India’s Business Standard notes constant efforts at re-skilling IT workers. These questions face not only ‘developing countries’; the US is in a similar crisis. Will online work with no wraparound services be a stopgap solution? Will holistic models be pushed so that young people develop additional life skills that will help them in the longer term? Will we learn how to measure and understand the ‘impact’ in ‘impact sourcing’? Much remains to explore and test!

Thanks to the Global Center for Youth Employment and RTI for supporting this Salon, to our lead discussants and participants, and to ThoughtWorks for hosting us! If you’d like to join us for a future Technology Salon, sign up here!


Read Full Post »

Our March 18th Technology Salon NYC covered the Internet of Things and Global Development with three experienced discussants: John Garrity, Global Technology Policy Advisor at CISCO and co-author of Harnessing the Internet of Things for Global Development; Sylvia Cadena, Community Partnerships Specialist, Asia Pacific Network Information Centre (APNIC) and the Asia Information Society Innovation Fund (ISIF); and Andy McWilliams, Creative Technologist at ThoughtWorks and founder and director of Art-A-Hack and Hardware Hack Lab.

By Wilgengebroed on Flickr [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons

What is the Internet of Things?

One key task at the Salon was clarifying what exactly is the “Internet of Things.” According to Wikipedia:

The Internet of Things (IoT) is the network of physical objects—devices, vehicles, buildings and other items—embedded with electronics, software, sensors, and network connectivity that enables these objects to collect and exchange data. The IoT allows objects to be sensed and controlled remotely across existing network infrastructure, creating opportunities for more direct integration of the physical world into computer-based systems, and resulting in improved efficiency, accuracy and economic benefit; when IoT is augmented with sensors and actuators, the technology becomes an instance of the more general class of cyber-physical systems, which also encompasses technologies such as smart grids, smart homes, intelligent transportation and smart cities. Each thing is uniquely identifiable through its embedded computing system but is able to interoperate within the existing Internet infrastructure. Experts estimate that the IoT will consist of almost 50 billion objects by 2020.

As one discussant explained, the IoT involves three categories of entities: sensors, actuators and computing devices. Sensors read data in from the world for computing devices to process via decision logic, which then generates some type of action back out to the world (motors that turn doors, control systems that operate water pumps, actions happening through a touch screen, etc.). Sensors can be anything from video cameras to thermometers or humidity sensors. They can be consumer items (like a garage door opener or a wearable device) or industrial grade (like those that keep giant machinery running in an oil field). Sensors are common in mobile phones, but more and more we see them being de-coupled from cell phones and integrated into or attached to all manner of other everyday things. The boom in the IoT means that whereas in the past a person may have had one IP address for their desktop computer, now they might occupy several: through their phone, their iPad, their laptop, their Fitbit and a number of other ‘things.’
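As a rough illustration of that sensor → decision logic → actuator loop, here is a minimal sketch in Python. The moisture sensor, threshold and pump are hypothetical placeholders; a real deployment would read from hardware drivers rather than a random-number stub.

```python
import random
import time

MOISTURE_THRESHOLD = 30.0  # hypothetical trigger level, in percent

def read_soil_moisture():
    # Stand-in for a real sensor driver; returns a moisture percentage.
    return random.uniform(0, 100)

def set_pump(on):
    # Stand-in for an actuator call, e.g., toggling a relay on a pump.
    print("Pump", "ON" if on else "OFF")

for _ in range(3):  # a real device would loop indefinitely
    moisture = read_soil_moisture()          # sensor: data in from the world
    set_pump(moisture < MOISTURE_THRESHOLD)  # decision logic drives the actuator
    time.sleep(1)                            # sample interval (shortened here)
```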

Why does IoT matter for Global Development?

Price points for sensors are going down very quickly and wireless networks are steadily expanding — not just wifi but macro cellular technologies. According to one lead discussant, 95% of the world is covered by 2G and two-thirds by 3G networks. Alongside that is a plethora of technology that is wide-range and low-tech. This means that all kinds of data, all over the world, are going to be available in massive quantities through the IoT. Some are excited about this because of how the data can be used to track global development indicators, for example the type of data being sought to measure the Sustainable Development Goals (SDGs). Others are concerned about the impact of data collected via the IoT on privacy.

What are some examples of the IoT in Global Development?

Discussants and others gave many examples of how the IoT is making its way into development initiatives, including:

  • Flow meters and water sensors to track whether hand pumps are working
  • Protecting the vaccine cold chain – with a 2G thermometer, an individual can monitor the cold chain for local use and the information also goes directly to health ministries and to donors
  • Monitoring the environment and tracking animals or endangered species
  • Monitoring traffic routes to manage traffic systems
  • Managing micro-irrigation of small shareholder plots from a distance through a feature phone
  • As a complement to traditional monitoring and evaluation (M&E) — a sensor on a cook stove can track how often a stove is actually used (versus information an individual might provide using recall), helping to corroborate and reduce bias
  • Verifying whether a teacher is teaching or has shown up to school using a video camera

The CISCO publication on the IoT and Global Development provides many more examples and an overview of where the area is now and where it’s heading.

How advanced is the IoT in the development space?

Currently, IoT in global development is very much a hacker space, according to one discussant. There are very few off-the-shelf solutions that development or humanitarian organizations can purchase and readily implement. Some social enterprises are ramping up activity, but there is no larger ecosystem of opportunities for off-the-shelf products.

Because the IoT in global development is at an early phase, challenges abound. Technical issues, power requirements, reliability and upkeep of sensors (which need to be calibrated), IP issues, security and privacy, technical capacity, and policy questions all need to be worked out. One discussant noted that these challenges carry on from the mobile for development (m4d) and information and communication technologies for development (ICT4D) work of the past.

Participants agreed that the challenges are currently huge. For example, devices are homogeneous, making it very easy to hack and affect a lot of devices at once. No one has completely gotten their head around the privacy and consent issues, which are very different from those of using Facebook. There are lots of interoperability issues as well. As one person highlighted, there are over 100 different communication protocols being used today. It is more complicated than the old “Betamax vs. VHS” question – we have no idea at this point what the standard for the IoT will be.

For those who see the IoT as a follow-on from ICT4D and m4d, the big question is how to make sure we are applying what we’ve learned and avoiding the same mistakes and pitfalls. “We need to be sure we’re not committing the error of just seeing the next big thing, the next shiny device, and forgetting what we already know,” said one discussant. There is plenty of material and documentation on how to avoid repeating past mistakes, he noted. “Read ICTworks. Avoid pilotitis. Don’t be tech-led. Use open source, and so on…. Look at the digital principles and apply them to the IoT.”

A higher level question, as one person commented, is around the “inconvenient truth” that although ICTs drive economic growth at the macro level, they also drive income inequality. No one knows how the IoT will contribute or create harm on that front.

Are there any existing standards for the IoT? Should there be?

Because there is so much going on with the IoT – new interventions, different sectors, all kinds of devices, a huge variety in levels of use, from hacker spaces up to industrial applications — there is a huge range of standards and protocols out there, said one discussant. “We don’t really want to see governments picking winners or saying ‘we’re going to use this or that.’ We want to see the market play out and the better protocols bubble up to the surface. What’s working best where? What’s cost effective? What open protocols might be most useful?”

Another discussant pointed out that there is a legacy predating the IoT: machine-to-machine (M2M) communication, which has not always been Internet based. “Since this legacy is still there, how can we move things forward with regard to standardization and interoperability while also avoiding leaving out those who are using M2M?”

What’s up with IPv4 and IPv6 and the IoT? (And why haven’t I heard about this?)

Another crucial technical point raised was that of IPv4 and IPv6, something not many Salon participants had heard of, but that will greatly impact how the IoT rolls out and expands, and just who will be left out of this new digital divide. (Note: I found this video helpful for explaining IPv4 vs IPv6.)

“Remember when we used Netscape and we understood how an IP number translated into an IP address…?” asked one discussant. “Many people never get that lovely experience these days, but it’s important! There is a finite number of IPv4 addresses and they are running out. Only Africa and Latin America have addresses left,” she noted.

IPv6 has been around for 20 years but there has not been a serious effort to switch over. Yet in order to connect the next billion and the multiple devices that they may bring online, we need more addresses. “Your laptop, your mobile, your coffee pot, your fridge, your TV – for many of us these are all now connected devices. One person might be using 10 IP addresses. Multiply that by millions of people, and the only thing that makes sense is switching over to IPv6,” she said.
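The scale difference behind her point is easy to check with Python’s standard ipaddress module; this is just a back-of-the-envelope comparison, not anything run at the Salon.

```python
import ipaddress

ipv4 = ipaddress.ip_network("0.0.0.0/0")  # the entire IPv4 space
ipv6 = ipaddress.ip_network("::/0")       # the entire IPv6 space

print(f"IPv4 addresses: {ipv4.num_addresses:,}")  # 4,294,967,296 (2**32)
print(f"IPv6 addresses: {ipv6.num_addresses:,}")  # roughly 3.4 x 10**38 (2**128)
```

With one person easily occupying 10 addresses, the IPv4 pool covers well under a billion such users; IPv6 removes address scarcity as a constraint entirely.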

There is a problem with the technical skills and the political decisions needed to make that transition happen. For much of the world, the IoT will not happen very smoothly and entire regions may be left out of the IoT revolution if high level decision makers don’t decide to move ahead with IPv6.

What are some of the other challenges with global roll-out of IoT?

In addition to the IPv4 – IPv6 transition, there are all kinds of other challenges with the IoT, noted one discussant. The technical skills required to make the transition that would enable IoT in some regions, for example Asia Pacific, are sorely needed. Engineers will need to understand how to make this shift happen, and in some places that is going to be a big challenge. “Things have always been connected to the Internet. There are just going to be lots more, different things connected to the Internet now.”

One major challenge is that there are huge ethical questions along with security and connectivity holes (as I will outline later in this summary post, and as discussed in last year’s salon on Wearable Technologies). In addition, noted one discussant, if we are designing networks that are going to collect data for diseases, for vaccines, for all kinds of normal businesses, and put the data in the cloud, developing countries need to have the ability to secure the data, the computing capacity to deal with it, and the skills to do their own data analysis.

“By pushing the IoT onto countries and not supporting the capacity to manage it, instead of helping with development, you are again creating a giant gap. There will be all kinds of data collected on climate change in the Pacific Island Countries, for example, but the countries don’t have capacity to deal with this data. So once more it will be a bunch of outsiders coming in to tell the Pacific Islands how to manage it, all based on conclusions that outsiders are making based on sensor data with no context,” alerted one discussant. “Instead, we should be counseling our people, our countries to figure out what they want to do with these sensors and with this data and asking them what they need to strengthen their own capacities.”

“This is not for the SDGs and ticking off boxes,” she noted. “We need to get people on the ground involved. We need to decentralize this so that people can make their own decisions and manage their own knowledge. This is where the real empowerment is – where local people and country leaders know how to collect data and use it to make their own decisions. The thing here is ownership — deploying your own infrastructure and knowing what to do with it.”

How can we balance the shiny devices with the necessary capacities?

Although the critical need to invest in and support country-level capacity to manage the IoT has been raised, this type of back-end work is always much less ‘sexy’ and less interesting for donors than measuring some development programming with a flashy sensor. “No one wants to fund this capacity strengthening,” said one discussant. “Everyone just wants to fund the shiny sensors. This chase after innovation is really damaging the impact that technology can actually have. No one just lets things sit and develop — to rest and brew — instead we see everyone rushing onto the next big thing. This is not a good thing for a small country that doesn’t have the capacity to jump right into it.”

All kinds of things can go wrong if people are not trained on how to manage the IoT. Devices can be hacked and they may be collecting and sharing data without an individual’s knowledge (see Geoff Huston on The Internet of Stupid Things). Electrical outages and shorts, common in places with poor electricity ecosystems, can also cause big problems. In addition, the Internet is affected by legacy systems – so we need interoperability that goes backwards, said one discussant. “If we don’t make at least a small effort to respect those legacy systems, we’re basically saying ‘if you don’t have the funding to update your system, you’re out.’ This then reinforces a power dynamic where countries need the international community to give them equipment, or they need to buy this or buy that, and to bring in international experts from the outside. The pressure on poor countries to make things work, to do new kinds of M&E, to provide evidence is huge. With that pressure comes a higher risk of falling behind very quickly. We are also seeing pilot projects that were working just fine without fancy tech being replaced by newfangled tech-type programs instead of being supported over the longer term,” she said.

Others agreed that the development sector’s fascination with shiny and new is detrimental. “There is very little concern for the long-term, the legacy system, future upgrades,” said one participant. “Once the blog post goes up about the cool project, the sensors go bad or stop working and no one even knows because people have moved on.” Another agreed, citing that when visiting numerous clinics for a health monitoring program in one country, the running joke among the M&E staff was “OK, now let’s go and find the broken solar panel.” “When I think of the IoT,” she said, “I think of a lot of broken devices in 5 years.” The aspect of eWaste and the IoT has not even begun to be examined or quantified, noted another.

It is increasingly important for governments to understand how the Internet works, because they are making policy about it. Manufacturers need to better understand how the tech works on the ground, especially in different contexts that they are not accustomed to working in. Users need a better understanding of all of this because their privacy is at risk. Legal frameworks around data and national laws need more attention as well. “When you are working with restrictive governments, your organization’s or start-up’s idea might actually be illegal or close to a sedition law and you may end up in jail,” noted one discussant.

What choices will organizations need to make regarding the IoT?

When it comes to actually making decisions on how involved an organization should and can be in supporting or using the IoT, one critical choice will be related to the suite of devices, said our third discussant. Will it be a cloud device? A local computing device? A computer?

Organizations will need to decide if they want a vendor that gives them a package, or if they want a modular, interoperable approach built from units. They will need to think about aspects like whether to go with proprietary or open source, and whether the solution will be plug and play.

There are trade-offs here and key technical infrastructure choices will need to be made based on a certain level of expertise and experience. If organizations are not sure what they need, they may wish to get some advice before setting up a system or investing heavily.

As one discussant put it, “When I talk about the IOT, I often say to think about what the Internet was in the 90s. Think about that hazy idea we had of what the Internet was going to be. We couldn’t have predicted in the 90s what today’s internet would look like, and we’re in the same place with the IoT,” he said. “There will be seismic change. The state of the whole sector is immature now. There are very hard choices to make.”

Another aspect that’s representative of the IoT’s early stage, he noted, is that the discussion is all focused on HTTP and the Internet. “The IoT doesn’t necessarily even have to involve the Internet,” he said.

Most vendors are offering a solution with sensors to deploy, actuators to control and a cloud service where you log in to find your data. The default model is that the decision logic takes place there in the cloud, where data is stored. In this model, the cloud is in the middle, and the devices are around it, he said, but the model does not have to be that way.

Other models can offer more privacy to users, he said. “When you think of privacy and security – the healthcare maxim is ‘do no harm.’ However this current, familiar model for the IoT might actually be malicious.” The reason that the central node in the commercial model is the cloud is because companies can get more and more detailed information on what people are doing. IoT vendors and IoT companies are interested in extending their profiles of people. Data on what people do in their virtual life can now be combined with what they do in their private lives, and this has huge commercial value.

One option to look at, he shared, is a model that has a local connectivity component. This can be something like bluetooth mesh, for example. In this way, the connectivity doesn’t have to go to the cloud or the Internet at all. This kind of set-up may make more sense with local data, and it can also help with local ownership, he said. Everything that happens in the cloud in the commercial model can actually happen on a local hub or device that opens just for the community of users. In this case, you don’t have to share the data with the world. Although this type of a model requires greater local tech capacity and can have the drawback that it is more difficult to push out software updates, it’s an option that may help to enhance local ownership and privacy.
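A sketch of what that local-first pattern might look like, with the device names, readings and aggregate all hypothetical (a real hub might sit behind a Bluetooth mesh bridge rather than a function call):

```python
# Hypothetical local hub: sensors report over the local network, the
# decision logic runs here, and raw data never has to leave the site.
readings = {}  # latest reading per device, stored only on the hub

def on_sensor_report(device_id, value):
    readings[device_id] = value
    decide_locally(device_id, value)

def decide_locally(device_id, value):
    # The logic that sits in the cloud in the commercial model runs here.
    if device_id == "tank-level-1" and value < 10:
        print("Local action: switching water pump on")

def publish_summary():
    # Optional, deliberate step: share only an aggregate, never raw data.
    if readings:
        avg = sum(readings.values()) / len(readings)
        return {"site": "site-A", "average_reading": round(avg, 1)}

on_sensor_report("tank-level-1", 7.5)
print(publish_summary())
```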

This requires a ‘person first’ concept of design. “When you are designing IoT systems,” he said, “start with the value you are trying to create for individuals or organizations on the ground. Then implement the local part that you need to give local value. Then, only if needed, do you add on additional layers of the onion of connectivity, depending on the project.” The first priority is the value that the technology design will create for an individual client or community, not the commercial use of people’s data.

Another point this discussant highlighted was the need to conduct threat modeling and to think about unintended consequences. “If someone hacked this data – what could go wrong?” He suggested working backwards and asking: “What should I take offline? How do I protect it better? How do I anonymize it better?”

In conclusion….

It’s critical to understand the purpose of an IoT project or initiative, discussants agreed, to understand if and why scale is needed, and to be clear about the drivers of a project. In some cases, the cloud is desirable for quicker, easier set up and updates to software. At the same time, if an initiative is going to be sustainable, then community and/or country capacity to run it, sustain it, keep it protected and private, and benefit from it needs to be built in. A big part of that capacity includes the ability to understand the different layers that surround the IoT and to make grounded decisions on the various trade-offs that will come to a head in the process of design and implementation. These skills and capacities need to be developed and supported within communities, countries and organizations if the IoT is to contribute ethically and robustly to global development.

Thanks to APNIC for sponsoring and supporting this Salon and to our friends at ThoughtWorks for hosting! If you’d like to join discussions like this one in cities around the world, sign up at Technology Salon.

Salons are held under Chatham House Rule, therefore no attribution has been made in this post.

Read Full Post »

Our December 2015 Technology Salon discussion in NYC focused on approaches to girls’ digital privacy, safety and security. By extension, the discussion included ways to reduce risk for other vulnerable populations. Our lead discussants were Ximena Benavente, Girl Effect Mobile (GEM), and Jonathan McKay, Praekelt Foundation. I also shared a draft Girls’ Digital Privacy, Safety and Security Policy and Toolkit I’ve been working on with both organizations over the past year.

Girls’ digital privacy, safety and security risks

Our first discussant highlighted why it’s important to think specifically about girls and digital security. In part, this is because different factors and vulnerabilities combine, exacerbating girls’ levels of risk. For example, girls living on less than $2 per day likely only have access to basic mobile phones, which are often borrowed from parents or siblings. The organization she works with always starts with deep research on aspects like ownership vs. borrowship and whether girls’ mobile usage is free/unlimited and un-supervised or controlled by gatekeepers such as parents, brothers, or other relatives. This helps to design better tools, services and platforms and to design for safety and security, she said. “Gatekeepers are very restrictive in many cases, but parental oversight is not necessarily a bad thing. We always work with parents and other gatekeepers as well as with girls themselves when we design and test.” When girls are living in more traditional or conservative societies, she said, we also need to think about how content might affect girls both online and offline. For example, “is content sufficiently progressive in terms of girls’ rights, yet safe for girls to read, comment on or discuss with friends and family without severe retaliation?”

Research suggests that girls who are more vulnerable offline (due to poverty or other forms of marginalization) are likely also more vulnerable to certain risks online, so we design with that in mind, she said. “When we started off on this project, our team members were experts in digital, but we had less experience with the safety and privacy aspects when it comes to girls living under $2/day or who were otherwise vulnerable. Having additional guidance and developing a policy on this aspect has helped immensely – but has also slowed our processes down and sometimes made them more expensive,” she noted. “We had to go back to everything and add additional layers of security to make it as safe as possible for girls. We have also made sure to work very closely with our local partners to be sure that everyone involved in the project is aware of girls’ safety and security.”

Social media sites: Open, Closed, Private, Anonymous?

One issue that came up was safety for children and youth on social media networks. A Salon participant said his organization had thought about developing this type of a network several years back but decided in the end that the security risks outweighed the advantages. Participants discussed whether social media networks can ever be safe. One school of thought is that the more open a platform, the safer it is, as “there is no interaction in private spaces that cannot be constantly monitored or moderated.” Some worry about open sites, however, and set up smaller, closed, private groups that were closely monitored. “We work with victims of violence to share their stories and coping mechanisms, so, for us, private groups are a better option.”

Some suggested that anonymity on a social media site can protect girls and other vulnerable groups, however there is also research showing that Internet anonymity contributes to an increase in activities such as bullying and harassment. Some Salon participants felt that it was better to leverage existing platforms and try to use them safely. Others felt that there are no existing social media platforms that have enough security for girls or other vulnerable groups to use with appropriate levels of risk. “We sometimes recruit participants via existing social media platforms,” said one discussant, “but we move people off of those sites to our own more secure sites as soon as we can.”

Moderation and education on safety

Salon participants working with vulnerable populations said that they moderate their sites very closely and remove comments if users share personal information or use offensive language. “Some project budgets allow us to have a moderator check every 2 hours. For others, we sweep accounts once a day and remove offensive content within 24 hours.” One discussant uses moderation to educate the community. “We always post an explanation about why a comment was removed in order to educate the larger user base about appropriate ways to use the social network,” he said.
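As a toy sketch of that sweep model (the offensive-content check and the example comments are invented; real moderation mixes word lists, personal-information detection and human review):

```python
def is_unacceptable(comment):
    # Placeholder rule: flag comments that share personal information.
    return "phone:" in comment["text"].lower()

def sweep(comments):
    # Run every 2 hours or once a day, depending on the project budget,
    # so offensive content is removed within the promised window.
    kept = []
    for c in comments:
        if is_unacceptable(c):
            # Post an explanation so the removal educates the community.
            print(f"Removed comment {c['id']}: shared personal information")
        else:
            kept.append(c)
    return kept

comments = [
    {"id": 1, "text": "Thanks for sharing this!"},
    {"id": 2, "text": "Call me, phone: 555-0100"},
]
print(sweep(comments))
```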

Close moderation becomes difficult and costly, however, as the user base grows and a platform scales. This means individual comments cannot be screened and pre-approved, because that would take too long and defeat the purpose of an engaging platform. “We need to acknowledge the very real tension between building a successful and engaging community and maintaining privacy and security,” said one Salon participant. “The more you lock it down and the more secure it is, the harder you find it is to create a real and active community.”

Another participant noted that they use their safe, closed youth platform to educate and reinforce messaging about what is safe and positive use of social media in hopes that young people will practice safe behaviors when they use other platforms. “We know that education and awareness raising can only go so far, however,” she said, “and we are not blind to that fact.” She expressed concern about risk for youth who speak out about political issues, because more and more governments are passing laws that punish critics and censor information. The organization, however, does not want to encourage youth to stop voicing opinions or participating politically.

Data breaches and project close-out

One Salon participant asked if organizations had examples of actual data breaches, and how they had handled them. Though no one shared examples, it was recommended that every organization have a contingency plan in place for accidental data leaks or a data breach or data hack. “You need to assume that you will get hacked,” said one person, “and develop your systems with that as a given.”

In addition to the day-to-day security issues, we need to think about project close-out, said one person. “Most development interventions are funded for a short, specific period of time. When a project finishes, you get a report, you do your M&E, and you move on. However, the data lives on, and the effects of the data live on. We really need to think more about budgeting for proper project wind-down and ensure that we are accountable beyond the lifetime of a project.”

Data security, anonymization, consent

Another question was related to using and keeping girls’ (and others’) data safe. “Consent to collect and use data on a website or via a mobile platform can be tricky, especially if we don’t know how to explain what we might do with the data,” said one Salon participant. Another suggested it would be better not to collect any data at all. “Why do we even need to collect this data? Who is it for?” he asked. Others countered that this data is often the only way to understand what people are doing on the site, to make adjustments, and to measure impact.

One scenario was shared where several partner organizations discussed opening up a country’s cell phone data records to help contain a massive public health epidemic, but the privacy and security risks were too great, so the idea was scrapped. “Some said we could anonymize the data, but you can never really and truly anonymize data. It would have been useful to have a policy or a rubric that would have guided us in making that decision.”
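The point that data can rarely be truly anonymized is easy to demonstrate: even with names stripped, combinations of “quasi-identifiers” such as location, age and gender can single people out. The toy example below (entirely invented data) computes the k-anonymity of a dataset; if k equals 1, at least one person can be re-identified by linking these fields to another dataset.

```python
from collections import Counter

# Entirely invented records: names stripped, quasi-identifiers kept.
records = [
    {"zip": "10025", "age": 34, "gender": "F"},
    {"zip": "10025", "age": 34, "gender": "F"},
    {"zip": "10027", "age": 71, "gender": "M"},  # a unique combination
]

QUASI_IDENTIFIERS = ("zip", "age", "gender")

def k_anonymity(rows) -> int:
    """Smallest group of rows sharing one quasi-identifier combination.
    k == 1 means at least one person is uniquely re-identifiable by
    linking these fields to any other dataset that contains them."""
    groups = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in rows)
    return min(groups.values())

print(k_anonymity(records))  # -> 1: not safely "anonymized" after all
```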

Policy and Guidelines on Girls Privacy, Security and Safety

Policy guidelines do exist for areas such as responsible data for NGOs, data security, privacy, and digital security in general. (Here are some that we compiled along with some other resources). Most IT departments also have strict guidelines when it comes to donor data (in the case of credit card and account information, for example). This does not always cross over, however, to program-level ICT or M&E efforts that involve the populations that NGOs serve through their programming.

General awareness around digital security is increasing, in part due to recent major corporate data hacks (e.g., Target, Sony) and the Edward Snowden revelations of a few years back, but much more needs to be done to educate NGO staff and management on the types of privacy and security measures needed to protect data and mitigate risk for program participants. There is an argument that NGOs should have specific digital privacy, safety and security policies that are tailored to their programming and that specifically focus on the types of digital risks that girls, women, children or other vulnerable people face when they are involved in humanitarian or development programs.

One such policy (focusing on vulnerable girls) and toolkit (its accompanying principles and values, guidelines, checklists and a risk matrix template) was shared at the Salon. (Disclosure: this policy toolkit is one that I am working on. It should be ready to share in early 2016.) The policy and toolkit take program implementers through a series of issues and questions to help them assess potential risks and tradeoffs in a particular context, and to document decisions and improve accountability. The toolkit covers:

  1. data privacy and security – using approaches like Privacy by Design, setting limits on the data that is collected, and achieving meaningful consent (see the data-minimization sketch after this list).
  2. platform content and design – ensuring that content produced for girls, or that girls produce or volunteer themselves, does not put girls at risk.
  3. partnerships – vetting and managing partners who may provide online/offline services or who may join an initiative and want access to data, including the monetization of girls’ data.
  4. monitoring, evaluation, research and learning (MERL) – how program implementers will gather and store digital data when collecting it directly or through third parties for organizational MERL purposes.
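For the data privacy item above, one concrete expression of Privacy by Design is data minimization: decide up front which fields the program genuinely needs and discard everything else at intake. A minimal sketch follows; the field names are invented for illustration.

```python
# Invented field list: what the program decided, at design time, it truly needs.
ALLOWED_FIELDS = {"age_band", "district", "preferred_language"}

def minimize(submission: dict) -> dict:
    """Keep only pre-approved fields. Data discarded at intake can never be
    leaked, subpoenaed, or monetized later."""
    return {k: v for k, v in submission.items() if k in ALLOWED_FIELDS}

raw = {"name": "A.", "phone": "+254700000000",
       "age_band": "15-19", "district": "Example District"}
print(minimize(raw))  # -> {'age_band': '15-19', 'district': 'Example District'}
```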

Privacy, Security and Safety Implications

Our final discussant spoke about the implications of implementing the above-mentioned girls’ privacy, safety and security policy. He began by noting that the policy opens with a manifesto: We will not compromise a girl in any way, nor will we opt for solutions that cut corners in terms of cost, process or time at the expense of her safety. “I love having this as part of our project manifesto,” he said. “It’s really inspiring! On the flip side, however, it makes everything I do more difficult, time consuming and expensive!”

To demonstrate some of the trade-offs and decisions required when working with vulnerable girls, he compared the current project (implemented with girls’ privacy and security as a core principle) with a commercial social media platform and advertising campaign he had previously worked on, where the main concern was the corporation’s reputation, not the safety of the platform’s users or the risks they might take on by using it.

Moderation

On the private sector platform, said the discussant, “we didn’t have the option of pre-moderating comments because of the budget and because we had 800,000 users. To meet the campaign goals, it was more important for users to be engaged than to ensure content was safe. We focused on removing pornographic photos within 24 hours, using algorithms based on how much skin tone was in the photo.” In marketing and social media circles, it is well known that heavy-handed moderation kills platform engagement. “The more we educated and informed users about comment moderation, or removed comments, the deader the community became. The more draconian the moderation, the lower the engagement.”
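The discussant did not describe the filter in any detail, but a crude skin-tone heuristic of the kind he mentions might look something like the sketch below. The RGB thresholds, the 0.4 cut-off, and the queue_for_human_review step are all invented for illustration; filters like this were notoriously inaccurate across skin tones, which is why they only triaged photos for human review.

```python
from PIL import Image  # requires Pillow

def skin_fraction(path: str) -> float:
    """Fraction of pixels falling inside a crude RGB 'skin tone' range."""
    img = Image.open(path).convert("RGB")
    pixels = list(img.getdata())

    def looks_like_skin(rgb):
        r, g, b = rgb
        # Classic rule-of-thumb range: misses many real skin tones and
        # matches plenty of non-skin objects (sand, wood), hence human review.
        return r > 95 and g > 40 and b > 20 and r > g and r > b and (r - min(g, b)) > 15

    return sum(looks_like_skin(p) for p in pixels) / len(pixels)

# Hypothetical use: photos above the threshold go to a human moderator.
# if skin_fraction("upload.jpg") > 0.4:
#     queue_for_human_review("upload.jpg")  # invented function name
```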

The discussant had also worked on a platform for youth to discuss and learn about sexual health and practices, where he said users responded angrily to moderators and to comments that restricted their participation. “We did expose our participants to certain dangers, but we also knew that social digital platforms are more successful when they give users a sense of ownership and control. So we identified users who exhibited desirable behaviors and created a different tier of users (super users) who could take ownership, police and flag comments as inappropriate, or temporarily ban other users.” This reduced the moderation workload by 25%. The organization discovered, however, that it had to be careful about how much power these super users had. “They ended up creating certain factions on the platform, and we then had to develop safeguards and additional mechanisms by which we moderated our super users!”
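As a rough illustration of that kind of tiered arrangement, the sketch below weights super-user flags more heavily and logs their actions for review. The roles, weights, and threshold are hypothetical, not how the discussant’s platform actually worked.

```python
from dataclasses import dataclass, field

FLAG_WEIGHT = {"member": 1, "super_user": 3}   # invented weights
HIDE_THRESHOLD = 5                             # invented threshold

@dataclass
class Comment:
    text: str
    flags: list = field(default_factory=list)  # (user_id, role) pairs
    hidden: bool = False

def flag_comment(comment: Comment, user_id: str, role: str, audit_log: list) -> None:
    """Record a flag; hide the comment once weighted flags cross the threshold.
    Super-user actions also go to an audit log, so moderators can spot the
    faction-building behavior the discussant described."""
    comment.flags.append((user_id, role))
    if role == "super_user":
        audit_log.append((user_id, "flagged", comment.text))
    score = sum(FLAG_WEIGHT[r] for _, r in comment.flags)
    if score >= HIDE_THRESHOLD:
        comment.hidden = True
```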

Direct Messages among users

In the private sector project example, engagement was measured by the number of direct or private messages sent between platform users. In the current scenario, however, said the discussant, “we have not allowed any direct messages between platform users because of the potential risks to girls of having places on the site that are hidden from moderators. So as you can see, we are giving up some of our metrics by disallowing features because of risk. These are all things that would make the platform more engaging, but there is a big fear that they could put girls at risk.”

Adopting a privacy, security, and safety policy

One discussant highlighted the importance of having privacy, safety and security policies in place before a project or program begins. “If you start thinking about it later on, you may have to go back and rebuild things from scratch because your security holes are in the design….” The way a database is set up to capture user data can make it difficult to query later, or can leave users without any control over what information is or is not shared about them. “If you don’t set up the database with security and privacy in mind from the beginning, it might be impossible to make the platform safe for girls without starting from scratch all over again,” he said.
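One common design pattern for building that in from the start – offered here as a hedged sketch, not as what the discussant’s team actually built – is to key all activity data to a random pseudonym and keep any identifying details in a separate, restricted table that can be deleted on its own.

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
# Activity data is keyed by a random pseudonym, never by name or phone number.
# The lookup table linking pseudonyms to real contacts lives separately, so it
# can be access-restricted, encrypted, or dropped outright to sever identities.
conn.executescript("""
CREATE TABLE activity (
    pseudonym TEXT NOT NULL,   -- random ID, meaningless outside the system
    event     TEXT NOT NULL,
    ts        TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE pii_lookup (
    pseudonym TEXT PRIMARY KEY,
    contact   TEXT              -- kept only if the program truly needs it
);
""")

user = uuid.uuid4().hex
conn.execute("INSERT INTO activity (pseudonym, event) VALUES (?, ?)",
             (user, "viewed_article"))
```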

He also cautioned that when making more secure choices from the start, platform and tool development generally takes longer and costs more. It can be harder to budget because designers may not have experience with costing and developing the more secure options.

“A valuable lesson is that you have to make sure that what you’re trying to do in the first place is worth it if it’s going to be that expensive. Is it worth a girl’s while to use a platform if she first has to wade through five pages of terms and conditions on a small mobile phone screen? Are those terms and conditions even relevant to her personally or within her local context? Every click you ask a user to make will reduce their interest in reaching the platform. And if we don’t imagine that a girl will want to click through five screens of terms and conditions, the whole effort might not be worth it.” Clearly, aspects such as terms and conditions and consent processes need to be designed specifically to fit new contexts and new kinds of users.

Making responsible tradeoffs

The Girls Privacy, Security and Safety policy and toolkit shared at the Salon includes a risk matrix in which project implementers rank the intensity and probability of risks as high, medium or low. Based on how a situation, feature or other aspect is ranked, and on whether serious risks can be mitigated, decisions are made to proceed or not. There will always be areas with a certain level of risk to the user. The key is in making decisions and trade-offs that balance the level of risk with the potential benefits or rewards of the tool, service, or platform. The toolkit can also help project designers to imagine potential unintended consequences and mitigate the risks related to them. The policy also offers a way to systematically and proactively consider potential risks, decide how to handle them, and document decisions so that organizations and project implementers are accountable to girls, peers, partners, and organizational leadership.
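As a hedged sketch of how such a matrix might be encoded in practice (the scoring rule and threshold here are invented, not taken from the toolkit itself):

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def assess(feature: str, probability: str, intensity: str, mitigable: bool) -> str:
    """Score a risk as probability x intensity; proceed only when the score is
    low or when serious risks can be mitigated. Every decision gets documented."""
    score = LEVELS[probability] * LEVELS[intensity]
    if score >= 6 and not mitigable:
        return f"{feature}: do not proceed (score {score}, no mitigation available)"
    if score >= 6:
        return f"{feature}: proceed only with documented mitigation (score {score})"
    return f"{feature}: proceed (score {score})"

# Mirrors the direct-message decision described earlier: high intensity,
# no acceptable mitigation, so the feature is dropped.
print(assess("direct messages", "medium", "high", mitigable=False))
```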

“We’ve started to change how we talk about user data in our organization,” said one discussant. “We have stopped thinking about it as something WE create and own, and more as something GIRLS own. Banks don’t own people’s money – they borrow it for a short time. We are trying to think about data that way in the conversations we’re having about data, funding, business models, proposals and partnerships. You don’t get to own your users’ data, and we’re not going to share de-anonymized data with you. We’re seeing data legislation in some of the countries where we work going that way also, so it’s good to be thinking about this now and getting prepared.”

Take a look at our list of resources on the topic and add anything we may have missed!

 

Thanks to our friends at ThoughtWorks for hosting this Salon! If you’d like to join discussions like this one, sign up at Technology Salon. Salons are held under Chatham House Rule, therefore no attribution has been made in this post.


The July 7th Technology Salon in New York City focused on the role of Information and Communication Technologies (ICTs) in Public Consultation. Our lead discussants were Tiago Peixoto, Team Lead, World Bank Digital Engagement Unit; Michele Brandt, Interpeace’s Director of Constitution-Making for Peace; and Ravi Karkara, Co-Chair, Policy Strategy Group, World We Want Post-2015 Consultation. Discussants covered the spectrum of local, national and global public consultation.

We started off by delving into the elements of a high-quality public consultation. Then we moved into whether, when, and how ICTs can help achieve those elements, and what the evidence base has to say about different approaches.

Elements and principles of high quality public participation

Our first discussant started by listing elements that need to be considered whether a public consultation process is local, national or global, and regardless of whether it incorporates technology:

  • Sufficient planning
  • Realistic time frames
  • Education for citizens to participate in the process
  • Sufficient time and budget to gather views via different mechanisms
  • Interest in analyzing and considering the views
  • Provision of feedback about what is done with the consultation results

Principles underlying public consultation processes are that they should be:

  • Inclusive
  • Representative
  • Transparent
  • Accountable

Public consultation processes should also be accompanied by widespread public education to ensure that people are prepared to a) provide their opinions and b) be aware of the wider context in which the consultation takes place, she said. Tech and media can be helpful for spreading the news that the consultation is taking place, creating the narrative around it, and encouraging the participation of groups who are traditionally excluded, such as girls and women or certain political, ethnic, economic or religious groups, a Salon participant added.

Technology increases scale but limits opportunities for empathy, listening and learning

When thinking about integrating technologies into national public consultation processes, we need to ask ourselves why we want to encourage participation and consultation, what we want to achieve by it, and how we can best achieve it. It’s critical to set goals and purpose for a national consultation, rather than to conduct one just to tick a box, continued the discussant.

The pros and cons of incorporating technology into public consultations are contextual. Technology can be useful for bringing more views into the consultation process, however face-to-face consultation is critical for stimulating empathy in decision makers. When people in positions of power actually sit down and listen to their constituencies, it can send a very powerful message to people across the nation that their ideas and voices matter. National consultation also helps to build consensus and capacity to compromise. If done according to the above-mentioned principles, public consultation can legitimize national processes and improve buy-in. When leaders are open to listening, it also transforms them, she said.

At times, however, those in leadership or positions of power do not believe that people can participate; they do not believe that people have the capacity to hold an opinion about a complicated political process, for example the creation of a new constitution. For this reason there is often resistance to national-level consultations from multilateral or bilateral donors, politicians, the elites of a society, large or urban non-governmental organizations, and political leaders. Often when public consultation is suggested as part of a constitution-making process, it is rejected because it can slow down the process. External donors may want a quick process for political reasons, and they may impose deadlines on national leaders that do not leave sufficient time for a quality consultation process.

Polls often end up being one-off snapshots or popularity contests

One method that is seen as a quick way to conduct a national consultation is polling. Yet, as Salon participants discussed, polls may end up being more like a popularity contest than a consultation process. Polls offer limited space for deeper dialogue or for preparing those who have never been listened to before to make their voices heard. Polling may also raise expectations that whatever “wins” will be acted on, yet often there are many elements to consider when making decisions. So it’s important to manage expectations about what will be done with people’s responses and how much influence they will have on decision-making. Additionally, a poll generally offers a snapshot of how people feel at a distinct point in time, but it may be important to understand what people are thinking at various moments throughout a longer-term national process, such as constitution making.

In addition to the above, opinion polls often reinforce the voices of those who have traditionally had a say, whereas those who have been suffering or marginalized for years, especially in conflict situations, may have a lot to say and a need to be listened to more deeply, explained the discussant. “We need to compress the vertical space between the elites and the grassroots, and to be sure we are not just giving people a one-time chance to participate. What we should be doing is helping to open space for dialogue that continues over time. This should be aimed at setting a precedent that citizen engagement is important and that it will continue even after a goal, such as constitution writing, is achieved,” said the discussant.

In the rush to use new technologies, we often forget about more traditional ones like radio, added one Salon participant, who shared an example of using radio and face-to-face meetings to consult with boys and girls on the Afghan constitution. Another participant suggested we broaden our concept of technology. “A plaza or a public park is actually a technology,” he noted, and these spaces can be conducive to dialogue and conversation. It was highlighted that processes of dialogue between a) national government and the international community and b) national government and citizens normally happen in parallel and at odds with one another. “National consultations have historically been organized by a centralized unit, but now these kinds of conversations are happening all the time on various channels. How can those conversations be considered part of a national-level consultation?” wondered one participant.

Aggregation vs deliberation

There is plenty of research on aggregation versus deliberation, our next discussant pointed out, and we know that the worst way to determine how many beans are in a jar is to deliberate. Aggregation (“crowd sourcing”) is a better way to find that answer. But for a trial, it’s not a good idea to have people vote on whether someone is guilty or not. “Between the jar and the jury trial, however,” he said, “we don’t know much about what kinds of policy issues lend themselves better to aggregation or to deliberation.”

For constitution making, deliberation is probably better, he said. But for budget allocation, aggregation may be better. Research conducted across 132 countries indicated that “technology systematically privileges those who are better educated, male, and wealthier, even if you account for the technology access gaps.” This discussant mentioned that in participatory budgeting, people tend to just give up and let the educated “win,” whereas if it were done by a simple vote it might be more inclusive.

One Salon participant noted that it’s possible to combine deliberation and aggregation. “We normally only put things out for a vote after they’ve been identified through a deliberative process,” he said, “and we make sure that there is ongoing consultation.” Others lamented that decision makers often only want to see numbers – how many voted for what – and do not accept more qualitative consultation results, because those usually come from fewer participants. “Congress just wants to see numbers.”

Use of technology biases participation towards the elite

Some groups are using alternative methods for participatory democracy work, but the technology space has not thought much about this and relies on self-selection for the most part, said the discussant, and results end up being biased towards wealthier, urban, more educated males. Technology allows us to examine behaviors by looking at data registered in systems and to conduct experiments; however, those conducting these experiments need to be more responsible, and those who do not understand how to conduct such research should be more cautious about drawing empirical conclusions. “It’s a unique moment to build on what we’ve learned in the past 100 years about participation,” he said. Unfortunately, many working in the field of technology-enabled consultation have not done their research.

These biases towards wealthier, educated, urban males are very visible in Europe and North America, because there is so much connectivity, yet whether online or offline, less educated people participate less in the political process. In ‘developing’ countries, the poor usually participate more than the wealthy, however. So when you start using technology for consultation, you often twist that tendency and end up skewing participation toward the elite. This is seen even when there are efforts to proactively reach out to the poor.

Internal efficacy – an individual’s sense that he or she is capable of making a judgment or influencing an outcome – is key for participation, and it is closely related to education, time spent in school, and access to cultural assets. Among those who are traditionally marginalized, these internal assets are less developed and people are less confident. In order to increase participation in consultations, it’s critical to build these internal skills among more marginalized groups.

Combining online and offline public consultations

Our last discussant described how a global public consultation was conducted on a small budget for the Sustainable Development Goals, reaching an incredible 7.5 million people worldwide. Two clear goals of the consultation were that it be inclusive and non-discriminatory. In the end, 49% who voted identified as female, 50% as male and 1% as another gender. Though technology played a huge part in the process, the majority of people who voted used a paper ballot. Others participated using SMS, in locally-run community consultation processes, or via the website. Results from the voting were visualized on a data dashboard/data curation website so that it would be easier to analyze them, promote them, and encourage high-level decision makers to take them into account.
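As a purely illustrative sketch of the aggregation idea – not how the actual consultation system was implemented, and with all channels and data invented – results from every channel can be normalized into one ballot format before tallying, with per-channel counts kept as a sanity check:

```python
from collections import Counter

# Invented ballots: every channel (paper, SMS, web, community meetings) is
# normalized to a (channel, priority_chosen) pair before tallying.
ballots = [
    ("paper", "education"), ("paper", "healthcare"),
    ("sms", "education"),   ("web", "jobs"),
    ("community", "education"),
]

totals = Counter(choice for _, choice in ballots)
by_channel = Counter(channel for channel, _ in ballots)

print(totals.most_common())  # what a results dashboard would visualize
print(by_channel)            # sanity check that no single channel dominates
```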

One of the most important elements of this online/offline process was transparency. The consultation technology was created as open source so that those wishing to run their own consultations could take it, modify it, and repackage it however they wanted to suit their local context. Each local partner could manage their own URL and track their own work, and this was motivating to them.

Other key lessons were that a conscious effort has to be made to bring in the voices of minority groups; that investment in training and capacity development is critical for those running local consultations; that honesty and transparency about the process (in other words, careful management of expectations) are essential; and that there will be highs and lows in the participation cycle (be sensitive to people’s own rhythms and the time they have available to participate).

The importance of accountability

Accountability was a key aspect for this process. Member states often did not have time to digest the results of the consultation, and those running it had to find ways to capture the results in short bursts and visually simple graphics so that the consultation results would be used for decision making. This required skill and capacity for not only gathering and generating data but also curating it for the decision-making audience.

It was also important to measure the impact of the consultation – were people’s voices included in the decision-making process and did it make a difference? And were those voices representative of a wide range of people? Was the process inclusive?

Going forward, in order to build on the consultation process and to support the principle of accountability, the initiative will shift focus to become a platform for public participation in monitoring and tracking the implementation of the Sustainable Development Goals.

Political will and responsiveness

A question came up about the interest of decision-makers in actually listening. “Leaders often are not at all interested in what people have to say. They are more concerned with holding onto their power, and if leaders have not agreed to a transparent and open process of consultation, it will not work. You can’t make them listen if they don’t want to. If there is no political will, then the whole consultation process will just be propaganda and window dressing,” one discussant commented. Another Salon participant asked what can be done to help politicians see the value of listening. “In the US, for example, we have lobbyists, issues groups, PACs, etc., so our politicians are being pushed on and demanded from all sides. If consultation is going to matter, you need to look at the whole system.” “How can we develop tools that can help governments sort through all these pressures and inputs to make good decisions?” wondered one participant.

Another person mentioned Rakesh Rajani’s work, noting that participation is mainly about power. If participation is not part of a wider system change – part of changing power structures – then using technology for participation is just a new tool for doing the same old thing. If the process is not transparent and accountable, or if you engage people and then deliver nothing based on that engagement, you will lose their interest in engaging in the future.

Responsiveness was also raised. How many of these tech-fueled participation processes have led to governments actually changing or doing something different? One discussant said that evidence on the impact of ICT-enabled participation processes had been found in only 25 cases, and of those, only 5 could show any kind of impact; in all the others, the impact was ambiguous and unclear. Did using ICTs make a difference? There was really no evidence either way. Another commented that clearly technology will only help if government is willing and able to receive consultation input and act on it. We need to find ways to help governments do that, noted another person.

As always, the conversation could have continued for quite some time, but our two hours were up. For more on ICTs and public consultations, here is a short list of resources that we compiled. Please add any others that would be useful! And as a little plug for a great read on technology and its potential in development and political work overall, I highly recommend checking out Geek Heresy: Rescuing Social Change from the Cult of Technology by Kentaro Toyama. Kentaro’s “Law of Amplification” is quite relevant in the space of technology-enabled participation: technology amplifies existing human behaviors and tendencies, benefiting those who are already primed to benefit while excluding those who have traditionally been excluded. Hopefully we’ll get Kentaro in for a Tech Salon in the Fall!

Thanks to our lead discussants, Michele, Tiago and Ravi, and to Thoughtworks for their generous hosting of the Salon! Salons are conducted under Chatham House Rule so no attribution has been made in this post. Sign up here if you’d like to receive Technology Salon invitations.

