On Thursday, September 19, we gathered at the OSF offices for the Technology Salon on “Automated Decision Making in Aid: What could possibly go wrong?” with lead discussants Jon Truong and Elyse Voegeli, two of the creators of Automating NYC; and Genevieve Fried and Varoon Mathur, Fellows at the AI Now Institute at NYU.

To start off, we asked participants whether they were optimistic or skeptical about the role of Automated Decision-making Systems (ADS) in the aid space. The response was mixed: about half skeptics and half optimists, most of whom qualified their optimism as “cautious optimism” or “it depends on who I’m talking to” or “it depends on the day and the headlines” or “if we can get the data, governance, and device standards in place.”

What are ADS?

Our next task was to define ADS. (One reason that the New York City ADS task force was unable to advance is that its members could not agree on the definition of an ADS.)

One discussant explained that NYC’s provisional definition was something akin to:

  • Any system that uses data, algorithms, or computer programs to replace or assist a human decision-making process.

This may seem straightforward, yet, as she explained, “if you go too broad you might include something like ‘spellcheck’ which feels like overkill. On the other hand, spellcheck is a good case for considering how complex things can get. What if spellcheck only recognized Western names? That would be an example of encoding bias into the ADS. However, the degree of harm that could come from spellcheck as compared to using ADS for predictive policing is very different. Defining ADS is complex.”

Another element of the definition is that an ADS involves the computational implementation of an algorithm. Algorithms are basically clear instructions or criteria followed in order to make a decision, and they can be carried out manually; what ADS add, noted another discussant, is the power of computation. Perhaps a computer and a complex system should be part of the definition as well, along with a decision-making point or cut-off – for example, an algorithm that determines who gets a loan. It is also important to consider statistical modeling and forecasting, which allow for prediction.
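To make the idea of criteria, a decision point, and a cut-off concrete, here is a minimal, purely illustrative sketch of a rule-based “loan decision”; the criteria, thresholds, and field names are invented for the example and do not describe any real system:

```python
# Illustrative only: a toy ADS that encodes a loan decision as a fixed
# cut-off applied to a handful of criteria. All values are invented.

def loan_decision(applicant: dict) -> str:
    """Return a decision based on hard-coded criteria and a cut-off."""
    score = 0
    if applicant.get("monthly_income", 0) >= 1500:
        score += 2
    if applicant.get("existing_debt", 0) < 500:
        score += 1
    if applicant.get("years_at_address", 0) >= 2:
        score += 1

    # The cut-off below is the "decision-making point": whoever chooses
    # the criteria and the threshold is effectively setting policy.
    return "approve" if score >= 3 else "refer to a human reviewer"

print(loan_decision({"monthly_income": 2000, "existing_debt": 200, "years_at_address": 3}))
```

Every value judgment in a system like this lives in the choice of criteria and where the cut-off sits, which is exactly where bias can be encoded.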

Using data and criteria for making decisions is nothing new, and it’s often done without specific systems or computers. People make plenty of very bad decisions without computers, and the addition of computers and algorithms is sometimes considered a more objective approach, because instructions can be set and run by a computer.

Why are there issues with ADS?

In practice, things are not as clear cut as they might seem, explained one of our discussants. We live in a world where people are treated differently because of their demographic identity, and the way data is curated can over-represent some populations or misrepresent others because of how they have been treated historically. These current and historical biases make their way into algorithms, which are created by humans, encoding human bias into the ADS. When we feed existing data into a computer so that it can learn, we bring our historical biases into its decision-making. The data fed into an ADS may also fail to keep pace with changing demographics, and the algorithm may not reflect ongoing institutional policy changes.

As another person said, “systems are touted as being neutral, but they are subject to human fallacies. We live in a world that is full of injustice, and that is reflected in a data set or in an algorithm. The speed of the system, once it’s computerized, replicates injustices more quickly and at greater scale.” When people or institutions believe that the involvement of a computer means the system is neutral, we have a problem. “We need to take ADS with a grain of salt, similar to how we tell children not to believe everything they see on the Internet.”

Many people are unaware of how an algorithm works. Yet over time, we tend to rely on algorithms and believe in them as unbiased truth. When ADS are not monitored, tested, and updated, this becomes problematic. ADS can begin to make decisions for people rather than supporting people in making decisions, and this can go very wrong, for example when decisions are unquestioningly made based on statistical forecasting models.

Are there ways to curb these issues with ADS?

Consistent monitoring. ADS should be monitored by humans continuously over time. One Salon participant suggested setting up checkpoints in the decision-making process to alert humans when something is amiss. Another suggested that research and proof of concept are critical: running the existing human-only process alongside the ADS and comparing decisions over time helps flag differences, which can then be examined to see which process is working better and whether the ADS should be adjusted or discontinued. (In some cases, this process may actually flag biases in the human system.) Random checks can also be set up, as can control situations where some decisions are made without the ADS so that results from the two can be compared.
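One way to picture this “run them side by side” suggestion is a shadow-mode comparison in which the ADS output is never acted on directly, only logged and compared against the human decision. The sketch below is hypothetical; the decision rules and fields are invented for illustration:

```python
# Hypothetical "shadow mode" sketch: the ADS runs alongside the existing
# human process and every disagreement is flagged for human review
# rather than acted on automatically. All rules and data are invented.

def compare_decisions(cases, human_decide, ads_decide):
    """Yield the cases where the human and automated decisions diverge."""
    for case in cases:
        human = human_decide(case)
        automated = ads_decide(case)
        if human != automated:
            yield {"case": case, "human": human, "ads": automated}

cases = [{"id": 1, "need_score": 8}, {"id": 2, "need_score": 4}, {"id": 3, "need_score": 6}]
human_decide = lambda c: "assist" if c["need_score"] >= 5 else "no assist"
ads_decide = lambda c: "assist" if c["need_score"] >= 7 else "no assist"

for disagreement in compare_decisions(cases, human_decide, ads_decide):
    print(disagreement)  # each of these would go to a human checkpoint
```

Disagreements flagged this way could reveal problems in the ADS or, as noted above, biases in the existing human process.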

Recourse and redress. There should be simple and accessible ways for people affected by ADS to raise issues and make complaints. All ADS can make mistakes – there can be false positives (where an error points falsely to a match or the presence of a condition) and false negatives (where an error points to the absence of a match or a condition when indeed it is present). So there needs to be recourse for people affected by errors or in cases where biased data is leading to further discrimination or harm. Anyone creating an ADS needs to build in a way for mistakes to be managed and corrected.

Education and awareness. A person may not be aware that an ADS has affected them, and they likely won’t understand how an ADS works. Even people using ADS for decisions about others often forget that it’s an ADS deciding. This is similar to how people forget that their newsfeed on Facebook is based on their historical choices in content and their ‘likes’ and is not a neutral serving of objective content.

Improving the underlying data. Algorithms will only get better when there are constant feedback loops and new data that help the computer learn, said one Salon participant. Currently most algorithms are trained on highly biased samples that do not reflect marginalized groups and communities. For example, there is very little data about many of the people participating in or eligible for aid and development programs.

So we need proper data sets that are continually updated if we are to use ADS in aid work. This is a problem, however, if the data that is continually fed into the ADS remains biased. One person shared this example: if some communities are policed more because of race, economic status, etc., there will continually be more data showing that people in those communities are committing crimes. In whiter or wealthier communities, where there is less policing, fewer people are arrested. If we update our data continually without changing the fact that some communities are policed more than others (and thus appear to have higher crime rates), we are simply creating a feedback loop that confirms our existing biases.
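This feedback loop can be illustrated with a deliberately simplified toy simulation: two areas have the same underlying offence rate, but patrols are allocated in proportion to recorded arrests, and arrests can only be recorded where patrols go. The numbers below are invented and the model is purely illustrative:

```python
# Toy simulation of a biased feedback loop. Areas A and B have identical
# underlying offence rates, but patrols follow *recorded* arrests, so the
# area that starts with more records keeps generating more records.
import random

random.seed(0)
true_offence_rate = {"A": 0.05, "B": 0.05}   # identical underlying behavior
recorded_arrests = {"A": 10, "B": 1}         # historical skew in the data
patrols_per_round = 100

for _ in range(20):
    total = sum(recorded_arrests.values())
    for area, rate in true_offence_rate.items():
        patrols = int(patrols_per_round * recorded_arrests[area] / total)
        # Arrests are only recorded where patrols are present.
        recorded_arrests[area] += sum(random.random() < rate for _ in range(patrols))

print(recorded_arrests)  # area A ends up with far more recorded "crime"
```

Nothing about the communities differs in this toy model; only the starting data and the allocation rule do, yet the recorded data ends up “confirming” the original skew.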

Privacy concerns also enter the picture. We may want to avoid collecting data on race, gender, ethnicity, or economic status so that we don’t expose people to discrimination, stigma, or harm. For example, in the case of humanitarian work or conflict zones, sensitive data can make people or groups a target for governments or unfriendly actors. However, it’s hard to make decisions that benefit people if their data is missing. It ends up being a catch-22.

Transparency is another way to improve ADS. “In the aid sector, we never tell people how decisions are made, regardless of whether those are human or machine-made decisions,” said one Salon participant. When the underlying algorithm is obscured, it cannot be reviewed for value judgments. Some compared this to some of the current non-algorithmic decision-making processes in the aid system (which are also not transparent) and suggested that aid systems could get more intelligent if they began to surface their own specific biases.

The objectives of the ADS can be reviewed. Is the system used to further marginalize or discriminate against certain populations, or can this be turned on its head? asked one discussant. ADS could be used to try to determine which police officers might commit violence against civilians rather than to predict which people might commit a crime. (See the Algorithmic Justice League’s work). 

ADS in the aid system – limited to the powerful few?

Because of the underlying data challenges in the aid sector (quality, standards, and a sheer lack of data), building ADS there remains difficult. One area where data is available and where ADS are being built and used is supply chain management, for example at massive UN agencies like the World Food Program.

Some questioned whether this exacerbates concentration of power in these large agencies, running counter to agreed-upon sector goals to decentralize power and control to smaller, local organizations who are ‘on the ground’ and working directly in communities. Does ADS then bring even more hierarchy, bias, and exclusion into an already problematic system of power and privilege? Could there be ways of using ADS differently in the aid system that would not replicate existing power structures? Could ADS itself be used to help people see their own biases? “Could we build that into an ADS? Could we have a read out of decisions we came to and then see what possible biases were?” asked one person.

How can we improve trust in ADS?

Most aid workers, national organizations, and affected communities have a limited understanding of ADS, leading to lower levels of trust in ADS and the decisions they produce. Part of the issue is the lack of participation and involvement in the design, implementation, validation, and vetting of ADS. On the other hand, one Salon participant pointed out that given all the issues with bias and exclusion, “maybe they would trust an ADS even less if they understood how an ADS works.”

Involving both users of an ADS and the people affected by ADS decisions is crucial. This needs to happen early in the process, said one person. It shouldn’t be limited to having people complain or report once the ADS has wronged them. They need to be at the table when the system is being developed and trialed.

If trust is to be built, the explainability of an algorithm needs consideration. “How can you explain the algorithm to people who are affected by it? Humanitarian workers cannot describe an ADS if they don’t understand it. We need to find ways to explain ADS to a non-technical audience so that they can be involved,” said one person. “We’ve shown sophisticated models to leaders, and they defaulted to spreadsheets.”

This brought up the need for change management if ADS are introduced. Involving and engaging decision-makers in the design and creation of ADS systems is a critical step for their adoption. This means understanding how decisions are made currently and based on what factors. Technology and data teams need to be in the room to understand the open and hidden nature of decision-making.

Isn’t decision making without ADS also highly biased and obscured?

People are often resistant to talking about or sharing how decisions have been made in the past, however, because those decisions may have been biased or inconsistent, based on faulty data, or made for political or other reasons.

As one person pointed out, both government and the aid system are deeply politicized and suffer from local biases, corruption and elite capture. A spatial analysis of food distribution in two countries, for example, showed extreme biases along local political leader lines. A related analysis of the road network and aid distribution allowed a clear view into the unfairness of food distribution and efficiency losses.

Aid agencies themselves make highly-biased decisions all the time, it was noted. Decisions are often political, situational, or made to enhance the reputation of an individual or agency. These decisions are usually not fully documented. Is this any less transparent than the ‘black box’ of an algorithm? Not to mention that agencies have countless dashboards that are aimed at helping them make efficient, unbiased decisions, yet recommendations based on the data may run counter to what is needed politically or for other reasons in a given moment.

Could (should) the humanitarian sector assume greater leadership on ADS?

Most ADS are built by private sector partners. When they are sold to the public or INGO sector, these companies indemnify themselves against liability and keep their trade secrets, making it impossible to hold them to account for any harm produced. One person asked whether the humanitarian sector could lead by bringing in different incentives: transparency, multi-stakeholder design, participation, and a focus on wellbeing. Could we try this, learn from it, and develop and document processes whereby it could be done at scale? Could the aid sector open source how ADS are designed and created so that data scientists and others could improve them?

Some were skeptical about whether the aid sector would be capable of this. “Theoretically we could do this,” said one person, “but it would then likely be concentrated in the hands of these few large agencies. In order to have economies of scale, it will have to be them because automation requires large scale. If that is to happen, then the smaller organizations will have to trust the big ones, but currently the small organizations don’t trust the big ones to manage or protect data.” And what about the involvement of governments? asked another person; we would also need to consider the role of the public sector.

“I like the idea of the humanitarian sector leading,” added one person, “but aid agencies don’t have the greatest track record for putting their constituencies in the driving seat. That’s not how it works. A lot of people are trying to correct that, but aid sector employees are not the people who will be affected by these systems in the end. We could think about working with organizations who have the outreach capacity to do work with these groups, but again, these organizations are not made up of the affected people. We have to remember that.”

How can we address governance and accountability?

When you bring in government, private sector, aid agencies, software developers, data, and the like, said another person, you will have issues of intellectual property, ownership, and governance. What are the local laws related to data transmission and storage? Is it enough to open source just the code or ADS framework without any data in it? If you work with local developers and force them to open source the algorithm, what does that mean for them and their own sustainability as local businesses?

Legal agreements? Another person suggested that we focus on open sourcing legal agreements rather than algorithms. “There are always risks, duties, and liabilities listed in contracts and legal agreements. The private sector in particular will always play the indemnity card. And that means there is no commercial incentive to fix the tools that are being used. What if we pivoted this conversation to commercial liability? If a model is developed in Manhattan, it won’t work in Malawi — a company has a commercial duty to flag and recognize that. This type of issue is hidden if we focus the conversation on open software or open models. It’s rare that all the technology will be open and transparent. What we should push for is open contracting, and that could help a lot with governance.”

Certification? Others suggested that we adapt existing audit systems like the LEED certification (which allows engineers and architects to audit whether buildings are actually environmentally sustainable) or the IRB process (external boards that review research to flag ethical issues). “What if there were a team of data scientists and others who could audit ADS and determine the flaws and biases?” suggested one person. “That way the entire thing wouldn’t need to be open, but it could still be audited independently”. This was questioned, however, in that a stamp of approval on a single system could lead people to believe that every system designed by a particular group would pass the test.

Ethical frameworks could be a tool, yet which framework? A recent article cited 84 different ethical frameworks for Artificial Intelligence.

Regulation? Self-regulation has failed, said one person. Why aren’t we talking about actual regulation? The General Data Protection Regulation (GDPR) in Europe has a specific article (Article 22) about ADS that states that people have a right to know when ADS are used to make decisions that affect them, the right to contest decisions made by ADS, and the right to request that humans review ADS decisions.

SPHERE Standards / Core Humanitarian Standard? Because of the legal complexities of working across multiple countries and with different entities in different jurisdictions (including some like the UN who are exempt from the law), an add-on to the SPHERE standards might be considered, said one person. Or something linked to the Core Humanitarian Standard (CHS), which includes a certification process. Donors will often ask whether an agency is CHS certified.

So, is there any good to come from ADS?

We tend to judge ADS with higher standards than we judge humans, said one Salon participant. Loan officers have been making biased decisions for years. How can we apply the standards of impartiality and transparency to both ADS and human decision making? ADS may be able to fix some of our current faulty and biased decisions. This may be useful for large systems, where we can’t afford to deploy humans at scale. Let’s find some potential bright spots for ADS.

Some positive examples shared by participants included:

  • Human rights organizations are using satellite imagery to identify areas that have been burned or otherwise destroyed during conflict. This application of automated decision making doesn’t deal directly with people or the allocation of resources; rather, it supports human rights research.
  • In California, ADS has been used to expunge the records of people convicted of marijuana-related violations now that marijuana has been legalized. This example supports justice and fairness.
  • During Hurricane Irma, an organization in the Virgin Islands used an Excel spreadsheet to track whether people met the criteria for assistance. Aid workers would interview people and the sheet would calculate automatically whether they were eligible (a minimal sketch of this kind of rule-based check appears after this list). This was not high tech or sexy, but it was automated and fast. The government created the criteria, and these were openly and transparently communicated to people ahead of time so that if they didn’t receive benefits, they were clear about why.
  • Flood management is an area where there is a lot of data and forecasting. Governments have been using ADS to evacuate people before it’s too late. This sector can gain in efficiency with ADS, which could be expanded to other weather-based hazards. Because it is a straightforward use case that involves satellites and less personal data, it may be a less political space, making deployment easier.
  • Drones also use ADS to stitch together hundreds of thousands of photos to create large images of geographical areas. Though drone data still needs to be ground truthed, it is less of an ethical minefield than when personal or household level data is collected, said one participant. Other participants, however, had issues with the portrayal of drones as less of an ethical minefield, citing surveillance, privacy, and challenges with the ownership and governance of the final knowledge product, the data for which was likely collected without people’s consent.
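The Hurricane Irma spreadsheet mentioned above is essentially transparent, rule-based eligibility checking. Here is a minimal, hypothetical sketch of what such an automated check might look like; the criteria and field names are invented for illustration, not taken from the Virgin Islands example:

```python
# Hypothetical eligibility check of the kind a spreadsheet formula can do:
# criteria are published in advance, applied identically to everyone, and
# easy to explain when someone is found ineligible. All values are invented.

MAX_HOUSEHOLD_INCOME = 30000  # invented threshold

def is_eligible(household: dict) -> bool:
    return (
        household["home_damaged"]
        and household["income"] <= MAX_HOUSEHOLD_INCOME
        and household["in_affected_area"]
    )

print(is_eligible({"home_damaged": True, "income": 18000, "in_affected_area": True}))   # True
print(is_eligible({"home_damaged": False, "income": 18000, "in_affected_area": True}))  # False
```

The value of this kind of low-tech automation is less the calculation itself than the fact that the criteria were agreed, published, and applied consistently.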

How can the humanitarian sector prepare for ADS?

In conclusion, one participant summed up that decision-making has always been around. As ADS are explored in more depth by groups like the one at this Salon, and as we delve into the ethics and improve on these systems, there is great potential. ADS will probably never totally replace humans, but they can supplement humans to make better decisions.

How are we in the humanitarian sector preparing people at all levels of the system to engage with these systems, design them ethically, reduce harm, and make them more transparent? How are we working to build capacities at the local level to understand and use ADS? How are we figuring out ways to ensure that the populations who will be affected by ADS are aware of what is happening? How are we ensuring recourse and redress in the case of bad decisions or bias? What jobs might be created (rather than eliminated) with the introduction of more ADS?

ADS are not going to go away, and the humanitarian sector doesn’t have to wait until they are perfected to get involved in shaping and improving them so that they support our work in ethical and useful ways rather than in harmful or unethical ways.

Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

Our Technology Salon on Digital ID (“Will Digital Identities Support or Control Us”) took place at the OSF offices on June 3 with lead discussants Savita Bailur and Emrys Schoemaker from Caribou Digital and Aiden Slavin from ID2020.

In general, Salon Participants noted the potential positives of digital ID, such as improved access to services, better service delivery, accountability, and better tracking of beneficiaries. However, they shared concerns about potential negative impacts, such as surveillance and discrimination, disregard for human rights and privacy, lack of trust in government and others running digital ID systems, harm to marginalized communities, lack of policy and ethical frameworks, complexities of digital ID systems and their associated technological requirements, and low capacity within NGOs to protect data and to deal with unintended consequences.

What do we mean by digital identity (digital ID)?

Arriving at a basic definition of digital ID is difficult due to its interrelated aspects. To begin with: What is identity? A social identity arises from a deep sense of who we are and where we come from. A person’s social identity is a critical part of how they experience an ID system. Analog ID systems have been around for a very long time and digitized versions build on them.

The three categories below (developed by Omidyar) are used by some to differentiate among types of ID systems:

  • Issued ID includes state- or nationally-issued identification like birth certificates, driver’s licenses, and systems such as India’s biometric ID system (Aadhaar), built on existing analog models of ID systems and controlled by institutions.
  • De facto ID is an emerging category of ID that is formed through the data trails people leave behind when using digital devices, including credit scoring based on mobile phone use or social media use. De facto ID is somewhat outside of an individual’s control, as it is often based on analysis of passive data that individuals have not consented to having collected or used in this way. De facto ID also includes situations where refugees are tracked via call detail records (CDRs). De facto ID is a new and complex way of being identified and categorized.
  • Self-asserted ID is linked to the decentralization of ID systems. It is based on possession of forms of ID that prove who we are and that we manage ourselves. A related term is self-managed ID, which recognizes that no ID is truly “self-asserted,” because our identity is relational and always relies on others recognizing us as who we are and who we believe ourselves to be.

(Also see this glossary of Digital ID definitions.)

As re-identification technologies are becoming more and more sophisticated, the line between de-facto and official, issued IDs is blurring, noted one participant. Others said they prefer using a broad umbrella term “Identity in the Digital Age” to cover the various angles.

Who is digital ID benefiting?

Salon Participants tended to think that digital ID is mainly of interest to institutions. Most IDs are developed, designed, managed, and issued by institutions. Thus the interests baked into the design of an ID system are theirs. Institutions tend to be excited about digital ID systems because they are interoperable and help them with beneficiary management, financial records, entry/exit across borders, and the like.

This very interoperability, however, is what raises privacy, vulnerability, and data protection issues. Some of the most cutting-edge Digital ID systems are being tested on some of the most vulnerable populations in the world: refugees in Jordan, Uganda, Lebanon, and Myanmar. These digital ID systems have created massive databases for analysis; the UNHCR’s Progress database, for example, has 80 million records.

This brings with it a huge responsibility to protect. It also raises questions about the “one ID system to rule them all” idea. On the one hand, a single system can offer managerial control, reduce fraud, and improve tracking. Yet, as one person said, “what a horrifying prospect that an institution can have this much control! Should we instead be supporting one thousand ID systems to bloom?”

Can we trust institutions and governments to manage digital ID Systems?

One of the institutions positioning itself as the leader in Digital ID is the World Food Program (WFP). As one participant highlighted, this is an agency that has come under strong criticism for its partnership with Palantir and a lack of transparency around where data goes and who can access it. These kinds of partnerships can generate seismic downstream effects that affect trust in the entire sector. “This has caused a lot of angst in the sector. The WFP wants to have the single system to rule them all, whereas many of us would rather see an interoperable ecosystem.” Some organizations consider their large-scale systems to have more rigorous privacy, security, and informed consent measures than the WFP’s SCOPE system.

Trust is a critical component of a Digital ID system. The Estonian model, for example, offers visibility into which state departments are accessing a person’s data and when, which builds citizens’ trust in the system. Some Salon participants expressed concern over their own country governments running a Digital ID system. “In my country, we don’t trust institutions because we have a failed state,” said one person, “so people would never want the government to have their information in that way.” Another person said that in his country, the government is known for its corruption, and the idea that the government could manage an ID system with any kind of data integrity was laughable. “If these systems are not monitored or governed properly, they can be used to target certain segments of the population for outright repression. People do want greater financial inclusion, for example, but these ID systems can be easily weaponized and used against us.”

Fear and mistrust in digital ID systems is not universal, however. One Salon participant said that their research in Indonesia found that a digital ID was seen to be part of being a “good citizen,” even if local government was not entirely trusted. A Salon participant from China reported that in her experience, the digital ID system there has not been questioned much by citizens. Rather, it is seen as a convenient way for people to learn about new government policies and to carry out essential transactions more quickly.

What about data integrity and redress?

One big challenge with digital ID systems as they are currently managed is that there is very little attention to redress. “How do you fix errors in information? Where are the complaints mechanisms?” asked one participant. “We think of digital systems as being really flexible, but they are really hard to clean out,” said another. “You get all these faulty data crumbs that stick around. And they seem so far removed from the user. How do people get data errors fixed? No one cares about the integrity of the system. No one cares but you if your ID information is not correct. There is really very little incentive to address discrepancies and provide redress mechanisms.”

Another challenge is the integrity of the data that goes into the system. In some countries, people go back to their villages to get a birth certificate, a point at which data integrity can suffer due to faulty information or bribes, among other things. In one case, researchers spoke to a woman who changed her religion on her birth certificate thinking it would save her from discrimination when she moved to a new town. In another case, the village chief made a woman change her name to a Muslim name on her birth certificate because the village was majority Muslim. There are power dynamics at the local level that can challenge the integrity of the ID system.

Do digital ID systems improve the lives of women and children?

There is a long-standing issue in many parts of the world with children not having a birth certificate, said one Salon discussant. “If you don’t have a legal ID, technically you don’t exist, so that first credential is really important.” As could probably be expected, however, fewer females than males have legal ID.

In a three-country research project, the men interviewed thought that women do not need ID as much as men did. However, when talking with women it was clear that they are the ones who are dealing with hospitals and schools and other institutions that require ID. The study found that in Bangladesh, when women did have ID, it was commonly held and controlled by their husbands. In one case study, a woman wanted to sign up as a cook for an online cooking service, but she needed an ID to do so. She had to ask her husband for the ID, explain what she needed it for, and get his permission in order to join the cooking service. In another, a woman wanted to provide beauty care services through an online app. She needed to produce her national ID and two photos to join up with the app and to create a bKash mobile money account. Her husband did not want her to have a bKash account, so she had to provide his account details, meaning that all of her earnings went to her husband (see more here on how ID helps women access work). In India, a woman wanted to escape her husband, so she moved from the countryside to Bangalore to work as a maid. Her in-laws retained all of her ID, and so she had to rely on her brother to set up everything for her in Bangalore.

Another Salon participant explained that in India, too, micro-finance institutions had imposed a regulation that when a woman registered to be part of a project, she had to provide the name of a male family member to qualify her identity. When it was time to repay the loan, or if a woman missed a payment, her brother or husband would then receive a text about it. The question is how to create trust-based systems that do not reinforce patriarchal values, and where individuals are clear about and have control over how their information is shared.

“ID is embedded in your relationships and networks,” it was explained. “It creates a new set of dependencies and problems that we need to consider.” In order to understand the nuances in how ID and digital ID are impacting people, we need more of these micro-level stories. “What is actually happening? What does it mean when you become more identifiable?”

Is it OK to use digital ID systems for social control and social accountability? 

The Chinese social credit system, according to one Salon participant, includes a social control function. “If you have not repaid a loan, you are banned from purchasing a first-class air ticket or from checking into expensive hotels.” An application used in Nairobi called Tala also includes a social accountability function, explained another participant. “Tala is a social credit scoring app that gives small loans. You download an app with all your contacts, and it works out via algorithms if you are credit-worthy. If you are, you can get a small loan. If you stop paying your loans, however, Tala alerts everyone in your contact list. In this way, the app has digitized a social accountability function.”

The initial reaction from Salon Participants was shock, but it was pointed out that traditional Village Savings and Loans Associations (VSLAs) function the same way – through social sanction. “The difference here is transparency and consent,” it was noted. “In a community you might not have choice about whether everyone knows you defaulted on your small loan. But you are aware that this is what will happen. With Tala, people didn’t realize that the app had access to their contacts and that it would alert those contacts, so consent and transparency are the issues.”

The principle of informed consent in the humanitarian space poses a constant challenge. “Does a refugee who registers with UNHCR really have any choice? If they need food and have to provide minimal information to get it, is that consent? What if they have zero digital literacy?” Researcher Helen Nissenbaum, it was noted, has written that consent is problematic and that we should not pursue it. “It’s not really about individual consent. It’s about how we set standards and ensure transparency and accountability for how an individual’s information is used,” explained one Salon participant.

These challenges with data use and consent need to be considered beyond just individual privacy, however, as another participant noted. “There is all manner of vector-based data in the WFP’s system. Other agencies don’t have this kind of disaggregated data at the village level or lower. What happens if Palantir, via the WFP, is the first company in the world to have that low level disaggregation? And what happens with the digital ID of particularly vulnerable groups of people such as refugee communities or LGBTQI communities? How could these Digital IDs be used to discriminate or harm entire groups of people? What does it mean if a particular category or tag like ‘refugee’ or ‘low income’ follows you around forever?”

One Salon participant said that in Jordanian camps, refugees would register for one thing and be surprised at how their data then automatically popped up on the screen of a different partner organization. Other participants expressed concerns about how Digital ID systems and their implications could be explained to people with less digital experience or digital literacy. “Since the GDPR came into force, people have the right to an explanation if they are subject to an automated decision,” noted one person. “But what does compliance look like? How would anyone ever understand what is going on?” This will become increasingly complex as technology advances and we begin to see things like digital phenotyping being used to serve up digital content or determine our benefits.

Can we please have better standards, regulations and incentives?

A final question raised about Digital ID systems was who should be implementing and managing them: UN agencies? Governments? Private Sector? Start-ups? At the moment the ecosystem includes all sorts of actors and feels a bit “Wild Wild West” due to insufficient control and regulation. At the same time, there are fears (as noted above) about a “one system to rule them all approach.” “So,” asked one person, “what should we be doing then? Should UN agencies be building in-house expertise? Should we be partnering better with the private sector? We debate this all the time internally and we can never agree.” Questions also remain about what happens with the biometric and other data that failed start-ups or discontinued digital ID systems hold. And is it a good idea to support government-controlled ID systems in countries with corrupt or failed governments, or those who will use these systems to persecute or exercise undue control over their populations?

As one person asked, “Why are we doing this? Why are we even creating these digital ID systems?”

Although there are huge concerns about Digital ID, the flip side is that a Digital ID system could potentially offer better security for sensitive information, at least in the case of humanitarian organizations. “Most organizations currently handle massive amounts of data in Excel sheets and Google docs with zero security,” said one person. “There is PII [personally identifiable information] flowing left, right, and center.” Where donors have required better data management standards, there has been improvement, but it requires massive investment, and who will pay for it? Sadly, donors are currently not covering these costs. As a representative from one large INGO explained, “we want to avoid the use of Excel to track this stuff. We are hoping that our digital ID system will be more secure. We see this as a very good idea if you can nail down the security aspects.”

The EU’s General Data Protection Regulation (GDPR) is often quoted as the “gold standard,” yet implementation is complex and the GDPR is not specific enough, according to some Salon participants. Not to mention, “if you are UN, you don’t have to follow GDPR.” Many at the Salon felt that the GDPR has had very positive effects but called out the lack of incentive structures that would encourage full adoption. “No one does anything unless there is an enforcing function.” Others felt that the GDPR was too prescriptive about what to do, rather than setting limits on what not to do.

One effort to watch is the Pan-Canadian Trust Framework, mentioned as a good example of creating a functioning and decentralized ecosystem that could potentially address some of the above challenges.

The Salon ended with more questions than answers, however there is plenty of research and conversation happening about digital ID and a wide range of actors engaging with the topic. If you’d like to read more, check out this list of resources that we put together for the Salon and add any missing documents, articles, links and resources!

Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂

For our Tuesday, July 27th Salon, we discussed partnerships and interoperability in global health systems. The room held a wide range of perspectives: small and large non-governmental organizations, donors and funders, software developers, designers, healthcare professionals, and students. Our lead discussants were Josh Nesbit, CEO at Medic Mobile; Jonathan McKay, Global Head of Partnerships and Director of the US Office of Praekelt.org; and Tiffany Lentz, Managing Director, Office of Social Change Initiatives at ThoughtWorks.

We started by hearing from our discussants on why they had decided to tackle issues in the area of health. Reasons were primarily because health systems were excluding people from care and organizations wanted to find a way to make healthcare inclusive. As one discussant put it, “utilitarianism has infected global health. A lack of moral imagination is the top problem we’re facing.”

Other challenges include requests for small scale pilots and customization/ bespoke applications, lack of funding and extensive requirements for grant applications, and a disconnect between what is needed on the ground and what donors want to fund. “The amount of documentation to get a grant is ridiculous, and then the system that is requested to be built is not even the system that needs to be made,” commented one person. Another challenge is that everyone is under constant pressure to demonstrate that they are being innovative. [Sidenote: I’m reminded of this post from 2010….] “They want things that are not necessarily in the best interest of the project, but that are seen to be innovations. Funders are often dragged along by that,” noted another person.

The conversation most often touched on the unfulfilled potential of having a working ecosystem and a common infrastructure for health data as well as the problems and challenges that will most probably arise when trying to develop these.

“There are so many uncoordinated pilot projects in different districts, all doing different things,” said one person. “Governments are doing what they can, but they don’t have the funds,” added another, “and that’s why there are so many small pilots happening everywhere.” One company noted that it had started developing a platform for SMS but abandoned it in favor of working with an existing platform instead. “Can we create standards and protocols to tie some of this work together? There isn’t a common infrastructure that we can build on,” was the complaint. “We seem to always start from scratch. I hope donors and organizations get smart about applying pressure in the right areas. We need an infrastructure that allows us to build on it and do the work!” On the other hand, someone warned of the risks of pushing everyone to “jump on a mediocre software or platform just because we are told to by a large agency or donor.”

The benefits of collaboration and partnership are apparent: increased access to important information, more cooperation, less duplication, the ability to build on existing knowledge, and so on. However, though desirable, partnerships and interoperability are not easy to establish. “Is it too early for meaningful partnerships in mobile health? I was wondering if I could say that…” said one person. “I’m not even sure I’m actually comfortable saying it…. But if you’re providing essential basic services, collecting sensitive medical data from patients, there should be some kind of infrastructure apart from private sector services, shouldn’t there?” The question is who should own this type of a mediator platform: governments? MNOs?

Beyond this, there are several issues related to control and ownership. Who would own the data? Is there a way to get to a point where the data would be owned by the patients and demonetized? If the common system is run by the private sector, there should be protections surrounding the patients’ sensitive information. Perhaps this should be a government-run system. Should it be open source?

Open source has its own challenges. “Well… yes. We’ve practiced ‘hopensource’,” said one person (to widespread chuckles).

Another explained that the way we’ve designed information systems has held back shifts in health systems. “When we’re comparing notes and how we are designing products, we need to be out ahead of the health systems and financing shifts. We need to focus on people-centered care. We need to gather information about a person over time and place. About the teams who are caring for them. Many governments we’re working with are powerless and moneyless. But even small organizations can do something. When we show up and treat a government as a systems owner that is responsible to deliver health care to their citizens, then we start to think about them as a partner, and they begin to think about how they could support their health systems.”

One potential model is to design a platform or system such that it can eventually be handed off to a government. This, of course, isn’t a simple idea in execution. Governments can be limited by their internal expertise. The personnel that a government has at the time of the handoff won’t necessarily be there years or months later. So while the handoff itself may be successful in the short term, there’s no firm guarantee that the system will be continually operational in the future. Additionally, governments may not be equipped with the knowledge to make the best decisions about software systems they purchase. Governments’ negotiating capacity must be expanded if they are to successfully run an interoperable system. “But if we can bring in a snazzy system that’s already interoperable, it may be more successful,” said one person.

Having a common data infrastructure is crucial. However, we must also spend some time thinking about what the data itself should look like. Can it be standardized? How can we ensure that it is legible to anyone with access to it?

These are only some of the relevant political issues, and at a more material level, one cannot ignore the technical challenges of maintaining a national scale system. For example, “just getting a successful outbound dialing rate is hard!” said one person. “If you are running servers in Nigeria it just won’t always be up! I think human centered design is important. But there is also a huge problem simply with making these things work at scale. The hardcore technical challenges are real. We can help governments to filter through some of the potential options. Like, can a system demonstrate that it can really operate at massive scale?” Another person highlighted that “it’s often non-profits who are helping to strengthen the capacity of governments to make better decisions. They don’t have money for large-scale systems and often don’t know how to judge what’s good or to be a strong negotiator. They are really in a bind.”

This is not to mention that “the computers have plastic over them half the time. Electricity, computers, literacy, there are all these issues. And the TelCo infrastructure! We have layers of capacity gaps to address,” said one person.

There are also donors to consider. They may come into a project with unrealistic expectations of what is normal and what can be accomplished. There is a delicate balance to be struck between inspiring the donors to take up the project and managing expectations so that they are not disappointed. One strategy is to “start hopeful and steadily temper expectations.” This is true also with other kinds of partnerships. “Building trust with organizations so that when things do go bad, you can try to manage it is crucial. Often it seems like you don’t want to be too real in the first conversation. I think, ‘if I lay this on them at the start it can be too real and feel overwhelming.…’” Others recommended setting expectations about how everyone together is performing. “It’s more like, ‘together we are going to be looking at this, and we’ll be seeing together how we are going to work and perform together.’”

Creating an interoperable data system is costly and time-consuming, oftentimes more so than donors and other stakeholders imagine, but there are real benefits. Any step in the direction of interoperability must deal with challenges like those considered in this discussion. Problems abound. Solutions will be harder to come by, but not impossible.

So, what would practitioners like to see? “I would like to see one country that provides an incredible case study showing what good partnership and collaboration looks like with different partners working at different levels and having a massive impact and improved outcomes. Maybe in Uganda,” said one person. “I hope we see more of us rally around supporting and helping governments to be the system owners. We could focus on a metric or shared cause – I hope in the near future we have a view into the equity measure and not just the vast numbers. I’d love to see us use health equity as the rallying point,” added another. From a different angle, one person felt that “from a for-profit, we could see it differently. We could take on a country, a clinic or something as our own project. What if we could sponsor a government’s health care system?”

A participant summed the Salon up nicely: “I’d like to make a flip-side comment. I want to express gratitude to all the folks here as discussants. This is one of the most unforgiving and difficult environments to work in. It’s SO difficult. You have to be an organization super hero. We’re among peers and feel it as normal to talk about challenges, but you’re really all contributing so much!”

Salons are run under Chatham House Rule, so no attribution has been made in this post. If you’d like to attend a future Salon discussion, join the list at Technology Salon.

At the 2016 American Evaluation Association conference, I chaired a session on the benefits and challenges of ICTs in Equity-Focused Evaluation. The session frame came from a 2016 paper on the same topic. Panelists Kecia Bertermann from Girl Effect and Herschel Sanders from RTI added fascinating insights on the methodological challenges to consider when using ICTs for evaluation purposes, and discussant Michael Bamberger closed out with critical points based on his 50+ years doing evaluations.

ICTs include a host of technology-based tools, applications, services, and platforms that are overtaking the world. We can think of them in three key areas: technological devices, social media/internet platforms and digital data.

An equity-focused evaluation implies ensuring space for the voices of excluded groups and avoiding the traditional top-down approach. It requires:

  • Identifying vulnerable groups
  • Opening up space for them to make their voices heard through channels that are culturally responsive, accessible and safe
  • Ensuring their views are communicated to decision makers

It is believed that ICTs, especially mobile phones, can help with inclusion in the implementation of development and humanitarian programming. Mobile phones are also held up as devices that can allow evaluators to reach isolated or marginalized groups and individuals who are not usually engaged in research and evaluation. Often, however, mobiles only overcome geographic exclusion. Evaluators need to think harder when it comes to other types of exclusion – such as that related to disability, gender, age, political status or views, ethnicity, literacy, or economic status – and we need to consider how these various types of exclusion can combine to exacerbate marginalization (e.g., “intersectionality”).

We are seeing increasing use of ICTs in evaluation of programs aimed at improving equity. Yet these tools also create new challenges. The way we design evaluations and how we apply ICT tools can make all the difference between including new voices and feedback loops or reinforcing existing exclusions or even creating new gaps and exclusions.

Some of the concerns with the use of ICTs in equity-based evaluation include:

Methodological aspects:

  • Are we falling victim to ‘elite capture’ — only hearing from higher educated, comparatively wealthy men, for example? How does that bias our information? How can we offset that bias or triangulate with other data and multi-methods rather than depending only on one tool-based method?
  • Are we relying too heavily on things that we can count or multiple-choice responses because that’s what most of these new ICT tools allow?
  • Are we spending all of our time on a device rather than in communities engaging with people and seeking to understand what’s happening there in person?
  • Is reliance on mobile devices or self-reporting through mobile surveys causing us to miss contextual clues that might help us better interpret the data?
  • Are we falling into the trap of fallacy in numbers – in other words, imagining that because lots of people are saying something, that it’s true for everyone, everywhere?

Organizational aspects:

  • Do digital tools require a costly, up-front investment that some organizations are not able to make?
  • How do fear and resistance to using digital tools impact on data gathering?
  • What kinds of organizational change processes are needed amongst staff or community members to address this?
  • What new skills and capacities are needed?

Ethical aspects:

  • How are researchers and evaluators managing informed consent considering the new challenges to privacy that come with digital data? (Also see: Rethinking Consent in the Digital Age.)
  • Are evaluators and non-profit organizations equipped to keep data safe?
  • Is it possible to anonymize data in the era of big data given the capacity to cross data sets and re-identify people?
  • What new risks might we be creating for community members? To local enumerators? To ourselves as evaluators? (See: Developing and Operationalizing Responsible Data Policies)

Evaluation of Girl Effect’s online platform for girls

Kecia walked us through how Girl Effect has designed an evaluation of an online platform and applications for girls. She spoke of how the online platform itself brings constraints because it only works on feature phones and smartphones. For this reason, it was decided to work with 14-16 year old urban girls in megacities who have access to these types of devices yet still experience multiple vulnerabilities, such as gender-based violence and sexual violence, early pregnancy, low levels of school completion, poor health services and lack of reliable health information, and/or low self-esteem and self-confidence.

The big questions for this program include:

  • Is the content reaching the girls that Girl Effect set out to reach?
  • Is the content on the platform contributing to change?

Because the girl users are on the platform, Girl Effect can use features such as polls and surveys for self-reported change. However, because the girls are under 18, there are privacy and security concerns that sometimes limit the extent to which the organization feels comfortable tracking user behavior. In addition, the type of phones that the girls are using and the fact that they may be borrowing others’ phones to access the site adds another level of challenges. This means that Girl Effect must think very carefully about the kind of data that can be gleaned from the site itself, and how valid it is.

The organization is using a knowledge, attitudes and practices (KAP) framework and exploring ways that KAP can be measured through some of the exciting data capture options that come with an online platform. However it’s hard to know if offline behavior is actually shifting, making it important to also gather information that helps read into the self-reported behavior data.

Girl Effect is complementing traditional KAP indicators with web analytics (unique users, repeat visitors, dwell times, bounce rates, ways that users arrive at the site), push surveys that go out to users, and polls that appear after an article (“Was this information helpful? Was it new to you? Did it change your perceptions? Are you planning to do something different based on this information?”). Proxy indicators are also being developed to help interpret the data. For example, does an increase in the frequency with which a particular user comments on the site have a link with greater self-esteem or self-efficacy?
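As a purely hypothetical illustration of how a proxy indicator like that might be operationalized, per-user commenting frequency from the analytics could be joined with self-reported poll scores and the relationship examined. The field names, data, and scoring below are invented and are not Girl Effect’s actual metrics:

```python
# Hypothetical proxy-indicator check: does commenting frequency move with
# self-reported self-esteem from on-platform polls? All field names and
# data are invented for illustration.
from statistics import correlation  # available in Python 3.10+

users = [
    {"user_id": 1, "comments_per_week": 0.5, "self_esteem_poll": 2},
    {"user_id": 2, "comments_per_week": 3.0, "self_esteem_poll": 4},
    {"user_id": 3, "comments_per_week": 1.0, "self_esteem_poll": 3},
    {"user_id": 4, "comments_per_week": 4.5, "self_esteem_poll": 5},
]

comments = [u["comments_per_week"] for u in users]
scores = [u["self_esteem_poll"] for u in users]

# A positive association would support, but not prove, using commenting
# frequency as a proxy for self-esteem or self-efficacy.
print(round(correlation(comments, scores), 2))
```

Even a strong association would only justify a proxy, not a causal claim, which is why the offline, qualitative methods described below remain part of the evaluation.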

However, there is only so much that can be gleaned from an online platform when it comes to behavior change, so the organization is complementing the online information with traditional, in-person, qualitative data gathering. The site is helpful there, however, for recruiting users for focus groups and in-depth interviews. Girl Effect wants to explore KAP and online platforms, yet also wants to be careful about making assumptions and using proxy indicators, so the traditional methods are incorporated into the evaluation as a way of triangulating the data. The evaluation approach is a careful balance of security considerations, attention to proxy indicators, digital data and traditional offline methods.

Using SMS surveys for evaluation: Who do they reach?

Herschel took us through a study conducted by RTI (Sanders, Lau, Lombaard, Baker, Eyerman, Thalji) in partnership with TNS about the use of SMS surveys for evaluation. She noted that the rapid growth of mobile phones, particularly in African countries, opens up new possibilities for data collection. There has been an explosion of SMS surveys for national, population-based surveys.

Like most ICT-enabled MERL methods, use of SMS for general population surveys brings both promise:

  • High mobile penetration in many African countries means we can theoretically reach a large segment of the population.
  • These surveys are much faster and less expensive than traditional face-to-face (FTF) surveys.
  • SMS surveys work on virtually any GSM phone.
  • SMS offers the promise of reach. We can reach a large and geographically dispersed population, including some areas that are excluded from FTF surveys because of security concerns.

And challenges:

  • Coverage: We cannot include illiterate people or those without access to a mobile phone. Also, some sample frames may not include the entire population with mobile phones.
  • Non-response: Response rates are expected to be low for a variety of reasons, including limited network connectivity or electricity; if two or more people share a phone, we may not reach all of the people associated with that phone; and people may feel a lack of confidence with technology. These factors might affect certain sub-groups differently, so we might underrepresent the poor, rural areas, or women.
  • Quality of measurement: We only have 160 characters for both the question and the response options, and an interviewer is not present to clarify any questions. (A quick length check like the one sketched below can help at the design stage.)
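
As a rough illustration of that constraint, here is a small, hedged sketch of a design-stage check that a question plus its numbered response options fits within a single 160-character message. The question text and options are made up for the example.

```python
SMS_LIMIT = 160  # characters available for the question and the response options

def fits_in_one_sms(question, options):
    """Return True if the question plus numbered options fit in a single SMS."""
    numbered = " ".join(f"{i}) {opt}" for i, opt in enumerate(options, start=1))
    message = f"{question} {numbered}"
    return len(message) <= SMS_LIMIT

question = "In the last 7 days, how many days did you listen to the radio?"
options = ["None", "1-2 days", "3-5 days", "6-7 days"]
print(fits_in_one_sms(question, options))  # True for this short example
```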

RTI’s research aimed to answer the question: How representative are general population SMS surveys and are there ways to improve representativeness?

Three core questions were explored via SMS invitations sent in Kenya, Ghana, Nigeria and Uganda:

  • Does the sample frame match the target population?
  • Does non-response have an impact on representativeness?
  • Can we improve quality of data by optimizing SMS designs?

One striking finding was the extent to which response rates may vary by country, Herschel said. In some cases this was affected by the agreements in place in each country; some required a stronger opt-in process. In Kenya and Uganda, where a higher percentage of users had already gone through an opt-in process and had already participated in SMS-based surveys, there was a higher rate of response.

[Chart: SMS survey response rates by country]

These response rates, especially in Ghana and Nigeria, are noticeably low, and their impact is evident in the data. In Nigeria, where researchers compared the SMS survey results against face-to-face data, there was a clear skew away from older females and towards those with a higher level of education and full-time employment.

Additionally, 14% of the face-to-face sample, filtered to mobile users, had a post-secondary education, whereas in the SMS data this figure was 60%.

Compared to face-to-face data, SMS respondents were:

  • More likely to have more than 1 SIM card
  • Less likely to share a SIM card
  • More likely to be aware of and use the Internet.

This sketches a portrait of a more technologically savvy respondent in the SMS surveys, said Herschel.
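
One standard way to partially correct for this kind of skew is post-stratification weighting, which re-weights SMS respondents so that their education profile matches a reference (face-to-face) distribution. The sketch below is illustrative only (not RTI’s method), loosely based on the figures above, and it cannot correct for characteristics that were never measured.

```python
import pandas as pd

# Education distribution among mobile users: face-to-face (reference) vs. SMS (observed).
# Proportions are illustrative, loosely based on the 14% vs. 60% figures cited above.
reference = {"post_secondary": 0.14, "secondary_or_less": 0.86}
observed = {"post_secondary": 0.60, "secondary_or_less": 0.40}

# Post-stratification weight = reference share / observed share for each group
weights = {group: reference[group] / observed[group] for group in reference}
print(weights)  # post-secondary respondents are down-weighted, others up-weighted

# Applying the weights to hypothetical respondent-level SMS data
sms = pd.DataFrame({
    "education": ["post_secondary", "secondary_or_less", "post_secondary"],
    "owns_radio": [1, 0, 1],
})
sms["weight"] = sms["education"].map(weights)
weighted_share = (sms["owns_radio"] * sms["weight"]).sum() / sms["weight"].sum()
print(f"Weighted estimate of radio ownership: {weighted_share:.2f}")
# Note: weighting only corrects for measured characteristics (here, education);
# it cannot fix bias on traits that were never observed.
```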

[Chart: profile of SMS vs. face-to-face respondents]

The team also explored incentives and found that a higher incentive had no meaningful impact, but adding reminders to the design of the SMS survey process helped achieve a wider slice of the sample and a more diverse profile.

Response order effects were also explored, along with issues arising when questionnaire designers try to pack as much as possible onto the screen rather than asking yes/no questions. Herschel highlighted that when multiple-choice options were given, 76% of SMS survey respondents gave only one response, compared to 12% for the face-to-face data.

Lastly, the research found no meaningful difference in response rate between a survey with 8 questions and one with 16 questions, she said. This may go against the common convention that “the shorter, the better” for an SMS survey. Break-off rates did not observably differ by survey length, giving confidence that longer surveys may be possible via SMS than initially thought.

Herschel noted that some conclusions can be drawn:

  • SMS excels for rapid response (e.g., Ebola)
  • SMS surveys have substantial non-response errors
  • SMS surveys overrepresent younger, more educated, employed and more technologically savvy respondents

These errors mean SMS cannot replace face-to-face surveys … yet. However, we can optimize SMS survey design now by:

  • Using reminders during data collection
  • Randomizing substantive response options to counter response order effects and avoid bias (see the sketch after this list)
  • Not using “select all that apply” questions. It’s ok to have longer surveys.
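
As an illustration of the randomization point, here is a small sketch that shuffles the substantive response options separately for each respondent while keeping a residual option like “Don’t know” fixed at the end. Question wording and option labels are hypothetical.

```python
import random

def randomized_options(substantive, fixed_last="Don't know", seed=None):
    """Shuffle substantive options for one respondent; keep the residual option last."""
    rng = random.Random(seed)       # a per-respondent seed makes the order reproducible
    shuffled = list(substantive)    # copy so the master list is untouched
    rng.shuffle(shuffled)
    return shuffled + [fixed_last]

options = ["Radio", "TV", "Mobile phone", "Newspaper"]
for respondent_id in range(3):
    print(respondent_id, randomized_options(options, seed=respondent_id))
```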

However, she also noted that the landscape is rapidly changing and so future research may shed light on changing reactions as familiarity with SMS and greater access grow.

Summarizing the opportunities and challenges with ICTs in Equity-Focused Evaluation

Finally we heard some considerations from Michael, who said that people often get so excited about possibilities for ICT in monitoring, evaluation, research and learning that they neglect to address the challenges. He applauded Girl Effect and RTI for their careful thinking about the strengths and weaknesses in the methods they are using. “It’s very unusual to see the type of rigor shown in these two examples,” he said.

Michael commented that a clear message from both presenters and from other literature and experiences is the need for mixed methods. Some things can be done on a phone, but not all things. “When the data collection is remote, you can’t observe the context. For example, if it’s a teenage girl answering the voice or SMS survey, is the mother-in-law sitting there listening or watching? What are the contextual clues you are missing out on? In a face-to-face context an evaluator can see if someone is telling the girl how to respond.”

Additionally, “no survey framework will cover everyone,” he said. “There may be children who are not registered on the school attendance list that is being used to identify survey respondents. What about immigrants who are hiding from sight out of fear and not registered by the government?” He cautioned evaluators not to forget about folks in the community who are entirely missed or skipped over, and how the use of new technology could make that problem even greater.

Another point Michael raised is that communicating through technology channels creates a different behavior dynamic. One is not better than the other, but evaluators need to be aware that they are different. “Everyone with teenagers knows that the kinds of things we communicate online are very different from what we communicate in a face-to-face situation,” he said. “There is a style to how we communicate. You might be more frank and honest on an online platform, or you may notice other differences in your own behavior depending on the kind of tool you are communicating through.”

He noted that a range of issues has been raised in connection with ICTs in evaluation, but that it’s been rare to see priority given to evaluation rigor. The study Herschel presented was one example of a focus on rigor and issues of bias, but people often get so excited that they forget to think about this. “Who has access? Are people sharing phones? What are the gender dynamics? Is a husband restricting what a woman is doing on the phone? There’s a range of selection bias issues that are ignored,” he said.

Quantitative bias and mono-methods are another issue in ICT-focused evaluation. The tool choice will determine what an evaluator can ask, and that in turn affects the quality of responses. This leads to issues with construct validity. If you are trying to measure complex ideas like girls’ empowerment and you reduce this to a proxy, there can often be a large jump in interpretation. This doesn’t happen only when mobile phones are used for evaluation data collection, but certain issues may be exacerbated when the phone is the tool. So evaluators need to better understand behavior dynamics and how they relate to the technical constraints of a particular digital or mobile platform.

The aspect of information dissemination is another one worth raising, said Michael. “What are the dynamics? When we incorporate new tools, we tend to assume there is just one step between the information sharer and receiver, yet there is plenty of literature that shows this is normally at least two steps. Often people don’t get information directly; rather they share and talk with someone else who helps them verify and interpret the information they get on a mobile phone. There are gatekeepers who control or interpret, and evaluators need to better understand those dynamics. Social network analysis can sometimes help with that – looking at who communicates with whom, who is part of the main influencer hub, and who is marginalized. This could be exciting to explore more.”

Lastly, Michael reiterated the importance of mixed methods and needing to combine online information and communications with face-to-face methods and to be very aware of invisible groups. “Before you do an SMS survey, you may need to go out to the community to explain that this survey will be coming,” he said. “This might be necessary to encourage people to even receive the survey, to pay attention or to answer it.” The case studies in the paper “The Role of New ICTs in Equity-Focused Evaluation: Opportunities and Challenges” explore some of these aspects in good detail.

Read Full Post »

Over the past 4 years I’ve had the opportunity to look more closely at the role of ICTs in Monitoring and Evaluation practice (and the privilege of working with Michael Bamberger and Nancy MacPherson in this area). When we started out, we wanted to better understand how evaluators were using ICTs in general, how organizations were using ICTs internally for monitoring, and what was happening overall in the space. A few years into that work we published the Emerging Opportunities paper that aimed to be somewhat of a landscape document or base report upon which to build additional explorations.

As a result of this work, in late April I had the pleasure of talking with the OECD-DAC Evaluation Network about the use of ICTs in Evaluation. I drew from a new paper on The Role of New ICTs in Equity-Focused Evaluation: Opportunities and Challenges that Michael, Veronica Olazabal and I developed for the Evaluation Journal. The core points of the talk are below.

*****

In the past two decades there have been 3 main explosions that impact on M&E: a device explosion (mobiles, tablets, laptops, sensors, dashboards, satellite maps, Internet of Things, etc.); a social media explosion (digital photos, online ratings, blogs, Twitter, Facebook, discussion forums, WhatsApp groups, co-creation and collaboration platforms, and more); and a data explosion (big data, real-time data, data science and analytics moving into the field of development, capacity to process huge data sets, etc.). This new ecosystem is something that M&E practitioners should be tapping into and understanding.

In addition to these ‘explosions,’ there’s been a growing emphasis on documentation of the use of ICTs in Evaluation alongside a greater thirst for understanding how, when, where and why to use ICTs for M&E. We’ve held / attended large gatherings on ICTs and Monitoring, Evaluation, Research and Learning (MERL Tech). And in the past year or two, it seems the development and humanitarian fields can’t stop talking about the potential of “data” – small data, big data, inclusive data, real-time data for the SDGs, etc. and the possible roles for ICT in collecting, analyzing, visualizing, and sharing that data.

The field has advanced in many ways. But as the tools and approaches develop and shift, so do our understandings of the challenges. Concerns about ever more data and “open data,” and the inherent privacy risks, have caught up with the enthusiasm about the possibilities of new technologies in this space. Likewise, there is more in-depth discussion about methodological challenges, bias and unintended consequences when new ICT tools are used in Evaluation.

Why should evaluators care about ICT?

There are 2 core reasons that evaluators should care about ICTs. Reason number one is practical. ICTs help address real world challenges in M&E: insufficient time, insufficient resources and poor quality data. And let’s be honest – ICTs are not going away, and evaluators need to accept that reality at a practical level as well.

Reason number two is both professional and personal. If evaluators want to stay abreast of their field, they need to be aware of ICTs. If they want to improve evaluation practice and influence better development, they need to know if, where, how and why ICTs may (or may not) be of use. Evaluation commissioners need to have the skills and capacities to know which new ICT-enabled approaches are appropriate for the type of evaluation they are soliciting and whether the methods being proposed are going to lead to quality evaluations and useful learnings. One trick to using ICTs in M&E is understanding who has access to what tools, devices and platforms already, and what kind of information or data is needed to answer what kinds of questions or to communicate which kinds of information. There is quite a science to this and one size does not fit all. Evaluators, because of their critical thinking skills and social science backgrounds, are very well placed to take a more critical view of the role of ICTs in Evaluation and in the worlds of aid and development overall and help temper expectations with reality.

Though ICTs are being used along all phases of the program cycle (research/diagnosis and consultation, design and planning, implementation and monitoring, evaluation, reporting/sharing/learning) there is plenty of hype in this space.


There is certainly a place for ICTs in M&E, if introduced with caution and clear analysis about where, when and why they are appropriate and useful, and evaluators are well placed to take a lead in identifying and trialing what ICTs can offer to evaluation. If they don’t, others are going to do it for them!

Promising areas

There are four key areas (I’ll save the nuance for another time…) where I see a lot of promise for ICTs in Evaluation:

1. Data collection. Here I’d divide it into three kinds of data collection, plus ‘real-time’ data, which the latter two normally also provide:

  • Structured data gathering – where enumerators or evaluators go out with mobile devices to collect specific types of data (whether quantitative or qualitative).
  • Decentralized data gathering – where the focus is on self-reporting or ‘feedback’ from program participants or research subjects.
  • Data ‘harvesting’ – where data is gathered from existing online sources like social media sites, WhatsApp groups, etc.
  • Real-time data – which aims to provide data in a much shorter time frame, normally as monitoring, but these data sets may be useful for evaluators as well.

2. New and mixed methods. These are areas that Michael Bamberger has been looking at quite closely. New ICT tools and data sources can contribute to more traditional methods. But triangulation still matters.

  • Improving construct validity – enabling a greater number of data sources at various levels that can contribute to better understanding of multi-dimensional indicators (for example, looking at changes in the volume of withdrawals from ATMs, records of electronic purchases of agricultural inputs, satellite images showing lorries traveling to and from markets, and the frequency of Tweets that contain the words hunger or sickness).
  • Evaluating complex development programs – tracking complex and non-linear causal paths and implementation processes by combining multiple data sources and types (for example, participant feedback plus structured qualitative and quantitative data, big data sets/records, census data, social media trends and input from remote sensors).
  • Mixed methods approaches and triangulation – using traditional and new data sources (for example, using real-time data visualization to provide clues on where additional focus group discussions might need to be done to better understand the situation or improve data interpretation).
  • Capturing wide-scale behavior change – using social media data harvesting and sentiment analysis to better understand wide-spread, wide-scale changes in perceptions, attitudes, stated behaviors and analyzing changes in these.
  • Combining big data and real-time data – these emerging approaches may become valuable for identifying potential problems and emergencies that need further exploration using traditional M&E approaches.

3. Data Analysis and Visualization. This is an area that is less advanced than the data collection area – often it seems we’re collecting more and more data but still not really using it! Some interesting things here include:

  • Big data and data science approaches – there’s a growing body of work exploring how to use predictive analytics to help define what programs might work best in which contexts and with which kinds of people. (How this connects to evaluation is still being worked out, and there are lots of ethical aspects to think about here too – most of us don’t like the idea of predictive policing, and you could end up with outcomes quite different from what was intended.) With big data, you’ll often have a hypothesis and you’ll go looking for patterns in huge data sets, whereas with evaluation you normally have particular questions and you design a methodology to answer them – it’s interesting to think about how these two approaches are going to combine.
  • Data Dashboards – these are becoming very popular as people try to work out how to do a better job of using the data that is coming into their organizations for decision making. There are some efforts at pulling data from community level all the way up to UN representatives, for example, the global level consultations that were done for the SDGs or using “near real-time data” to share with board members. Other efforts are more focused on providing frontline managers with tools to better tweak their programs during implementation.
  • Meta-evaluation – some organizations are working on ways to better draw conclusions from what we are learning from evaluation around the world and to better visualize these conclusions to inform investments and decision-making.

4. Equity-focused Evaluation. As digital devices and tools become more widespread, there is hope that they can enable greater inclusion and broader voice and participation in the development process. There are still huge gaps however – in some parts of the world 23% fewer women than men have access to mobile phones – and when you talk about Internet access the gap is much, much bigger. But there are cases where greater participation in evaluation processes is being sought through mobile. When this is balanced with other methods to ensure that we’re not excluding the very poorest or those without access to a mobile phone, it can help to broaden out the pool of voices we are hearing from. Some examples are:

  • Equity-focused evaluation / participatory evaluation methods – some evaluators are seeking to incorporate more real-time (or near real-time) feedback loops where participants provide direct feedback via SMS or voice recordings.
  • Using mobile to directly access participants through mobile-based surveys.
  • Enhancing data visualization for returning results back to the community and supporting community participation in data interpretation and decision-making.

Challenges

Alongside all the potential, of course there are also challenges. I’d divide these into 3 main areas:

1. Operational/institutional

Some of the biggest challenges to improving the use of ICTs in evaluation are institutional or related to institutional change processes. In focus groups I’ve done with different evaluators in different regions, this was emphasized as a huge issue. Specifically:

  • Potentially heavy up-front investment costs, training efforts, and/or maintenance costs if adopting/designing a new system at wide scale.
  • Tech or tool-driven M&E processes – often these are also donor driven. This happens because tech is perceived as cheaper, easier, at scale, objective. It also happens because people and management are under a lot of pressure to “be innovative.” Sometimes this ends up leading to an over-reliance on digital data and remote data collection and time spent developing tools and looking at data sets on a laptop rather than spending time ‘on the ground’ to observe and engage with local organizations and populations.
  • Little attention to institutional change processes, organizational readiness, and the capacity needed to incorporate new ICT tools, platforms, systems and processes.
  • Bureaucracy levels may mean that decisions happen far from the ground, and that there is little capacity to make quick decisions even when real-time data is available. Data and analysis may be provided frequently to decision-makers sitting at headquarters, or to local staff who do not have decision-making power in their own hands and must wait on orders from on high to adapt or change their program approaches and methods.
  • Swinging too far towards digital due to a lack of awareness that digital most often needs to be combined with the human. Digital technology always works better when combined with human interventions (such as visits to prepare folks for using the technology, or making sure that gatekeepers – e.g., a husband or mother-in-law in the case of women – are on board). A main message from the World Bank’s 2016 World Development Report “Digital Dividends” is that digital technology must always be combined with what the Bank calls “analog” (a.k.a. “human”) approaches.

2. Methodological

Some of the areas that Michael and I have been looking at relate to how the introduction of ICTs could address issues of bias, rigor, and validity — yet how, at the same time, ICT-heavy methods may actually just change the nature of those issues or create new issues, as noted below:

  • Selection and sample bias – you may be reaching more people, but you’re still going to be leaving some people out. Who is left out of mobile phone or ICT access/use? Typical respondents are male, educated, urban. How representative are these respondents of all ICT users and of the total target population?
  • Data quality and rigor – you may have an over-reliance on self-reporting via mobile surveys; lack of quality control ‘on the ground’ because it’s all being done remotely; enumerators may game the system if there is no personal supervision; there may be errors and bias in algorithms and logic in big data sets or analysis because of non-representative data or hidden assumptions.
  • Validity challenges – if there is a push to use a specific ICT-enabled evaluation method or tool without it being the right one, the design of the evaluation may not pass the validity challenge.
  • Fallacy of large numbers (in cases of national-level self-reporting/surveying) – you may think that because a lot of people said something, it’s more valid, but you might just be reinforcing the viewpoints of a particular group. This has been shown clearly in research by the World Bank on public participation processes that use ICTs.
  • ICTs often favor extractive processes that do not involve local people and local organizations or provide benefit to participants/local agencies – data is gathered and sent ‘up the chain’ rather than shared or analyzed in a participatory way with local people or organizations. Not only is this disempowering, it may also hurt data quality, since people see little point in providing data that brings them no visible benefit.
  • There’s often a failure to identify unintended consequences or biases arising from use of ICTs in evaluation — What happens when you introduce tablets for data collection? What happens when you collect GPS information on your beneficiaries? What risks might you be introducing or how might people react to you when you are carrying around some kind of device?

3. Ethical and Legal

This is an area that I’m very interested in – especially as some donors have started asking for the raw data sets from any research, studies or evaluations that they are funding, and when these kinds of data sets are ‘opened’ there are all sorts of ramifications. There is quite a lot of heated discussion happening here. I was happy to see that DFID has just conducted a review of ethics in evaluation. Some of the core issues include:

  • Changing nature of privacy risks – issues here include privacy and protection of data; changing informed consent needs for digital data/open data; new risks of data leaks; and lack of institutional policies with regard to digital data.
  • Data rights and ownership: Here there are issues with proprietary data sets; data ownership when there are public-private partnerships; the idea of ‘data philanthropy’ when it’s not clear whose data is being donated; personal data ‘for the public good’; open data, open evaluation and transparency; poor care taken when vulnerable people provide personally identifiable information; household data sets ending up in the hands of those who might abuse them; and the increasing impossibility of data anonymization, given that crossing data sets often means that re-identification is easier than imagined.
  • Moving decisions and interpretation of data away from ‘the ground’ and upwards to the head office/the donor.
  • Little funding for trialing/testing the validity of new approaches that use ICTs and documenting what is working/not working/where/why/how to develop good practice for new ICTs in evaluation approaches.

Recommendations: 12 tips for better use of ICTs in M&E

Despite the rapid changes in the field in the 2 years since we first wrote our initial paper on ICTs in M&E, most of our tips for doing it better still hold true.

  1. Start with a high-quality M&E plan (not with the tech).
    • But also learn about the new tech-related possibilities that are out there so that you’re not missing out on something useful!
  2. Ensure design validity.
  3. Determine whether and how new ICTs can add value to your M&E plan.
    • It can be useful to bring in a trusted tech expert in this early phase so that you can find out if what you’re thinking is possible and affordable – but don’t let them talk you into something that’s not right for the evaluation purpose and design.
  4. Select or assemble the right combination of ICT and M&E tools.
    • You may find one off the shelf, or you may need to adapt or build one. This is a really tough decision, which can take a very long time if you’re not careful!
  5. Adapt and test the process with different audiences and stakeholders.
  6. Be aware of different levels of access and inclusion.
  7. Understand motivation to participate, incentivize in careful ways.
    • This includes motivation for both program participants and for organizations where a new tech-enabled tool/process might be resisted.
  8. Review/ensure privacy and protection measures, risk analysis.
  9. Try to identify unintended consequences of using ICTs in the evaluation.
  10. Build in ways for the ICT-enabled evaluation process to strengthen local capacity.
  11. Measure what matters – not what a cool ICT tool allows you to measure.
  12. Use and share the evaluation learnings effectively, including through social media.


Read Full Post »

Crowdsourcing our Responsible Data questions, challenges and lessons. (Photo by Amy O’Donnell).

At Catholic Relief Services’ ICT4D Conference in May 2016, I worked with Amy O’Donnell  (Oxfam GB) and Paul Perrin (CRS) to facilitate a participatory session that explored notions of Digital Privacy, Security and Safety. We had a full room, with a widely varied set of experiences and expertise.

The session kicked off with stories of privacy and security breaches. One person told of having personal data stolen when a federal government clearance database was compromised. We also shared how a researcher in Denmark scraped very personal data from the OK Cupid online dating site and opened it up to the public.

A comparison was made between the OK Cupid data situation and the work that we do as development professionals. When we collect very personal information from program participants, they may not expect that their household level income, health data or personal habits would be ‘opened’ at some point.

Our first task was to explore and compare the meaning of the terms: Privacy, Security and Safety as they relate to “digital” and “development.”

What do we mean by privacy?

The “privacy” group talked quite a bit about contextuality of data ownership. They noted that there are aspects of privacy that cut across different groups of people in different societies, and that some aspects of privacy may be culturally specific. Privacy is concerned with ownership of data and protection of one’s information, they said. It’s about who owns data and who collects and protects it and notions of to whom it belongs. Private information is that which may be known by some but not by all. Privacy is a temporal notion — private information should be protected indefinitely over time. In addition, privacy is constantly changing. Because we are using data on our mobile phones, said one person, “Safaricom knows we are all in this same space, but we don’t know that they know.”

Another said that in today’s world, “You assume others can’t know something about you, but things are actually known about you that you don’t even know that others can know. There are some facts about you that you don’t think anyone should know or be able to know, but they do.” The group mentioned website terms and conditions, corporate ownership of personal data and a lack of control of privacy now. Some felt that we are unable to maintain our privacy today, whereas others felt that one could opt out of social media and other technologies to remain in control of one’s own privacy. The group noted that “privacy is about the appropriate use of data for its intended purpose. If that purpose shifts and I haven’t consented, then it’s a violation of privacy.”

What do we mean by security?

The Security group considered security to relate to an individual’s information. “It’s your information, and security of it means that what you’re doing is protected, confidential, and access is only for authorized users.” Security was also related to the location of where a person’s information is hosted and the legal parameters. Other aspects were related to “a barrier – an anti-virus program or some kind of encryption software, something that protects you from harm…. It’s about setting roles and permissions on software and installing firewalls, role-based permissions for accessing data, and cloud security of individuals’ data.” A broader aspect of security was linked to the effects of hacking that lead to offline vulnerability, to a lack of emotional security or feeling intimidated in an online space. Lastly, the group noted that “we, not the systems, are the weakest link in security – what we click on, what we view, what we’ve done. We are our own worst enemies in terms of keeping ourselves and our data secure.”

What do we mean by safety?

The Safety group noted that it’s difficult to know the difference between safety and security. “Safety evokes something highly personal. Like privacy… it’s related to being free from harm personally, physically and emotionally.” The group raised examples of protecting children from harmful online content or from people seeking to harm vulnerable users of online tools. The aspect of keeping your online financial information safe, and feeling confident that a service was ‘safe’ to use was also raised. Safety was considered to be linked to the concept of risk. “Safety engenders a level of trust, which is at the heart of safety online,” said one person.

In the context of data collection for communities we work with – safety was connected to data minimization concepts and linked with vulnerability, and a compounded vulnerability when it comes to online risk and safety. “If one person’s data is not safely maintained it puts others at risk,” noted the group. “And pieces of information that are innocuous on their own may become harmful when combined.” Lastly, the notion of safety as related to offline risk or risk to an individual due to a specific online behavior or data breach was raised.

It was noted that in all of these terms: privacy, security and safety, there is an element of power, and that in this type of work, a power relations analysis is critical.

The Digital Data Life Cycle

After unpacking the above terms, Amy took the group through an analysis of the data life cycle (courtesy of the Engine Room’s Responsible Data website) in order to highlight the different moments where the three concepts (privacy, security and safety) come into play.

[Diagram: the data life cycle]

  • Plan/Design
  • Collect/Find/Acquire
  • Store
  • Transmit
  • Access
  • Share
  • Analyze/use
  • Retention
  • Disposal
  • Afterlife

Participants added additional stages in the data life cycle that they passed through in their work (coordinate, monitor the process, monitor compliance with data privacy and security policies). We placed the points of the data life cycle on the wall, and invited participants to:

  • Place a pink sticky note under the stage in the data life cycle that resonates or interests them most and think about why.
  • Place a green sticky note under the stage that is the most challenging or troublesome for them or their organizations and think about why.
  • Place a blue sticky note under the stage where they have the most experience, and to share a particular experience or tip that might help others to better manage their data life cycle in a private, secure and safe way.

Challenges, concerns and lessons

Design as well as policy are important!

  • Design drives everything else. We often start from the point of collection, when really it’s at the design stage that we should think about the burden of data collection and define the minimum we can ask of people. How we design – even how we get consent – can inform how the whole process happens.
  • When we get part-way through the data life cycle, we often wish we’d have thought of the whole cycle at the beginning, during the design phase.
  • In addition to good design, coordination of data collection needs to be thought about early in the process so that duplication can be reduced. This can also reduce fatigue for people who are asked over and over for their data.
  • Informed consent is such a critical issue that needs to be linked with the entire process of design for the whole data life cycle. How do you explain to people that you will be giving their data away, anonymizing it, separating it out, encrypting it? There are often flow-down clauses in some contracts that shift responsibilities for data protection and security, and it’s not always clear who is responsible for those data processes. How can you be sure that they are doing it properly and in a painstaking way?
  • Anonymization is also an issue. It’s hard to know to what level to anonymize things like call data records – to the individual? Township? District level? And for how long will anonymization actually hold up? (A simple check like the sketch after this list can at least flag how quickly small groups become identifiable.)
  • The lack of good design and policy contributes to overlapping efforts and poor coordination of data collection efforts across agencies. We often collect too much data in poorly designed databases.
  • Policy is not enough – we need to do a much better job of monitoring compliance with policy.
  • Institutional Review Boards (IRBs) and compliance aspects need to be updated to the new digital data reality. At the same time, sometimes IRBs are not the right instrument for what we are aiming to achieve.
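
One simple, partial way to probe the anonymization question is a k-anonymity check: at a given level of aggregation, how small is the smallest group of records that share the same quasi-identifiers? The sketch below uses hypothetical columns and data, and passing such a check is no guarantee against re-identification once data sets are crossed with other sources.

```python
import pandas as pd

# Hypothetical, already "anonymized" records described only by quasi-identifiers
records = pd.DataFrame({
    "district": ["A", "A", "A", "B", "B", "B"],
    "age_band": ["15-19", "15-19", "20-24", "15-19", "20-24", "20-24"],
    "sex":      ["F", "F", "F", "M", "F", "F"],
})

def smallest_group(df, quasi_identifiers):
    """Return k: the size of the smallest group sharing the same quasi-identifier values."""
    return int(df.groupby(quasi_identifiers).size().min())

# The more granular the quasi-identifiers, the smaller k tends to become
print(smallest_group(records, ["district"]))                     # k = 3 here
print(smallest_group(records, ["district", "age_band", "sex"]))  # k drops to 1: identifiable
```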

Data collection needs more attention.

  • Data collection is the easy part – where institutions struggle is with analyzing and doing something with the data we collect.
  • Organizations often don’t have a well-structured or systematic process for data collection.
  • We need to be clearer about what type of information we are collecting and why.
  • We need to update our data protection policy.

Reasons for data sharing are not always clear.

  • How can we share data securely and efficiently without building duplicative systems? We should be thinking more during the design and collection phases about whether the data is going to be interoperable and who needs to access it.
  • How can we get the right balance in terms of data sharing? Some donors really push for information that can put people in real danger – like details of people who have participated in particular programs that would put them at risk with their home governments. Organizations really need to push back against this. It’s an education thing with donors. Middle management and intermediaries are often the ones that push for this type of data because they don’t really have a handle on the risk it represents. They are the weak points because of the demands they are putting on people. This is a challenge for open data policies – leaving these decisions open means the laziest possible job gets done of thinking through the potential risks of that data.
  • There are legal aspects of sharing too – such as the USAID open data policy where those collecting data have to share with the government. But we don’t have a clear understanding of what the international laws are about data sharing.
  • There are so many pressures to share data but they are not all fully thought through!

Data analysis and use of data are key weak spots for organizations.

  • We are just beginning to think through capturing lots of data.
  • Data is collected but not always used. Too often it’s extractive data collection. We don’t have the feedback loops in place, and when there are feedback loops we often don’t use the feedback to make changes.
  • We often forget to go back to the people who have provided us with data to share back with them. It’s not often that we hold a consultation with the community to really involve them in how the data can be used.

Secure storage is a challenge.

  • We have hundreds of databases across the agency in various formats, hard drives and states of security, privacy and safety. Are we able to keep these secure?
  • We need to think more carefully about where we hold our data and who has access to it. Sometimes our data is held by external consultants. How should we be addressing that?

Disposing of data properly in a global context is hard!

  • It’s difficult to dispose of data when there are multiple versions of it and a data footprint.
  • Disposal is an issue. We’re doing a lot of server upgrades and many of these are remote locations. How do we ensure that the right disposal process is going on globally, short of physically seeing that hard drives are smashed up!
  • We need to do a better job of disposal on personal laptops. I’ve done a lot of data collection on my personal laptop – no one has ever followed up to see if I’ve deleted it. How are we handling data handover? How do you really dispose of data?
  • Our organization hasn’t even thought about this yet!

Tips and recommendations from participants

  • Organizations should be using different tools. They should be using Pretty Good Privacy techniques rather than relying on free or commercial tools like Google or Skype. (A simple example of encrypting a data file before sharing it is sketched after this list.)
  • People can be your weakest link if they are not aware or they don’t care about privacy and security. We send an email out to all staff on a weekly basis that talks about taking adequate measures. We share tips and stories. That helps to keep privacy and security front and center.
  • Even if you have a policy the hard part is enforcement, accountability, and policy reform. If our organizations are not doing direct policy around the formation of best practices in this area, then it’s on us to be sure we understand what is best practice, and to advocate for that. Let’s do what we can before the policy catches up.
  • The Responsible Data Forum and Tactical Tech have a great set of resources.
  • Oxfam has a Responsible Data Policy, and Girl Effect has developed a Girls’ Digital Privacy, Security and Safety Toolkit that can also offer some guidance.
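
As a small, hedged example of the PGP point, the sketch below calls the standard gpg command-line tool from Python to encrypt a data export to a colleague’s public key before it is shared. The file name and recipient address are placeholders, and it assumes gpg is installed and the recipient’s public key has already been imported.

```python
import subprocess

def encrypt_for_recipient(path, recipient):
    """Encrypt the file at `path` to the recipient's public key; return the .gpg path."""
    output = path + ".gpg"
    subprocess.run(
        ["gpg", "--encrypt", "--recipient", recipient, "--output", output, path],
        check=True,  # raise if gpg reports an error (e.g., unknown recipient key)
    )
    return output

# Example with placeholder names:
# encrypt_for_recipient("survey_export.csv", "colleague@example.org")
```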

In conclusion, participants agreed that development agencies and NGOs need to take privacy, security and safety seriously. They can no longer afford to implement security at a lower level than corporations. “Times are changing and hackers are no longer just interested in financial information. People’s data is very valuable. We need to change and take security as seriously as corporates do!” as one person said.


Read Full Post »

‘I believe that many ICSOs [international civil society organizations] urgently need to overcome the stalemate in their global governance; they don’t need another governance reform, they need a governance revolution.’  Burkhard Gnarig, Berlin Civil Society Center.

The Berlin Civil Society Center believes that CSO governance models are increasingly facing major challenges. These include that they are typically:

  • dominated by national affiliates but increasingly challenged by the need for global decisions and their implementation;
  • shaped by Northern countries and cultures while the emerging powers in a multipolar world are located in the South;
  • serving one specific mission focused on development or environment or human rights while the interdependence of challenges and the need for integrated solutions become more and more obvious;
  • caught up in the conflict between democratic and participatory decision making on one side and the need for quick and consistent decisions on the other;
  • characterised by a clear definition of “inside” and “outside” the organisation while the Internet and the habits of the next generation demand platforms for joint action rather than well defined boxes.

In order to address these issues, the Berlin Center is working on a participatory project aimed at developing new governance models for best practice in CSO governance*. The models are aimed at serving ‘board Members, Chairs and CEOs who aim to undertake future governance reforms more strategically and more effectively.’

Different governance models are needed, however, because not all organizations can and will follow one single model.

The project concept notes that:

  • Firstly, ICSOs working in human rights, poverty alleviation, environmental protection, humanitarian response or children’s rights have different governance needs resulting from the type of work they do. For example, an organisation focussing on wildlife conservation compared to one working for poverty eradication will have different needs and possibilities of including partners and beneficiaries in their governance.
  • Secondly, there are different possible models to synchronise and balance local, national and global requirements and resources. At present these are reflected in global set-ups ranging from loose networks, through confederations and federations, to unitary organisations.
  • Thirdly, when trying to secure future relevance of a governance system, much depends on different expectations of how future developments will turn out and which elements of these developments are considered most relevant in governance terms.

In an open letter, the Berlin Center director, Burkhard Gnarig explains that ‘with our Global Governance Project the Berlin Civil Society Center tries to lay the groundwork on which ICSOs can develop their own Global Governance Vision. A small Working Group which the Center has brought together will develop a handful of standard governance models that may serve as guidance on ICSOs’ specific paths to developing their own vision for their future governance.’

In order to bring a wider group of aid and development practitioners into the discussion, I volunteered to open a “CSO Governance Revolution” discussion on AidSource asking:

  • What are some of the major challenges you’ve seen with ICSO/INGO governance?
  • How do current governance models that you know of constrain the effectiveness of ICSOs or impact on development outcomes?
  • What CSO governance models have you seen that do work? What do they look like?
  • What are some of the underlying values and principles needed for effective ICSO governance?
  • What are some core elements of effective and successful ICSO governance models?
  • How do new information and communication technologies (ICTs) and trends in new media/social media impact on governance models and visions and people’s expectations of governance models?
  • What literature, research or existing documentation should be included as background resources for this discussion?
  • What other questions should be raised regarding ICSO governance?

I hope we can get some lively debate going to feed into the broader discussion at the Berlin Center. Join the AidSource discussion here.

More information on the Global Governance Project Concept can be accessed here or at the project page on the Berlin Civil Society Center’s website.

(*Note: I have no formal affiliation with the Berlin Center or this initiative, I just find it interesting and volunteered to try to get some additional discussion happening around it.)

Read Full Post »

This is a guest post by (my boss) Tessie San Martin, CEO of Plan International USA. Tessie presented at Fail Faire DC last night. These are her thoughts on the event, and about failure in general.

I attended the most extraordinary event, hosted by the World Bank and organized and sponsored by a variety of organizations including Development Gateway, Inveneo, Jhpiego, and Facilitating Change.

The objective of the event was to share our failures using technology in a development context, and to be bold, forthright, honest, and (this is very important when talking about one’s shortcomings!) humorous. There were 10 presenters (including me). We all agreed to be on the record. The event, and the fact that I agreed to be on the record, did make my IT and Communications teams a wee bit anxious. But I was keen to take on this opportunity.

We do not celebrate failure often enough.  But we should.  As Tim Harford has said in his very entertaining book, Adapt, “Few company bosses would care to admit it, but the market fumbles its way to success, as successful ideas take off and unsuccessful ones die.  When we see the survivors of this process – such as…General Electric and Procter and Gamble – we shouldn’t merely see success.  We should also see the long, tangled history of failure…”

In my presentation I spoke about what I call organizational kryptonite (all the geeky readers out there like me will know that kryptonite is matter that weakens – and slowly kills with extended exposure – Superman): being silent about your failures. If we do not share – and learn from – failures, we will never learn what works. If we do not take risks and encourage experimentation, we will never advance. The successful organizations are those that motivate risk taking, as well as transparency and openness about what is working and what is not.

So I attended this Fail Faire and happily shared with the audience our various challenges (a nice euphemism, don’t you think?) with the application of technology – not just for what I could learn (and I learned a lot), but also for what attending and presenting says about Plan. We are failing. And in that failure we are learning, adapting and advancing, and therefore improving our ability to improve the lives of children around the world.

Read Full Post »