
Archive for the ‘transparency’ Category

If you work in the aid and development sector, you’ll have done some soul searching and had a few difficult conversations with friends, donors, and colleagues* about ‘the Oxfam scandal’ this past week. Much has been written about the topic already. Here’s a (growing) compilation of 60+ posts (of varying degrees of quality).

Many in the sector are now scrambling to distance themselves from Oxfam. They want to send a message, rid themselves of stain-by-association, and avoid the fallout. Some seem to want to punish Oxfam for bringing shame upon the aid industry.

These responses, however, compound an existing problem in the sector — a focus on short-term fixes rather than long-term solutions. Actions and statements that treat Oxfam as the problem overlook the fact that it is one part of a broken system in desperate need of fixing.

I’ve worked in the sector for a long time. We all have stories about gender discrimination; sexual harassment, abuse and exploitation; racial discrimination; mistreatment; and mismanagement. We all know ‘that guy’ who got promoted, showed up at a partner or donor organization, or was put out to pasture after a massive screwup, abuse, or generally poor performance that remained an open secret.

The issues go wide and deep, and we talk about them a lot — publicly and privately. Yet the sector never seems able or willing to address them at the core. Instead, we watch the manifestations of these core issues being hushed up — and sometimes we are brave enough to report things. Why do we stay on? Because despite all the warts and all our frustrations with our organizations and our donors, we know that there are parts of this work that really matter.

The UK Charity Commission has launched an investigation into the Oxfam situation. Oxfam itself says it will set up an independent commission to review its practices and culture. It will also create “a global database of accredited referees to end the use of forged, dishonest or unreliable references by past or current Oxfam staff” and invest resources in its safeguarding processes.

These are good steps for Oxfam. But much more is needed to address the underlying issues across the sector. One systemic fix, for example, might be a global database that is open to all agencies that are hiring, rather than one limited to Oxfam.

But what next?

We’ll have another big scandal within a day or two, and social media will target its opinions and outrage at something new. In addition to breathing a sigh of relief, leadership across organizations and funders should grapple seriously with the question of how to overhaul the entire sector. We need profound changes that force the industry to live its professed values.

This does not mean dumping more responsibilities on safeguarding, protection, gender, participation, and human resources teams without the corresponding resources and seniority. Staff working in these areas are usually women, and they often do their jobs with little glory or fanfare. This is part of the problem. Rather than handing over clean-up to the ‘feminine’ sectors and walking away, leadership should be placing these thematic areas and functions at the heart of organizations where they have some power. And donors should be funding this in meaningful ways.

Virtually every institution in the US is going through a systematic exposure of its deepest and most entrenched issues of racism, classism, and sexism. It’s no secret that the aid and development sectors were built on colonialism. Will the ‘Oxfam scandal’ push us to finally do something to unravel and deal with that at the global level?

Can we get serious and do the deep work required to address our own institutional racism and gender discrimination and unacceptable power dynamics? Will we work actively to shift internal power structures that reward certain ages, genders, races, classes, and cultures? Will this include how we hire? How we promote? How we listen? How we market and fundraise? How we live our lives both in and outside of our workdays? Are we prepared to go further than the superficial?

Will we actually involve and engage the people we work with (our ‘beneficiaries’) as equals? Will we go beyond ‘feedback mechanisms’ to create the safe and trusted environments that are needed in order for someone to actually provide input, feedback, or report wrongdoing? Will we change our structures to become open and responsive to feedback? Will we follow up on feedback and make real changes in how we operate? In how funding is allocated?

Reforming the sector will require focused attention and conviction. We’ll have uncomfortable conversations about power, and then we’ll need to actually do something about those conversations. We’ll need to unpack the whole industry, including donors, and the dynamics inherent in funding and receiving funding. Addressing these issues in practice might mean that our program timelines are longer and our efforts cost more (update: this post gets at many of those logistical issues – recommended read!). It won’t be just another standardized code of conduct to sign or half-hearted yearly training. Openness and accountability will need to be rewarded, not punished and scandalized.

We will need to resist the urge to shout: #notallaidworkers! Now is not the time to tell ourselves that we are different from the rest of the sector or to run individual PR campaigns to fix our image. Rather, it’s time to open up and examine our institutions and organizations and the wider ecosystem and its incentives so that we can make real change happen.

We have an opportunity – #metoo, #blacklivesmatter, and other movements have prepared the way. Will we dig in and do the work in an honest way, or will we hold our breath and hope it all goes away so we can go back to business as usual?


*Thanks to the friends and colleagues who have had these conversations with me this week and the past two decades, and thanks also to those who reviewed and provided input on this post (Tom, Lina, Wayan and J.)!

Read Full Post »


Photo: Duncan Edwards, IDS.

A 2010 review of the impact and effectiveness of transparency and accountability initiatives, conducted by Rosie McGee and John Gaventa of the Institute of Development Studies (IDS), found a prevalence of untested assumptions and weak theories of change in projects, programs and strategies. This week IDS is publishing its latest Bulletin, titled “Opening Governance,” which offers a compilation of evidence and contributions focusing specifically on Technology in Transparency and Accountability (Tech for T&A).

It has a good range of articles that delve into critical issues in the Tech for T&A and Open Government spaces; help to clarify concepts and design; explore gender inequity as related to information access; and unpack the ‘dark side’ of digital politics, algorithms and consent.

In the opening article, editors Duncan Edwards and Rosie McGee (both currently working with the IDS team that leads the Making All Voices Count Research, Learning and Evidence component) give a superb in-depth review of the history of Tech for T&A and outline some of the challenges that have stemmed from ambiguous or missing conceptual frameworks and a proliferation of “buzzwords and fuzzwords.”

They unpack the history of and links between concepts of “openness,” “open development,” “open government,” “open data,” “feedback loops,” “transparency,” “accountability,” and “ICT4D (ICT for Development)” and provide some examples of papers and evidence that could help to recalibrate expectations among scholars and practitioners (and amongst donors, governments and policy-making bodies, one hopes).

The editors note that conceptual ambiguity continues to plague the field of Tech for T&A, causing technical problems because it hinders attempts to demonstrate impact, and creating political problems “because it clouds the political and ideological differences between projects as different as open data and open governance.”

The authors hope to stoke debate and promote the existing evidence in order to tone down the buzz. Likewise, they aim to provide greater clarity to the Tech for T&A field by offering concrete conclusions stemming from the evidence that they have reviewed and digested.

Download the Opening Governance report here.


Read Full Post »

The July 7th Technology Salon in New York City focused on the role of Information and Communication Technologies (ICTs) in Public Consultation. Our lead discussants were Tiago Peixoto, Team Lead, World Bank Digital Engagement Unit; Michele Brandt, Interpeace’s Director of Constitution-Making for Peace; and Ravi Karkara, Co-Chair, Policy Strategy Group, World We Want Post-2015 Consultation. Discussants covered the spectrum of local, national and global public consultation.

We started off by delving into the elements of a high-quality public consultation. Then we moved into whether, when, and how ICTs can help achieve those elements, and what the evidence base has to say about different approaches.

Elements and principles of high quality public participation

Our first discussant started by listing elements that need to be considered, whether a public consultation process is local, national or global, and regardless of whether it incorporates digital technologies:

  • Sufficient planning
  • Realistic time frames
  • Education for citizens to participate in the process
  • Sufficient time and budget to gather views via different mechanisms
  • Interest in analyzing and considering the views
  • Provision of feedback about what is done with the consultation results

Principles underlying public consultation processes are that they should be:

  • Inclusive
  • Representative
  • Transparent
  • Accountable

Public consultation processes should also be accompanied by widespread public education processes to ensure that people are a) prepared to provide their opinions and b) aware of the wider context in which the consultation takes place, she said. Tech and media can be helpful for spreading the news that the consultation is taking place, creating the narrative around it, and encouraging participation by groups who are traditionally excluded, such as girls and women or certain political, ethnic, economic or religious groups, a Salon participant added.

Technology increases scale but limits opportunities for empathy, listening and learning

When thinking about integrating technologies into national public consultation processes, we need to ask ourselves why we want to encourage participation and consultation, what we want to achieve by it, and how we can best achieve it. It’s critical to set goals and purpose for a national consultation, rather than to conduct one just to tick a box, continued the discussant.

The pros and cons of incorporating technology into public consultations are contextual. Technology can be useful for bringing more views into the consultation process; however, face-to-face consultation is critical for stimulating empathy in decision makers. When people in positions of power actually sit down and listen to their constituencies, it can send a very powerful message to people across the nation that their ideas and voices matter. National consultation also helps to build consensus and capacity to compromise. If done according to the above-mentioned principles, public consultation can legitimize national processes and improve buy-in. When leaders are open to listening, it also transforms them, she said.

At times, however, those in leadership or positions of power do not believe that people can participate; they do not believe that the people have the capacity to have an opinion about a complicated political process, for example the creation of a new constitution. For this reason there is often resistance to national-level consultations from multilateral or bilateral donors, politicians, the elites of a society, large or urban non-governmental organizations, and political leaders. Often when public consultation is suggested as part of a constitution-making process, it is rejected because it can slow down the process. External donors may want a quick process for political reasons, and they may impose deadlines on national leaders that do not leave sufficient time for a quality consultation process.

Polls often end up being one-off snapshots or popularity contests

One method that is seen as a quick way to conduct a national consultation is polling. Yet, as Salon participants discussed, polls may end up being more like a popularity contest than a consultation process. Polls offer limited space for deeper dialogue or for preparing those who have never been listened to before to make their voices heard. Polling may also raise expectations that whatever “wins” will be acted on, yet often there are various elements to consider when making decisions. So it’s important to manage expectations about what will be done with people’s responses and how much influence they will have on decision-making. Additionally, polls generally offer a snapshot of how people feel at a distinct point in time, but it may be important to understand what people are thinking at various moments throughout a longer-term national process, such as constitution making.

In addition to the above, opinion polls often reinforce the voices of those who have traditionally had a say, whereas those who have been suffering or marginalized for years, especially in conflict situations, may have a lot to say and a need to be listened to more deeply, explained the discussant. “We need to compress the vertical space between the elites and the grassroots, and to be sure we are not just giving people a one-time chance to participate. What we should be doing is helping to open space for dialogue that continues over time. This should be aimed at setting a precedent that citizen engagement is important and that it will continue even after a goal, such as constitution writing, is achieved,” said the discussant.

In the rush to use new technologies, often we forget about more traditional ones like radio, added one Salon participant, who shared an example of using radio and face to face meetings to consult with boys and girls on the Afghan constitution. Another participant suggested we broaden our concept of technology. “A plaza or a public park is actually a technology,” he noted, and these spaces can be conducive to dialogue and conversation. It was highlighted that processes of dialogue between a) national government and the international community and b) national government and citizens, normally happen in parallel and at odds with one another. “National consultations have historically been organized by a centralized unit, but now these kinds of conversations are happening all the time on various channels. How can those conversations be considered part of a national level consultation?” wondered one participant.

Aggregation vs deliberation

There is plenty of research on aggregation versus deliberation, our next discussant pointed out, and we know that the worst way to determine how many beans are in a jar is to deliberate. Aggregation (“crowd sourcing”) is a better way to find that answer. But for a trial, it’s not a good idea to have people vote on whether someone is guilty or not. “Between the jar and the jury trial, however,” he said, “we don’t know much about what kinds of policy issues lend themselves better to aggregation or to deliberation.”
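To illustrate the statistical intuition behind the beans-in-a-jar point, here is a minimal simulation sketch in Python; the bean count, crowd size, and error range are illustrative assumptions, not figures from the Salon.

```python
import random

# Minimal sketch of the 'beans in a jar' intuition: no individual knows the true
# count, but the average of many independent, noisy guesses tends to land close
# to it. All numbers below are illustrative assumptions.
TRUE_COUNT = 850          # hypothetical number of beans in the jar
NUM_GUESSERS = 500        # hypothetical crowd size

# Each person guesses with substantial individual error (+/- 50% of the true count).
guesses = [TRUE_COUNT * random.uniform(0.5, 1.5) for _ in range(NUM_GUESSERS)]

crowd_estimate = sum(guesses) / len(guesses)
print(f"True count: {TRUE_COUNT}")
print(f"Crowd average: {crowd_estimate:.0f}")
# Typical run: the crowd average falls within a few percent of the true count,
# even though individual guesses can be off by as much as 50%.
```

The point is narrow: averaging many independent, noisy estimates works well for questions with a single measurable answer, which is exactly where aggregation beats deliberation.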

For constitution making, deliberation is probably better, he said. But for budget allocation, it may be that aggregation is better. Research conducted across 132 countries indicated that “technology systematically privileges those who are better educated, male, and wealthier, even if you account for the technology access gaps.” This discussant mentioned that in participatory budgeting, people tend to just give up and let the educated “win” whereas maybe if it were done by a simple vote it would be more inclusive.

One Salon participant noted that it’s possible to combine deliberation and aggregation. “We normally only put things out for a vote after they’ve been identified through a deliberative process,” he said, “and we make sure that there is ongoing consultation.” Others lamented that decision makers often only want to see numbers – how many voted for what – and they do not accept more qualitative consultation results because these usually involve fewer people participating. “Congress just wants to see numbers.”

Use of technology biases participation towards the elite

Some groups are using alternative methods for participatory democracy work, but the technology space has not thought much about this and relies on self-selection for the most part, said the discussant, and results end up being biased towards wealthier, urban, more educated males. Technology allows us to examine behaviors by looking at data that is registered in systems and to conduct experiments, however those doing these experiments need to be more responsible, and those who do not understand how to conduct research using technology need to be less empirical. “It’s a unique moment to build on what we’ve learned in the past 100 years about participation,” he said. Unfortunately, many working in the field of technology-enabled consultation have not done their research.

These biases towards wealthier, educated, urban males are very visible in Europe and North America, because there is so much connectivity, yet whether online or offline, less educated people participate less in the political process. In ‘developing’ countries, the poor usually participate more than the wealthy, however. So when you start using technology for consultation, you often twist that tendency and end up skewing participation toward the elite. This is seen even when there are efforts to proactively reach out to the poor.

An individual’s internal sense that he or she is capable of making a judgment or influencing an outcome is key to participation, and this is closely related to education, time spent in school, and access to cultural assets. Among those who are traditionally marginalized, these internal assets are less developed and people are less confident. In order to increase participation in consultations, it’s critical to build these internal skills among more marginalized groups.

Combining online and offline public consultations

Our last discussant described how a global public consultation on the Sustainable Development Goals was conducted on a small budget, reaching an incredible 7.5 million people worldwide. Two clear goals of the consultation were that it be inclusive and non-discriminatory. In the end, 49% of those who voted identified as female, 50% as male and 1% as another gender. Though technology played a huge part in the process, the majority of people who voted used a paper ballot. Others participated using SMS, in locally run community consultation processes, or via the website. Results from the voting were visualized on a data dashboard/data curation website so that it would be easier to analyze them, promote them, and encourage high-level decision makers to take them into account.

Transparency was one of the most critical elements of this online/offline process. The consultation technology was created as open source so that those wishing to run their own consultations could take it, modify it, and repackage it however they wanted to suit their local context. Each local partner could manage their own URL and track their own work, and this was motivating to them.

Other key lessons were that a conscious effort has to be made to bring in the voices of minority groups; that investment in training and capacity development is critical for those running local consultations; that honesty and transparency about the process (in other words, careful management of expectations) are essential; and that there will be highs and lows in the participation cycle (be sensitive to people’s own rhythms and the time they have available to participate).

The importance of accountability

Accountability was a key aspect for this process. Member states often did not have time to digest the results of the consultation, and those running it had to find ways to capture the results in short bursts and visually simple graphics so that the consultation results would be used for decision making. This required skill and capacity for not only gathering and generating data but also curating it for the decision-making audience.

It was also important to measure the impact of the consultation – were people’s voices included in the decision-making process and did it make a difference? And were those voices representative of a wide range of people? Was the process inclusive?

Going forward, in order to build on the consultation process and to support the principle of accountability, the initiative will shift focus to become a platform for public participation in monitoring and tracking the implementation of the Sustainable Development Goals.

Political will and responsiveness

A question came up about the interest of decision-makers in actually listening. “Leaders often are not at all interested in what people have to say. They are more concerned with holding onto their power, and if leaders have not agreed to a transparent and open process of consultation, it will not work. You can’t make them listen if they don’t want to. If there is no political will, then the whole consultation process will just be propaganda and window dressing,” one discussant commented. Another Salon participant asked what can be done to help politicians see the value of listening. “In the US, for example, we have lobbyists, issues groups, PACs, etc., so our politicians are being pushed on and demanded from all sides. If consultation is going to matter, you need to look at the whole system.” “How can we develop tools that can help governments sort through all these pressures and inputs to make good decisions?” wondered one participant.

Another person mentioned Rakesh Rajani’s work, noting that participation is mainly about power. If participation is not part of a wider system change, part of changing power structures, then using technology for participation is just a new tool to do the same old thing. If the process is not transparent and accountable, or if you engage people and then do not deliver anything based on the engagement, you will lose their interest in engaging in the future.

Responsiveness was also raised. How many of these tech-fueled participation processes have led to governments actually changing, doing something different? One discussant said that evidence on the impact of ICT-enabled participation processes was found in only 25 cases, and of those only 5 could show any kind of impact; in all the others, the impact was ambiguous and there was really no evidence that using ICTs had made a difference. Another commented that clearly technology will only help if government is willing and able to receive consultation input and act on it. We need to find ways to help governments do that, noted another person.

As always, the conversation could have continued for quite some time, but our two hours were up. For more on ICTs and public consultations, here is a short list of resources that we compiled. Please add any others that would be useful! And as a little plug for a great read on technology and its potential in development and political work overall, I highly recommend checking out Geek Heresy: Rescuing Social Change from the Cult of Technology by Kentaro Toyama. Kentaro’s “Law of Amplification” is quite relevant in the space of technology-enabled participation, in that technology amplifies existing human behaviors and tendencies, and benefits those who are already primed to benefit while excluding those who have been traditionally excluded. Hopefully we’ll get Kentaro in for a Tech Salon in the Fall!

Thanks to our lead discussants, Michele, Tiago and Ravi, and to Thoughtworks for their generous hosting of the Salon! Salons are conducted under Chatham House Rule so no attribution has been made in this post. Sign up here if you’d like to receive Technology Salon invitations.

Read Full Post »

Last week’s Technology Salon New York City touched on ethics in technology for democracy initiatives. We heard from lead discussants Malavika Jayaram, Berkman Center for Internet and Society; Ivan Sigal, Global Voices; and Amilcar Priestley, Afrolatin@ Project. Though the topic was catalyzed by the Associated Press’ article on ‘Zunzuneo’ (a.k.a. ‘Cuban Twitter’) and subsequent discussions in the press and elsewhere, we aimed to cover some of the wider ethical issues encountered by people and organizations who implement technology for democracy programs.

Salons are off the record spaces, so no attribution is made in this post, but I’ve summarized the discussion points here:

First up: Zunzuneo

The media misinterpreted much of the Zunzuneo story. Zunzuneo was not a secret mission, according to one Salon participant, as it’s not in the remit of USAID to carry out covert operations. The AP article conflated a number of ideas regarding how USAID works and the contracting mechanisms that were involved in this case, he said. USAID and the Office of Transition Initiatives (OTI) frequently disguise members, organizations, and contractors that work for it on the ground for security reasons. (See USAID’s side of the story here). This may still be an ethical question, but it is not technically “spying.” The project was known within the OTI and development community, but on a ‘need to know’ basis. It was not a ‘fly by night’ operation; it was more a ‘quietly and not very effectively run project.’

There were likely ethics breaches in Zunzuneo, from a legal standpoint. It’s not clear whether the data and phone numbers collected from the Cuban public for the project were obtained in a legal or ethical way. Some reports say they were obtained through a mid-level employee (a “Cuban engineer who had gotten the phone list” according to the AP article). (Note: I spoke separately to someone close to the project who told me that user opt-in/opt-out and other standard privacy protocols were in place). It’s also not entirely clear whether, as the AP states, the user information collected was being categorized into segments who were loyal or disloyal to the Cuban government, information which could put users at risk if found out.

Zunzuneo took place in a broader historical and geo-political context. As one person put it, the project followed Secretary Clinton’s speeches on Internet Freedom. There was a rush to bring technology into the geopolitical space, and ‘the articulation of why technology was important collided with a bureaucratic process in USAID and the State Department (the ‘F process’) that absorbed USAID into the State Department and made development part of the State Department’s broader political agenda.’ This agenda had been in the works for quite some time, and was part of a wider strategy of quietly moving into development spaces and combining development, diplomacy, intelligence and military (defense), the so-called 3 D’s.

Implementers failed to think through good design, ethics and community aspects of the work. In a number of projects of this type, the idea was that if you give people technology, they will somehow create bottom up pressure for political social change. As one person noted, ‘in the Middle East, as a counter example, the tech was there to enable and assist people who had spent 8-10 years building networks. The idea that we can drop tech into a space and an uprising will just happen and it will coincidentally push the US geopolitical agenda is a fantasy.’ Often these kinds of programs start with a strategic communications goal that serves a political end of the US Government. They are designed with the idea that a particular input equals some kind of a specific result down the chain. The problem comes when the people doing the seeding of the ideas and inputs are not familiar with the context they will be operating in. They are injecting inputs into a space that they don’t understand. The bigger ethical question is: Why does this thought process prevail in development? Much of that answer is found in US domestic politics and the ways that initiatives get funded.

Zunzuneo was not a big surprise for Afrolatino organizations. According to one discussant, Afrolatino organizations were not surprised when the Zunzuneo article came out, given the geopolitical history and the ongoing presence of the US in Latin America. Zunzuneo was seen as a 21st Century version of what has been happening for decades. Though it was criticized, it was not seen as particularly detrimental. Furthermore, the Afrolatino community (within the wider Latino community) has had a variety of relationships with the US over time – for example, some Afrolatino groups supported the Contras. Many Afrolatino groups have felt that they were not benefiting overall from the mestizo governments who have held power. In addition, much of Latin America’s younger generation is less tainted by the Cold War mentality, and does not see US involvement in the region as necessarily bad. Programs like Zunzuneo come with a lot of money attached, so often wider concerns about their implications are not in the forefront because organizations need to access funding. Central American and Caribbean countries are only just entering into a phase of deeper analysis of digital citizenship, and views and perceptions on privacy are still being developed.

Perceptions of privacy

There are differences in perception when it comes to privacy and these perceptions are contextual. They vary within and across countries and communities based on age, race, gender, economic levels, comfort with digital devices, political perspective and past history. Some older people, for example, are worried about the privacy violation of having their voice or image recorded, because the voice, image and gaze hold spiritual value and power. These angles of privacy need to be considered as we think through what privacy means in different contexts and adapt our discourse accordingly.

Privacy is hard to explain, as one discussant said: ‘There are not enough dead bodies yet, so it’s hard to get people interested. People get mad when the media gets mad, and until an issue hits the media, it may go unnoticed. It’s very hard to conceptualize the potential harm from lack of privacy. There may be a chilling effect but it’s hard to measure. The digital divide comes in as well, and those with less exposure may have trouble understanding devices and technology. They will then have even greater trouble understanding beyond the device to data doubles, disembodied information and de-anonymization, which are about 7 levels removed from what people can immediately see. Caring a lot about privacy can get you labeled as paranoid or a crazy person in many places.’

Fatalism about privacy can also hamper efforts. In the developing world, many feel that everything is corrupt and inept, and that there is no point in worrying about privacy and security. ‘Nothing ever works anyway, so even if the government wanted to spy on us, they’d screw it up,’ is the feeling. This is often the attitude of human rights workers and others who could be at greatest risk from privacy breaches or data collection, such as that which was reportedly happening within Zunzuneo. Especially among populations and practitioners who have less experience with new technologies and data, this can create large-scale risk.

Intent, action, context and consequences

Good intentions with little attention to privacy vs data collection with a hidden political agenda. Where are the lines when data that are collected for a ‘good cause’ (for example, to improve humanitarian response) might be used for a different purpose that puts vulnerable people at risk? What about data that are collected with less altruistic intentions? What about when the two scenarios overlap? Data might be freely given or collected in an emergency that would be considered a privacy violation in a ‘development’ setting, or the data collection may lead to a privacy violation post-emergency. Often, slapping the ‘obviously good and unarguably positive’ label of ‘Internet freedom’ on something implies that it’s unquestionably positive when it may in fact be part of a political agenda with a misleading label. There is a long history of those with power collecting data that helps them understand and/or control those with less power, as one Salon participant noted, and we need to be cognizant of that when we think about data and privacy.

US Government approaches to political development often take an input/output approach, when, in fact, political development is not the same as health development. ‘In political work, there is no clear and clean epidemiological goal we are trying to reach,’ noted a Salon participant. Political development is often contentious and the targets and approaches are very different than those of health. When a health model and rhetoric is used to work on other development issues, it is misleading. The wholesale adoption of these kinds of disease model approaches leaves people and communities out of the decision making process about their own development. Similarly, the rhetoric of strategic communications and its inclusion into the development agenda came about after the War on Terror, and it is also a poor fit for political development. The rhetoric of ‘opening’ and ‘liberating’ data is similar. These arguments may work well for one kind of issue, but they are not transferable to a political agenda. One Salon participant pointed out the rhetoric of the privatization model also, and explained that a profound yet not often considered implication of the privatization of services is that once a service passes over to the private sector, the Freedom of Information Act (FOIA) does not apply, and citizens and human rights organizations lose FOIA as a tool. Examples included the US prison system and the Blackwater case of several years ago.

It can be confusing for implementers to know what to do, what tools to use, what funding to accept and when it is OK to bring in an outside agenda. Salon participants provided a number of examples where they had to make choices and felt ethics could have been compromised. Is it OK to sign people up on Facebook or Gmail during an ICT and education project, given these companies’ marketing and privacy policies? What about working on aid transparency initiatives in places where human rights work or crime reporting can get people killed or individual philanthropists/donors might be kidnapped or extorted? What about a hackathon where the data and solutions are later given to a government’s civilian-military affairs office? What about telling LGBT youth about a social media site that encourages LGBT youth to connect openly with one another (in light of recent harsh legal penalties against homosexuality)? What about employing a user-centered design approach for a project that will eventually be overlaid on top of a larger platform, system or service that does not pass the privacy litmus test? Is it better to contribute to improving healthcare while knowing that your software system might compromise privacy and autonomy because it sits on top of a biometric system, for example? Participants at the Salon face these ethical dilemmas every day, and as one person noted, ‘I wonder if I am just window dressing something that will look and feel holistic and human-centered, but that will be used to justify decisions down the road that are politically negative or go against my values.’ Participants said they normally rely on their own moral compass, but clearly many Salon participants are wrestling with the potential ethical implications of their actions.

What we can do? Recommendations from Salon participants

Work closely with and listen to local partners, who should be driving the process and decisions. There may be a role for an outside perspective, but the outside perspective should not trump the local one. Inculcate and support local communities to build their own tools, narratives, and projects. Let people set their own agendas. Find ways to facilitate long-term development processes around communities rather than being subject to agendas from the outside.

Consider this to be ICT for Discrimination and think in every instance and every design decision about how to dial down discrimination. Data lead to sorting, and data get lumped into clusters. Find ways during the design process to reduce the discrimination that will come from that sorting and clustering process. The ‘Do no harm’ approach is key. Practitioners and designers should also be wary of the automation of development and the potential for automated decisions to be discriminatory.

Call out hypocrisy. Those of us who sit at Salons or attend global meetings hold tremendous privilege and power as compared to most of the rest of the world. ‘It’s not landless farmers or disenfranchised young black youth in Brazil who get to attend global meetings,’ said one Salon attendee. ‘It’s people like us. We need to be cognizant of the advantage we have as holders of power.’ Here in the US, the participant added, we need to be more aware of what private sector US technology companies are doing to take advantage of and maintain their stronghold in the global market and how the US government is working to allow US corporations to benefit disproportionately from the current Internet governance structure.

Use a rights-based approach to data and privacy to help to frame these issues and situations. Disclosure and consent are sometimes considered extraneous, especially in emergency situations. People think ‘this might be the only time I can get into this disaster or conflict zone, so I’m going to Hoover up as much data as possible without worrying about privacy.’ On the other hand, sometimes organizations are paternalistic and make choices for people about their own privacy. Consent and disclosure are not new issues; they are merely manifested in new ways as new technology changes the game and we cannot guarantee anonymity or privacy any more for research subjects. There is also a difference between information a person actively volunteers and information that is passively collected and used without a person’s knowledge. Framing privacy in a human rights context can help place importance on both processes and outcomes that support people’s rights to control their own data and that increase empowerment.

Create a minimum standard for privacy. Though we may not be able to determine a ceiling for privacy, one Salon participant said we should at least consider a floor or a minimum standard. Actors on the ground will always feel that privacy standards are a luxury because they have little know-how and little funding, so creating and working within an ethical standard should be a mandate from donors. The standard could be established as an M&E criterion.

Establish an ethics checklist to decide on funding sources and create policies and processes that help organizations to better understand how a donor or sub-donor would access and/or use data collected as part of a project or program they are funding. This is not always an easy solution, however, especially for cash-strapped local organizations. In India, for example, organizations are legally restricted from receiving certain types of funding based on government concerns that external agencies are trying to bring in Western democracy and Western values. Local organizations have a hard time getting funding for anti-censorship or free speech efforts. As one person at the Salon said, ‘agencies working on the ground are in a bind because they can’t take money from Google because it’s tainted, they can’t take money from the State Department because it’s imperialism and they can’t take money from local donors because there are none.’

Use encryption and other technology solutions. Given the low levels of understanding and awareness of these tools, more needs to be done so that more organizations learn how to use them, and they need to be made simpler, more accessible and user-friendly. ‘Crypto Parties’ can help get organizations familiar with encryption and privacy, but better outreach is needed so that organizations understand the relevance of encryption and feel welcome in tech-heavy environments.
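As a purely illustrative sketch of what “using encryption” can mean at the most basic level, the snippet below uses the open-source Python cryptography package (an assumption on my part; any comparable library would do) to encrypt a hypothetical survey record at rest. It is not a full security design: key management, transport security, and access control still have to be addressed.

```python
from cryptography.fernet import Fernet

# Minimal sketch, assuming the 'cryptography' package is installed.
# Illustrates symmetric encryption of data at rest, nothing more.

key = Fernet.generate_key()   # store this key securely, separately from the data
fernet = Fernet(key)

record = b"village=example; respondent_id=12; feedback=..."  # hypothetical record
encrypted = fernet.encrypt(record)

# Later, only someone holding the key can read the record back.
decrypted = fernet.decrypt(encrypted)
assert decrypted == record
```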

Thanks to participants and lead discussants for the great discussions and to ThoughtWorks for hosting us at their offices!

 If you’d like to attend future Salons, sign up here!

Read Full Post »

At the November 8th Technology Salon in New York City, we looked at the role of ICTs in communication for development (C4D) initiatives with marginalized adolescent girls. Lead discussants Kerida McDonald and Katarzyna Pawelczyk discussed recent UNICEF reports related to the topic, and John Zoltner spoke about FHI360’s C4D work in practice.

To begin, it was pointed out that C4D is not donor communications or marketing. It is the use of communication approaches and methodologies to achieve influence at various levels —  e.g., family, institutional and policy —  to change behavior and social norms. C4D is one approach that is being used to address the root causes of gender inequality and exclusion.

As the UNICEF report on ICTs and C4D* notes, girls may face a number of situations that contribute to and/or are caused by their marginalization: early pregnancy, female genital cutting, early marriage, high rates of HIV/AIDS, low levels of education, and lack of control over resources. ICTs alone cannot resolve these, because there is a deep and broad set of root causes. However, ICTs can be integrated systematically into the set of C4D tools and approaches that contribute to positive change.

Issues like bandwidth, censorship and electricity need to be considered when integrating ICTs into C4D work, and approaches that fit the context need to be developed. Practitioners should use tools that are in the hands of girls and their communities now, yet be aware of advances in access and new technologies, as these change rapidly.

Key points:

Interactivity is more empowering than one-way messaging:  Many of the ICT solutions being promoted today focus on sending messages out via mobile phones. However C4D approaches aim for interactivity and multi-channel, multi-directional communication, which has proven more empowering.

Content: Traditional media normally goes through a rigorous editorial process and it is possible to infuse it with a gender balance. Social media does not have the same type of filters, and it can easily be used to reinforce stereotypes about girls. This is something to watch and be aware of.

Purpose: It’s common with ICT-related approaches to start with the technology rather than starting with the goals. As one Salon participant asked “What are the results we want to see for ourselves? What are the results that girls want to see? What are the root causes of discrimination and how are we trying to address them? What does success look like for girls? For organizations? Is there a role for ICTs in helping achieve success? If so, what is it?” These questions need to be the starting point, rather than the technology.

Participation: One Salon participant mentioned a 2-year project that is working together with girls to define their needs and their vision of success. The process is one of co-design, and it is aimed at understanding what girls want. Many girls expressed a feeling of isolation and a desire for connection, and so the project is looking at how ICTs can help them connect. As the process developed, the diversity of needs became very clear, and plans have changed dramatically based on input from a range of girls from different contexts. Implementers need to be prepared to change, adapt and respond to what girls say they want and to local realities.

****

A second study commissioned by UNICEF explores how young people use social media. The researchers encountered some challenges in terms of a strong gender approach for the study. Though a gender lens was used for analysis, there is little available data disaggregated by sex. The study does not focus on the most marginalized, because it looks at the use of social media, which normally requires a data connection or Internet access, which the most marginalized youth usually do not have.

The authors of the report found that youth most commonly used the Internet and social media for socializing and communicating with friends. Youth connected less often for schoolwork. One reason for this may be that in the countries/contexts where the research took place, there is no real integration of ICTs into the school system. It was emphasized that the  findings in the report are not comparable or nationally representative, and blanket statements such as “this means x for the whole developing world” should be avoided.

Key points:

Self-reporting biases. Boys tend to have higher levels of confidence and self-report greater ICT proficiency than girls do. This may skew results and make it seem that boys have higher skill levels.

Do girls really have less access? We often hear that girls have less access than boys. The evidence gathered for this particular report found that “yes and no.” In some places, when researchers asked “Do you have access to a mobile,” there was not a huge difference between urban and rural or between boys and girls. When they dug deeper, however, it became more complex. In the case of Zambia, access and ownership were similar for boys and girls, but fewer girls were connecting at all to the Internet as compared to boys. Understanding connectivity and use was quite complicated.

What are girls vs. boys doing online? This is an important factor when thinking about what solutions are applicable to which situation(s). Differences came up here in the study. In Argentina, girls were doing certain activities more frequently, such as chatting and looking for information, but they were not gaming. In Zambia, girls were doing some things less often than boys; for example, fewer girls than boys were looking for health information, although the number was still significant. A notable finding was that both girls and boys were accessing general health information more often than they were accessing sensitive information, such as sexual health or mental health.

What are the risks in the online world? A qualitative portion of the study in Kenya used focus groups with girls and boys and asked about their uses of social media and experiences of risk. Many out-of-school girls aged 15-17 reported that they used social media as a way to meet a potential partner to help them out of their financial situation. They reported riskier behavior, contact with older men, and relationships more often than girls who were in school. Girls in general were more likely than boys to report unpleasant online encounters, for example, requests for self-exposure photos.

Hiding social media use. Most of the young people that researchers spoke with in Kenya were hiding social media use from their parents, who disapproved of it. This is an important point to note in C4D efforts that plan on using social media, and program designers will want to take parental attitudes about different media and communication channels into consideration as they design C4D programs.

****

When implementing programs, it is noteworthy how boys and girls tend to use ICT and media tools. Gender issues often manifest themselves right away. “The boys grab the cameras, the boys sit down first at the computers.” If practitioners don’t create special rules and a safe space for girls to participate, girls may be marginalized. In practical ICT and media work, it’s common for boys and girls to take on certain roles. “Some girls like to go on camera, but more often they tend to facilitate what is being done rather than star in it.” The gender gap in ICT access and use, where it exists, is a reflection of the power gaps of society in general.

In the most rural areas, even when people have access, they usually don’t have the resources and skills to use ICTs. Very simple challenges can affect girls’ ability to participate in projects; for example, oftentimes a project will hold training at times when it’s difficult for girls to attend. Unless someone systematically goes through and applies a gender lens to a program, organizations often don’t notice the challenges girls may face in participating. It’s not enough to do gender training or measure gender once a year; gendered approaches need to be built into program design.

Long-term interventions are needed if the goal is to emancipate girls, help them learn better, graduate, postpone pregnancy, and get a job. This cannot be done in a year with a simple project that has only one focus, because girls are dealing with education, healthcare, and a whole series of very entrenched social issues. What’s needed is to follow a cohort of girls and to provide information and support across all these sectors over the long term.

Key points:

Engaging boys and men: Negative reactions from men are a concern if and when girls and women start to feel more empowered or to access resources. For example, some mobile money and cash transfer programs direct funds to girls and women, and some studies have found that violence against women increases when women start to have more money and more freedom. Another study, however, of a small-scale effort that provides unconditional cash transfers to girls ages 18-19 in rural Kenya, is demonstrating just the opposite: girls have been able to say where money is spent and the gender dynamics have improved. This raises the question of whether program methodologies need to be oriented towards engaging boys and men and involving them in changing gender dynamics, and whether engaging boys and men can help avoid an increase in violence. Working with boys to become “girl champions” was cited as a way to help to bring boys into the process as advocates and role models.

Girls as producers, not just consumers. ICTs are not only tools for sending content to girls. Some programs are working to help girls produce content and create digital stories in their own languages. Sometimes these stories are used to advocate to decision makers for change in favor of girls and their agendas. Digital stories are being used as part of research processes and to support monitoring, evaluation and accountability work through ‘real-time’ data.

ICTs and social accountability. Digital tools are helping young people address accountability issues and inform local and national development processes. In some cases, youth are able to use simple, narrow bandwidth tools to keep up to date on actions of government officials or to respond to surveys to voice their priorities. Online tools can also lead to offline, face-to-face engagement. One issue, however, is that in some countries, youth are able to establish communication with national government ministers (because there is national-level capacity and infrastructure) but at local level there is very little chance or capability for engagement with elected officials, who are unprepared to respond and engage with youth or via social media. Youth therefore tend to bypass local government and communicate with national government. There is a need for capacity building at local level and decentralized policies and practices so that response capacity is strengthened.

Do ICTs marginalize girls? Some Salon participants worried that as conversations and information increasingly move to a digital environment, ICTs are magnifying the information and communication divide and further marginalizing some girls. Others felt that the fact that we are able to reach the majority of the world’s population now is very significant, and the inability to reach absolutely everyone doesn’t mean we should stop using ICTs. For this very reason – because sharing of information is increasingly digital – we should continue working to get more girls online and strengthen their confidence and abilities to use ICTs.

Many thanks to UNICEF for hosting the Salon!

(Salons operate under Chatham House Rule, thus no attribution has been given in the above summary. Sign up here if you’d like to attend Salons in the future!)

*Disclosure: I co-authored this report with Keshet Bachan.

Read Full Post »

This is a guest post from Anna Crowe, Research Officer on the Privacy in the Developing World Project, and Carly Nyst, Head of International Advocacy at Privacy International, a London-based NGO working on issues related to technology and human rights, with a focus on privacy and data protection. Privacy International’s new report, Aiding Surveillance, which covers this topic in greater depth, was released this week.

by Anna Crowe and Carly Nyst


New technologies hold great potential for the developing world, and countless development scholars and practitioners have sung the praises of technology in accelerating development, reducing poverty, spurring innovation and improving accountability and transparency.

Worryingly, however, privacy is presented as a luxury that creates barriers to development, rather than a key aspect of sustainable development. This perspective needs to change.

Privacy is not a luxury, but a fundamental human right

New technologies are being incorporated into development initiatives and programmes relating to everything from education to health and elections, and in humanitarian initiatives, including crisis response, food delivery and refugee management. But many of the same technologies being deployed in the developing world with lofty claims and high price tags have been extremely controversial in the developed world. Expansive registration systems, identity schemes and databases that collect biometric information including fingerprints, facial scans, iris information and even DNA, have been proposed, resisted, and sometimes rejected in various countries.

The deployment of surveillance technologies by development actors, foreign aid donors and humanitarian organisations, however, is often conducted in the complete absence of the type of public debate or deliberation that has occurred in developed countries. Development actors rarely consider target populations’ opinions when approving aid programmes. Important strategy documents such as the UN Office for the Coordination of Humanitarian Affairs’ Humanitarianism in the Network Age and the UN High-Level Panel on the Post-2015 Development Agenda’s A New Global Partnership: Eradicate Poverty and Transform Economies through Sustainable Development give little space to the possible impact that adopting new technologies or data analysis techniques could have on individuals’ privacy.

Some of this trend can be attributed to development actors’ systematic failure to recognise the risks to privacy that development initiatives present. However, it also reflects an often unspoken view that the right to privacy must necessarily be sacrificed at the altar of development – that privacy and development are conflicting, mutually exclusive goals.

The assumptions underpinning this view are as follows:

  • that privacy is not important to people in developing countries;
  • that the privacy implications of new technologies are not significant enough to warrant special attention;
  • and that respecting privacy comes at a high cost, endangering the success of development initiatives and creating unnecessary work for development actors.

These assumptions are deeply flawed. While it should go without saying, privacy is a universal right, enshrined in numerous international human rights treaties, and matters to all individuals, including those living in the developing world. The vast majority of developing countries have explicit constitutional requirements to ensure that their policies and practices do not unnecessarily interfere with privacy. The right to privacy guarantees individuals a personal sphere, free from state interference, and the ability to determine who has information about them and how it is used. Privacy is also an “essential requirement for the realization of the right to freedom of expression”. It is not an “optional” right that only those living in the developed world deserve to see protected. To presume otherwise ignores the humanity of individuals living in various parts of the world.

Technologies undoubtedly have the potential to dramatically improve the provision of development and humanitarian aid and to empower populations. However, the privacy implications of many new technologies are significant and are not well understood by many development actors. The expectations that are placed on technologies to solve problems need to be significantly circumscribed, and the potential negative implications of technologies must be assessed before their deployment. Biometric identification systems, for example, may assist in aid disbursement, but if they also wrongly exclude whole categories of people, then the objectives of the original development intervention have not been achieved. Similarly, border surveillance and communications surveillance systems may help a government improve national security, but may also enable the surveillance of human rights defenders, political activists, immigrants and other groups.

Asking humanitarian actors to protect and respect privacy rights must not be distorted into a demand for inflexible and impossibly high standards that would derail development initiatives if put into practice. Privacy is not an absolute right and may be limited, but only where limitation is necessary, proportionate and in accordance with law. The crucial point is to actually undertake an analysis of the technology and its privacy implications, and to do so in a thoughtful and considered manner. For example, if an intervention requires collecting personal data from those receiving aid, the first step should be to ask what information it is necessary to collect, rather than simply applying a standard approach to every programme. In some cases, this may mean additional work. But this work should be weighed against the contribution that upholding human rights and the rule of law makes to development and to producing sustainable outcomes. And in some cases, respecting privacy can also mean saving lives, as information falling into the wrong hands could spell tragedy.

A new framing

While there is an increasing recognition among development actors that more attention needs to be paid to privacy, it is not enough to merely ensure that a programme or initiative does not actively harm the right to privacy; instead, development actors should aim to promote rights, including the right to privacy, as an integral part of achieving sustainable development outcomes. Development is not just, or even mostly, about accelerating economic growth. The core of development is building capacity and infrastructure, advancing equality, and supporting democratic societies that protect, respect and fulfill human rights.

The benefits of development and humanitarian assistance can be delivered without unnecessary and disproportionate limitations on the right to privacy. The challenge is to improve access to and understanding of technologies, ensure that policymakers and the laws they adopt respond to the challenges and possibilities of technology, and generate greater public debate to ensure that rights and freedoms are negotiated at a societal level.

Technologies can be built to satisfy both development and privacy.

Download the Aiding Surveillance report.

Read Full Post »

This post was originally published on the Open Knowledge Foundation blog

A core theme that the Open Development track covered at September’s Open Knowledge Conference was Ethics and Risk in Open Development. There were more questions than answers in the discussions, summarized below, and the Open Development working group plans to further examine these issues over the coming year.

Informed consent and opting in or out

Ethics around ‘opt in’ and ‘opt out’ when working with people in communities with fewer resources, lower connectivity, and/or less of an understanding about privacy and data are tricky. Yet project implementers have a responsibility to work to the best of their ability to ensure that participants understand what will happen with their data in general, and what might happen if it is shared openly.

There are some concerns around how these decisions are currently being made and by whom. Can an NGO make the decision to share or open data from/about program participants? Is it OK for an NGO to share ‘beneficiary’ data with the private sector in return for funding to help make a program ‘sustainable’? What liabilities might donors or program implementers face in the future as these issues develop?

Issues related to private vs. public good need further discussion, and there is no one right answer because concepts and definitions of ‘private’ and ‘public’ data change according to context and geography.

Informed participation, informed risk-taking

The ‘do no harm’ principle is applicable in emergency and conflict situations, but is it realistic to apply it to activism? There is concern that organizations implementing programs that rely on newer ICTs and open data are not ensuring that activists have enough information to make an informed choice about their involvement. At the same time, assuming that activists don’t know enough to decide for themselves can come across as paternalistic.

As one participant at OK Con commented, “human rights and accountability work are about changing power relations. Those threatened by power shifts are likely to respond with violence and intimidation. If you are trying to avoid all harm, you will probably not have any impact.” There is also the concept of transformative change: “things get worse before they get better. How do you include that in your prediction of what risks may be involved? There also may be a perception gap in terms of what different people consider harm to be. Whose opinion counts and are we listening? Are the right people involved in the conversations about this?”

A key point is that whoever assumes the risk needs to be involved in assessing that potential risk and deciding what the actions should be — but people also need to be fully informed. With new tools coming into play all the time, can people be truly ‘informed’, and are outsiders who come in with new technologies doing a good enough job of facilitating discussions about possible implications and risk with those who will face the consequences? Are community members and activists themselves included in risk analysis, assumption testing, threat modeling and risk mitigation work? Is there a way to predict the likelihood of harm? For example, can we determine whether releasing ‘x’ data will likely lead to ‘y’ harm happening? How can participants, practitioners and program designers get better at identifying and mitigating risks?

When things get scary…

Even when risk analysis is conducted, it is impossible to predict or foresee every possible way that a program can go wrong during implementation. Then the question becomes what to do when you are in the middle of something that is putting people at risk or leading to extremely negative unintended consequences. Who can you call for help? What do you do when there is no mitigation possible and you need to pull the plug on an effort? Who decides that you’ve reached that point? This is not an issue that exclusively affects programs that use open data, but open data may create new risks with which practitioners, participants and activists have less experience, thus the need to examine it more closely.

Participants felt that there is not enough honest discussion on this aspect. There is a pop culture of ‘admitting failure’ but admitting harm is different because there is a higher sense of liability and distress. “When I’m really scared shitless about what is happening in a project, what do I do?” asked one participant at the OK Con discussion sessions. “When I realize that opening data up has generated a huge potential risk to people who are already vulnerable, where do I go for help?” We tend to share our “cute” failures, not our really dismal ones.

Academia has done some work around research ethics, informed consent, human subject research and the use of Institutional Review Boards (IRBs). What aspects of this can or should be applied to mobile data gathering, crowdsourcing, open data work and the like? What about when citizens are their own source of information and they voluntarily share data without a clear understanding of what happens to the data, or what the possible implications are?

Do we need to think about updating and modernizing the concept of IRBs? A major issue is that many people who are conducting these kinds of data collection and sharing activities using new ICTs are unaware of research ethics and IRBs and don’t consider what they are doing to be ‘research’. How can we broaden this discussion and engage those who may not be aware of the need to integrate informed consent, risk analysis and privacy awareness into their approaches?

The elephant in the room

Despite our good intentions to do better planning and risk management, one big problem is donors, according to some of the OK Con participants. Do donors require enough risk assessment and mitigation planning in their program proposal designs? Do they allow organizations enough time to develop a well-thought-out and participatory Theory of Change along with a rigorous risk assessment together with program participants? Are funding recipients required to report back on risks and how they played out? As one person put it, “talk about failure is currently more like a ‘cult of failure’ and there is no real learning from it. Systematically we have to report up the chain on money and results and all the good things happening, and no one up at the top really wants to know about the bad things. The most interesting learning doesn’t get back to the donors or permeate across practitioners. We never talk about all the work-arounds and backdoor negotiations that make development work happen. This is a serious systemic issue.”

Greater transparency can actually be a deterrent to talking about some of these complexities, because “the last thing donors want is more complexity as it raises difficult questions.”

Reporting upwards to government representatives in Parliament or Congress leads to a continued aversion to any failures or ‘bad news’. Though funding recipients are urged to be innovative, they still need to hit numeric targets so that the international aid budget can be defended in government spaces. Thus, the message is mixed: “Make sure you are learning and recognizing failure, but please don’t put anything too serious in the final report.” There is awareness that rigid program planning doesn’t work and that we need to be adaptive, yet we are asked to “put it all into a log frame and make sure the government aid person can defend it to their superiors.”

Where to from here?

It was suggested that monitoring and evaluation (M&E) could be used as a tool for examining some of these issues, but M&E needs to be seen as a learning component, not only an accountability one. M&E needs to feed into the choices people are making along the way, and linking it in well during program design may be one way to support a more adaptive and iterative approach. M&E should force practitioners to ask themselves the right questions as they design programs and as they assess them throughout implementation. Theory of Change might help, and an ethics-based approach could be introduced as well to raise these questions about risk and privacy and ensure that they are addressed from the start of an initiative.

Practitioners have also expressed the need for additional resources to help them predict and manage possible risk: case studies, a safe space for sharing concerns during implementation, people who can help when things go pear-shaped, a menu of methodologies, a set of principles or questions to ask during program design, or even an ICT4D Implementation Hotline or a forum for questions and discussion.

These ethical issues around privacy and risk are not exclusive to Open Development. Similar issues were raised last week at the Open Government Partnership Summit sessions on whistle blowing, privacy, and safeguarding civic space, especially in light of the Snowden case. They were also raised at last year’s Technology Salon on Participatory Mapping.

A number of groups are looking more deeply into this area, including the Capture the Ocean Project, The Engine Room, IDRC’s research network, The Open Technology Institute, Privacy International, GSMA, those working on “Big Data,” those in the Internet of Things space, and others.

I’m looking forward to further discussion with the Open Development working group on all of this in the coming months, and will also be putting a little time into mapping out existing initiatives and identifying gaps when it comes to these cross-cutting ethics, power, privacy and risk issues in open development and other ICT-enabled data-heavy initiatives.

Please do share information, projects, research, opinion pieces and more if you have them!

Read Full Post »

This is a cross-post by Duncan Edwards from the Institute of Development Studies. Duncan and I collaborated on some sessions for the Open Development stream at September’s Open Knowledge Conference, and we are working on a few posts to sum up what we discussed there and highlight some lingering thoughts on open development and open data. This post was originally published on the Open Knowledge Foundation blog on October 21, 2013.

by Duncan Edwards

I’ve had a lingering feeling of unease that things were not quite right in the world of open development and ICT4D (Information and communication technology for development), so at September’s Open Knowledge Conference in Geneva I took advantage of the presence of some of the world’s top practitioners in these two areas to explore the question: How does “openness” really effect change within development?

Inspiration for the session came from a number of conversations I’ve had over the last few years. My co-conspirator/co-organiser of the OKCon side event “Reality check: Ethics and Risk in Open Development,” Linda Raftree, had also been feeling uncomfortable with the framing of many open development projects, assumptions being made about how “openness + ICTs = development outcomes,” and a concern that risks and privacy were not being adequately considered. We had been wondering whether the claims made by Open Development enthusiasts were substantiated by any demonstrable impact. For some reason, as soon as you introduce the words “open data” and “ICT,” good practice in development gets thrown out the window in the excitement to reach “the solution”.

A common narrative in many “open” development projects goes along the lines of “provide access to data/information –> some magic occurs –> we see positive change.” In essence, because of the newness of this field, we only know what we THINK happens, we don’t know what REALLY happens because there is a paucity of documentation and evidence.

It’s problematic that we often use the terms data, information, and knowledge interchangeably, because:

  • Data is NOT knowledge.
  • Data is NOT information.
  • Information is NOT knowledge.
  • Knowledge IS what you know. It’s the result of information you’ve consumed, your education, your culture, beliefs, religion, experience – it’s intertwined with the society within which you live.

Data cake metaphor developed by Mark Johnstone.

Understanding and thinking through how we get from the “openness” of data, to how this affects how and what people think, and consequently how they MIGHT act, is critical in whether “open” actually has any additional impact.

At Wednesday’s session, panellist Matthew Smith from the International Development Research Centre (IDRC) talked about the commonalities across various open initiatives. Matthew argued that a larger Theory of Change (ToC) around how ‘open’ leads to change on a number of levels could allow practitioners to draw out common points. The basic theory we see in open initiatives is “put information out, get a feedback loop going, see change happen.” But open development can be sliced in many ways, and we tend to work in silos when talking about openness. We have open educational resources, open data, open government, open science, etc. We apply ideas and theories of openness in a number of domains but we are not learning across these domains.

We explored the theories of change underpinning two active programmes that incorporate a certain amount of “openness” in their logic. Simon Colmer from the Knowledge Services department at the Institute of Development Studies outlined his department’s theory of change of how research evidence can help support decision-making in development policy-making and practice. Erik Nijland from HIVOS presented elements of the theory of change that underpins the Making All Voices Count programme, which looks to increase the links between citizens and governments to improve public services and deepen democracy. Both of these ToCs assume that because data/information is accessible, people will use it within their decision-making processes.

They also both assume that intermediaries play a critical role in analysis, translation, interpretation, and contextualisation of data and information to ensure that decision makers (whether citizens, policy actors, or development practitioners) are able to make use of it. Although access is theoretically open, in practice even mediated access is not equal – so how might this play out in respect to marginalised communities and individuals?

What neither ToC really does is unpack who these intermediaries are. What are their politics? What are their drivers for mediating data and information? What is the effect of this? A common assumption is that intermediaries are somehow neutral and unbiased – does this assumption really hold true?

What many open data initiatives do not consider is what happens after people are able to access and internalise open data and information. How do people act once they know something? As Vanessa Herringshaw from the Transparency and Accountability Initiative said in the “Raising the Bar for ambition and quality in OGP” session, “We know what transparency should look like but things are a lot less clear on the accountability end of things”.

There are a lot of unanswered questions. Do citizens have the agency to take action? Who holds power? What kind of action is appropriate or desirable? Who is listening? And if they are listening, do they care?

Linda finished up the panel by raising some questions around the assumptions that people make decisions based on information rather than on emotion, and that there is a homogeneous “public” or “community” that is waiting for data/information upon which to base their opinions and actions.

So as a final thought, here’s my (perhaps clumsy) 2013 update on Gil Scott-Heron’s 1970 song “The Revolution Will Not Be Televised”:

“The revolution will NOT be in Open data,
It will NOT be in hackathons, data dives, and mobile apps,
It will NOT be broadcast on Facebook, Twitter, and YouTube,
It will NOT be live-streamed, podcast, and available on catch-up
The revolution will not be televised”

Scott-Heron’s point, which holds true today, was that “the revolution”, or change, starts in the head. We need to think carefully about how we get far beyond access to data.

Look out for a second post coming soon on Theories of Change in Open, and a third post on ethics and risk in open data and open development.

And if you’re interested in joining the conversation, sign up to our Open Development mailing list.

Read Full Post »

This is a cross-post from the always thoughtful and eloquent Ian Thorpe, who notes that fundraising is a means to help non-profit organizations fulfill their wider mission; it should not be mistaken as the end goal of non-profit organizations. Consistency in how we achieve our missions across all of our operations becomes ever more important in this age of growing transparency. Read the original post here.

by Ian Thorpe

I’ve been reflecting on a couple of interesting discussions lately on aid communication and fundraising. In the first, Kurante organized a Google Hangout on “Poverty Porn”, i.e. the use of negative, shocking images in aid campaigns (the recording and the Twitter Storify of the discussion can be found on Tom Murphy’s blog here). During the discussion @meowtree shared a link to this rather discouraging blog post by a fundraising guru here that suggests that those who criticize the use of negative images are undermining the organizations they work for and should be fired!

A second Twitter discussion concerned a new “buy one, give one” programme and whether or not it is harmful or helpful, and on what basis this type of programme might be judged.

What comes out of both of these is the potential conflict between what makes good aid and what makes good fundraising. It’s quite possible to raise money, a lot of money, if one is willing to do whatever it takes – to use any kind of images, words, and tactics to get donors to open their wallets. Marketers and fundraisers, to give them their due, make extensive use of research and evidence in their work, perhaps more so than programme people, and much research backs up the claim that negative imagery is often more successful than positive imagery in evoking a response and getting checkbooks out.

If you were a private company then “maximizing shareholder value” by going where the money is might well be a great strategy. But aid agencies and civil society organizations generally exist to serve a mission. The mission of the organization is a huge asset, both in motivating staff and in generating support – but it’s also an important constraint, in that it places limits on what you will be prepared to do to raise funds or attention. Essentially, if you exist to pursue a mission then all your activities need to be consistent with it. Generally an aid mission is not simply to raise as much money as possible; it’s to achieve a purpose such as reducing poverty or protecting children from harm. And pursuing this goal is often more complicated than simply maximizing the amount of positive impact on your beneficiaries – you also need to work in a principled way informed by your organization’s values, such as respecting the human dignity of the people you aim to help and not exploiting them (even if with the aim of helping them).

I recall a conversation from when I worked on communication in UNICEF with our fundraisers about a similar topic (from more than 10 years ago, so I’m not spilling any secrets). At that stage the organization was looking to move more into “upstream policy work” and to scale back on “service delivery”, especially in middle-income countries. Programmatically this made a lot of sense, but the fundraisers were naturally concerned about the impact on their ability to talk about this shift in fundraising campaigns. It’s much easier to fundraise using images of nicely branded supplies coming in on trucks and being handed out by aid workers to poor people than it is to “show” the work on, or the results of, influencing government policy, improving data collection and building the capacity of civil servants. But at the end of the discussion we were ready to say that while it might be harder to raise money for upstream work, and we might be able to raise less money as a result – if this is the work that needs to be done, then the task was to find better ways of fundraising for this work, rather than changing the nature of the work to make it easier to raise funds.

Of course aid organizations rely on external funding (whether government, corporate or individual) and they need professional fundraisers to be able to get the resources they need to do their work. Professional fundraisers and communicators know better than programme staff, from their experience and research, how to put together effective fundraising communications in terms of who to approach, what approaches to use and what information is needed from programme staff to support it. That can include coming up with novel approaches to raising funds for something that is already a priority, even if these appear gimmicky to aid workers on the ground (such as sending a quarter coin to people to get them to send in donations or getting them to buy something to give something).

But it’s important to ensure that the fundraising is in service of the organization’s goals rather than the reverse. It can be easy to be tempted to do something because it’s popular with donors even if it isn’t fully consistent with your mission and values, and hard to forswear potential opportunities when aid funding is tight. In particular it can be tempting to agree to programmes which are appealing to donors but for which there isn’t a demand, or worse that do unintended harm. But if the organization exists to serve a mission – then it’s important to keep that front and centre in decision-making on what opportunities to pursue or what tactics to use to pursue them – in fundraising just as much as in programmes.

In fact, in an age of increasing aid transparency it becomes ever more important to focus on your mission and values, since it’s much more obvious if your communications, partnerships and programmes are not consistent with each other or with your mission, and your reputation will suffer as a result – as will the cause you are pursuing.

Greater transparency is also an opportunity to bring donors and beneficiaries closer together, so that donors can see and hear the results of aid work directly from those being helped rather than via a “story” – whether positive or negative – constructed by the aid agency for the benefit of donors. Similarly, donors can also hear more from those they are helping about what they want and need, seeing them more as individuals with dignity, aspirations and agency to improve their lives aided by donors, rather than as passive objects of pity and charity. This way, instead of going where donors give most now, you can change the discussion to educate and encourage them to give money where it is really needed, and to understand better what their support really does and can do.

Read Full Post »

About 15 years ago, I was at a regional management meeting where a newly hired colleague was introduced. The guy next to me muttered “Welcome to the Titanic.”

In the past 20 years, we’ve seen the disruption of the record, photo, newspaper, and other industries. Though music, photos, and news continue to play a big role in people’s lives, the old ‘owners’ of the space were disrupted by changes in technology and new expectations from consumers. Similar changes are happening in the international civil society space, and organizations working there need to think more systematically about what these changes mean.

I spent last week with leadership from a dozen or so international civil society organizations (ICSOs) thinking about what is disrupting our space and strategizing about how to help the space, including our organizations, become more resilient and adaptive to disruption. Participants in the meeting came from several types of organizations (large INGOs doing service delivery and policy work, on-line organizing groups, social enterprises, think tanks, and big campaigning organizations), both new and old, headquartered and/or founded in both the “North” and the “South.”

We approached discussions from the premise that, like music, photos, and news, our sector does have value and does serve an important function. The world is not a perfect place, and government and the private sector need to be balanced and kept in check by a strong and organized “third sector.” However, many ICSOs are dinosaurs whose functions may be replaced by new players and new ways of working that better fit the external environment.

Changes around and within organizations are being prompted by a number of converging factors, including new technology, global financial shifts, new players and ways of working, and new demands from “beneficiaries,” constituencies, and donors. All of these involve shifting power. On top of power shifts, an environmental disaster looms (because we are living beyond the means of the planet), and we see civil society space closing in many contexts while at the same time organized movements are forcing open space for civic uprising and citizen voice.

ICSOs need to learn how to adapt to the shifting shape and context of civil society, and to work and collaborate in a changing ecosystem with new situations and new players. This involves:

  • Detecting and being open to changes and potential disruptors
  • Preparing in a long-term, linear way by creating more adaptive, iterative and resilient organizations
  • Responding quickly and nimbly to disruption and crises when they hit

Key elements of preparing for and navigating disruption are:

  • Maintaining trust and transparency – both internally and externally
  • Collective action
  • Adaptability
  • Being aware of and able to analyze and cope with power shifts

Organizations cannot prepare for every specific disruption or crisis, and the biggest crises and shocks come out of nowhere. ICSOs should, however, become more adaptive and agile by creating built-in responsiveness. We surfaced a number of ideas for getting better at this:

  • Networking/Exchange: actively building networks, learning across sectors, engaging and working with non-traditional partners, bringing in external thinkers and doers for exchange and learning
  • Trend spotting and constant monitoring: watching and participating in spaces where potential disruptions are springing up (for example, challenge funds, contests and innovation prizes); exit interviews to understand why innovative staff are leaving, where they are going, and why; scanning a wide range of sources (staff, people on the ground, traditional media, social media, political analysts), including all of an ICSO’s audiences – e.g., donors, supporters, communities
  • Predicting: Keeping predictable shocks on the radar (hurricane season, elections) and preparing for them; scenario planning as part of the preparatory phase
  • Listening: Ensuring that middle level, often unheard parts of the organization are listened to and that there are open and fluid communication lines between staff and middle and upper management; listening to customers, users, beneficiaries, constituencies; basically listening to everyone
  • Confident humility: Being humble and open, yet also confident, systematic and not desperate/chaotic
  • Meta-learning: Finding systematic ways to scan what is happening and understand it; learning from successes and failures at the ‘meta’ and the cross-sector level not just the organizational or project level
  • Slack time: Giving staff some slack for thinking, experimenting and reflecting; establishing a system for identifying what an organization can stop doing to enable staff to have slack time to think and be creative and try new things
  • Training: Ensuring that staff have the skills to do strategic decision making, monitoring, and scenario planning
  • Decentralized decision-making: Allowing local pods and networks to take control of decision-making rather than having all decisions weighed in on by everyone or taking place at the top or the center; this should be backed by policies and protocols that enable quick decision making at the local level and quick communication across the organization
  • Trust: Hiring staff you can trust and trusting your staff (human resources departments need strengthening in order to do this well; they need to better understand the core business and what kind of staff an organization needs in these new times)

Culture, management, and governance changes are all needed to improve an organization’s ability to adapt. Systems need to be adjusted so that organizations can be more flexible and adaptive. Organizational belief systems and values also need to shift. Trying out adaptive actions and a flexible culture in small doses – to develop an organization’s comfort level and confidence, and to amass shared experience of acting in a new way – can help move an organization forward. Leadership should also work to identify innovation across the organization, highlight it, and scale it, and to reward staff who take risks and experiment rather than punishing them.

These changes are very difficult for large, established organizations. Staff and management tend to be overworked and spread thin as it is, managing an existing workload and with little “slack time” to manage change processes. In addition, undesignated funds are shrinking, meaning that organizations have little funding to direct towards new areas or for scanning and preparing, testing and learning. Many organizations are increasingly locked into implementing projects and programs per a donor’s requirements and there are few resources to strategize and focus on organizational adaptation and change. Contractual commitments and existing promises and community partnerships can make it difficult for ICSOs to stop doing certain programs in order to dedicate resources to new areas. The problem is usually not a shortage of innovative ideas and opportunities, but rather the bandwidth to explore and test them, and the systems for determining which ideas are most likely to succeed so that scarce resources can be allocated to them.

Despite all the challenges, the organizations in the room were clear that ICSOs need to change and disrupt themselves, because if they don’t, someone else will. We profiled three types of organizations: the conservative avoider, the opportunistic navigator, and the active disruptor, and determined that the key to survival for many ICSOs will be “dialing up the pain of staying the same and reducing the pain of changing.”

What might an adaptive organization look like?

  • Focused on its mission, not its traditional means of achieving the mission (get across the water in the best way possible, don’t worry if it’s via building a bridge or taking a boat or swimming)
  • Not innovating for the sake of innovation or disrupting for the sake of it – accompanying innovation and disruption with longer-term and systematic follow through
  • Periodically updating its mission to reflect the times
  • Piloting, gaining experience, monitoring, evaluating, building evidence and learning iteratively and at the meta-level from trends and patterns
  • Sub-granting to new, innovative players and seeding new models
  • Open, in the public domain, supporting others to innovate, decentralized, networked, flexible, prepared for new levels of transparency
  • Systematically discovering new ways of working and new partners, testing them, learning and mainstreaming them
  • Keeping its ear to the ground
  • Learning to exit and say no in order to free up slack time to experiment and try new things

Many “dinosaur” organizations are adopting a head-in-the-sand approach, believing that they can rely on their age, their hierarchical systems and processes, or their brand to carry them through the current waves of change. This is no longer enough, and we can expect some of these organizations to die off. Other organizations are in the middle of an obvious shift where parts of the organization are pushing to work under new rules but other parts are not ready. This internal turmoil, along with the overstretched staff, and ineffective boards in some cases, make it difficult to deal with external disruption while managing internal change.

Newer organizations and those that are closest to the ground seem to have the best handle on disruption. They tend to be more adaptive and nimble, whereas those far from the ground can be insulated from external realities and less aware of the need for ICSOs to change. Creating a “burning platform” can encourage organizational change and a sense of urgency; however, this type of change effort needs to be guided by a clear and positive vision of why change is needed, where change is heading, and why it will be beneficial to achieving an organization’s mission.

After our week of intense discussions, the group felt we still had not answered the question: Can ICSOs be nimble? As in any ecosystem, as the threats and problems facing civil society shift and change, a wide array of responses from a number of levels, players and approaches is necessary. Some will not be fit or will not adapt and will inevitably die off. Others will shift to occupy a new space. Some will swallow others up or replace them. Totally new ones will continue to arise. For me, the important thing in the end is that the problems that civil society addresses are dealt with, not that individual organizations maintain their particular position in the ecosystem.

Read Full Post »

Older Posts »