This post was originally published on the Open Knowledge Foundation blog.
A core theme that the Open Development track covered at September’s Open Knowledge Conference was Ethics and Risk in Open Development. There were more questions than answers in the discussions, summarized below, and the Open Development working group plans to further examine these issues over the coming year.
Informed consent and opting in or out
The ethics of ‘opting in’ and ‘opting out’ are tricky when working with people in communities that have fewer resources, lower connectivity, and/or less understanding of privacy and data. Yet project implementers have a responsibility to work to the best of their ability to ensure that participants understand what will happen with their data in general, and what might happen if it is shared openly.
There are some concerns around how these decisions are currently being made and by whom. Can an NGO make the decision to share or open data from/about program participants? Is it OK for an NGO to share ‘beneficiary’ data with the private sector in return for funding to help make a program ‘sustainable’? What liabilities might donors or program implementers face in the future as these issues develop?
Issues related to private vs. public good need further discussion, and there is no one right answer because concepts and definitions of ‘private’ and ‘public’ data change according to context and geography.
Informed participation, informed risk-taking
The ‘do no harm’ principle is applicable in emergency and conflict situations, but is it realistic to apply it to activism? There is concern that organizations implementing programs that rely on newer ICTs and open data are not ensuring that activists have enough information to make an informed choice about their involvement. At the same time, assuming that activists don’t know enough to decide for themselves can come across as paternalistic.
As one participant at OK Con commented, “human rights and accountability work are about changing power relations. Those threatened by power shifts are likely to respond with violence and intimidation. If you are trying to avoid all harm, you will probably not have any impact.” There is also the concept of transformative change: “things get worse before they get better. How do you include that in your prediction of what risks may be involved? There also may be a perception gap in terms of what different people consider harm to be. Whose opinion counts and are we listening? Are the right people involved in the conversations about this?”
A key point is that whoever assumes the risk needs to be involved in assessing that potential risk and deciding what the actions should be — but people also need to be fully informed. With new tools coming into play all the time, can people be truly ‘informed’ and are outsiders who come in with new technologies doing a good enough job of facilitating discussions about possible implications and risk with those who will face the consequences? Are community members and activists themselves included in risk analysis, assumption testing, threat modeling and risk mitigation work? Is there a way to predict the likelihood of harm? For example, can we determine whether releasing ‘x’ data will likely lead to ‘y’ harm happening? How can participants, practitioners and program designers get better at identifying and mitigating risks?
When things get scary…
Even when risk analysis is conducted, it is impossible to predict or foresee every possible way that a program can go wrong during implementation. Then the question becomes what to do when you are in the middle of something that is putting people at risk or leading to extremely negative unintended consequences. Who can you call for help? What do you do when there is no mitigation possible and you need to pull the plug on an effort? Who decides that you’ve reached that point? This is not an issue that exclusively affects programs that use open data, but open data may create new risks with which practitioners, participants and activists have less experience, thus the need to examine it more closely.
Participants felt that there is not enough honest discussion on this aspect. There is a pop culture of ‘admitting failure’ but admitting harm is different because there is a higher sense of liability and distress. “When I’m really scared shitless about what is happening in a project, what do I do?” asked one participant at the OK Con discussion sessions. “When I realize that opening data up has generated a huge potential risk to people who are already vulnerable, where do I go for help?” We tend to share our “cute” failures, not our really dismal ones.
Academia has done some work around research ethics, informed consent, human subject research and use of Institutional Review Boards (IRBs). What aspects of this can or should be applied to mobile data gathering, crowdsourcing, open data work and the like? What about when citizens are their own source of information and they voluntarily share data without a clear understanding of what happens to the data, or what the possible implications are?
Do we need to think about updating and modernizing the concept of IRBs? A major issue is that many people who are conducting these kinds of data collection and sharing activities using new ICTs are unaware of research ethics and IRBs and don’t consider what they are doing to be ‘research’. How can we broaden this discussion and engage those who may not be aware of the need to integrate informed consent, risk analysis and privacy awareness into their approaches?
The elephant in the room
Despite our good intentions to do better planning and risk management, one big problem is donors, according to some of the OK Con participants. Do donors require enough risk assessment and mitigation planning in their program proposal designs? Do they allow organizations enough time to develop a well-thought-out and participatory Theory of Change along with a rigorous risk assessment together with program participants? Are funding recipients required to report back on risks and how they played out? As one person put it, “talk about failure is currently more like a ‘cult of failure’ and there is no real learning from it. Systematically we have to report up the chain on money and results and all the good things happening, and no one at the top really wants to know about the bad things. The most interesting learning doesn’t get back to the donors or permeate across practitioners. We never talk about all the work-arounds and backdoor negotiations that make development work happen. This is a serious systemic issue.”
Greater transparency can actually be a deterrent to talking about some of these complexities, because “the last thing donors want is more complexity as it raises difficult questions.”
Reporting upwards to government representatives in Parliament or Congress leads to continued aversion to any failures or ‘bad news’. Though funding recipients are urged to be innovative, they still need to hit numeric targets so that the international aid budget can be defended in government spaces. Thus, the message is mixed: “Make sure you are learning and recognizing failure, but please don’t put anything too serious in the final report.” There is awareness that rigid program planning doesn’t work and that we need to be adaptive, yet we are asked to “put it all into a log frame and make sure the government aid person can defend it to their superiors.”
Where to from here?
It was suggested that monitoring and evaluation (M&E) could be used as a tool for examining some of these issues, but M&E needs to be seen as a learning component, not only an accountability one. M&E needs to feed into the choices people are making along the way, and linking it in well during program design may be one way to build in a more adaptive and iterative approach. M&E should force practitioners to ask themselves the right questions as they design programs and as they assess them throughout implementation. Theory of Change might help, and an ethics-based approach could be introduced as well to raise these questions about risk and privacy and ensure that they are addressed from the start of an initiative.
Practitioners have also expressed the need for additional resources to help them predict and manage possible risk: case studies, a safe space for sharing concerns during implementation, people who can help when things go pear-shaped, a menu of methodologies, a set of principles or questions to ask during program design, or even an ICT4D Implementation Hotline or a forum for questions and discussion.
These ethical issues around privacy and risk are not exclusive to Open Development. Similar issues were raised last week at the Open Government Partnership Summit sessions on whistleblowing, privacy, and safeguarding civic space, especially in light of the Snowden case. They were also raised at last year’s Technology Salon on Participatory Mapping.
A number of groups are looking more deeply into this area, including the Capture the Ocean Project, The Engine Room, IDRC’s research network, The Open Technology Institute, Privacy International, GSMA, those working on “Big Data,” those in the Internet of Things space, and others.
I’m looking forward to further discussion with the Open Development working group on all of this in the coming months, and will also be putting a little time into mapping out existing initiatives and identifying gaps when it comes to these cross-cutting ethics, power, privacy and risk issues in open development and other ICT-enabled data-heavy initiatives.
Please do share information, projects, research, opinion pieces and more if you have them!
This may seem tangential, but ethics and ICTD came up obliquely in February’s Alpine Rendezvous (ARV) workshop tackling TEL, technology enhanced learning, in its broadest sense (read it as ICT or ICTD if it helps). The aims of the workshop were to:
discuss the relationships between TEL and varieties of change, discontinuity and dislocation we observe in the wider world;
explore how communities and research traditions involved in TEL can learn from each other, particularly to bring about more open, participative, emancipatory and fluid models of TEL;
consider and shape a research agenda for TEL that will allow relevant, rigorous and useful responses on the part of educational organisations and actors to the various discontinuities we have identified.
As specific outcomes, we hoped to have:
contributed to a clearer and more politically engaged formulation of the Grand Challenges for TEL as part of the ARV process;
clarified, refined and challenged our own ideas, leading to a special issue or publication.
Whilst we certainly did address these topics, I was left with several questions:
1. Have we implicitly assumed that the western/European model of universities is necessarily the sole or best expression of a culture’s or a community’s higher learning and intellectual enquiry and endeavours?
2. Have we assumed the primacy of reason and rationality in a world where faith and loyalty are often the determinants of action? Is our depiction of crisis characterised by the increase of irrationality?
3. What is humanist about our position or our analysis? Does it make a difference? How do we relate or engage with a world of religions, and with a world where those religions are expressed with extreme certainty?
4. As western/European pedagogy, or rather the corporatised, globalised version of it, now deploys powerful and universal digital technologies in the interests of profit-driven business models, should we look at empowering and preserving more local and culturally appropriate forms of understanding, knowing, learning and enquiring?
5. Is encapsulating the world’s higher learning in institutions increasingly modelled on one format and driven by the same narrow global drivers resilient and robust enough, diverse and flexible enough to enable different communities, cultures and individuals to flourish amongst the dislocation and disruption we portray as characterising the crises?
6. Technology and education, and TEL, are not unconditionally benign and are often used in the service of the urban, monetised, industrialised, sedentarised state, at the expense of the marginal, the peripheral, the subsistence and the nomadic. In spite of apparent differences in how this plays out in the North and South, are there coherent global responses?
7. Our responses, for example personal learning environments or the digital literacies agenda, seem implicitly but unnecessarily framed within this western/European higher education discourse – can these be widened to empower other communities and cultures entitled to the critical skills and participation necessary to flourish in a world of powerful digital technologies in the hands of alien and external governments, corporations and institutions?
8. In the days before TEL, educational interventions in distant and different communities were difficult, and thus the moral focus was on duty and obligation. TEL now makes these interventions and activities easy, and the moral focus should shift to rights; rather than exploring our duty to educate these communities, should we now concentrate on exploring our right to do so?
9. Is the notion of individual one-off informed consent as the basis for research intervention inappropriate in a post-positivist world, inappropriate for collectivist cultures and inappropriate for fluid, complex and abstract systems such as TEL?
10. Education is in many ways a process of acculturation and identity transformation: of non-traditional working class students in the North, of indigenous peoples in the South. How do we reconcile access to national educational opportunities with the preservation of culture, and how do we articulate and resist the intrusion of self-interested global capital and corporations into education’s moral balance between informed consent and blissful ignorance? Education can become something people like ‘us’ do to people like ‘them’ in order to co-opt them, but do we have an ethical basis for this?
My take-away, once I’d remembered to ‘First do no harm’, was to ask, “Whose interests are being served?”
Wow what a great set of questions, John!
[…] Linda Raftree has written convincingly about this (and with much more nuance than I am able to present here) in the context of open development; John Traxler has further written extensively on ethics in the mobile context. Tim Unwin and Matt Haikin have also proven instructive in my work. I refer often to BERA’s Ethical Guidelines for Research and the AERA’s Research Ethics. As I am based in Korea, I follow its research ethics developments closely. There are many more writers and resources I could point to, but ultimately, ethics involves the practical application of ideas into activity. So this post is essentially about the preliminary steps, as I see them, towards establishing a baseline for ethical research in the development context, which apparently was a lot cloudier than I had realized. Many of the following are more a wishlist than an actual recommendation, but bear with me. […]
Hi Linda, you’re/were clearly ahead of the curve. Through IDRC funding, the SIRCA programme based at NTU Singapore is just starting research (desk, then field) to critique ‘open development’; the programme is led by Arul Chib, whilst Rich Ling and I are responsible for ‘trust in open learning and health’. I think Melissa Densmore, Marion Walton, Andy Dearden and others are involved in other themes (see http://www.sirca.org.sg/programs/about-sirca-iii/).