
Archive for November, 2013

This is a cross-post from Tom Murphy, editor of the aid blog A View From the Cave. The original article can be found on Humanosphere. The post summarizes discussions at our November 21st New York City Technology Salon: Are Mobile Money Cash Grants the Future of Development? If you’d like to join us for future Salons, sign up here.

by Tom Murphy

Decades ago, some of the biggest NGOs simply gave away money to individuals in communities. People lined up and were just given cash.

The once popular form of aid went out of fashion, but it is now making a comeback.

Over time, coordination became extremely difficult. Traveling from home to home cost the NGO time and money, and recipients faced the same problem when they had to travel to a central location. More significant was the shift in development thinking that said giving handouts was causing long-term damage.

The backlash against ‘welfare queens’ in the US, UK and elsewhere during the 1980s was reflected in international development programming. The problem was that it was all based on unproven theories of change and anecdotal evidence, rather than hard evidence.

Decades later, new research shows that just giving people money can be an effective way to build assets and even incomes. The findings were covered by major players like NPR and The Economist.

While exciting and promising, cash transfers are not a new tool in the development utility belt.

Various forms of transfers have emerged over the past decade. Food vouchers were used by the World Food Programme when responding to the 2011 famine in the Horn of Africa. Much like food stamps in the US, the vouchers let people buy food from local markets and get exactly what they needed while supporting the local economy.

The differences have sparked a sometimes heated debate within the development community as to what the findings about cash transfers mean going forward. A Technology Salon conversation hosted at ThoughtWorks in New York City last week featured some of the leading researchers and players in the cash transfer sector.

The salon-style conversation featured Columbia University professor and popular aid blogger Chris Blattman, GiveDirectly co-founder and UCSD researcher Paul Niehaus, and Plan USA CEO Tessie San Martin. The ensuing discussion, operating under the Chatham House Rule of no attribution, featured representatives from large NGOs, microfinance organizations and UN agencies.

Research from Kenya, Uganda and Liberia shows both the promise and shortcomings of cash transfers. For example, giving out cash in addition to training was successful in generating employment in Northern Uganda. Another program, with the backing of the Ugandan government, saw success with the cash alone.

Cash transfers have been put forward as a new benchmark for development and aid programs. Advocates in the discussion made the case that programs should be evaluated in terms of impact and cost-effectiveness against the alternative of just giving people cash.

That idea saw some resistance. The research from Liberia, for example, showed that money given to street youth would not be wasted, but it was not sufficient to generate long-lasting employment or income. There are capacity problems and much larger issues that probably cannot be addressed by cash alone.

An additional concern is the unintended negative consequences of cash transfers. One example given was that of refugees in Syria: money labeled for rent was distributed to families. Despite warnings not to label the transfer, the program went ahead.

As a result, rents increased. The money intended to help reduce the cost incurred by rent was rendered largely useless. One participant raised the concern that cash transfers in such a setting could be ‘taxed’ by rebels or government fighters. There is a potential that aid organizations could help fund fighting by giving unrestricted cash.

The discussion made it clear that the applications of cash transfers are far more nuanced than they might appear. Kenya saw success in part because of the ease of sending money to people through mobile phones. Newer programs in India, for example, rely on what are essentially ATM cards.

Impacts, practitioners admitted, can go beyond simple incomes. Care has been taken to make sure that implementing cash transfer programs does not dramatically change social structures in ways that cause problems for the community and recipients. In one case, giving women cash allowed them to participate in the local markets, a benefit to everyone except the existing shop oligarchs.

Governments in low- and middle-income countries are facing increasing pressure to establish social programs. The success of cash transfer programs in Brazil and Mexico indicates that they can be an effective way to lift people out of poverty. Testing is underway to bring about more efficient and context-appropriate cash transfer schemes.

An important component in the re-emergence of cash transfers is looking back to previous efforts, said one NGO official. The official’s organization is systematically revisiting communities where it used to work in order to see what happened ten years later. The idea is to learn what impact its work may or may not have had on those communities in order to inform future initiatives.

“Lots of people have concerns about cash, but we should have concerns about all the programs we are doing,” said a participant.

The lessons from the cash transfer research show that there is an increasing need for better evidence across development and aid programs. Researchers in the group argued that the ease of doing evaluations is improving.

Read the “Storified” version of the Technology Salon on Mobiles and Cash Transfers here.



At the November 8th Technology Salon in New York City, we looked at the role of ICTs in communication for development (C4D) initiatives with marginalized adolescent girls. Lead discussants Kerida McDonald and Katarzyna Pawelczyk discussed recent UNICEF reports related to the topic, and John Zoltner spoke about FHI360’s C4D work in practice.

To begin, it was pointed out that C4D is not donor communications or marketing. It is the use of communication approaches and methodologies to achieve influence at various levels —  e.g., family, institutional and policy —  to change behavior and social norms. C4D is one approach that is being used to address the root causes of gender inequality and exclusion.

As the UNICEF report on ICTs and C4D* notes, girls may face a number of situations that contribute to and/or are caused by their marginalization: early pregnancy, female genital cutting, early marriage, high rates of HIV/AIDS, low levels of education, lack of control over resources. ICTs alone cannot resolve these, because there is a deep and broad set of root causes. However, ICTs can be integrated systematically into the set of C4D tools and approaches that contribute to positive change.

Issues like bandwidth, censorship and electricity need to be considered when integrating ICTs into C4D work, and approaches that fit the context need to be developed. Practitioners should use tools that are in the hands of girls and their communities now, yet be aware of advances in access and new technologies, as these change rapidly.

Key points:

Interactivity is more empowering than one-way messaging: Many of the ICT solutions being promoted today focus on sending messages out via mobile phones. However, C4D approaches aim for interactive, multi-channel, multi-directional communication, which has proven more empowering.

Content: Traditional media normally goes through a rigorous editorial process and it is possible to infuse it with a gender balance. Social media does not have the same type of filters, and it can easily be used to reinforce stereotypes about girls. This is something to watch and be aware of.

Purpose: It’s common with ICT-related approaches to start with the technology rather than with the goals. As one Salon participant asked, “What are the results we want to see for ourselves? What are the results that girls want to see? What are the root causes of discrimination and how are we trying to address them? What does success look like for girls? For organizations? Is there a role for ICTs in helping achieve success? If so, what is it?” These questions need to be the starting point, rather than the technology.

Participation: One Salon participant mentioned a 2-year project that is working together with girls to define their needs and their vision of success. The process is one of co-design, and it is aimed at understanding what girls want. Many girls expressed a feeling of isolation and desire for connection, and so the project is looking at how ICTs can help them connect. As the process developed, the diversity of needs became very clear and plans have changed dramatically based on input from a range of girls from different contexts. Implementers need to be prepared to change, adapt and respond to what girls say they want and to local realities.

****

A second study commissioned by UNICEF explores how young people use social media. The researchers encountered some challenges in terms of a strong gender approach for the study. Though a gender lens was used for analysis, there is little available data disaggregated by sex. The study does not focus on the most marginalized, because it looks at the use of social media, which normally requires a data connection or Internet access, which the most marginalized youth usually do not have.

The authors of the report found that youth most commonly used the Internet and social media for socializing and communicating with friends. Youth connected less often for schoolwork. One reason for this may be that in the countries/contexts where the research took place, there is no real integration of ICTs into the school system. It was emphasized that the findings in the report are not comparable or nationally representative, and blanket statements such as “this means x for the whole developing world” should be avoided.

Key points:

Self-reporting biases. Boys tend to have higher levels of confidence and self-report greater ICT proficiency than girls do. This may skew results and make it seem that boys have higher skill levels.

Do girls really have less access? We often hear that girls have less access than boys. The evidence gathered for this particular report suggests the answer is “yes and no.” In some places, when researchers asked, “Do you have access to a mobile?” there was not a huge difference between urban and rural or between boys and girls. When they dug deeper, however, it became more complex. In the case of Zambia, access and ownership were similar for boys and girls, but fewer girls were connecting at all to the Internet as compared to boys. Understanding connectivity and use was quite complicated.

What are girls vs. boys doing online? This is an important factor when thinking about what solutions are applicable to which situation(s). Differences came up here in the study. In Argentina, girls were doing certain activities more frequently, such as chatting and looking for information, but they were not gaming. In Zambia, girls were doing some things less often than boys; for example, fewer girls than boys were looking for health information, although the number was still significant. A notable finding was that both girls and boys were accessing general health information more often than they were accessing sensitive information, such as sexual health or mental health.

What are the risks in the online world? A qualitative portion of the study in Kenya used focus groups with girls and boys, and asked about their uses of social media and experience of risk. Many out-of-school girls aged 15-17 reported that they used social media as a way to meet a potential partner to help them out of their financial situation. They reported riskier behavior, contact with older men, and relationships more often than girls who were in school. Girls in general were more likely than boys to report unpleasant online encounters, for example, requests for self-exposure photos.

Hiding social media use. Most of the young people that researchers spoke with in Kenya were hiding social media use from their parents, who disapproved of it. This is an important point to note in C4D efforts that plan on using social media, and program designers will want to take parental attitudes about different media and communication channels into consideration as they design C4D programs.

****

When implementing programs, it is noteworthy how boys and girls tend to use ICT and media tools. Gender issues often manifest themselves right away. “The boys grab the cameras, the boys sit down first at the computers.” If practitioners don’t create special rules and a safe space for girls to participate, girls may be marginalized. In practical ICT and media work, it’s common for boys and girls to take on certain roles. “Some girls like to go on camera, but more often they tend to facilitate what is being done rather than star in it.” The gender gap in ICT access and use, where it exists, is a reflection of the power gaps of society in general.

In the most rural areas, even when people have access, they usually don’t have the resources and skills to use ICTs. Very simple challenges can affect girls’ ability to participate in projects; for example, oftentimes a project will hold training at times when it’s difficult for girls to attend. Unless someone systematically goes through and applies a gender lens to a program, organizations often don’t notice the challenges girls may face in participating. It’s not enough to do gender training or measure gender once a year; gendered approaches need to be built into program design.

Long-term interventions are needed if the goal is to emancipate girls, help them learn better, graduate, postpone pregnancy, and get a job. This cannot be done in a year with a simple project that has only one focus, because girls are dealing with education, healthcare, and a whole series of very entrenched social issues. What’s needed is to follow a cohort of girls and to provide information and support across all these sectors over the long term.

Key points:

Engaging boys and men: Negative reactions from men are a concern if and when girls and women start to feel more empowered or to access resources. For example, some mobile money and cash transfer programs direct funds to girls and women, and some studies have found that violence against women increases when women start to have more money and more freedom. Another study, however, of a small-scale effort that provides unconditional cash transfers to girls ages 18-19 in rural Kenya, is demonstrating just the opposite: girls have been able to say where money is spent and the gender dynamics have improved. This raises the question of whether program methodologies need to be oriented towards engaging boys and men and involving them in changing gender dynamics, and whether engaging boys and men can help avoid an increase in violence. Working with boys to become “girl champions” was cited as a way to help to bring boys into the process as advocates and role models.

Girls as producers, not just consumers. ICTs are not only tools for sending content to girls. Some programs are working to help girls produce content and create digital stories in their own languages. Sometimes these stories are used to advocate to decision makers for change in favor of girls and their agendas. Digital stories are being used as part of research processes and to support monitoring, evaluation and accountability work through ‘real-time’ data.

ICTs and social accountability. Digital tools are helping young people address accountability issues and inform local and national development processes. In some cases, youth are able to use simple, narrow bandwidth tools to keep up to date on actions of government officials or to respond to surveys to voice their priorities. Online tools can also lead to offline, face-to-face engagement. One issue, however, is that in some countries, youth are able to establish communication with national government ministers (because there is national-level capacity and infrastructure) but at local level there is very little chance or capability for engagement with elected officials, who are unprepared to respond and engage with youth or via social media. Youth therefore tend to bypass local government and communicate with national government. There is a need for capacity building at local level and decentralized policies and practices so that response capacity is strengthened.

Do ICTs marginalize girls? Some Salon participants worried that as conversations and information increasingly move to a digital environment, ICTs are magnifying the information and communication divide and further marginalizing some girls. Others felt that the fact that we are able to reach the majority of the world’s population now is very significant, and the inability to reach absolutely everyone doesn’t mean we should stop using ICTs. For this very reason – because sharing of information is increasingly digital – we should continue working to get more girls online and strengthen their confidence and abilities to use ICTs.

Many thanks to UNICEF for hosting the Salon!

(Salons operate under Chatham House Rule, thus no attribution has been given in the above summary. Sign up here if you’d like to attend Salons in the future!)

*Disclosure: I co-authored this report with Keshet Bachan.



America Meet World – a contest to source the best comedy the world has to offer.

I met Trina Das Gupta when she headed up the GSMA’s mWomen program. We bonded on a personal level over a shared frustration with old-school aid and development approaches and an urge to change the way the US public understands the rest of the world.

Trina is always one step ahead of everyone, and she has the know-how and connections to make her visions happen. It’s no surprise, then, that she’s moving full steam ahead with a cool new idea that aims to bring people together through comedy: America Meet World.

Rather than get preachy about race and stereotyping, America Meet World will take Americans around the world through laughter and entertainment. The premise is that there is more that makes us the same than that makes us different.

To curate content, Trina’s production company Single Palm Tree has launched a global video contest designed to find the best comedy the world has to offer. The contest’s winner (the video with the most votes) will win a sit-down meeting with executives from The Daily Show with Jon Stewart, and have a chance to ask for advice on building an entertainment career in the US.

Videos will also be reviewed by an all-star panel of judges, including Rebecca Paoletti, former head of video at Yahoo! North America and CEO of CakeWorks; Tim Rosta, senior VP of Integrated Marketing at E! Entertainment Networks; Baratunde Thurston, CEO and Co-Founder of Cultivated Wit and former digital director of The Onion; and John Vorhaus, best-selling author of The Comic Toolbox: How to be Funny Even if You’re Not.

Finalist videos will be featured on AmericaMeetWorld.com and across syndicated channels. The videos submitted, along with other content that Single Palm Tree produces and curates, will feed into a digital video portal that connects US audiences with global comedians. The portal will go live in 2014.

Categories include stand-up, sketch, reality, and satire. Video submissions must be from Africa, Asia, Latin America or the Middle East, and they must not exceed 3 minutes. They must be in English or have accurate English subtitles. See contest rules and tips on how to create a successful entry here.

All you comedians, now’s your chance!


Back in May I participated in a discussion on whether and how International Civil Society Organizations (ICSOs) are adapting to changes around them. Here’s my summary of the conversations: Can International Civil Society Organizations be nimble?

A final report from the meeting is ready (download Riding the Wave: A proposal for Boards and CEOs on how to prepare their organizations for disruptive change) and here’s a video summary:

I’m curious what other folks think about this topic and the analysis and recommendations in the report.


This is a guest post from Anna Crowe, Research Officer on the Privacy in the Developing World Project, and Carly Nyst, Head of International Advocacy at Privacy International, a London-based NGO working on issues related to technology and human rights, with a focus on privacy and data protection. Privacy International’s new report, Aiding Surveillance, which covers this topic in greater depth, was released this week.

by Anna Crowe and Carly Nyst


New technologies hold great potential for the developing world, and countless development scholars and practitioners have sung the praises of technology in accelerating development, reducing poverty, spurring innovation and improving accountability and transparency.

Worryingly, however, privacy is often presented as a luxury that creates barriers to development, rather than as a key aspect of sustainable development. This perspective needs to change.

Privacy is not a luxury, but a fundamental human right

New technologies are being incorporated into development initiatives and programmes relating to everything from education to health and elections, and in humanitarian initiatives, including crisis response, food delivery and refugee management. But many of the same technologies being deployed in the developing world with lofty claims and high price tags have been extremely controversial in the developed world. Expansive registration systems, identity schemes and databases that collect biometric information including fingerprints, facial scans, iris information and even DNA, have been proposed, resisted, and sometimes rejected in various countries.

The deployment of surveillance technologies by development actors, foreign aid donors and humanitarian organisations, however, is often conducted in the complete absence of the type of public debate or deliberation that has occurred in developed countries. Development actors rarely consider target populations’ opinions when approving aid programmes. Important strategy documents such as the UN Office for the Coordination of Humanitarian Affairs’ Humanitarianism in the Network Age and the UN High-Level Panel on the Post-2015 Development Agenda’s A New Global Partnership: Eradicate Poverty and Transform Economies through Sustainable Development give little space to the possible impact adopting new technologies or data analysis techniques could have on individuals’ privacy.

Some of this trend can be attributed to development actors’ systematic failure to recognise the risks to privacy that development initiatives present. However, it also reflects an often unspoken view that the right to privacy must necessarily be sacrificed at the altar of development – that privacy and development are conflicting, mutually exclusive goals.

The assumptions underpinning this view are as follows:

  • that privacy is not important to people in developing countries;
  • that the privacy implications of new technologies are not significant enough to warrant special attention;
  • and that respecting privacy comes at a high cost, endangering the success of development initiatives and creating unnecessary work for development actors.

These assumptions are deeply flawed. While it should go without saying, privacy is a universal right, enshrined in numerous international human rights treaties, and matters to all individuals, including those living in the developing world. The vast majority of developing countries have explicit constitutional requirements to ensure that their policies and practices do not unnecessarily interfere with privacy. The right to privacy guarantees individuals a personal sphere, free from state interference, and the ability to determine who has information about them and how it is used. Privacy is also an “essential requirement for the realization of the right to freedom of expression”. It is not an “optional” right that only those living in the developed world deserve to see protected. To presume otherwise ignores the humanity of individuals living in various parts of the world.

Technologies undoubtedly have the potential to dramatically improve the provision of development and humanitarian aid and to empower populations. However, the privacy implications of many new technologies are significant and are not well understood by many development actors. The expectations that are placed on technologies to solve problems need to be significantly circumscribed, and the potential negative implications of technologies must be assessed before their deployment. Biometric identification systems, for example, may assist in aid disbursement, but if they also wrongly exclude whole categories of people, then the objectives of the original development intervention have not been achieved. Similarly, border surveillance and communications surveillance systems may help a government improve national security, but may also enable the surveillance of human rights defenders, political activists, immigrants and other groups.

Asking humanitarian actors to protect and respect privacy rights must not be distorted into requiring inflexible and impossibly high standards that would derail development initiatives if put into practice. Privacy is not an absolute right and may be limited, but only where limitation is necessary, proportionate and in accordance with law. The crucial aspect is to actually undertake an analysis of the technology and its privacy implications and to do so in a thoughtful and considered manner. For example, if an intervention requires collecting personal data from those receiving aid, the first step should be to ask what information is necessary to collect, rather than just applying a standard approach to each programme. In some cases, this may mean additional work. But this work should be considered in light of the contribution that upholding human rights and the rule of law makes to development and to producing sustainable outcomes. And in some cases, respecting privacy can also mean saving lives, as information falling into the wrong hands could spell tragedy.

A new framing

While there is an increasing recognition among development actors that more attention needs to be paid to privacy, it is not enough to merely ensure that a programme or initiative does not actively harm the right to privacy; instead, development actors should aim to promote rights, including the right to privacy, as an integral part of achieving sustainable development outcomes. Development is not just, or even mostly, about accelerating economic growth. The core of development is building capacity and infrastructure, advancing equality, and supporting democratic societies that protect, respect and fulfill human rights.

The benefits of development and humanitarian assistance can be delivered without unnecessary and disproportionate limitations on the right to privacy. The challenge is to improve access to and understanding of technologies, ensure that policymakers and the laws they adopt respond to the challenges and possibilities of technology, and generate greater public debate to ensure that rights and freedoms are negotiated at a societal level.

Technologies can be built to satisfy both development and privacy.

Download the Aiding Surveillance report.


This post was originally published on the Open Knowledge Foundation blog

A core theme that the Open Development track covered at September’s Open Knowledge Conference was Ethics and Risk in Open Development. There were more questions than answers in the discussions, summarized below, and the Open Development working group plans to further examine these issues over the coming year.

Informed consent and opting in or out

Ethics around ‘opt in’ and ‘opt out’ when working with people in communities with fewer resources, lower connectivity, and/or less of an understanding about privacy and data are tricky. Yet project implementers have a responsibility to work to the best of their ability to ensure that participants understand what will happen with their data in general, and what might happen if it is shared openly.

There are some concerns around how these decisions are currently being made and by whom. Can an NGO make the decision to share or open data from/about program participants? Is it OK for an NGO to share ‘beneficiary’ data with the private sector in return for funding to help make a program ‘sustainable’? What liabilities might donors or program implementers face in the future as these issues develop?

Issues related to private vs. public good need further discussion, and there is no one right answer because concepts and definitions of ‘private’ and ‘public’ data change according to context and geography.

Informed participation, informed risk-taking

The ‘do no harm’ principle is applicable in emergency and conflict situations, but is it realistic to apply it to activism? There is concern that organizations implementing programs that rely on newer ICTs and open data are not ensuring that activists have enough information to make an informed choice about their involvement. At the same time, assuming that activists don’t know enough to decide for themselves can come across as paternalistic.

As one participant at OK Con commented, “human rights and accountability work are about changing power relations. Those threatened by power shifts are likely to respond with violence and intimidation. If you are trying to avoid all harm, you will probably not have any impact.” There is also the concept of transformative change: “things get worse before they get better. How do you include that in your prediction of what risks may be involved? There also may be a perception gap in terms of what different people consider harm to be. Whose opinion counts and are we listening? Are the right people involved in the conversations about this?”

A key point is that whoever assumes the risk needs to be involved in assessing that potential risk and deciding what the actions should be — but people also need to be fully informed. With new tools coming into play all the time, can people be truly ‘informed’ and are outsiders who come in with new technologies doing a good enough job of facilitating discussions about possible implications and risk with those who will face the consequences? Are community members and activists themselves included in risk analysis, assumption testing, threat modeling and risk mitigation work? Is there a way to predict the likelihood of harm? For example, can we determine whether releasing ‘x’ data will likely lead to ‘y’ harm happening? How can participants, practitioners and program designers get better at identifying and mitigating risks?

When things get scary…

Even when risk analysis is conducted, it is impossible to predict or foresee every possible way that a program can go wrong during implementation. Then the question becomes what to do when you are in the middle of something that is putting people at risk or leading to extremely negative unintended consequences. Who can you call for help? What do you do when there is no mitigation possible and you need to pull the plug on an effort? Who decides that you’ve reached that point? This is not an issue that exclusively affects programs that use open data, but open data may create new risks with which practitioners, participants and activists have less experience, thus the need to examine it more closely.

Participants felt that there is not enough honest discussion on this aspect. There is a pop culture of ‘admitting failure’ but admitting harm is different because there is a higher sense of liability and distress. “When I’m really scared shitless about what is happening in a project, what do I do?” asked one participant at the OK Con discussion sessions. “When I realize that opening data up has generated a huge potential risk to people who are already vulnerable, where do I go for help?” We tend to share our “cute” failures, not our really dismal ones.

Academia has done some work around research ethics, informed consent, human subjects research and the use of Institutional Review Boards (IRBs). What aspects of this can or should be applied to mobile data gathering, crowdsourcing, open data work and the like? What about when citizens are their own source of information and they voluntarily share data without a clear understanding of what happens to the data, or what the possible implications are?

Do we need to think about updating and modernizing the concept of IRBs? A major issue is that many people who are conducting these kinds of data collection and sharing activities using new ICTs are unaware of research ethics and IRBs and don’t consider what they are doing to be ‘research’. How can we broaden this discussion and engage those who may not be aware of the need to integrate informed consent, risk analysis and privacy awareness into their approaches?

The elephant in the room

Despite our good intentions to do better planning and risk management, one big problem is donors, according to some of the OK Con participants. Do donors require enough risk assessment and mitigation planning in their program proposal designs? Do they allow organizations enough time to develop a well-thought-out and participatory Theory of Change, along with a rigorous risk assessment conducted together with program participants? Are funding recipients required to report back on risks and how they played out? As one person put it, “talk about failure is currently more like a ‘cult of failure’ and there is no real learning from it. Systematically we have to report up the chain on money and results and all the good things happening, and no one at the top really wants to know about the bad things. The most interesting learning doesn’t get back to the donors or permeate across practitioners. We never talk about all the work-arounds and backdoor negotiations that make development work happen. This is a serious systemic issue.”

Greater transparency can actually be a deterrent to talking about some of these complexities, because “the last thing donors want is more complexity as it raises difficult questions.”

Reporting upward to government representatives in Parliament or Congress leads to continued aversion to any failures or ‘bad news’. Though funding recipients are urged to be innovative, they still need to hit numeric targets so that the international aid budget can be defended in government spaces. Thus, the message is mixed: “Make sure you are learning and recognizing failure, but please don’t put anything too serious in the final report.” There is awareness that rigid program planning doesn’t work and that we need to be adaptive, yet we are asked to “put it all into a log frame and make sure the government aid person can defend it to their superiors.”

Where to from here?

It was suggested that monitoring and evaluation (M&E) could be used as a tool for examining some of these issues, but M&E needs to be seen as a learning component, not only an accountability one. M&E needs to feed into the choices people are making along the way, and linking it in well during program design may be one way to enable a more adaptive and iterative approach. M&E should force practitioners to ask themselves the right questions as they design programs and as they assess them throughout implementation. A Theory of Change might help, and an ethics-based approach could be introduced as well to raise these questions about risk and privacy and ensure that they are addressed from the start of an initiative.

Practitioners have also expressed the need for additional resources to help them predict and manage possible risk: case studies, a safe space for sharing concerns during implementation, people who can help when things go pear-shaped, a menu of methodologies, a set of principles or questions to ask during program design, or even an ICT4D Implementation Hotline or a forum for questions and discussion.

These ethical issues around privacy and risk are not exclusive to Open Development. Similar issues were raised last week at the Open Government Partnership Summit sessions on whistleblowing, privacy, and safeguarding civic space, especially in light of the Snowden case. They were also raised at last year’s Technology Salon on Participatory Mapping.

A number of groups are looking more deeply into this area, including the Capture the Ocean Project, The Engine Room, IDRC’s research network, the Open Technology Institute, Privacy International, GSMA, those working on “Big Data,” those in the Internet of Things space, and others.

I’m looking forward to further discussion with the Open Development working group on all of this in the coming months, and will also be putting a little time into mapping out existing initiatives and identifying gaps when it comes to these cross-cutting ethics, power, privacy and risk issues in open development and other ICT-enabled data-heavy initiatives.

Please do share information, projects, research, opinion pieces and more if you have them!
