
Archive for the ‘accountability’ Category

Imagery and stories used to frame issues of humanitarian development for advocacy and funding are often sensational and culturally disrespectful, representing those living in poverty as helpless victims in need, rather than as empowered and capable individuals.

Fueled by intense Twitter and blog discussions about this topic (often referred to as “poverty porn”), a few years ago a number of us decided to work together to create a space for wider dialogue around how the people that aid and development organizations support are represented.


Today we launch our multimedia platform Regarding Humanity, or “Re: Hum.” The platform aims to engage practitioners, educators, and students in discussions on how to represent the communities we work with in relevant and respectful ways.

Re: Hum is a website and blog that explores how the ways we see, listen to, and frame stories of “the poor” often strip individuals of their agency and create a general sense of hopelessness and disempowerment. More respectful and relevant methods of seeing, listening and framing can help tell more nuanced stories that respect people and their complex realities.

The Re: Hum website will source content from a diverse set of authors and creators in order to bring a global perspective to the issue. It will serve as an educational resource and discussion forum to teach visual literacy, the importance of ethnography, and ways to maintain narrative integrity. We will be expanding to a discussion series, research, and an educational curriculum over time and as resources permit.

We invite you to take a quick look at Re: Hum (which is still a work-in-progress) and let us know your thoughts and suggestions on how to generate constructive conversation and learning on this issue!

Read Full Post »

This is a summary of the May 14th Technology Salon in New York City on “Does social media exacerbate poverty porn?”

  1. MT @nidhi_c: #povertyporn is never about the beneficiary..it’s an org’s attempt to stay relevant. #techsalon
  2. RT @nidhi_c: How do we teach orgs & journalists that #povertyporn isn’t necessary to raise $ or awareness? #techsalon
  3. Fr my notes on today’s NYC #TechSalon on #povertyporn: Ppl visit Africa like it’s the zoo. Do you know the names of ppl u took photos with?
  4. #povertyporn is disempowering both to those it portrays and to those it’s aimed at, bc you only offer one solution – yours. #techsalon
  5. Telling only “positive” stories is also not the solution. Life anywhere is not only one or the other. It’s complex. #povertyporn #techsalon
  6. Great question from @meowtree & #TechSalon on #povertyporn – Do you know the names of people you take photos with?
  7. Stop hijacking people’s stories by putting your NGO/organization at the center of it. #povertyporn #techsalon
  8. transmedia storytelling is one good way to bring in more stories fr more angles to create a diverse narrative #techsalon #povertyporn
  9. Don’t need to take the western voice out. We need all the voices. But often the most vulnerable are not included. #povertyporn #techsalon
  10. Why do donors demand impact evals for ‘regular’ devt pjcts, but to prove soc med impact they’re fine w likes/clicks? #povertyporn #techsalon
  11. Where is the accountability to small individual donors/donations garnered via social media, i.e. for Kony2012? #povertyporn #techsalon
  12. Why do ppl from US photo’d during tragedy have name/story, yet ppl from other places are unknown victims? #povertyporn #techsalon
  13. Yet also, are we respecting privacy, consent and dignity when we photograph ppl in other countries? #povertyporn #techsalon
  14. Why ppl in US portrayed as heroes after tragedy (eg., Boston, 911) but not first responders in other countries? #povertyporn #techsalon
  15. How to get influentials w lrge audience (eg N Kristof) to see that external hero narrative not helpful in long term? #povertyporn #techsalon
  16. #PovertyPorn not only problem of “white” saviors/Africa. Privileged often view “the poor” this way in their own countries #techsalon
  17. Social media does not necessarily reduce the “othering” of #povertyporn. We still create our own filter bubbles #techsalon
  18. NGOs send media teams to find pre-conceived #povertyporn stories. Eg: this post by @morealtitude ht.ly/l23yQ #techsalon
  19. Diff to change existing orgs/system. But what is being done in schools re global education and media literacy? #povertyporn #techsalon
  20. Sites like everydayafrica.tumblr.com can help to overcome the #povertyporn narrative. #techsalon
  21. Or google “tumblr” and any country a hashtag (eg., #elsalvador) to find a diverse range of images, not only #povertyporn #techsalon
  22. How can we harness social media to show this range of images/realities to overcome the #povertyporn narrative? #techsalon
  23. And I’ll stop now – here’s @viewfromthecave‘s summary of this really thought provoking #techsalon on #povertyporn ht.ly/l24sJ

Read Full Post »

This is a cross-post from Tessie San Martin, CEO of Plan International USA. Tessie’s original post is published on the Plan International USA blog. For more on the status of the International Aid Transparency Initiative (IATI) in the US and for information on which donors sit where on the Transparency Index, visit Publish What You Fund.

Over 40 governments, along with UN organizations and the World Bank, have committed to a common standard and time schedule for publishing aid information under the International Aid Transparency Initiative (IATI).  There are high expectations for this initiative. The ultimate objective is to increase the effectiveness of donor assistance, making aid work for those whom we are trying to help and contributing to accelerated development outcomes on the ground. IATI is good news for increased accountability, can help improve coordination, and provides a space for engaging donors, communities, governments and the general public in a broader development dialogue.

Secretary of State Clinton signed on behalf of the US Government in November 2011. While US engagement has been very welcome, US Government performance in terms of actually executing IATI has left much to be desired. Publish What You Fund, an organization helping to ensure governments are held to their initial aid transparency commitments, ranked only one out of six agencies (MCC) in the ‘fair’ category in terms of execution. Recently, organizations like Oxfam and ONE have rightly questioned the US Government’s commitment and progress, and exhorted the Obama administration to make full compliance with the IATI standard a priority.

But with all the attention focused on how the USG is performing, what are INGOs doing about IATI? After all, governments can only open access to the data they have. Official development assistance is a shrinking proportion of total aid flows, so having INGOs — particularly, and at the very least, the largest global INGOs — also committed to this process is vital to the success of the Initiative.

What are INGOs doing about IATI? The answer is: not much.

Very few INGOs have committed to publishing their information to the IATI standard.  INGOs that have complied are doing so primarily because a donor is requiring it.  For example, DfID, the UK foreign aid agency, has such a requirement and, as a result, the UK has the largest number of INGOs in compliance.  The US Government has not imposed this requirement on US-based INGOs and it is not likely to do so in the future.  It is therefore not surprising that US-based INGOs have not shown much interest in IATI.
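For readers unfamiliar with the standard, “publishing to IATI” concretely means producing structured XML activity records. Below is a minimal, illustrative sketch built with Python’s standard library; the organization, identifier and project are invented, and real IATI files carry many more fields (dates, sectors, budgets, transactions).

```python
import xml.etree.ElementTree as ET

# Build a minimal activity record. Element names follow the IATI activity
# standard; the organization, identifier and title below are invented.
activities = ET.Element("iati-activities", version="1.03")
activity = ET.SubElement(activities, "iati-activity", {"default-currency": "USD"})

ET.SubElement(activity, "iati-identifier").text = "XM-EXAMPLE-001-PROJ42"
org = ET.SubElement(activity, "reporting-org", {"ref": "XM-EXAMPLE-001", "type": "21"})
org.text = "Example INGO"  # type "21" denotes an international NGO in the IATI codelists
ET.SubElement(activity, "title").text = "Community health education project"
ET.SubElement(activity, "activity-status", code="2")  # 2 = implementation

print(ET.tostring(activities, encoding="unicode"))
```

The point of the uniform format is exactly what the post argues below: once every organization publishes records in the same structure, the data become comparable across donors and implementers.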

This is a lost opportunity for everyone.  Accountability and transparency are as relevant to the private and the non-profit side of development assistance as they are to the public side.

At Plan International, an INGO with offices in almost 70 countries, it is not surprising that the part of our organization making the fastest strides in this area is our office in the United Kingdom.  As an important recipient of DfID money, they were instructed to do so.  In the US, though Plan International USA is not a major recipient of USG funding, we believe that making the investment to comply with the IATI reporting format and timelines is good development practice; we are thus committed to publishing to IATI in the next year.  How can we effectively preach transparency and increased accountability to our recipient communities and to the governments with which we are working, yet not commit to something as eminently common-sensical as uniform formats, comparable data sets and systematic reporting frequencies?

We are not Pollyannaish about the task.  Like all INGOs pondering whether and how to comply with IATI, we have many concerns, including the costs of complying and what it will do to our overhead (and therefore to something like our Charity Navigator rating).   We have established an internal project code so we can better capture, track and understand the costs involved in this initiative.  And we are evaluating where we draw the line in terms of the size of the projects on which we should be reporting, balancing costs with the desire to maximize disclosure (it is also worth remembering that rating agencies themselves are placing increasing emphasis on transparent reporting, so rating concerns may ultimately support a move towards greater IATI compliance).

As we have moved forward, we have had many issues to address, including privacy concerns, since a fair bit of Plan’s internal documentation was not written with the idea that it would one day be shared with the public.  Publishing some information may pose security risks for minority or political groups being supported.  These situations have been contemplated by IATI already, however, and there are valid exemptions for sensitive data.  We have also learned that there are many resources to help INGOs navigate the IATI compliance waters.  These resources are not well known to US INGOs, and need to be better publicized. Plan in the US, of course, is also benefiting from the research and hard work our UK office has done to comply with DfID’s mandate, allowing us to start off on a strong foundation of organizational experience.

I am convinced that IATI is not just good development practice but also makes good business sense. At the same time, it is worth remembering that IATI is not the entire solution.  IATI is designed to improve upward accountability to donors and taxpayers.  It is not designed explicitly to improve accountability to the children and communities with which we are partnering and whom we serve. And, as the ultimate goal is improved aid effectiveness, data must be accompanied by better information about goals, methodologies and approaches.  We also need to get better at sharing not just successes but failures within our federation and across all development organizations.

Despite all the shortcomings, IATI is a good start.  And as we push the US Government to do better, INGOs need to be pushing themselves to do better as well.

Read Full Post »

At the Community of Evaluators’ Evaluation Conclave last week, Jill Hannon from Rockefeller Foundation’s Evaluation Office and I organized a session on ICTs for Monitoring and Evaluation (M&E) as part of our efforts to learn what different organizations are doing in this area and better understand some of the challenges. We’ll do a couple of similar sessions at the Catholic Relief Services ICT4D Conference in Accra next week, and then we’ll consolidate what we’ve been learning.

Key points raised at this session covered experiences with ICTs in M&E and with ICT4D more generally, including:

ICTs have their advantages, including ease of data collection (especially as compared to carrying around paper forms); ability to collect and convey information from a large and diversely spread population through solutions like SMS; real-time or quick processing of information and ease of feedback; improved decision-making; and administration of large programs and funding flows from the central to the local level.
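As a concrete (and hypothetical) illustration of SMS-based collection, the sketch below parses structured text reports into records that can be processed in near real time. The keyword and message format are invented; real deployments built on tools like RapidSMS or FrontlineSMS define their own.

```python
import datetime

# Toy parser for structured SMS reports such as "STOCK clinic_3 12".
def parse_report(sender, text):
    parts = text.strip().split()
    if len(parts) != 3 or parts[0].upper() != "STOCK" or not parts[2].isdigit():
        return None  # a real system would reply to the sender with help text
    return {"sender": sender, "site": parts[1], "count": int(parts[2]),
            "received": datetime.datetime.now().isoformat()}

print(parse_report("+221700000000", "STOCK clinic_3 12"))
print(parse_report("+221700000000", "hello"))  # None: unrecognized format
```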

Capacity is lacking in the use of ICTs for M&E. In the past, the benefits of ICTs had to be sold. Now, the benefits seem to be clear, but there is not enough rigor in the process of selecting and using ICTs. Many organizations would like to use ICT but do not know how or whom to approach to learn. A key struggle is tailoring ICTs to suit M&E needs and goals and ensuring that the tools selected are the right ones for the job and the user. Organizations have a hard time deciding whether it is appropriate to use ICTs, and once they decide, they have trouble determining which solutions are right for their particular goals. People commonly start with the technology, rather than considering what problem they want the technology to help resolve. Often the person developing the M&E framework does not understand ICT, and the person developing the ICT does not understand M&E. There is a need to further develop the capacities of M&E professionals who are using ICT systems. Many ICT solutions exist, but organizations don’t know what questions to ask about them, and there is not enough information available in an easily understandable format to help them make decisions.

Mindsets can derail ICT-related efforts. Threats and fears around transparency can create resistance among employees to adopting new ICT tools for M&E. In some cases, lack of political will makes it difficult to bring about institutional change. Earlier experiences of failure when using ICTs (e.g., stolen or broken PCs or PDAs) can also ruin the appetite for trying ICTs again. One complaint was that some government employees nearing retirement age will participate in training as a perk or to collect per diem, yet be uninterested in actually learning any new ICT skills. This can take away opportunities from younger staff who may have a real interest in learning and implementing new approaches.

Privacy needs further study and care. It is not clear whether those who provide information through the Internet, SMS, etc., understand how it is going to be used, and organizations often do not do a good job of explaining. Lack of knowledge of, and trust in, the privacy of their responses can affect people’s willingness to respond or the accuracy of their answers. More effort needs to be made to guarantee privacy and build trust. Technological solutions such as data encryption can be implemented, but human behavior is likely the bigger challenge. Paper surveys with sensitive information often get piled up in a room where anyone could see them. In the same way, people do not take care to keep data collected via ICTs safe; for example, they often share passwords. Organizations and agencies need to take privacy more seriously.
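To make the point concrete, here is a minimal sketch of encrypting records at rest, assuming the third-party Python cryptography package. The record is invented, and as the paragraph notes, the hard part is not the encryption call but key management and human behavior.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

key = Fernet.generate_key()  # in practice, keep the key in a secrets store, never beside the data
f = Fernet(key)

record = b"village=X; test_result=positive"  # invented sensitive record
token = f.encrypt(record)                    # ciphertext, safe to store or sync
assert f.decrypt(token) == record            # only key holders can read it
```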

Institutional Review Boards (IRBs) are missing in smaller organizations. Normally an IRB allows a researcher to be sure that a survey is not overly personal or potentially traumatizing, that data encryption is in place, and that data are sanitized. But these systems are usually not established in small, local organizations — they only exist in large organizations — leaving room for ethics breaches.

Information flows need quite a lot of thought, as unintended consequences may derail a project. One participant told of a community health initiative that helped women track their menstrual cycles to determine when they were pregnant. The women were sent information and reminders through SMS on prenatal care. The program ran into problems because the designers did not take into account that some women would miscarry; women who had miscarried kept receiving reminders, which was traumatic for them. Another participant gave an example of a program that publicized the mobile number of a staff member at a local NGO that supported women victims of violence, so that women who faced violence could call to report it. The owner of the mobile phone was overwhelmed with the number of calls, often at night, and would switch the mobile off, meaning no response was available to the women trying to report violence. The organization therefore moved to IVR (interactive voice response), which resolved the original problem; however, with IVR there was still no response to the women who reported violence.
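The first example suggests a simple design lesson: re-check each subscriber’s current status (and honor opt-outs) at send time, rather than scheduling a fixed series of messages and forgetting them. A hypothetical sketch, with invented records and field names:

```python
# Invented subscriber records; a real system would read these from its database.
subscribers = [
    {"phone": "+977980000001", "status": "expecting", "opted_out": False},
    {"phone": "+977980000002", "status": "miscarried", "opted_out": False},
    {"phone": "+977980000003", "status": "expecting", "opted_out": True},
]

def should_send_prenatal_reminder(sub):
    # Check current status and opt-outs at every send, not once at signup.
    return sub["status"] == "expecting" and not sub["opted_out"]

to_notify = [s["phone"] for s in subscribers if should_send_prenatal_reminder(s)]
print(to_notify)  # ['+977980000001']
```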

Research needs to be done prior to embarking on use of ICTs. A participant working with women in rural areas mentioned that her organization planned to use mobile games for an education and awareness campaign. They conducted research first on gender roles and parity and found that actually women had no command over phones. Husbands or sons owned them and women had access to them only when the men were around, so they did not proceed with the mobile games aspect of the project.

Literacy is an issue that can be overcome. Literacy is a concern, however there are many creative solutions to overcome literacy challenges, such as the use of symbols. A programme in an urban slum used symbols on hand-held devices for a poverty and infrastructure mapping exercise. In Nepal, an organization tried using SMS weather reports, but most people did not have mobiles and could not read SMS. So the organization instead sent an SMS to a couple of farmers in the community who could read, and who would then draw weather symbols on a large billboard. IVR is another commonly used tool in South Asia.

Qualitative data collection using ICTs should not be forgotten. There is often a focus on surveys, and people forget about the power of collecting qualitative data through video, audio, photos, and drawings on mobiles and tablets. A number of tools can be used for participatory monitoring and evaluation processes. For example, baseline data can be collected through video; tagging can be used to help sort content; video and audio files can be linked with text; and change and decision-making can be captured through video vignettes. People can take their own photos to indicate importance or value. Some participatory rural appraisal techniques can be done on a tablet with a big screen. Climate change and other visual data can be captured with tablets or phones or through digital maps. Photographs and GPS are powerful tools for validation and authentication; however, care needs to be taken when using maps with those who may not easily orient themselves to an aerial view. One caution is that some of these initiatives are “boutique” designs that can be quite expensive, making scale difficult. As Android devices and tablets become cheaper and more available, these kinds of solutions may become easier to implement. (A sketch of one way to keep such media sortable follows.)
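One way to keep mixed media sortable is to store each file with its linked text, tags and (optional) location in a simple record structure. The sketch below is purely illustrative; all field names and files are invented.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class MediaRecord:
    path: str                                  # e.g. "clips/baseline_012.mp4"
    kind: str                                  # "video", "audio", "photo", "drawing"
    transcript: str = ""                       # linked text, if transcribed
    tags: List[str] = field(default_factory=list)
    gps: Optional[Tuple[float, float]] = None  # for validation; share with care

records = [
    MediaRecord("clips/baseline_012.mp4", "video", tags=["baseline", "water-access"]),
    MediaRecord("audio/interview_07.mp3", "audio", transcript="...", tags=["endline"]),
]
print([r.path for r in records if "baseline" in r.tags])
```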

Ubiquity and uptake are not the same thing. Even if mobile phones are “everywhere,” it does not mean people will use them to do what organizations or evaluators want them to do. This is true for citizen feedback programs, said one participant, especially when there is a lack of response to reports. “It’s not just an issue of literacy or illiteracy, it’s about culture. It’s about not complaining, about not holding authorities accountable due to community pressures. Some people may not feed back because they are aware of the consequences of complaining, and this goes beyond simple access and use of technology.” In addition, returning collected data to the community in a format they can understand and use for their own purposes is important. A participant observed that when evaluators go to the community to collect data for baselines, outcomes, impact, etc., from a moral standpoint it is exploitative if they do not report the findings back to the community. Communities are not sure what they get back from the exercise, and this undermines the credibility of the feedback mechanism. Unless people see value in participation, they will not be willing to give their information or feedback. However, it’s important to note that responses to citizen or beneficiary feedback can also skew that feedback. “When people imagine a response will get them something, their feedback will be based on what they expect to get.”

There has not been enough evaluation of ICT-enabled efforts. A participant noted that despite apparent successes, there are huge challenges with the use of ICTs in development initiatives: How effective has branchless banking been? How effective is citizen feedback? How are we evaluating the effectiveness of these ICT tools? And how do these programs impact different stakeholders? Some may be excited by these projects, whereas others are threatened.

Training and learning opportunities are needed. The session ended, yet the question of where evaluators can obtain additional guidance and support for using ICTs in M&E processes lingered. CLEAR South Asia has produced a guide on mobile data collection, and we’ll be on the lookout for additional resources and training opportunities to share, for example this series of reports on Mobile Data Collection in Africa from the World Wide Web Foundation or this online course Using ICT Tools for Effective Monitoring, Impact Evaluation and Research available through the Development Cafe.

Thanks to Mitesh Thakkar from Fieldata, Sanjay Saxena from Total Synergy Consulting, Syed Ali Asjad Naqvi from the Center for Economic Research in Pakistan (CERP) and Pankaj Chhetri from Equal Access Nepal for participating as lead discussants at the session; Siddhi Mankad from Catalyst Management Services Pvt. Ltd for serving as rapporteur; and Rockefeller Foundation’s Evaluation Office for supporting this effort.

We used the Technology Salon methodology for the session, including the Chatham House Rule; therefore, no attribution has been made in this summary post.

Other sessions in this series of Salons on ICTs and M&E:

12 tips on using ICTs for social monitoring and accountability

11 points on strengthening local capacity to use new ICTs for M&E

10 tips on using new ICTs for qualitative M&E

In addition, here’s a post on how War Child Uganda is using participatory video for M&E

Read Full Post »

Here’s a recap of my panel talk at the Engineers Without Borders, Canada, Annual Policy Forum. (A summary of the wider discussions on Open Government and Community and Economic Development at the Forum is here.)

Open data are having some impact, as seen in 4 key areas (according to what I heard at July’s International Open Government Data Conference). These are:

  • economic growth/entrepreneurship
  • transparency, accountability and governance
  • improved resource allocation and provision of services
  • connecting data dots and telling stories the public needs to know

Open data should be part of the public’s right to information, not a service that government can decide whether to provide or not. Open government should include open attitudes, open ways of being, not only open data and use of technology. It should be inclusive and seek to engage those who do not normally participate, as well as those who are already active. It should go further than data about public services and also encompass those aspects that may be uncomfortable and politically charged.


Opening data is only a first step – and there are still big gaps. ‘Open’ does not automatically mean accessible, useful, relevant or accountable. Although new ICTs offer huge potential, focusing too much on technologies and data can marginalize a range of voices from the current discussion about (and implementation of) open government initiatives and processes. Much about these processes is currently top-down and focused at the international and national levels, or sometimes the district level. Community-level data would be a huge step towards local accountability work.

We can address the gaps. First we need to understand, acknowledge and design for the barriers and/or challenges in each particular environment, including the barriers to ICT access for some groups, e.g.:

  • lack of connectivity and electricity
  • cost of devices, cost of connection
  • lack of time and resources to participate
  • low education levels, low capacity to interpret data
  • power and culture, apathy, lack of incentives and motivation, lack of interest and/or fatalism, disempowerment
  • poor capacity and/or lack of interest by duty bearers/governments (or particular individuals within government) to respond to citizen demand for services or transparency/accountability

We also need to support:

  • consultations with and engagement of citizens in different places, different sectors, economic levels, etc., from the very beginning of the open government process
  • better understanding of what is important to citizens and communities
  • generation of awareness and demand, better local ownership, expectations of responsive government
  • champions within local and national government, strengthened capacity and motivation to collect and share data; strengthened coordination
  • space for dialogue and discussion among citizens, communities, civil society organizations and governments

Government responsiveness matters. A lot. So when working in open government, we need to ensure that where there are ways to input and report, there is also responsiveness, willingness on the government side, and the right attitude(s), or the initiative will not succeed.

Open Data/Open Government portals are not enough. I’ve heard that donors know more about the open government portal in Kenya than Kenyan NGOs, Kenyan media and Kenyan citizens do.  It’s important to work with skilled intermediaries, infomediaries and civil society organizations that have a transparency mandate, to achieve the bigger picture: social motivation, large-scale awareness and education, and demand from the public. But these intermediaries need to strive to be as objective and unbiased as possible. If there is no response to citizen demand, the initiative is sunk. You may go back to nothing, increase apathy, or find people using less peaceful approaches.

Great tech examples exist! But how can we learn from them, adapt them or combine them to address the aforementioned barriers? Initiatives like Huduma, U-Report and I Paid a Bribe have gotten great press. We heard from Ugandan colleagues at the Open Knowledge Festival that people will use SMS and pay for it when the information they get is relevant; but we still need to think about who is being left out or marginalized and how to engage them.

We also need to consider age-old (well, 1970s) communication for development (C4D) and ‘educación popular’ approaches. New ICT tools can be added to these in some cases as well. For example, integrating SMS or call-in options makes it possible for radio stations to interact more dynamically with listeners. Tools like FrontlineSMS Radio allow tracking, measuring and visualizing listener feedback, along the lines of the sketch below.  The development of ‘critical consciousness’ and critical thinking should be a key part of these processes.
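This is roughly the kind of tallying such tools automate: grouping listener messages by poll keyword so a station can see the response pattern per broadcast, and setting aside open-ended feedback to read on air. The messages and keyword here are invented.

```python
from collections import Counter

# Invented listener messages; a real deployment would pull these from the
# SMS gateway's inbox.
inbox = ["VOTE A", "vote b", "VOTE A", "the borehole is broken again"]

votes = Counter(m.split()[1].upper() for m in inbox
                if m.upper().startswith("VOTE ") and len(m.split()) == 2)
other = [m for m in inbox if not m.upper().startswith("VOTE ")]

print(votes)  # Counter({'A': 2, 'B': 1})
print(other)  # open-ended feedback for the presenters to read on air
```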

Existing social accountability tools like community scorecards, participatory budget advocacy, social audits, participatory video, participatory theater and community mapping have all been used successfully in accountability and governance work, and in some cases they may be more appropriate than Internet and mobile apps for generating citizen engagement around open data.

Combining new ICTs with these well-established approaches can help take open data offline and bring community knowledge and opinions online, so that open data is not strictly a top-down thing and so that community knowledge and processes can be aggregated, added to or connected back to open data sets and more widely shared via the Internet (keeping in mind a community’s right also to not have their data shared).

A smart combination of information and communication tools – whether Internet, mobile apps, posters, print media, murals, song, drama, face-to-face, radio, video, comics, community bulletin boards, open community fora or others – and a bottom-up, consultative, ‘educación popular’ approach to open data could help open data reach a wider group of citizens and equip them not only with information but with a variety of channels through which to participate more broadly in the definition of the right questions to ask and a wider skill set to use open data to question power and push for more accountability and positive social change. Involved and engaged media or “data journalists” can help to bring information to the public and stimulate a culture of more transparency and accountability. Responsiveness and engagement of government and opportunities for open dialogue and discussion among various actors in a society are also key. Community organizing will remain a core aspect of successful civic participation and accountability efforts.

[Photo credits: (1) Phone charging in a community with limited electricity, photo by youth working with the Youth Empowerment through Arts and Media (YETAM) program in Senegal; (2) Youth training session during the YETAM project in Cameroon, photo by me; (3) Gaps in open data and open government work, diagram by Liza Douglas, Plan International USA; (4) Local government authority and communities during discussions in Cameroon, photo by me; (5) Youth making a map of their community in Cameroon, photo by Ernest Kunbega]

Read Full Post »


This past Monday I had the opportunity to join Engineers without Borders (EWB) in Calgary, Canada, at their Annual Policy Forum on Global Development to discuss “How can open government contribute to community and economic development?”

Morning panels covered some examples of open government initiatives from Finland, Ghana and Canada. In the afternoon we heard about some of the challenges with open data, open government and the International Aid Transparency Initiative. Table discussions followed both of the panels. The group was a mix of Canadian and African government representatives, people from organizations and groups working in different countries on open government and open data initiatives, and young people who are connected with EWB. The session was under Chatham House Rule in order to encourage frank conversation.

Drawing from such documents as the Open Government Partnership’s Open Government Declaration, Harlan Yu and David G. Robinson’s “The New Ambiguity of ‘Open Government’,” Beth Noveck’s “What’s in a Name? Open Gov and Good Gov” and Nathaniel Heller’s “A Working Definition of ‘Open Government’,” the following definition of Open Government was used to frame the discussions.

EWB Definition of Open Government

Below (in a very-much-longer-than-you-are-supposed-to-write-in-a-blog-post summary) are the highlights and points I found interesting and useful as related to Open Development, Open Data, Open Government and the International Aid Transparency Initiative (IATI).

1.  Participation thresholds need to be as low as possible for people to participate and engage in open government or open data initiatives. You need to understand well what engagement tools are most useful or comfortable for different groups. In some places, to engage the public you can use tools such as etherpad, wiki platforms, google docs, open tools and online collaboration spaces. In other places and with other populations, regardless of what country, you may be more successful with face-to-face methods or with traditional media like television and radio, but these need to be enhanced with different types of feedback methods like phone calls or surveys or going house to house so that your information is not only traveling one way. Community organizing skills are key to this work, regardless of whether the tools are digital or not.

2.  Literacy remains a huge challenge hindering access to information and citizen engagement in holding government accountable in many countries. This is why face-to-face engagement is important, as well as radio and more popular or broad-based communication channels. One participant asked “how can you make open government a rural, rather than an urban only, phenomenon?” This question resonated for participants from all countries.

3.  Language is still a critical issue. Language poses a big challenge for these kinds of initiatives, from the grassroots level to the global level, within and among countries, for citizens, governments, and anyone trying to share or collect data or information. It was noted that all the countries that have published data to IATI are publishing in English. All the IATI standards are in English, as is the entire support system for IATI. As one participant noted, this raises the question of who the information in IATI is actually designed for and serving, and who its expected users are. Open data initiatives should consider the implications, both political and practical, of the language they publish in.

4.  Open data can serve to empower the already empowered. As one speaker noted, “the idea that everyone has the potential to make use of open data is simply not true.” Access to digital infrastructure and educational resources may be missing, meaning that many do not have the ability to access, interpret or use data for their own purposes. Governments can also manipulate data and selectively release data that serves their own interests. Some questioned government motives, citing the example of a government that released “data” saying its unemployment rate was 10% when “everyone knew this to be false, and people grumbled but we did not feel empowered to challenge that statement.” Concern was expressed over the lack of an independent body or commission in some countries to oversee open data and open government processes. Some did not trust the government bodies currently in charge of collecting and opening information, saying that due to politics, they would never release any information that made their party or their government look bad.

5.  Privacy rights can be violated if data is opened without data protection laws and without efforts to build capacity around making sensitive data anonymous. Citizens may also not be aware of which of their rights are being violated, so this should be addressed as well.

6.  Too much open data discussion takes place without a power analysis, as one participant commented, making some of the ideas around open data and open government somewhat naïve. “Those who have the greatest stake will be the most determined to push their point of view and to make sure it prevails.”

7.  Open data needs to become open data 2.0. According to one participant, open data is still mostly one-way information delivery. In some cases there isn’t even any delivery – information is opened on a portal but no one knows it’s there or what it refers to or why it would be useful. When will open data, open government and open aid become more of a dialogue? When will data be released that answers questions that citizens have rather than the government deciding what it will release? The importance of working with community groups to strengthen their capacity to ask questions and build critical consciousness to question the data was emphasized. A counter point was that government is not necessarily there to start collecting information or creating data sets according to public demand. Governments collect certain data to help them function.

8.  Intermediaries working on open government should be careful of real or perceived bias. Non-profits have their own agendas, and ‘open data’ and ‘open information’ is not immune to being interpreted in non-objective ways. Those working on civic engagement initiatives need to be careful that they are not biased in their support for citizen initiatives. One presenter who works on a platform that encourages citizens to be involved in petitioning new laws for contemplation in Parliament said “Our software is open source so that anyone can set up a similar process to compete with us if they feel we are biased towards one or another type of agenda.”

9.  Technology-based engagement tools change who is participating. Whether in Finland, Canada, Ghana or Malawi, it’s critical to think about reaching those who are not active already online, those who are not the typical early adopters. To reach a broader public, one speaker noted “We are going to remote places, doing events in smaller towns and cities to see how people want to influence and take part in this. Making sure the website is accessible and understandable.”

10. Technological platforms are modifying how political parties and democratic processes operate. This may or may not be a good thing. Normally priorities arise and are discussed within political parties. Will people now bypass the party process and use ‘direct democracy’ channels if they are passionate about an issue but do not want to enter into negotiation around it? Will this weaken political processes or longer standing democratic processes? One speaker considered this change to be positive. People are not happy with being able to vote every 4 years and they want opportunities to participate in between elections cycles and direct voice in how priorities are decided. Others questioned whether bypassing official processes can lead to less participation and more apathy overall on national issues. Some questioned whether within fairly long-standing democracies, open data will have any real impact, considering existing levels of apathy and the lack of political participation.

11. Strong information, statistical, monitoring and evaluation systems are critical for open data and open government processes and to ensure more effective management of development results. This is still a challenge for some countries that need to review their mechanisms and improve their tools and processes for data collection and dissemination. If there is no data, or no current data, there is not much point in opening it. In addition, there are capacity and technical competency challenges within institutions in some countries. One participant mentioned a lack of current government geological information about gold and oil deposits that weakens government capacity to negotiate with the private sector extraction industry and ensure partnerships and earnings will contribute to national development. In addition more evidence is needed on the impact, use, and outcomes of open data. At the moment it’s quite difficult to say with any real authority what the outcomes and impact of open data and open government have been.

12. IATI (International Aid Transparency Initiative) needs more partners. Government representatives noted that they are opening their data, but they can only open the data they possess. In order for data on aid to be useful, more data is needed, especially that of NGOs who are implementing programs. Not many NGOs have published their information to the IATI standard at this point. “The really interesting thing will be when we can start mashing up and mapping out the different kinds of information,” as one speaker noted, “for example, this is the goal of the Open Aid Partnership. It will involve combining information from the donor, development indicators from the World Bank, and country information, and this will open up amazing possibilities once this is all geo-coded.” There are reporting challenges related to IATI and open government data, however, because at times countries and NGOs do not see the benefits of reporting – it feels like just one more top-down administrative burden. There are also issues with donor governments reporting their committed intentions and amounts, recipient governments reporting back, and communications with citizens on both sides (donor and recipient countries). One example that was reported to be enjoying some success was the multi-donor budget support initiative in Ghana, where development partners and government work together to establish development indicators and commitments. If the government delivers on the indicators, the development partners will then provide them with the funding. Development partners can also earmark funding to particular areas if there is government agreement.
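The “mashing up” idea is, at its simplest, a join of aid records with other datasets on a shared key such as a country code. A toy sketch, assuming the pandas library and using invented figures:

```python
import pandas as pd  # assumes pandas is installed; all figures are invented

aid = pd.DataFrame({"country": ["GH", "ML"],
                    "commitments_usd": [5_000_000, 2_000_000]})
indicators = pd.DataFrame({"country": ["GH", "ML"],
                           "primary_enrollment_pct": [84.0, 61.0]})

# Combine aid flows with development indicators on the shared country code.
print(aid.merge(indicators, on="country"))
```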

13. We need more accountability towards ‘beneficiaries’. Currently, many of these initiatives are perceived as being focused on donors and donor publics. As one participant noted, “the interesting thing is less about government and more about getting regular people involved in these processes. When you engage the public you’ll engage government leaders in thinking they will need to change to respond to what citizens are asking for.” Another noted that the essential issue is the link between transparency/accountability and citizens and their own governments. In addition, as one participant asked, “How can you strengthen capacity among citizens to ask the right questions about the data that’s being opened?” For example, citizens may ask about the number of schools being built, but not about the quality of education being provided. Public education was a strong focus of discussions around citizen engagement during the policy forum.

14. Should citizens be consulted on everything? This was one big question. The public at large may not understand the ramifications of its own deep misunderstandings on particular issues and may be contributing from a viewpoint that lacks scientific evidence or fact. “It’s one thing to have an opinion about whether your child should be able to drink energy drinks before age 16; it’s another to input on technical programs like the best policy for green energy,” commented one group.

15. Can citizens really have greater participation if government is still in control of data? This was another big question. An example was given of an open consultative process that became unwieldy for a local government, which then shut down the consultation process and reclassified the documents as ‘administrative’ and therefore no longer open. Others asked why governments pat themselves on the back over being part of the Open Government Partnership yet do not have Freedom of Information Acts (FOIA), or why they prosecute those who open data in alternative ways, such as Bradley Manning and Aaron Swartz.

16. If citizens don’t get a response from government (or if they don’t like the response, or feel it’s biased or manipulated), apathy and cynicism will increase. It’s important to make sure that ‘open government’ is not just a box that gets ticked off, but rather a long-term change in mentality of those in power and deeper expectations and efforts by citizens for openness and participation in conversations of national importance.

The conclusion was that Open Government is somewhat of a paradox, rooted in aims that are not necessarily new. Open Government strives to enable leaders to create change and transform their own lives and those of people in their communities. It is a complex process that involves many actors and multiple conflicting goals and interests. It’s also something new that we are all learning about and experimenting with, yet we are very impatient to know what works and what the impact is. In the room, the feeling was one of ‘radical pragmatism,’ as one participant put it. Open Government is a big idea that represents a big change. It’s something that can transform communities at the global level, and there is a great deal of hope and excitement around it. At the same time, we need to acknowledge the challenges associated with it in order to address them and move things forward.

I’ll do a follow-up post with the points I made during the panel, as this post is clearly way too long already. Kudos if you are still reading, and a huge thanks to the organizers and participants in the EWB policy forum.

Read Full Post »

At the October 17, 2012 Technology Salon NYC, we focused on ways that ICTs can be used for qualitative monitoring and evaluation (M&E) efforts that aim to listen better to those who are participating in development programs. Our lead discussants were: John Hecklinger, Global Giving; Ian Thorpe, UN DOCO and the World We Want 2015 Campaign; and Emily Jacobi, Digital Democracy. This salon was the final in a series of three on using new technologies in M&E work.

Global Giving shared experiences from their story-telling project which has collected tens of thousands of short narratives from community members about when an individual or organization tried to change something in their community. The collected stories are analyzed using Sensemaker to find patterns in the data with the aim of improving NGO work. (For more on Global Giving’s process see this document.)

The United Nations’ Beyond 2015 Campaign aims to spur a global conversation on the post-MDG development agenda. The campaign is conducting outreach to people and organizations to encourage them to participate in the discussion; offering a web platform (www.worldwewant2015.org) where the global conversation is taking place; and working to get offline voices into the conversation. A challenge will be synthesizing and making sense of all the information coming in via all sorts of media channels, and being accountable now and in the future to those who participate in the process.

Digital Democracy works on digital literacy and human rights, and makes an effort to integrate qualitative monitoring and evaluation into their program work stream. They use photography, film and other media that transcend language and literacy barriers. Using these kinds of media helps participants express opinions on issues that need addressing and builds trust. Photos have helped in program development as well as in defining quantitative and qualitative indicators.

A rich conversation took place around the following aspects:

1) Perception may trump hard data

One discussant raised the question “Do opinions matter more than hard data on services?” noting that perceptions about aid and development may be more important than numbers of items delivered, money spent, and timelines met. Even if an organization is meeting all of its targets, what may matter more is what people think about the organization and its work. Does the assistance they get respond to their needs? Rather than asking “Is the school open?” or “Did you get health care?” it may be more important to ask “How do you feel about health?” Agencies may be delivering projects that are not what people want or that do not respond to their needs, cultures, and so on. It is important to encourage people to talk amongst themselves about their priorities, what they think, encourage viewpoints from people of different backgrounds and see how to pull out information to help inform programs and approaches.

2) It is a complex process

Salon participants noted that people are clearly willing to share stories and unstructured feedback. However, the process of collecting and sorting through stories is unwieldy and far from perfect. More work needs to be done to simplify story-collection processes and make them more tech-enabled. In addition, more needs to be done to determine exactly how to feed the information gleaned back in a structured and organized way that helps with decision-making. One idea was the creation of a “Yelp” for NGOs. Tagging, and/or asking program participants to tag photos and stories, can help make sense of the data (see the sketch below). If videos are subtitled, this can also be of great use in making sense of the type of information held in videos. Dotsub, for example, is a video subtitling platform that uses a Wikipedia-style subtitling model, enabling crowdsourced video translations into any language.
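At its simplest, tag-based sensemaking is counting and comparing tags across a story collection. The sketch below uses invented stories and tags; a real pipeline would pull these from a story-collection platform and might weight tags chosen by the storytellers themselves.

```python
from collections import Counter

# Invented story records; in practice the text and tags would come from a
# story-collection platform or from program participants themselves.
stories = [
    {"text": "The women's group repaired the well.", "tags": ["water", "self-help"]},
    {"text": "An NGO built a clinic but staff never came.", "tags": ["health", "ngo"]},
    {"text": "We petitioned the council about the well.", "tags": ["water", "government"]},
]

tag_counts = Counter(tag for s in stories for tag in s["tags"])
print(tag_counts.most_common(3))  # e.g. [('water', 2), ('self-help', 1), ('health', 1)]
```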

3) Stories and tags are not enough

We know that collecting and tagging stories to pull out qualitative feedback is possible. But so what? The important next step is looking at the effective use of these stories and data. Some ideas on how to better use the data include adding SMS feedback, deep dives with NGOs, and face-to-face meetings. It’s important to move from collecting the stories to thinking about what questions should be asked, how the information can help NGOs improve their performance, how this qualitative data translates into change or different practice at the local and global levels, how the information could be used by local organizers for community mobilization or action, and how all this is informing program design, frameworks and indicators.

4) Outreach is important

Building an online platform does not guarantee that anyone will visit it or participate. Local partners are an important element in reaching out and collecting data about what people think and feel. Outreach needs to be done with many partners from all parts of a community or society in order to source different viewpoints. In addition, it is important to ask the right questions and establish trust, or people will not want to share their views. Any quality participation process, whether online or offline, needs good facilitation and encouragement; it needs to be a two-way process, a conversation.

5) Be aware of bias

Understanding where the process may be biased is important. Everything from asking leading questions, defining the meta data in a certain way, creating processes that only include certain parts of the community or population, selecting certain partners, or asking questions that lead to learning what an organization thinks it needs to know can all create biased answers. Language is important here for several reasons: it will affect who is included or excluded and who is talking with whom. Using development jargon will not resonate with people, and the way development agencies frame questions may lead people to particular answers.

6)  Be aware of exclusion

Related to bias is the issue of exclusion. In large-scale consultations or online situations, it’s difficult to know who is talking and participating. Yet the more log-in information is solicited, the less likely people are to participate in discussions. However, by not asking, it’s hard to know who is responding, especially when anonymity is allowed. In addition, results also depend on who is willing and wants to participate. Participants agreed that there is no silver bullet for finding folks to participate and ensuring they represent a diversity of opinion. One suggestion was that libraries and telecenters could play a role in engaging more remote or isolated communities in these kinds of dialogues.

7) Raising expectations

Asking people for feedback raises expectations that their input will be heard and that they will see some type of concrete result. In these feedback processes, what happens if the decisions made by NGOs or heads of state don’t reflect what people said or contributed? How can we ensure that we are actually listening to what people tell us? Oftentimes we ask for people’s perceptions and then tell them why they are wrong. Follow-up is also critical. A campaign from several years ago was mentioned where 93,000 people signed onto a pledge, and once that was achieved, the campaign ended and there was no further engagement with the 93,000 people. Soliciting input and feedback needs to be an ongoing relationship with continual dialogue and response. The process itself needs to be transparent and accountable to those who participate in it.

8) Don’t forget safety and protection

The issue of safety and protection for those who offer their opinions and feedback or raise issues and complaints was brought up. Participants noted that safety is very context specific and participatory risk assessments together with community members and partners can help mitigate and ensure that people are informed about potential risk. Avoiding a paternalistic stance is recommended, as sometimes human rights advocates know very well what their risk is and are willing to take it. NGOs should, however, be sure that those with whom they are working fully understand the risks and implications, especially when new media tools are involved that they may not have used before. Digital literacy is key.

9) Weave qualitative M&E into the whole process

Weaving consistent spaces for input and feedback into programs is important. As one discussant noted, “the very media tools we are training partners on are part of our monitoring and evaluation process.”  The initial consultation process itself can form part of the baseline. In addition to M&E, creating trust and a safe space to openly and honestly discuss failure and what did not go so well can help programs improve.  Qualitative information can also help provide a better understanding of the real and hard dynamics of the local context, for example the challenges faced during a complex emergency or protracted conflict. Qualitative monitoring can help people who are not on the ground have a greater appreciation for the circumstances, political framework, and the socio-economic dynamics.

10) Cheaper tools are needed

Some felt that the tools being shared (Sensemaker in particular) were too expensive and sophisticated for their needs, and too costly for smaller NGOs. Simpler tools would be useful in order to more easily digest the information and create visuals and other analyses that can be fed back to those who need to use the information to make changes. Other tools exist that might be helpful, such as Trimble’s Municipal Reporter, Open Data Kit, KoBo, iFormBuilder, EpiSurveyor/Magpi and PoiMapper. One idea is to look at some of the tools being developed and used in the crisis mapping and response space to see if cost is dropping and capacity increasing as the field advances. (Note: several tools for parsing Twitter and other social media platforms were presented at the 2012 International Conference on Crisis Mapping, some of which could be examined and learned from.)

What next?

A final question at the Salon was around how the broader evaluation community can connect with the tools and people who are testing and experimenting with these new ways of conducting monitoring and evaluation. How can we create better momentum in the community to embrace these practices and help build this field?

Although this was the final Salon of our series on monitoring and evaluation, we’ll continue to work on what was learned and ways to take these ideas forward and keep the community talking and growing.

A huge thank you to our lead discussants and participants in this series of Salons, especially to the Community Systems Foundation and the Rockefeller Foundation’s monitoring and evaluation team for joining in the coordination with us. A special thanks to Rockefeller for all of the thoughtful discussion throughout the process and for hosting the Salons.

The next Technology Salon NYC will be November 14, 2012, hosted by the Women’s Refugee Commission and the International Rescue Committee. We’ll be shifting gears a little, and our topic will be around ways that new technologies can support children and youth who migrate, are forcibly displaced or are trafficked.

If you’d like to receive notifications about future salons, sign up for the mailing list!

Previous Salons in the ICTs and M&E Series:

12 lessons learned with ICTs for monitoring and accountability

11 points on strengthening local capacity to use new ICTs for monitoring and evaluation

Read Full Post »

OK Festival is in full swing here in Helsinki, and if today is anything like the past two days, it will be full of information and exchange on everything “open.”

A number of us have been working hard to pull together the Open Development Stream, which started yesterday and which followed very nicely on Tuesday’s fantastic series of panels on Transparency and Accountability (with a heavy focus on the Open Government Partnership and Open Data) and the Open Data Journalism and Visualization streams.

Here’s a quick Storify summary of yesterday’s last Open Development session, “Taking it Local: 10 ways to make ‘open’ relevant in low resource or marginalized contexts.” It was moderated by Soren Gigler from the World Bank’s Innovation for Governance Team and included a superb group of panelists: David Rodriguez, Michael Gurstein, Huy Eng, Philip Thigo, and Barbara Birungi.

For the session, my colleagues David and Max Rodriguez from Plan El Salvador did some really great short videos around transparency, internet access, connectivity and related topics and how they are perceived and lived out in rural communities where they are working.

This first video with Marco Rodriguez (he’s also on Twitter), the Sub-Secretary of Transparency for the Government of El Salvador, is just a small example of some of the realities around “open” and accessibility, and of the challenges of engaging everyday people in some of the initiatives we are talking about here at OK Festival. (Not to mention that this and the other videos with Marco and others contain a number of fantastic metaphors and soundbites!)


Read Full Post »

New technologies are changing the nature of monitoring and evaluation, as discussed in our previous Salon on the use of ICTs in M&E. However, the use of new technologies in M&E efforts can seem daunting or irrelevant to those working in low resource settings, especially if there is little experience or low existing capacity with these new tools and approaches.

What is the role of donors and other intermediaries in strengthening local capacity in communities and development partners to use new technologies to enhance monitoring and evaluation efforts?

On August 30, the Rockefeller Foundation and the Community Systems Foundation (CSF) joined up with the Technology Salon NYC to host the second in a series of 3 Salons on the use of ICTs in monitoring and evaluating development outcomes and to discuss just this question. Our lead discussants were: Revati Prasad from Internews, Tom O’Connell from UNICEF and Jake Watson from the International Rescue Committee. (Thanks Jake for stepping in at the last minute!)

We started off with the comment that "Many of us are faced with the 'I' word – in other words, having to demonstrate impact on the ground. But how can we do that if we are four levels removed from where change is happening?" How can organizations, donors, or those sitting in offices in Washington DC or New York City support grantees and local offices to feed information back more quickly and more accurately? From this question, the conversation flowed in a number of directions.

1) Determine what works locally

Donors shouldn't be coming in to say "here's what works." Instead, they should be creating local environments for innovation. Rather than pushing things down to people, we need to start thinking from the eyes of the community and incorporate that into how we think and what we do. One participant confirmed this with a concrete example: "We went in with ideas – wouldn't SMS be great… but it became clear that SMS was not the right tool; it was voice. So we worked to establish a hotline. This has connected [the population] with services; it also connects with a database that came from [their] own needs and tracks what they want to track." As discussed at the last Salon, however, incentive and motivation are critical. "Early on, even though indicators were set by the community, there was no direct incentive to report." Once the call center connected reporting to access to services, people were more motivated to report.

2) Produce local, not national-level information

If you want to leverage technology for local decision-making, you need local-level information, not broad national-level information. You also need to recognize that the data will be messy. As one participant said, we need to get away from worrying about imperfect data and instead ask: is the information good enough to enable us to reach the child who wasn't reached before? We need to stop thinking of knowledge as discrete chunks that endure for 3-4 years; we are actually processing information all the time. We can help managers think of information as something to filter and use constantly, and we can give them tools to filter information, create simpler dashboards, see bottlenecks, and combine different channels of information to make decisions.
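
As one illustration of "helping managers filter information and see bottlenecks," here is a hedged sketch, with invented district names and numbers, that flags districts whose reporting has dropped sharply month over month:

```python
# Invented monthly report counts per district; a real version would pull
# these from whatever reporting channel the program already uses.
reports = {
    "District A": [110, 105, 98],
    "District B": [80, 82, 15],   # sudden drop: a possible bottleneck
    "District C": [60, 61, 64],
}

DROP_THRESHOLD = 0.5  # flag if the latest month falls below 50% of the prior one

for district, monthly in reports.items():
    prev, latest = monthly[-2], monthly[-1]
    if latest < prev * DROP_THRESHOLD:
        print(f"{district}: reports fell from {prev} to {latest} -- investigate")
```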

3) Remember why you are using ICTs in M&E

We should be doing M&E in order to achieve better results and leveraging technologies to achieve better impact for communities. Often, however, we end up doing it for the donor. "Donors get really excited about this multicolored thing with 50,000 graphs, but the guy on the ground doesn't use a bit of it. We need to let go," commented one participant. "I don't need to know what the district manager knows. I need to know that he or she has a system in place that works for him or her. My job is to support local staff to have that system working. We need to focus on helping people do their jobs."

4) Excel might be your ‘killer app’

Worldwide, the range of capacities is huge. ICT can sound very sexy, but the greatest success might be teaching people how to use Excel, how to use databases to track human rights violations and domestic violence, or how to set up a front-end and data-entry system in a local language.
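
To make the "databases as killer app" idea concrete, here is a minimal sketch of a small SQLite table for tracking incident reports, with front-end field labels in a local language (Spanish here, purely as an illustration; the schema and labels are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a real deployment would use a file
conn.execute("""CREATE TABLE incidents (
    id INTEGER PRIMARY KEY,
    reported_on TEXT,
    category TEXT,
    community TEXT,
    followed_up INTEGER DEFAULT 0)""")

# Labels a simple local-language data-entry screen would render:
labels = {
    "reported_on": "Fecha del reporte",
    "category": "Tipo de incidente",
    "community": "Comunidad",
}
for field, label in labels.items():
    print(f"{label} ({field})")

conn.execute(
    "INSERT INTO incidents (reported_on, category, community) VALUES (?, ?, ?)",
    ("2012-08-30", "domestic violence", "San Julian"))

# A count by category is often all a local team needs to spot patterns
for row in conn.execute("SELECT category, COUNT(*) FROM incidents GROUP BY category"):
    print(row)
```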

5) Technology capacity doesn’t equal M&E capacity

One participant noted that her organization is working with a technology hub that has very good tech skills but lacks capacity in development and M&E. Their work over the past year has been less about using technology and more about helping the hub develop these other capacities: how to conduct focus groups, surveys and network analysis, and how to develop toolkits and guides. There's often excitement on the ground – 'We can get data in 48 hours! Wow! Let's go!' However, creating good M&E surveys for use via technology tools is difficult. One participant said that finding local expertise in this area is not easy, especially considering staff turnover: "We don't always have M&E experts on the ground." In addition, "there is an art to polls and survey trees, especially when trying to take them from English into other languages. How do you write a primer for staff to create meaningful questions?"

6) Find the best level for ICTs to support the process

ICTs are not always the best tool at the community or district level, given issues of access, literacy, capacity, connectivity, electricity, etc. Participants mentioned working in blended ways, e.g., doing traditional data collection and using ICTs to analyze the data, compile it, produce localized reports, and work with the community to interpret the information for better decision-making. Others use hand-drawn maps, examine issues from the community angle, and then incorporate that into digital literacy and expression work, using new technology tools to tell and document the communities' stories.

7) Discover the shadow systems and edge of network

One participant noted that people will comply and move data through the system as requested from on high, but they simultaneously develop their own ways of tracking the information that is actually useful to them. By discovering these 'shadow systems', you can see what is really useful. The 'edge of network' is where the people headquarters never has direct contact with live and work. We rely on much of their information to build M&E systems, yet we don't consult and work with them often enough. Understanding this 'edge of network' is critical to designing good M&E systems and supporting local-level M&E for better information and decision-making.

8) The devil is in the details

There are many M&E tools to choose from, and each has its pros and cons. Participants mentioned KoBo, RapidSMS, Nokia Data Gathering, FrontlineSMS and EpiSurveyor. While there is a benefit to getting cleaner data in real-time, there will always be post-processing tasks. The data can, however, be put on a dashboard for better decision-making. Challenges exist, though. For example, in Haiti, as one participant commented, there is a 10% electrification rate, so solar is required. "It's difficult to get a local number with Clickatell [an SMS gateway]; you can only get an international number, and going the local-number route is complicated enough that you need a project coordinator. And if you are using SMS, how do you top up beneficiaries' airtime so that they can reply? The few pennies it costs people to reply are a deterrent. Yet working with telecom providers is time-consuming and expensive in any country. Training local staff is an issue – trying to train everyone on the ICT package you are giving them. You can't take anything for granted. People usually don't have experience with these systems." Literacy is another stumbling block, so some organizations are looking at Interactive Voice Response (IVR) and trying to build a way for it to be rapidly deployed.
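
To make the gateway discussion concrete, here is a hedged sketch of what pushing a single survey question out through an SMS gateway's HTTP API might look like. The endpoint, parameter names and API key below are entirely hypothetical – Clickatell and other real gateways each define their own scheme – and, as the participant notes above, number provisioning and reply costs are the genuinely hard parts that no code snippet solves.

```python
import urllib.parse
import urllib.request

def send_sms(number: str, text: str) -> None:
    """Send one SMS via a hypothetical HTTP gateway (illustrative only)."""
    assert len(text) <= 160, "keep within a single SMS segment"
    params = urllib.parse.urlencode({
        "api_key": "YOUR_KEY",   # hypothetical credential
        "to": number,
        "message": text,
    })
    url = f"https://sms-gateway.example.org/send?{params}"  # hypothetical endpoint
    with urllib.request.urlopen(url) as resp:  # would fail against this example domain
        print(resp.status)

# Illustrative call (commented out because the endpoint is fictional):
# send_sms("+50912345555",
#          "Did the clinic have medicine in stock this week? Reply YES or NO.")
```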

9) Who is the M&E for?

Results are one thing, but as one participant noted, "part of measuring results means engaging communities in saying whether the results are good for them." Another participant commented that Ushahidi maps are great and donors love them. But in CAR, for example, there is 1% internet penetration and maybe 9% of people text. "If you are creating a crisis map about the incidence of violence, your humanitarian actors may access it, and it may improve service delivery, but it is in no way useful for people on the ground. There is reliance on technology, but how to make it useful for local communities is still the big question…. It's hard to talk about citizen engagement and citizen awareness if you are not reaching citizens because they don't have access to technology." "And what about the opportunity cost for the poor?" asked one participant. "Time is restricted. CSOs push things down to the people least able to use the time for participation. There is a cost to participation, yet we assume participation is a global good. The poorest are really scraping for time and resources. 'Who is the data for?' is still a huge question. Often it's 'here's what we're going to do for you' rather than meeting with people first, asking what's wrong, then listening and asking what they would like to do about it, and listening some more."

10) Reaching the ‘unreachable’

Reaching and engaging the poorest is still difficult, and the truly unreached will require very different approaches. "We're really very much spoke-to-hub," said one participant. "This is not enough. How can we innovate and resolve this?" Another emphasized the need to find out who is not part of the conversation – who is left out or not present when these community discussions take place. "You might find out that adolescent girls with mobility issues are not there. You can ask those with whom you are consulting if they know of someone who is not at the meeting. You need to figure out how to reach the invisible members of the community." However, as noted, "we also have to protect them. Sometimes identifying people can expose them. There is no clear answer."

11) Innovation or building on what’s already there?

So will INGOs and donors continue to try to adapt old survey ideas to new technology tools? And will this approach survive much longer? "Aren't we mostly looking for information that we can act on? Are we going to keep sending teams out all the time, or will we begin to work with information we can access differently? Can we release ourselves from that dependence on survey teams?" Some felt that 'data exhaust' might be one way of getting information differently – a model like Google Flu Trends, for example. But others noted the difficulty of getting information from non-online populations, who are the majority. In addition, with these new ICT-based methods, there is still a question about representativeness and coverage. Integrated approaches, where ICTs are married with traditional methods, seem to be the key. This raises the question, as one participant put it: "Is innovation really better than building up what's already there? We need to ask – does it add value? Is it better than what is already there? If it does add perceived value locally, then how do we ensure that it comes to some kind of result? We need to keep our eye on the results we want to achieve. We need to be more results-oriented and do reality checks. We need to constantly ask ourselves: are we listening to folks?"
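
On the representativeness question, one standard correction – not raised at the Salon itself, offered here purely as an illustration – is post-stratification: weight respondents so that group shares match known population shares, partially correcting for a sample that skews toward connected groups. A sketch with made-up numbers:

```python
# Known population shares (e.g. from a census) and an SMS sample that
# over-represents urban males. All figures are invented.
population_share = {"urban_male": 0.24, "urban_female": 0.26,
                    "rural_male": 0.24, "rural_female": 0.26}
respondents = {"urban_male": 600, "urban_female": 250,
               "rural_male": 100, "rural_female": 50}
total = sum(respondents.values())

# Weight = population share / sample share, per group
weights = {g: population_share[g] / (respondents[g] / total) for g in respondents}

# Compare a naive vs weighted estimate of a yes/no answer
yes = {"urban_male": 300, "urban_female": 150, "rural_male": 70, "rural_female": 40}
weighted_yes = (sum(yes[g] * weights[g] for g in yes)
                / sum(respondents[g] * weights[g] for g in respondents))
print(f"naive yes-rate: {sum(yes.values())/total:.2%}, weighted: {weighted_yes:.2%}")
```

Weighting cannot conjure up the views of people who never appear in the sample at all, which is why the integrated, blended approaches discussed above still matter.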

In conclusion

There is much to think about in this emerging area of ICTs and monitoring and evaluation. Join us for the third Salon in the series on October 17, where we'll continue the discussion. If you are not yet on the Technology Salon mailing list, you can sign up here. A summary of the first Salon in the series is here. (A summary of the October 17th Salon is here.)

Salons are run under the Chatham House Rule, so no attribution has been made.

Read Full Post »

New technologies are opening up all kinds of possibilities for improving monitoring and evaluation. From ongoing feedback and crowd-sourced input, to more structured digital data collection, to access to large data sets and improved data visualization, the field is changing quickly.

On August 7, the Rockefeller Foundation and the Community Systems Foundation (CSF) joined up with the Technology Salon NYC for the first in a series of 3 Salons on the use of ICTs in monitoring and evaluating development outcomes. Our lead discussants were: Erica Kochi from UNICEF Innovations, Steven Davenport from Development Gateway, and John Toner from CSF.

This particular Salon focused on the use of ICTs for social monitoring (a.k.a. ‘beneficiary feedback loops’) and accountability. Below is a summary of the key points that emerged at the Salon.

1) Monitoring and evaluation is changing

M&E is not only about formal data collection and indicators anymore. As one discussant commented, "It's free-form, it contains sentiment." New ICT tools can help donors and governments plan better. SMS and other social monitoring tools add an element to more formal information sources and can help capture the pulse of the population. Combinations of official data sets with SMS data provide new ways of looking at cross-sections of information, and visualizations and trend analysis can combine information for decision-making. Social monitoring, however, can be a scary thing for large institutions; it can seem too uncontrolled or potentially conflictive. One way to ease into it is through "bounded" crowd-sourcing (e.g., working with a defined and more 'trusted' subset of the public) until there is comfort with these kinds of feedback mechanisms.
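
As a toy illustration of "combining official data sets with SMS data," the sketch below (all figures invented) lines up an official coverage indicator against SMS complaint volume by district; divergence between the two is the kind of cross-section worth a closer look:

```python
# Invented official indicator (e.g. reported service coverage) and
# invented counts of SMS complaints received per district.
official = {"North": 0.82, "South": 0.74, "East": 0.91}
sms_reports = {"North": 120, "South": 340, "East": 35}

for district in official:
    print(f"{district}: official coverage {official[district]:.0%}, "
          f"{sms_reports[district]} SMS complaints")
# A high complaint volume despite high official coverage is exactly the
# kind of mismatch that merits follow-up on the ground.
```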

2) People need to be motivated to participate in social monitoring efforts

Building a platform or establishing an SMS response tool is not enough. One key to a successful social monitoring effort is working with existing networks, groups and organizations and doing well-planned and executed outreach, for example, in the newspaper, on the radio and on television. Social monitoring can and should go beyond producing information for a particular project or program. It should create an ongoing dialogue between and among people and institutions, expanding on traditional monitoring efforts and becoming a catalyst for organizations or government to better communicate and engage with the community. SMS feedback loops need to be thought of in terms of a dialogue or a series of questions rather than a one-question survey. “People get really engaged when they are involved in back and forth conversation.” Offering prizes or other kinds of external motivation can spike participation rates but also can create expectations that affect or skew programs in the long run. Sustainable approaches need to be identified early on. Rewards can also lead to false reports and re-registering, and need to be carefully managed.

3) Responsiveness to citizen/participant feedback is critical

One way to motivate individuals to participate in social monitoring is for governments or institutions to show that citizen/participant feedback elicits a response (e.g., better delivery of public services). "Incentives are good," said one discussant, "but at the core, if you get interactive with users, you will start to see the responses. Then you'll have a targeted group that you can turn to." Responsiveness can be an issue, however, if there is limited government or institutional interest, resourcing or capacity, so it's important to work on both sides of the equation so that demand does not outstrip response capacity. Monitoring the responsiveness to citizen/participant feedback is also important: "Was there a response promised? Did it happen? Has it been verified? What was the quality of it?"
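
Those monitoring questions can be operationalized very simply. Below is a minimal sketch – field names and records invented – of tracking whether promised responses were delivered and verified:

```python
from dataclasses import dataclass

@dataclass
class FeedbackItem:
    issue: str
    response_promised: bool = False
    response_delivered: bool = False
    verified: bool = False

# Invented feedback log for illustration
log = [
    FeedbackItem("broken water pump", response_promised=True,
                 response_delivered=True, verified=True),
    FeedbackItem("clinic stock-out", response_promised=True),
    FeedbackItem("teacher absenteeism"),
]

promised = [i for i in log if i.response_promised]
delivered = [i for i in promised if i.response_delivered]
print(f"{len(delivered)}/{len(promised)} promised responses delivered; "
      f"{sum(i.verified for i in delivered)} verified")
```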

4) Privacy and protection are always a concern

Salon participants brought up concerns about privacy and protection, especially for more sensitive issues that can put those who provide feedback at risk. There are a number of good practices in the IT world for keeping the data itself private, for example presenting it in aggregate form, only releasing certain data, and setting up controls over who can access different levels of data. However, with crowd-sourcing or incident mapping, there can be serious concerns for those who report or provide feedback. Program managers need to have a very good handle on the potential risks involved, or they can cause unintended harm to participants. Consulting with participants to better understand the context is a good idea.
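One of the aggregation practices mentioned – releasing only aggregate counts and suppressing small cells that could identify individual reporters – can be sketched in a few lines. The threshold of five is a common but arbitrary choice, and the reports here are invented:

```python
from collections import Counter

# Invented (district, category) reports
reports = ([("District A", "harassment")] * 2
           + [("District A", "land dispute")]
           + [("District B", "harassment")] * 7)

MIN_CELL = 5  # suppress any group smaller than this

counts = Counter(reports)
for (district, category), n in sorted(counts.items()):
    shown = n if n >= MIN_CELL else "<5 (suppressed)"
    print(f"{district} / {category}: {shown}")
```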

5) Inclusion needs to be purposeful

Getting a representative response via SMS-based feedback or other social monitoring tools is not always easy. Mandatory ratios for gender, age group or other characteristics can help ensure better representation, and different districts can be sampled in an effort to make the overall response representative. "If not," commented one presenter, "you'll just get data from urban males." Barriers to participation, such as language, also need consideration; however, working in multiple languages becomes very complicated very quickly. One participant noted that it is important to monitor whether people from different groups or geographic areas understand survey questions in the same way, and to fine-tune the system as it goes along. A key concern is reaching and including the most vulnerable with these new technologies. "Donors want new technology as a default, but I cannot reach the most excluded with technology right now," commented a participant.
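
A hedged sketch of what monitoring representation as responses arrive might look like, so outreach can be adjusted before the sample collapses into "urban males"; the target shares and responses are invented:

```python
# Target shares per trait and a (tiny, invented) stream of responses
target = {"female": 0.5, "rural": 0.4, "under_25": 0.3}
responses = [
    {"female": True,  "rural": False, "under_25": True},
    {"female": True,  "rural": False, "under_25": False},
    {"female": False, "rural": True,  "under_25": False},
    {"female": False, "rural": False, "under_25": False},
]

n = len(responses)
for trait, goal in target.items():
    share = sum(r[trait] for r in responses) / n
    flag = "OK" if share >= goal else "UNDER-REPRESENTED"
    print(f"{trait}: {share:.0%} of responses vs {goal:.0%} target -> {flag}")
```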

6) Information should be useful to and used by the community

In addition to ensuring inclusion of individuals and groups, communities need to be involved in the entire process. “We need to be sure we are not just extracting information,” mentioned one participant. Organizations should be asking: What information does the community want? How can they get it themselves or from us? How can we help communities to collect the information they need on their own or provide them with local, sustainable support to do so?

7) Be sure to use the right tools for the job

Character limits can be an issue with SMS. Decision-tree models, where one question prompts another question that takes the user down a variety of paths, are one way around the character limit. SMS is not good for in-depth surveys, however; it is good for breadth, not depth. It's important to use SMS and other digital tools for what they are good at. Paper can often be the better tool, and there is no shame in using it. Discussants emphasized that one shouldn't underestimate the challenges of working with telecom operators and setting up short codes; building the SMS network infrastructure takes months. Social media is also on the rise, so how do you channel that into the M&E conversation?
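
Here is a minimal sketch of the decision-tree model described above, where each reply selects the next question and every message stays within a single 160-character SMS. The question tree itself is invented:

```python
# state -> (question text, {answer: next state}); None marks the end
TREE = {
    "start": ("Did you visit the clinic this month? Reply 1=yes 2=no",
              {"1": "wait", "2": "why_not"}),
    "wait": ("How long did you wait? Reply 1=<1hr 2=1-3hrs 3=>3hrs",
             {"1": None, "2": None, "3": None}),
    "why_not": ("Why not? Reply 1=too far 2=too costly 3=no need",
                {"1": None, "2": None, "3": None}),
}

def next_state(state: str, answer: str):
    _, transitions = TREE[state]
    return transitions.get(answer, state)  # unrecognized reply: re-ask

# Simulate one respondent's path through the tree
state, answers = "start", ["1", "2"]
for a in answers:
    print(f"SEND: {TREE[state][0]}")
    print(f"RECV: {a}")
    state = next_state(state, a)
    if state is None:
        print("SEND: Thank you! Survey complete.")
        break
```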

8) Broader evaluative questions need to be established for these initiatives

The purpose of including ICT in different initiatives needs to be clear. Goals and evaluative questions need to be established. Teams need to work together because no one person is likely to have the programmatic, ICT and evaluation skills needed for a successfully implemented and well-documented project. Programs that include ICTs need better documentation and evaluation overall, including cost-benefit analyses and comparative analyses with other potential tools that could be used for these and similar processes.

9) Technology is not automatically cheaper and easier

These processes remain very iterative; they are not 'automated'. Initial surveys can only show patterns; what is more interesting is back-and-forth dialogue with participants, and as one discussant noted, staff still spend a lot of time combing through data and responses to find patterns and nuances within the details. There is still a cost to these projects. In one instance, the bulk of the project budget went into the communication campaign and the work with existing physical networks to get people to participate. Compared to traditional ways of doing things (face-to-face, for example), the cost of outreach is not so expensive, but integrating SMS and other technologies does not automatically mean that money will be saved. The cost of SMS itself is also large in these kinds of projects, because in order to ensure participation, representation and inclusion, SMS usually needs to be free for participants. Even at bulk rates, a program at massive scale is quite expensive. When assuming that governments or local organizations will take over these projects at some point, this is a real consideration.
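
A back-of-envelope sketch of why "free for participants" SMS adds up at scale; every unit cost and figure here is an assumption, not a quoted rate:

```python
# All assumptions, for illustration only
participants = 50_000
messages_per_dialogue = 6   # questions plus replies in a back-and-forth
cost_per_sms = 0.02         # USD at an assumed bulk rate
rounds_per_year = 4

annual_cost = participants * messages_per_dialogue * cost_per_sms * rounds_per_year
print(f"annual SMS cost: ${annual_cost:,.0f}")  # -> $24,000 per year
```

A government or local organization expected to take over such a project inherits this recurring line item, which is exactly the sustainability concern raised above.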

10) Solutions at huge scale are not feasible for most organizations 

Some participants commented that the UN, the Red Cross and similarly sized organizations are the only ones who can work at the level of scale discussed at the Salon. Not many agencies have the weight to influence governments or mobile service providers, and these negotiations are difficult even for large organizations. It's important to look at solutions that react and respond to what development organizations and local NGOs can actually do. "And what about localized tools that can be used at district level or village level? For example, localized tools for participatory budgeting?" asked a participant. "There are ways to link high tech and SMS with low tech, radio outreach, working with journalists, working with other tools," commented others. "We need to talk more about these ways of reaching everyone. We need to think more about the role of intermediaries in building capacity for beneficiaries and development partners to do this better."

11) New technology is not M&E magic

Even if you include new technology, successful initiatives require a team of people and need to be managed. There is no magic in doing translations or understanding the data – people are needed to put all this together, to understand it, to make it work. In addition, the tools covered at the Salon only collect one piece of the necessary information. "We have to be careful how we say things," commented a discussant. "We call it M&E, but it's really 'M'. We get confused with ourselves sometimes. What we are talking about today is monitoring results. Evaluation is how to take all that information and make an informed decision. It involves specialists and more information on top of this…" Another participant emphasized that SMS feedback can get at the symptoms but doesn't seem to get at the root causes. Data needs to be triangulated, efforts need to be made to address root causes, and end users need to be involved.

12) Donors need to support adaptive design

Participants emphasized that those developing these programs, tools and systems need to be given space to try and to iterate, to use a process of adaptive design. Donors shouldn’t lock implementers into unsuitable design processes. A focused ‘ICT and Evaluation Fail Faire’ was suggested as a space for improving sharing and learning around ICTs and M&E. There is also learning to be shared from people involved in ICT projects that have scaled up. “We need to know what evidence is needed to scale up. There is excitement and investment, but not enough evidence,” it was concluded.

Our next Salon

Our next Salon in the series will take place on August 30th. It will focus on the role of intermediaries in building capacity for communities and development partners to use new technologies for monitoring and evaluation. We’ll be looking to discover good practices for advancing the use of ICTs in M&E in sustainable ways. Sign up for the Technology Salon mailing list here. [Update: A summary of the August 30 Salon is here.]

Salons are run under the Chatham House Rule, so no attribution has been made.

Read Full Post »
