Archive for the ‘monitoring and evaluation’ Category

According to the latest GSMA statistics, nearly 50% of people in the developing world own a mobile phone and almost 70% have access to one. With mobile access increasing daily, opportunities to use mobiles in development initiatives continue to grow. The area of Mobiles for Development (M4D) has attracted investment from all sides, including mobile operators, entrepreneurs, investors and international development agencies – all working to generate social impact and improve wellbeing at the base of the pyramid. However, efforts to scale M4D initiatives and make them sustainable have largely failed.

Our July 25th NYC Technology Salon examined the topic of Scaling M4D. Lead discussants Corina Gardner from the GSMA’s Mobile Development Intelligence (MDI) unit and Sean McDonald from FrontlineSMS joined us to kick off the conversation, which was hosted at the Rockefeller Foundation.

The Salon started off with key points from the MDI report “Scaling Mobile for Development: A developing world opportunity,” which highlights sustainable and scaled impact as the central challenge (and opportunity) in M4D over the next five years. Because the GSMA is commercially focused (it is made up of telecommunication industry members), business models that can achieve both revenue goals and added value for customers are a core concern. The GSMA is interested in finding M4D business models that convince industry to re-invest and replicate; however, few such examples exist.

Business models at the base of the pyramid (BOP) tend to be different from what the industry is used to. If scale is defined as the number of people reached with a service, and the population being reached has little money, then the only clear business model is a huge customer base. Given that international development agencies also want to achieve scale with development programs, there is good potential for overlap in M4D. To achieve real impact, it is critical to think through what BOP users want and need, and what offers genuine value for their limited resources.

Innovative vehicles are needed for investing in M4D. Currently, M4D financing tends to take two distinct paths: international development funding and venture capital (VC) funding. Hiccups occur because the two operate very differently and do not always work well together. International development funds and processes do not move as quickly as technology-based funds; there is low tolerance for uncertainty and a desire for initial proof of potential impact, adoption and uptake. On the VC side, there is a desire for a light overhead structure modeled on Silicon Valley; however, in many African countries there is little existing infrastructure, meaning a heavier structure and a slower process. In addition, the exit strategy may not be clear. In a worst-case scenario, both types of financing bodies are interested in investing, yet each walks away upon seeing the other at the table.

Though very few examples of M4D at scale exist, the Salon surfaced several elements that need to be considered:

User-centric design. It is critical to understand the community and the end user’s needs, demands, and payment capacity. Both the private sector and international development agencies have existing approaches to developing M4D initiatives that focus on understanding local context and on consultation and engagement with users, but the two sectors use different language to describe these approaches and often talk past each other without connecting on their commonalities. According to one discussant, the best and most user-friendly design is the one with the lowest barrier to access, the simplest technology, and the cleanest interface, with configurability so that people can build in more complexity if needed. These designs also tend to be the most replicable, an important element of scale. Iterative design and getting prototypes in front of users are needed to gather feedback, and this can be a challenge in M4D programs run within typical international development cycles of planning and funding.

User data. Users at the base of the pyramid are both financially poor and “data poor,” and companies cannot create products for users they know nothing about. Mobile can help gather data on user behaviors. These data can be used to inform business models, create products and services of value to BOP users, and generate revenue streams. One key question is how the data can be better used to benefit the BOP more broadly.

Understanding what ‘scale’ means for different parties. For mobile operators, scale is important because it is linked to numbers, volume and revenue. However, this is not the element that matters most for those working in international development, where impact may be a more important measure of success. Uptake of an M4D service may be due to advertising rather than to any measurable impact on the life of a user. The difference needs to be understood, analyzed and documented before success, scale, or impact is claimed. One measure of success is improved and sustained functioning of broader systems — and mobile may be only one small piece of a well-functioning development program, information ecosystem, or service delivery effort. As one discussant noted, “I don’t care if someone uses mobile banking or branch banking, so long as they are banking.” The mobile device may not be the central piece; it may be an additional access point for people who were formerly left out of these systems. In addition, “reaching” people is different from “influencing” people, and the latter will likely have more impact. Trust is critical in efforts to influence, and that often takes more than a mobile connection.

Infrastructure. The case for improved networks, coverage, and other infrastructure (electricity, for example) needs to be made to operators and government. The urban-rural infrastructure divide is a global issue, not just one in so-called ‘developing economies.’ For example, using 4G and a credit card, someone in the DRC can order a product on Amazon; however, Amazon will not be able to deliver it. Similarly, someone can report poor government services via a mobile phone, but until infrastructure and governance improve, there may be no response.

Payment. Operators are wrong to give away free SMS to NGOs, said one discussant. Having to pay a small amount (whether as an NGO or an end user) means that much more care is taken with what is communicated: “If it costs 5 cents to send a message, you will not spam people.” Payment is also critical for building in sustainability, and it is where the best ROI tends to be found in technology-enabled programming. More thought and research are needed on payment and on sustainable, scalable models.

Due diligence. A challenge in the M4D space is the high incidence of people seeing a problem, thinking no one has addressed it, and jumping in to build their own solution. This wastes money and time and creates churn. It is important to do research and to layer onto and build from other people’s ideas and existing solutions. One problem with the idea of due diligence, according to a participant, is that it means different things to different people. In technology it means “you have a problem; what is the cheapest and most robust solution?” In international development, by contrast, context discovery takes a very long time and requires multidisciplinary knowledge and awareness that go far beyond technology. There is also a need to consider whether technology (as opposed to non-digital efforts) is the most viable solution for the information and communication situation at hand. ‘Horizontal due diligence’ (looking at partnerships) and due diligence with regard to maximizing existing systems are also needed.

Monitoring and Evaluation (M&E). M&E is currently sub-par on most M4D initiatives, said one participant. Organizations are often doing their own M&E rather than having a third party conduct it externally. There is also a lack of comparative data on M4D programs, and M&E is often attempted at the end of a project rather than built in from the start. A greater presence of academia is needed in M4D work, it was noted, and we also need more qualitative data, as the current emphasis is on the quantitative data that are collected more easily via mobiles. One benefit of M4D programs is the ability to digitize and intelligently store data from the very start; this is the way to show scale and impact, said another participant. However, data need to be well used and refined, and available to the right person or people at the right time. Greater respect for, and understanding of, privacy and ethical issues — along with helping organizations understand and steward their users’ data — are also critically important and need more attention.

Salons are held under the Chatham House Rule, thus no attribution has been made. Sign up here to join the Technology Salon mailing list and receive invitations for future events in NYC, DC, San Francisco, London and Nairobi!


This is a cross-post from Tessie San Martin, CEO of Plan International USA. Tessie’s original post is published on the Plan International USA blog. For more on the status of the International Aid Transparency Initiative (IATI) in the US and information on which donors sit where on the Transparency Index, visit Publish What You Fund.

Over 40 governments, along with UN organizations and the World Bank, have committed to a common standard and time schedule for publishing aid information under the International Aid Transparency Initiative (IATI).  There are high expectations for this initiative. The ultimate objective is to increase the effectiveness of donor assistance, making aid work for those whom we are trying to help and contributing to accelerated development outcomes on the ground. IATI is good news for increased accountability, can help improve coordination, and provides a space for engaging donors, communities, governments and the general public in a broader development dialogue.
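For readers unfamiliar with the standard’s shape: IATI data are published as XML “activity” files that any organization can generate and host. Below is a minimal sketch of building one activity record with Python’s standard library. The element names follow the (current, 2.x) IATI activity schema, but the organization reference, identifier, and values are invented for illustration.

```python
# Minimal sketch of an IATI activity record, built with Python's
# standard library. Element names follow the IATI activity schema;
# the organization ref, identifier and values are illustrative only.
import xml.etree.ElementTree as ET

activities = ET.Element("iati-activities", version="2.03")
activity = ET.SubElement(activities, "iati-activity")

# Globally unique ID: the reporting org's ref plus an internal project ID
ET.SubElement(activity, "iati-identifier").text = "XM-EXAMPLE-PROJ-001"

reporting = ET.SubElement(activity, "reporting-org",
                          ref="XM-EXAMPLE", type="21")  # 21 = international NGO
ET.SubElement(reporting, "narrative").text = "Example INGO"

title = ET.SubElement(activity, "title")
ET.SubElement(title, "narrative").text = "Community health project"

print(ET.tostring(activities, encoding="unicode"))
```

Publishing then amounts, in essence, to hosting files like this at a stable URL and registering them with the IATI Registry so others can find them.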

Secretary of State Clinton signed on behalf of the US Government in November 2011. While US engagement has been very welcome, US Government performance in actually executing IATI has left much to be desired. Publish What You Fund, an organization helping to ensure governments are held to their initial aid transparency commitments, ranked only one of six US agencies (the MCC) in the ‘fair’ category in terms of execution. Recently, organizations like Oxfam and ONE have rightly questioned the US Government’s commitment and progress, and exhorted the Obama administration to make full compliance with the IATI standard a priority.

But with all the attention focused on how the US Government is performing, what are INGOs doing about IATI? After all, governments can only open access to the data they have. Official development assistance is an ever-smaller proportion of total aid flows, so having INGOs — particularly, and at the very least, the largest global INGOs — also committed to this process is vital to the success of the Initiative.

So what are INGOs doing about IATI? The answer is: not much.

Very few INGOs have committed to publishing their information to the IATI standard.  INGOs that have complied are doing so primarily because a donor is requiring it.  For example, DfID, the UK foreign aid agency, has such a requirement and, as a result, the UK has the largest number of INGOs in compliance.  The US Government has not imposed this requirement on US-based INGOs and it is not likely to do so in the future.  It is therefore not surprising that US-based INGOs have not shown much interest in IATI.

This is a lost opportunity for everyone.  Accountability and transparency are as relevant to the private and the non-profit side of development assistance as they are to the public side.

At Plan International, an INGO with offices in almost 70 countries, it is not surprising that the part of our organization making the fastest strides in this area is our office in the United Kingdom: as an important recipient of DfID money, that office was required to comply. In the US, though Plan International USA is not a major recipient of USG funding, we believe that investing in compliance with the IATI reporting format and timelines is good development practice; we are thus committed to publishing to IATI in the next year. How can we effectively preach transparency and increased accountability to our recipient communities and to the governments with which we work, yet not commit to something as eminently sensible as uniform formats, comparable data sets and systematic reporting frequencies?

We are not Pollyannaish about the task. Like all INGOs pondering whether and how to comply with IATI, we have many concerns, including the costs of complying and what those costs will do to our overhead (and therefore to something like our Charity Navigator rating). We have established an internal project code so we can better capture, track and understand the costs involved in this initiative. And we are evaluating where to draw the line in terms of the size of the projects on which we should report, balancing costs against the desire to maximize disclosure. (It is also worth remembering that rating agencies themselves are placing increasing emphasis on transparent reporting, so rating concerns may ultimately support a move toward greater IATI compliance.)

As we have moved forward, we have had many issues to address, including privacy concerns, since a fair bit of Plan’s internal documentation was not written with the idea that it would one day be shared with the public.  Publishing some information may pose security risks for minority or political groups being supported.  These situations have been contemplated by IATI already, however, and there are valid exemptions for sensitive data.  We have also learned that there are many resources to help INGOs navigate the IATI compliance waters.  These resources are not well known to US INGOs, and need to be better publicized. Plan in the US, of course, is also benefiting from the research and hard work our UK office has done to comply with DfID’s mandate, allowing us to start off on a strong foundation of organizational experience.

I am convinced that IATI is not just good development practice but also makes good business sense. At the same time, it is worth remembering that IATI is not the entire solution.  IATI is designed to improve upward accountability to donors and taxpayers.  It is not designed explicitly to improve accountability to the children and communities with which we are partnering and whom we serve. And, as the ultimate goal is improved aid effectiveness, data must be accompanied by better information about goals, methodologies and approaches.  We also need to get better at sharing not just successes but failures within our federation and across all development organizations.

Despite all the shortcomings, IATI is a good start.  And as we push the US Government to do better, INGOs need to be pushing themselves to do better as well.


At Catholic Relief Services’ annual ICT4D meeting in March 2013, I worked with Jill Hannon from the Rockefeller Foundation’s Evaluation Office to organize three sessions on the use of ICT for monitoring and evaluation (ICTME). The sessions covered the benefits (known and perceived) of using ICTs for M&E, the challenges and barriers organizations face when doing so, and some lessons and advice on how to integrate ICTs into the M&E process.

Our lead discussants in the three sessions included: Stella Luk (Dimagi), Guy Sharrack (CRS), Mike Matarasso (CRS), David McAfee (HNI/Datawinners), Mark Boots (Votomobile), and Teressa Trusty (USAID’s IDEA/Mobile Solutions). In addition, we drew from the experiences and expertise of some 60 people who attended our two round table sessions.

Benefits of integrating ICTs into the M&E process

Some of the potential benefits of integrating ICTs mentioned by the various discussants and participants in the sessions included:

  • More rigorous, higher quality data collection and more complete data
  • Reduction in required resources (time, human, money) to collect, aggregate and analyze data
  • Reduced complexity where data systems are simplified, and thus increased productivity and efficiency
  • Combined information sources and types, and integration of free-form qualitative data with quantitative data
  • Broader general feedback from a wider public via ICT tools like SMS; inclusion of new voices in the feedback process; elimination of the middleman to empower communities
  • Better cross-sections of information and information comparisons; better coordination and cross-comparison if standard, open formats are used
  • Trend-spotting with visualization tools
  • Greater data transparency and data visibility, easier data auditing
  • Real-time or near real-time feedback “up the chain” that enables quicker decision-making, adaptive management, improved allocation of limited resources based on real-time data, quicker communication of decisions/changes back to field-level staff, faster response to donors and better learning
  • Real-time feedback “down the ladder” that allows for direct citizen/beneficiary feedback, and complementing of formal M&E with other social monitoring approaches
  • Scale, greater data security and archiving, and less environmental impact
  • A better user experience for staff, as well as skill enhancement and improved job marketability and competitiveness for staff who use the system

Barriers and challenges of integrating ICTs into M&E processes

A number of challenges and barriers were also identified, including:

  • A lack of organizational capacity to decide when to use ICTs in M&E, for what, and why, and to choose the right ICT (if any) for the situation. Organizations may find it difficult to get beyond collecting data to actually using it for decision-making and coordination. There is often low staff capacity, low uptake of ICT tools, and resistance to change.
  • A tendency to focus on surveys, with less attention to other types of M&E input, such as qualitative input. Scaling analysis of large-scale qualitative feedback is also a challenge: “How do you scale qualitative feedback to 10,000 people or more? People can give their feedback in a number of languages by voice. How do you mine that data?”
  • The temptation to offload excessive data collection to frontline staff without carefully selecting what data is actually going to be used and useful for them or for other decision-makers.
  • M&E is often tacked on at the end of a proposal design. The same is true for ICT. Both ICT and M&E need to be considered and “baked in” to a process from the very beginning.
  • ICT-based M&E systems have dropped the ball on sharing data back. “Clinics in Ghana collect a lot of information that gets aggregated and moved up the chain. What doesn’t happen is sharing that information back with the clinic staff so that they can see what is happening in their own clinic and why. We need to do a better job of giving information back to people and closing the loop.” This step is also important for accountability back to communities. On the whole, we need to be less extractive.
  • Available tools are not always exactly right, and no tool seems to provide everything an organization needs, making it difficult to choose the right tool. There are too many solutions, many of which are duplicative, and often the feature sets and the usability of these tools are both poor. There are issues with sustainability and ongoing maintenance and development of M&E platforms.
  • Common definitions for data types and standards for data formatting are needed. The lack of interoperability among ICT solutions also causes challenges. As a field, we don’t do enough linking of systems together to see a bigger picture of which programs are doing what, where and who they are impacting and how.
  • Security and privacy are not adequately addressed. Many organizations or technology providers are unaware of the ethical implications of collecting data via new tools and channels. Many organizations are unclear about the ethical standards for research versus information that is offered up by different constituents or “beneficiaries” (e.g., information provided by people participating in programs that use SMS or collect information through SMS-based surveys) versus monitoring and evaluation information. It is also unclear what the rules are for information collected by private companies, who this information can be shared with, and what privacy laws mean for ICT-enabled M&E and other types of data collection. If there are too many barriers to collecting information, however, the amount of information collected will be reduced; a balance needs to be found. The information that telecommunications companies hold is something to think about when considering privacy and consent issues, especially in situations of higher vulnerability and risk. (UNOCHA has recently released a report that may be useful, and one basic technical mitigation — pseudonymizing direct identifiers before data are shared — is sketched just after this list.)
  • Not enough is understood about motivation and incentives for staff or community members to participate or share data. “Where does my information go? Do I see the results? Why should I participate? Is anyone responding to my input?” In addition, the common issues of cost, access, capacity, language, literacy and cultural barriers are very much present in attempts to collect information directly from community members. Another question is that of inclusion: does ICT-enabled data collection or surveying leave certain groups out? (See this study on intrinsic vs extrinsic motivation for feedback.)
  • Donors often push or dictate the use of ICT when it is perhaps not the most useful option for the situation. In addition, there is normally not enough time during the proposal process for organizations to work on buy-in and good design of an ICT-enabled M&E system. There is often demand from the top for excessive data collection without an understanding of the effort required to collect it, or of the time and resource trade-offs when excessive data collection leads to less time spent on program implementation. “People making decisions in the capital want to add all these new questions and information and that can be a challenge… What data are valuable to collect? Who will respond to them? Who will use them as the project goes forward?”
  • There seems to be a focus on top-down, externally created solutions rather than building on local systems and strengths or supporting local organizations or small businesses to strengthen their ICTME capacities. “Can strengthening local capacity be an objective in its own right? Are donors encouraging agencies to develop vertical ICTME solutions without strengthening local systems and partners?”
  • A results-based, data-driven focus can bias efforts toward the countable and leave out complex development processes whose impacts are more difficult to count or measure.
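To make the privacy point above more concrete, here is a minimal sketch of pseudonymizing direct identifiers (phone numbers, in this case) before an M&E dataset is shared. It is an illustration only, not a full anonymization scheme: the secret key is assumed to be managed separately, and quasi-identifiers (village plus age, for example) can still re-identify people if not handled.

```python
# Minimal sketch: replace phone numbers with keyed pseudonyms before
# sharing survey data. This is not full anonymization — quasi-identifiers
# can still re-identify people — and the key must be stored separately
# from the shared dataset.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-from-a-secrets-manager"  # illustrative

def pseudonymize(phone_number: str) -> str:
    """Return a stable, non-reversible token for a phone number."""
    digest = hmac.new(SECRET_KEY, phone_number.encode(), hashlib.sha256)
    return digest.hexdigest()[:12]

records = [{"phone": "+233201234567", "village": "A", "response": "yes"}]
shared = [{**r, "phone": pseudonymize(r["phone"])} for r in records]
print(shared)  # the same phone number always maps to the same token
```

Because the pseudonym is stable, analysts can still link a respondent’s answers across survey rounds without ever seeing the underlying phone number.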

Lessons and good practice for integrating ICTs into M&E processes

ICT is not a silver bullet – it presents its own set of challenges. But a number of good practices surfaced:

  • The use of ICTs for M&E is not just a technology issue, it’s a people and processes issue too, and it is important to manage the change carefully. It’s also important to keep an open mind that ICT4D to support M&E might not always be the best use of scarce resources – there may be more pressing priorities for a project. Getting influential people on your side to support the cause and help leverage funding and support is critical. It’s also important to communicate goals and objectives clearly, and provide incentives to make sure ICTs are successfully adopted. The trick is keeping up with technology advances to improve the system, but also keeping your eye on the ball.
  • When designing an ICTME effort, clarity of purpose and a holistic picture of the project M&E system are needed in order to review options for where ICT4D can best fit. Don’t start with the technology. Start with the M&E purpose and goals and focus on the business need, not the gadgets. Have a detailed understanding of M&E data requirements and data flows as a first step. Follow those with iterative discussions with ICT staff to specify the ICT4D solution requirements.
  • Select an important but modest project to start with and pilot in one location – get it right and work out the glitches before expanding to a second tier of pilots or expanding widely. Have a fully functional model to share for broad buy-in, and collect some hard data during the pilot to build the case for adoption. The first ICT initiative will be the most important: if it is successful, use of ICTs will likely spread throughout an organization; if it fails, it can significantly set back the adoption of ICTs in general. For this reason, it’s important to use your best people for the first effort. Teamwork and/or new skill sets may be required to improve ICT-enabled M&E. The “ICT4D 2.0 Manifesto” talks about a tribrid set of skills needed for ICT-enabled programs.
  • Don’t underestimate the need for staff training and ongoing technical assistance to ensure a positive user experience, particularly when starting out. Agencies need to find the right balance between being able to provide support for a limited number of ICT solutions versus the need to support ongoing local innovation.  It’s also important to ask for help when needed.  The most successful M&E projects are led by competent managers who seek out resources both inside and outside their organizations.
  • Good ICT-enabled M&E comes from a partnership between program, M&E and ICT staff, with technical support internal and external to the organization. Having a solid training curriculum and a good help desk are important. In addition, in-built capacity to design the original architecture and to maintain and adjust the system is a good idea. A lead business owner and manager for the system need to be in place, as well as global- and local-level pioneers and strong leadership (with budget!) to do testing and piloting. At the local level, it is important to have an energetic and savvy local M&E pioneer who has a high level of patience and understands technology.
  • At the community level, a key piece is understanding who you need to hear from for effective M&E and ensuring that ICT tools are accessible to all. It’s also critical to understand who you are ignoring or not reaching with any tool or process. Are women and children left out? What about income level? Those who are not literate?
  • Organizations should also take care that they are not replacing or obliterating existing human responsibilities for evaluation. For example, at the community level in Ghana, Assembly Members currently hold the responsibility for representing citizen concerns. An ICT-enabled feedback loop might undermine this responsibility if it seeks direct-from-citizen evaluation input. The issues of trust and the human-to-human link also need consideration; ICT cannot and should not be a replacement for everything. New ICT tools can increase the number of people and factors evaluated, not just increase the efficiency of existing evaluations.
  • Along the same lines, it’s important not to duplicate existing information systems, create parallel systems or fragment the government’s own systems. Organizations should be strengthening local government systems and working with government to use the information to inform policy and help with decision-making and implementation of programs.
  • Implementers need to think about the direction of information flow: is it valuable to share results ‘upward’ and ‘downward’? It is possible to integrate local decision-making into a system. Systems can be created that allow for immediate local-level decision-making based on survey input: key survey questions can be linked to indicators that allow for immediate discussion and solutions to improve service provision (a small sketch of this pattern follows this list).
  • Also, the potential political and social implications of greater openness in information flows need to be considered. Will local, regional and national governments embrace the openness and transparency that ICTs offer? Are donors and NGOs potentially putting people at risk?
  • For best results, pick a feasible and limited number of quality indicators and think through how frontline workers will be motivated to collect the data. Excessive data collection will interfere with or impede service delivery. Make sure managers are capable of handling, analyzing and reacting to the data that come in, or there is no point in collecting them. It’s important not only to think about what data you want, but also how the data will be used. Real-time data collected need to be actionable. Be sure that those submitting data understand what data they have submitted and can verify its accuracy. Mobile data collection needs to be integrated into real processes and feedback loops. People will only submit information or reports if they see that someone cares about those reports and does something about them.
  • Collecting data through mobile technology may change the behavior being monitored or tracked. One participant commented that when his organization implemented an ICT-based system to track staff performance, people started doing unnecessary activities so that they could tick off the system boxes rather than doing what they knew should be done for better program impact.
  • At the practical level, tips include having robust options for connectivity and power, testing the technology in the field in a real situation, securing reduced costs from vendors through bulk purchasing and master agreements, and using standard vendor tools instead of custom building. It’s good to keep the system as simple, efficient and effective as possible and to avoid redundancy or the addition of features that don’t truly offer more functionality.
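As promised above, here is a minimal sketch of linking survey answers to indicator thresholds so that local staff see an immediate flag for follow-up rather than waiting for central analysis. The indicator name and threshold are invented for illustration.

```python
# Minimal sketch: link survey answers to indicator thresholds so field
# staff get an immediate local flag for follow-up. The indicator name
# and threshold are illustrative only.

# Flag a facility if fewer than 80% of respondents report that the
# water point was working this week.
THRESHOLDS = {"water_point_working": 0.80}

def flag_indicators(responses):
    """Return {indicator: needs_follow_up} for one facility's survey batch."""
    flags = {}
    for indicator, minimum in THRESHOLDS.items():
        answers = [r[indicator] for r in responses if indicator in r]
        if not answers:
            continue  # no data collected for this indicator
        share_yes = sum(answers) / len(answers)
        flags[indicator] = share_yes < minimum
    return flags

batch = [{"water_point_working": 1},
         {"water_point_working": 0},
         {"water_point_working": 0}]
print(flag_indicators(batch))  # {'water_point_working': True} -> follow up
```

The point of the pattern is that the decision rule lives with the data collection, so a flag can be discussed at the facility the same day the survey is done.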

Thanks to all our participants and lead discussants at the sessions!

Useful information and guides on ICTME:

Mobile-based technology for monitoring and evaluation: A reference guide for project managers, M&E specialists, researchers, donors

3 Reports on mobile data collection

Other posts on ICTs for M&E:

12 tips on using ICTs for social monitoring and accountability

11 points on strengthening local capacity to use new ICTs for M&E

10 tips on using new ICTs for qualitative M&E

Using participatory video for M&E

ICTs and M&E at the South Asia Evaluators’ Conclave


At the Community of Evaluators’ Evaluation Conclave last week, Jill Hannon from Rockefeller Foundation’s Evaluation Office and I organized a session on ICTs for Monitoring and Evaluation (M&E) as part of our efforts to learn what different organizations are doing in this area and better understand some of the challenges. We’ll do a couple of similar sessions at the Catholic Relief Services ICT4D Conference in Accra next week, and then we’ll consolidate what we’ve been learning.

Key points raised at this session covered experiences with ICTs in M&E and with ICT4D more generally, including:

ICTs have their advantages, including ease of data collection (especially as compared to carrying around paper forms); ability to collect and convey information from a large and diversely spread population through solutions like SMS; real-time or quick processing of information and ease of feedback; improved decision-making; and administration of large programs and funding flows from the central to the local level.

Capacity is lacking in the use of ICTs for M&E. In the past, the benefits of ICTs had to be sold. Now, the benefits seem to be clear, but there is not enough rigor in the process of selecting and using ICTs. Many organizations would like to use ICT but do not know how or whom to approach to learn. A key struggle is tailoring ICTs to suit M&E needs and goals and ensuring that the tools selected are the right ones for the job and the user. Organizations have a hard time deciding whether it is appropriate to use ICTs, and once they decide, they have trouble determining which solutions are right for their particular goals. People commonly start with the technology, rather than considering what problem they want the technology to help resolve. Often the person developing the M&E framework does not understand ICT, and the person developing the ICT does not understand M&E. There is a need to further develop the capacities of M&E professionals who are using ICT systems. Many ICT solutions exist, but organizations don’t know what questions to ask about them, and there is not enough information available in an easily understandable format to help them make decisions.

Mindsets can derail ICT-related efforts. Threats and fears around transparency can create resistance among employees to adopting new ICT tools for M&E. In some cases, lack of political will makes it difficult to bring about institutional change. Earlier experiences of failure when using ICTs (e.g., stolen or broken PCs or PDAs) can also ruin the appetite for trying ICTs again. One complaint was that some government employees nearing retirement age will participate in training as a perk or to collect per diem, yet be uninterested in actually learning any new ICT skills. This can take away opportunities from younger staff who may have a real interest in learning and implementing new approaches.

Privacy needs further study and care. It is not clear whether those who provide information through the Internet, SMS, etc., understand how it will be used, and organizations often do not do a good job of explaining. If people do not know or trust how the privacy of their responses will be protected, both their willingness to respond and the accuracy of their responses can suffer. More effort needs to be made to guarantee privacy and build trust. Technological measures such as data encryption can be implemented, but human behavior is likely the bigger challenge. Paper surveys with sensitive information often get piled up in a room where anyone can see them; in the same way, people do not take care to keep data collected via ICTs safe — for example, they often share passwords. Organizations and agencies need to take privacy more seriously.
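On the technical side of that point, encrypting collected data at rest is straightforward with standard tools; as the discussion suggested, key and password discipline is the harder part. Below is a minimal sketch using the Python cryptography package’s Fernet symmetric encryption; the file names are illustrative.

```python
# Minimal sketch: encrypt a collected survey file at rest with Fernet
# (symmetric encryption) from the `cryptography` package. The human
# side stays hard: if the key is shared as loosely as passwords often
# are, encryption buys little.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store in a secrets manager, not in code
fernet = Fernet(key)

with open("survey_responses.csv", "rb") as f:  # illustrative file name
    token = fernet.encrypt(f.read())

with open("survey_responses.csv.enc", "wb") as f:
    f.write(token)

# Later, with access to the key:
original = fernet.decrypt(token)
```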

Institutional Review Boards (IRBs) are missing in smaller organizations. Normally an IRB helps a researcher ensure that a survey is not invasive or potentially traumatizing, that data encryption is in place, and that data are sanitized. But these systems are usually established only in large organizations, not in small, local ones — leaving room for ethics breaches.

Information flows need quite a lot of thought, as unintended consequences may derail a project. One participant told of a community health initiative that helped women track their menstrual cycles to determine when they were pregnant. The women were sent information and reminders through SMS on prenatal care. The program ran into problems because the designers did not take into account that some women would miscarry, and women who had miscarried kept receiving reminders, which was traumatic for them. Another participant gave an example of a program that publicized the mobile number of a staff member at a local NGO supporting women victims of violence, so that women who faced violence could call to report it. The owner of the mobile phone was overwhelmed with the number of calls, often at night, and would switch the mobile off, meaning no response was available to the women trying to report violence. The organization therefore moved to IVR (interactive voice response), which resolved the overload problem; however, with IVR, there was still no response for the women who reported violence.

Research needs to be done before embarking on the use of ICTs. A participant working with women in rural areas mentioned that her organization had planned to use mobile games for an education and awareness campaign. They first conducted research on gender roles and parity and found that women actually had no command over phones: husbands or sons owned them, and women had access to them only when the men were around. The organization therefore did not proceed with the mobile games aspect of the project.

Literacy is an issue that can be overcome. Literacy is a real concern; however, there are many creative solutions to literacy challenges, such as the use of symbols. A programme in an urban slum used symbols on hand-held devices for a poverty and infrastructure mapping exercise. In Nepal, an organization tried using SMS weather reports, but most people did not have mobiles and could not read SMS, so the organization instead sent an SMS to a couple of farmers in the community who could read, and who would then draw weather symbols on a large billboard. IVR is another commonly used tool in South Asia.

Qualitative data collection using ICTs should not be forgotten. There is often a focus on surveys, and people forget about the power of collecting qualitative data through video, audio, photos and drawings on mobiles, tablets and other such devices. A number of tools can be used for participatory monitoring and evaluation processes. For example, baseline data can be collected through video; tagging can be used to help sort content; video and audio files can be linked with text; and change and decision-making can be captured through video vignettes. People can take their own photos to indicate importance or value. Some participatory rural appraisal techniques can be done on a tablet with a big screen. Climate change and other visual data can be captured with tablets or phones or through digital maps. Photographs and GPS are powerful tools for validation and authentication; however, care needs to be taken when using maps with those who may not easily orient themselves to an aerial view. One caution is that some of these kinds of initiatives are “boutique” designs that can be quite expensive, making scale difficult. As Android devices and tablets become cheaper and more available, these kinds of solutions may become easier to implement.

Ubiquity and uptake are not the same thing. Even if mobile phones are “everywhere,” it does not mean people will use them to do what organizations or evaluators want them to do. This is true for citizen feedback programs, said one participant, especially when there is a lack of response to reports. “It’s not just an issue of literacy or illiteracy, it’s about culture. It’s about not complaining, about not holding authorities accountable due to community pressures. Some people may not feed back because they are aware of the consequences of complaining, and this goes beyond simple access and use of technology.” In addition, returning collected data to the community in a format they can understand and use for their own purposes is important. A participant observed that when evaluators go to the community to collect data for baselines, outcomes, impact, etc., it is exploitative from a moral standpoint if they do not report the findings back to the community. Communities are not sure what they get back from the exercise, and this undermines the credibility of the feedback mechanism. Unless people see value in participation, they will not be willing to give their information or feedback. At the same time, responding to citizen or beneficiary feedback can also skew it: “When people imagine a response will get them something, their feedback will be based on what they expect to get.”

There has not been enough evaluation of ICT-enabled efforts. A participant noted that despite apparent successes, there are huge challenges with the use of ICTs in development initiatives: How effective has branchless banking been? How effective is citizen feedback? How are we evaluating the effectiveness of these ICT tools? And how do these programs affect different stakeholders? Some may be excited by these projects, whereas others are threatened.

Training and learning opportunities are needed. The session ended, yet the question of where evaluators can obtain additional guidance and support for using ICTs in M&E processes lingered. CLEAR South Asia has produced a guide on mobile data collection, and we’ll be on the lookout for additional resources and training opportunities to share — for example, this series of reports on Mobile Data Collection in Africa from the World Wide Web Foundation, or this online course, Using ICT Tools for Effective Monitoring, Impact Evaluation and Research, available through the Development Cafe.

Thanks to Mitesh Thakkar from Fieldata, Sanjay Saxena from Total Synergy Consulting, Syed Ali Asjad Naqvi from the Center for Economic Research in Pakistan (CERP) and Pankaj Chhetri from Equal Access Nepal for participating as lead discussants at the session; Siddhi Mankad from Catalyst Management Services Pvt. Ltd for serving as rapporteur; and Rockefeller Foundation’s Evaluation Office for supporting this effort.

We used the Technology Salon methodology for the session, including Chatham House Rule, therefore no attribution has been made in this summary post.

Other sessions in this series of Salons on ICTs and M&E:

12 tips on using ICTs for social monitoring and accountability

11 points on strengthening local capacity to use new ICTs for M&E

10 tips on using new ICTs for qualitative M&E

In addition, here’s a post on how War Child Uganda is using participatory video for M&E


This past Monday I had the opportunity to join Engineers without Borders (EWB) in Calgary, Canada, at their Annual Policy Forum on Global Development to discuss “How can open government contribute to community and economic development?”

Morning panels covered some examples of open government initiatives from Finland, Ghana and Canada. In the afternoon we heard about some of the challenges with open data, open government and the International Aid Transparency Initiative. Table discussions followed both of the panels. The group was a mix of Canadian and African government representatives, people from organizations and groups working in different countries on open government and open data initiatives, and young people who are connected with EWB. The session was held under the Chatham House Rule to encourage frank conversation.

Drawing from such documents as the Open Government Partnership’s Open Government Declaration, Harlan Yu and David G. Robinson’s “The New Ambiguity of ‘Open Government,’” Beth Noveck’s “What’s in a Name? Open Gov and Good Gov” and Nathaniel Heller’s “A Working Definition of ‘Open Government’,” the following definition of open government was used to frame the discussions.

EWB Definition of Open Government

Below (in a very-much-longer-than-you-are-supposed-to-write-in-a-blogpost summary) are the highlights and points I found interesting and useful as related to open development, open data, open government and the International Aid Transparency Initiative (IATI).

1.  Participation thresholds need to be as low as possible if people are to participate and engage in open government or open data initiatives. You need to understand well which engagement tools are most useful or comfortable for different groups. In some places, you can engage the public with tools such as Etherpad, wiki platforms, Google Docs and other open, online collaboration spaces. In other places and with other populations, regardless of country, you may be more successful with face-to-face methods or with traditional media like television and radio — but these need to be paired with feedback methods like phone calls, surveys or going house to house, so that information does not travel only one way. Community organizing skills are key to this work, whether or not the tools are digital.

2.  Literacy remains a huge challenge hindering access to information and citizen engagement in holding government accountable in many countries. This is why face-to-face engagement is important, as well as radio and more popular or broad-based communication channels. One participant asked “how can you make open government a rural, rather than an urban only, phenomenon?” This question resonated for participants from all countries.

3.  Language is still a critical issue. Language poses a big challenge for these kinds of initiatives, from the grassroots level to the global level, within and among countries, for citizens, governments, and anyone trying to share or collect data or information. It was noted that all the countries that have published data to IATI are publishing in English, all the IATI standards are in English, and so is the entire support system for IATI. As one participant noted, this raises the question of who the information in IATI is actually designed for and serving, and who its expected users are. Open data initiatives should consider the implications — both political and practical — of the language they publish in.

4.  Open data can serve to empower the already empowered. As one speaker noted, “the idea that everyone has the potential to make use of open data is simply not true.” Access to digital infrastructure and educational resources may be missing, meaning that many people cannot access, interpret or use data for their own purposes. Governments can also manipulate data and selectively release data that serve their own interests. Some questioned government motives, citing the example of a government that released “data” saying its unemployment rate was 10% when “everyone knew this to be false, and people grumbled but we did not feel empowered to challenge that statement.” Concern was expressed over the lack of an independent body or commission in some countries to oversee open data and open government processes. Some did not trust the government bodies currently in charge of collecting and opening information, saying that, due to politics, they would never release any information that made their party or their government look bad.

5.  Privacy rights can be violated if data are opened without data protection laws and without efforts to build capacity in anonymizing data. Citizens may also not be aware of which of their rights are being violated, so this too should be addressed.

6.  Too much open data discussion takes place without a power analysis, as one participant commented, making some of the ideas around open data and open government somewhat naïve. “Those who have the greatest stake will be the most determined to push their point of view and to make sure it prevails.”

7.  Open data needs to become open data 2.0. According to one participant, open data is still mostly one-way information delivery. In some cases there isn’t even any delivery — information is opened on a portal, but no one knows it is there, what it refers to, or why it would be useful. When will open data, open government and open aid become more of a dialogue? When will data be released that answer the questions citizens have, rather than the government deciding what it will release? Participants emphasized the importance of working with community groups to strengthen their capacity to ask questions and to build the critical consciousness needed to question the data. A counterpoint was that government is not necessarily there to collect information or create data sets according to public demand; governments collect certain data to help them function.

8.  Intermediaries working on open government should be careful of real or perceived bias. Non-profits have their own agendas, and ‘open data’ and ‘open information’ is not immune to being interpreted in non-objective ways. Those working on civic engagement initiatives need to be careful that they are not biased in their support for citizen initiatives. One presenter who works on a platform that encourages citizens to be involved in petitioning new laws for contemplation in Parliament said “Our software is open source so that anyone can set up a similar process to compete with us if they feel we are biased towards one or another type of agenda.”

9.  Technology-based engagement tools change who is participating. Whether in Finland, Canada, Ghana or Malawi, it’s critical to think about reaching those who are not active already online, those who are not the typical early adopters. To reach a broader public, one speaker noted “We are going to remote places, doing events in smaller towns and cities to see how people want to influence and take part in this. Making sure the website is accessible and understandable.”

10. Technological platforms are modifying how political parties and democratic processes operate. This may or may not be a good thing. Normally, priorities arise and are discussed within political parties. Will people now bypass the party process and use ‘direct democracy’ channels if they are passionate about an issue but do not want to enter into negotiation around it? Will this weaken political processes or longer-standing democratic processes? One speaker considered this change positive: people are not happy with being able to vote only every four years; they want opportunities to participate between election cycles and a direct voice in how priorities are decided. Others questioned whether bypassing official processes can lead to less participation and more apathy overall on national issues. Some questioned whether, within fairly long-standing democracies, open data will have any real impact, considering existing levels of apathy and the lack of political participation.

11. Strong information, statistical, monitoring and evaluation systems are critical for open data and open government processes, and for ensuring more effective management of development results. This is still a challenge for some countries, which need to review their mechanisms and improve their tools and processes for data collection and dissemination. If there are no data, or no current data, there is not much point in opening them. In addition, there are capacity and technical competency challenges within institutions in some countries. One participant mentioned a lack of current government geological information about gold and oil deposits, which weakens government capacity to negotiate with the private-sector extraction industry and to ensure that partnerships and earnings contribute to national development. More evidence is also needed on the impact, use and outcomes of open data; at the moment it is quite difficult to say with any real authority what the outcomes and impact of open data and open government have been.

12. IATI (International Aid Transparency Initiative) needs more partners. Government representatives noted that they are opening their data, but they can only open the data they possess. In order for data on aid to be useful, more data is needed, especially that of NGOs who are implementing programs. Not many NGOs have published their information to the IATI standard at this point. “The really interesting thing will be when we can start mashing up and mapping out the different kinds of information,” as one speaker noted, “for example, this is the goal of the Open Aid Partnership. It will involve combining information from the donor, development indicators from the World Bank, and country information, and this will open up amazing possibilities once this is all geo-coded.” There are reporting challenges related to IATI and open government data, however, because at times countries and NGOs do not see the benefits of reporting – it feels like just one more top-down administrative burden. There are also issues with donor governments reporting their committed intentions and amounts, recipient governments reporting back, and communications with citizens on both sides (donor and recipient countries). One example that was reported to be enjoying some success was the multi-donor budget support initiative in Ghana, where development partners and government work together to establish development indicators and commitments. If the government delivers on the indicators, the development partners will then provide them with the funding. Development partners can also earmark funding to particular areas if there is government agreement.

13. We need more accountability towards ‘beneficiaries’. Currently, many of these initiatives are perceived as being focused on donors and donor publics. As one participant noted, “the interesting thing is less about government and more about getting regular people involved in these processes. When you engage the public, you’ll engage government leaders in thinking they will need to change to respond to what citizens are asking for.” Another noted that the essential issue is the link between transparency/accountability and citizens and their own governments. In addition, as one participant asked, “How can you strengthen capacity among citizens to ask the right questions about the data that’s being opened?” For example, citizens may ask about the number of schools being built, but not about the quality of education being provided. Public education was a strong focus of discussions around citizen engagement during the policy forum.

14. Should citizens be consulted on everything? This was one big question. The public at large may not understand the ramifications of its own deep misunderstandings of particular issues and may be weighing in from a viewpoint that lacks scientific evidence or fact. “It’s one thing to have an opinion about whether your child should be able to drink energy drinks before age 16; it’s another to weigh in on technical programs like the best policy for green energy,” commented one group.

15. Can citizens really have greater participation if government still controls the data? This was another big question. An example was given of an open consultative process that became unwieldy for a local government, which then shut down the consultation and reclassified the documents as ‘administrative’ and therefore no longer open. Others asked why governments pat themselves on the back for being part of the Open Government Partnership while they lack Freedom of Information Acts (FOIA) or prosecute those who open data in alternative ways, such as Bradley Manning and Aaron Swartz.

16. If citizens don’t get a response from government (or if they don’t like the response, or feel it is biased or manipulated), apathy and cynicism will increase. It is important to make sure that ‘open government’ is not just a box that gets ticked off, but rather a long-term change in the mentality of those in power, along with deeper expectations and efforts by citizens for openness and participation in conversations of national importance.

The conclusion was that open government is something of a paradox, rooted in aims that are not necessarily new. Open government strives to enable leaders to create change and to transform their own lives and those of people in their communities. It is a complex process that involves many actors and multiple conflicting goals and interests. It is also something new that we are all learning about and experimenting with, yet we are impatient to know what works and what the impact is. In the room, the feeling was one of ‘radical pragmatism,’ as one participant put it. Open government is a big idea that represents a big change; it could transform communities globally, and there is a great deal of hope and excitement around it. At the same time, we need to acknowledge the challenges associated with it in order to address them and move things forward.

I’ll do a follow-up post with the points I made during the panel, as this post is clearly way too long already. Kudos if you are still reading, and a huge thanks to the organizers and participants in the EWB policy forum.


This is a guest post from David Schaub-Jones who works with SeeSaw, a social venture that focuses on how technology can strengthen sanitation and water providers in developing countries. Follow SeeSaw on Twitter:  @ontheseesaw.

In June, two organisations focussed on using ICT (information and communications technology) in the water and sanitation sector joined forces in Cape Town. SeeSaw, a social enterprise that customises ICT to support sanitation and water providers, and iComms, a University of Cape Town research unit (Information for Community Oriented Municipal Services), co-hosted a two-day event to look at how ICT tools are changing the way that public services function in developing countries.

(Image from SeeSaw’s website.)

There are growing expectations that harnessing ICT intelligently can bring about radical improvements in the way that health, education and other sectors function, particularly in developing countries. SeeSaw and iComms wanted to look at this in more detail – and to build on the open sharing of experience to provide general principles to those planning to harness ICT for public service delivery. Their overarching goal is to help practitioners cut through much of the complexity and hype surrounding ICT usage and give them a robust set of guidelines with which to plan and negotiate partnerships and projects on the ground.

The event brought together 30+ practitioners – with water sector professionals from across Southern Africa joined by their colleagues from the health sector – a sector that has been quick to innovate, try different approaches and learn lessons. A full write-up of the event can be found here in the Water Information Network of South Africa’s October Newsletter. (See www.win-sa.org).

Key messages

1) Putting in place an effective ICT system can make a visible impact on the ground. It can pay for itself quite quickly in terms of efficiency gains and even costs saved. Yet a fair amount of thought must go into designing the system to fit the local context – just transplanting a system that has worked in one place to a new environment is generally a recipe for trouble.

2) An important spin-off of looking at how to use ICT is that the effort taken to design a responsive system forces stakeholders to reflect more closely on existing structures, processes and current information flows. This can have significant benefits even if no system is later built.

3) A recommendation is to spend due time and effort in understanding the system, asking direct stakeholders what information they currently get, what information they need and then seeing how and whether ICT systems can be used to gather data that can generate additional, better or faster information and get it to where it is needed (in a way that suits the working patterns of those individuals).

4) For impact at any significant scale it is crucial that ICT systems, whether in healthcare or water and sanitation, integrate with existing government systems. There is a great risk of fragmentation – too many organisations piloting new ICT systems put in place technologies or processes that cannot easily be absorbed into existing government systems (or worse still, undermine these).

5) A lot of initiatives, particularly in the healthcare system, have tried to harness ICT to get people to do what is good for them – and only that. For instance, cellphones used to gather field information can be restricted so that they can only do one thing and no longer function as a phone. Airtime and data bundles used for transmitting information can be isolated to only contribute to ‘the project’. The disadvantage is that this turns the device into something used only for work, something alien and otherwise ‘not useful’. Alternatives do exist, though, and can be productive. If frontline workers being asked to use phones and new ICT tools are permitted – sometimes on a limited basis – to use them for their own purposes (browsing the internet, accessing Facebook, receiving SMS), then they are more likely to engage with the project, look after the equipment, etc. A balance is surely needed, but a quid pro quo arrangement can be a sensible approach. This was characterised as “give them pizza with their broccoli”!

6) ICT tools can be incredibly powerful at improving the flow of data and, from there, the flow of information. But what if the flow of information is not the real problem? There are many issues that undermine healthcare or water and sanitation systems – and a lot of them have little to do with information. Cultural conflicts, different worldviews, individual rivalries, dysfunctional facilities – all of these can be the ‘sand in the gearbox’. Don’t assume that a new ICT system is going to solve all problems – after all, these are tools, not a panacea to what are typically complex and entrenched challenges.

Next Steps

SeeSaw and iComms are now exploring how to take forward research into how to improve information flows and how incentives shape the behaviour of different stakeholders within any ICT system designed for the water and sanitation sector. A similar event is also planned for East Africa in early 2013.

If you are interested in hearing more about these topics, subscribe to SeeSaw’s newsletter for more details.

Read Full Post »

At the October 17, 2012 Technology Salon NYC, we focused on ways that ICTs can be used for qualitative monitoring and evaluation (M&E) efforts that aim to listen better to those who are participating in development programs. Our lead discussants were: John Hecklinger, Global Giving; Ian Thorpe, UN DOCO and the World We Want 2015 Campaign; and Emily Jacobi, Digital Democracy. This Salon was the final in a series of three on using new technologies in M&E work.

Global Giving shared experiences from their story-telling project which has collected tens of thousands of short narratives from community members about when an individual or organization tried to change something in their community. The collected stories are analyzed using Sensemaker to find patterns in the data with the aim of improving NGO work. (For more on Global Giving’s process see this document.)

The United Nations’ World We Want 2015 campaign aims to spur a global conversation on the post-MDG development agenda. The campaign is conducting outreach to people and organizations to encourage them to participate in the discussion; offering a web platform (www.worldwewant2015.org) where the global conversation is taking place; and working to get offline voices into the conversation. A challenge will be synthesizing and making sense of all of the information coming in via all sorts of media channels, and being accountable now and in the future to those who participate in the process.

Digital Democracy works on digital literacy and human rights, and makes an effort to integrate qualitative monitoring and evaluation into their program work stream. They use photography, film and other media that transcend language and literacy barriers. Using these kinds of media helps participants express opinions on issues that need addressing and builds trust. Photos have helped in program development as well as in defining quantitative and qualitative indicators.

A rich conversation took place around the following aspects:

1) Perception may trump hard data

One discussant raised the question “Do opinions matter more than hard data on services?” noting that perceptions about aid and development may be more important than numbers of items delivered, money spent, and timelines met. Even if an organization is meeting all of its targets, what may matter more is what people think about the organization and its work. Does the assistance they get respond to their needs? Rather than asking “Is the school open?” or “Did you get health care?” it may be more important to ask “How do you feel about health?” Agencies may be delivering projects that are not what people want or that do not respond to their needs, cultures, and so on. It is important to encourage people to talk amongst themselves about their priorities and what they think, to encourage viewpoints from people of different backgrounds, and to see how to pull out information that can help inform programs and approaches.

2) It is a complex process

Salon participants noted that people are clearly willing to share stories and unstructured feedback. However, the process of collecting and sorting through stories is unwieldy and far from perfect. More work needs to be done to simplify story-collection processes and make them more tech-enabled. In addition, more needs to be done to determine exactly how to feed the information gleaned back in a structured and organized way that helps with decision-making. One idea was the creation of a “Yelp” for NGOs. Tagging, or asking program participants to tag photos and stories, can help make sense of the data, as in the sketch below. If videos are subtitled, this can also be of great use in beginning to make sense of the type of information held in videos. Dotsub, for example, is a video subtitling platform that uses a Wikipedia-style subtitling model, enabling crowdsourced video translations into any language.
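To make the tagging idea concrete, here is a minimal Python sketch of tag-based sense-making, assuming stories arrive as simple tagged records. The field names and sample stories are hypothetical illustrations, not Global Giving’s or Sensemaker’s actual data format.

```python
from collections import Counter

# Hypothetical tagged story records (not any real tool's schema).
stories = [
    {"text": "The clinic reopened after the community petitioned the district.",
     "tags": ["health", "advocacy"]},
    {"text": "A women's group started a savings scheme for school fees.",
     "tags": ["livelihoods", "education"]},
    {"text": "Parents raised funds to repair the school roof.",
     "tags": ["education", "advocacy"]},
]

# Count how often each tag appears to surface recurring themes.
tag_counts = Counter(tag for story in stories for tag in story["tags"])

for tag, count in tag_counts.most_common():
    print(f"{tag}: {count} stories")
```

Even this crude tally begins to show which themes recur across a large pool of narratives, which is the first step toward feeding the information back in a structured way.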

3) Stories and tags are not enough

We know that collecting and tagging stories to pull out qualitative feedback is possible. But so what? The important next step is looking at the effective use of these stories and data. Some ideas on how to better use the data include adding SMS feedback, deep dives with NGOs, and face-to-face meetings. It’s important to move from collecting the stories to thinking about what questions should be asked, how the information can help NGOs improve their performance, how this qualitative data translates into change or different practice at the local and global levels, how the information could be used by local organizers for community mobilization or action, and how all this is informing program design, frameworks and indicators.

4) Outreach is important

Building an online platform does not guarantee that anyone will visit it or participate. Local partners are an important channel for reaching out and collecting data about what people think and feel. Outreach needs to be done with many partners from all parts of a community or society in order to source different viewpoints. In addition, it is important to ask the right questions and establish trust, or people will not want to share their views. Any quality participation process, whether online or offline, needs good facilitation and encouragement; it needs to be a two-way process, a conversation.

5) Be aware of bias

Understanding where the process may be biased is important. Asking leading questions, defining the metadata in a certain way, creating processes that include only certain parts of the community or population, selecting certain partners, or asking questions designed to confirm what an organization thinks it needs to know can all produce biased answers. Language is important here for several reasons: it will affect who is included or excluded and who is talking with whom. Using development jargon will not resonate with people, and the way development agencies frame questions may lead people to particular answers.

6)  Be aware of exclusion

Related to bias is the issue of exclusion. In large-scale consultations or online situations, it’s difficult to know who is talking and participating. Yet the more log-in information solicited, the less likely people are to participate in discussions. However, by not asking, it’s hard to know who is responding, especially when anonymity is allowed. In addition, results also depend on who is willing and wants to participate. Participants agreed that there is no silver bullet for finding people to participate and ensuring they represent a diversity of opinion. One suggestion was that libraries and telecenters could play a role in engaging more remote or isolated communities in these kinds of dialogues.

7) Raising expectations

Asking people for feedback raises expectations that their input will be heard and that they will see some type of concrete result. In these feedback processes, what happens if the decisions made by NGOs or heads of state don’t reflect what people said or contributed? How can we ensure that we are actually listening to what people tell us? Oftentimes we ask for people’s perceptions and then tell them why they are wrong. Follow-up is also critical. A campaign from several years ago was mentioned in which 93,000 people signed onto a pledge; once that target was achieved, the campaign ended and there was no further engagement with the 93,000 people. Soliciting input and feedback needs to be an ongoing relationship with continual dialogue and response. The process itself needs to be transparent and accountable to those who participate in it.

8) Don’t forget safety and protection

The issue of safety and protection for those who offer their opinions and feedback or raise issues and complaints was brought up. Participants noted that safety is very context-specific, and that participatory risk assessments conducted together with community members and partners can help mitigate risk and ensure that people are informed about it. Avoiding a paternalistic stance is recommended, as human rights advocates sometimes know very well what their risk is and are willing to take it. NGOs should, however, be sure that those with whom they are working fully understand the risks and implications, especially when new media tools are involved that they may not have used before. Digital literacy is key.

9) Weave qualitative M&E into the whole process

Weaving consistent spaces for input and feedback into programs is important. As one discussant noted, “the very media tools we are training partners on are part of our monitoring and evaluation process.”  The initial consultation process itself can form part of the baseline. In addition to M&E, creating trust and a safe space to openly and honestly discuss failure and what did not go so well can help programs improve.  Qualitative information can also help provide a better understanding of the real and hard dynamics of the local context, for example the challenges faced during a complex emergency or protracted conflict. Qualitative monitoring can help people who are not on the ground have a greater appreciation for the circumstances, political framework, and the socio-economic dynamics.

10) Cheaper tools are needed

Some felt that the tools being shared (Sensemaker in particular) were too expensive and sophisticated for their needs, and too costly for smaller NGOs. Simpler tools would be useful in order to more easily digest the information and create visuals and other analyses that can be fed back to those who need to use the information to make changes. Other tools exist that might be helpful, such as Trimble’s Municipal Reporter, Open Data Kit, KoBo, iFormBuilder, EpiSurveyor/Magpi and PoiMapper. One idea is to look at some of the tools being developed and used in the crisis mapping and response space to see if cost is dropping and capacity is increasing as the field advances. (Note: several tools for parsing Twitter and other social media platforms were presented at the 2012 International Conference on Crisis Mapping, some of which could be examined and learned from.)
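As a rough illustration of how far free, simple tooling can go, the sketch below tallies one question from a CSV export using only the Python standard library. The file name and the column `water_source` are hypothetical placeholders, since each of the tools above has its own export format.

```python
import csv
from collections import Counter

# Assumes survey responses were exported to CSV (most of the tools named
# above can export CSV). 'responses.csv' and 'water_source' are
# hypothetical, not any tool's real schema.
with open("responses.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# Tally one categorical question, skipping blank answers.
answers = Counter(row["water_source"] for row in rows if row.get("water_source"))
total = sum(answers.values())

# Print a simple summary that can be fed back to decision-makers.
for answer, count in answers.most_common():
    print(f"{answer}: {count} ({100 * count / total:.0f}%)")
```

A summary like this can be turned into a basic chart in Excel or any free spreadsheet tool, with no specialized analysis platform required.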

What next?

A final question at the Salon was around how the broader evaluation community can connect with the tools and people who are testing and experimenting with these new ways of conducting monitoring and evaluation. How can we create better momentum in the community to embrace these practices and help build this field?

Although this was the final Salon of our series on monitoring and evaluation, we’ll continue to work on what was learned and ways to take these ideas forward and keep the community talking and growing.

A huge thank you to our lead discussants and participants in this series of Salons, especially to the Community Systems Foundation and the Rockefeller Foundation’s monitoring and evaluation team for joining in the coordination with us. A special thanks to Rockefeller for all of the thoughtful discussion throughout the process and for hosting the Salons.

The next Technology Salon NYC will be November 14, 2012, hosted by the Women’s Refugee Commission and the International Rescue Committee. We’ll be shifting gears a little, and our topic will be around ways that new technologies can support children and youth who migrate, are forcibly displaced or are trafficked.

If you’d like to receive notifications about future salons, sign up for the mailing list!

Previous Salons in the ICTs and M&E Series:

12 lessons learned with ICTs for monitoring and accountability

11 points on strengthening local capacity to use new ICTs for monitoring and evaluation

Read Full Post »

New technologies are changing the nature of monitoring and evaluation, as discussed in our previous Salon on the use of ICTs in M&E. However, the use of new technologies in M&E efforts can seem daunting or irrelevant to those working in low resource settings, especially if there is little experience or low existing capacity with these new tools and approaches.

What is the role of donors and other intermediaries in strengthening local capacity in communities and development partners to use new technologies to enhance monitoring and evaluation efforts?

On August 30, the Rockefeller Foundation and the Community Systems Foundation (CSF) joined up with the Technology Salon NYC to host the second in a series of three Salons on the use of ICTs in monitoring and evaluating development outcomes, and to discuss just this question. Our lead discussants were: Revati Prasad from Internews, Tom O’Connell from UNICEF and Jake Watson from the International Rescue Committee. (Thanks, Jake, for stepping in at the last minute!)

We started off with the comment that “Many of us are faced with the ‘I’ word – in other words, having to demonstrate impact on the ground. But how can we do that if we are four levels removed from where change is happening?” How can organizations and donors, or those sitting in offices in Washington DC or New York City, support grantees and local offices to feed back more quickly and more accurately? From this question, the conversation flowed in a number of directions and yielded the suggestions below.

1) Determine what works locally

Donors shouldn’t be coming in to say “here’s what works.” Instead, they should be creating local environments for innovation. Rather than pushing things down to people, we need to start thinking through the eyes of the community and incorporate that into how we think and what we do. One participant confirmed the idea with a concrete example: “We went in with ideas – wouldn’t SMS be great… but it became clear that SMS was not the right tool; it was voice. So we worked to establish a hotline. This has connected [the population] with services. It also connects with a database that came from [their] own needs, and it tracks what they want to track.” As discussed in the last Salon, however, incentive and motivation are critical. “Early on, even though indicators were set by the community, there was no direct incentive to report.” Once the call center connected reporting to access to services, people were more motivated to report.

2) Produce local, not national-level information

If you want to leverage technology for local decision-making, you need local-level information, not broad national-level information. You also need to recognize that the data will be messy. As one participant said, we need to get away from fixating on imperfect data and instead ask: is the information good enough to enable us to reach the child who wasn’t reached before? We need to stop thinking of knowledge as discrete chunks that endure for 3-4 years; we are actually processing information all the time. We can help managers think of information as something to filter and use constantly, and we can give them tools to filter information, create simpler dashboards, see bottlenecks, and combine different channels of information to make decisions.
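The “good enough to act” idea can be as simple as a local filter. Below is a minimal Python sketch that flags facilities needing follow-up; the records, field names and threshold are hypothetical, and a district manager would adjust them to the local context rather than wait for a cleaned national dataset.

```python
# Hypothetical facility records from a district's (messy) monthly reports.
facilities = [
    {"name": "Clinic A", "reported_this_month": True,  "stockout_days": 0},
    {"name": "Clinic B", "reported_this_month": False, "stockout_days": 2},
    {"name": "Clinic C", "reported_this_month": True,  "stockout_days": 9},
]

# A simple, locally adjustable bottleneck rule: no report received,
# or too many days of stockouts.
flagged = [f for f in facilities
           if not f["reported_this_month"] or f["stockout_days"] >= 5]

for f in flagged:
    print(f"Follow up with {f['name']}: reported={f['reported_this_month']}, "
          f"stockout_days={f['stockout_days']}")
```

The point is not the code itself but the mindset: information is filtered continuously for action, not warehoused until it is perfect.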

3) Remember why you are using ICTs in M&E

We should be doing M&E in order to achieve better results and leveraging technologies to achieve better impact for communities. Often, however, we end up doing it for the donor. “Donors get really excited about this multicolored thing with 50,000 graphs, but the guy on the ground doesn’t use a bit of it. We need to let go,” commented one participant. “I don’t need to know what the district manager knows. I need to know that he or she has a system in place that works for him or her. My job is to support local staff to have that system working. We need to focus on helping people do their jobs.”

4) Excel might be your ‘killer app’

Worldwide, the range of capacities is huge. Sometimes ICT sounds very sexy, but the greatest success might be teaching people how to use Excel, how to use databases to track human rights violations and domestic violence, or how to set up a front-end and data entry system in a local language.
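For instance, here is a minimal sketch of the kind of simple tracking database described here, using Python’s built-in SQLite support. The table, fields and sample record are hypothetical examples, not a recommended schema for any real caseload.

```python
import sqlite3

# Hypothetical case-tracking table; fields are illustrative only.
conn = sqlite3.connect("cases.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS cases (
        id INTEGER PRIMARY KEY,
        reported_on TEXT,        -- ISO date, entered by local staff
        category TEXT,           -- e.g. 'domestic violence'
        location TEXT,
        status TEXT DEFAULT 'open'
    )
""")
conn.execute(
    "INSERT INTO cases (reported_on, category, location) VALUES (?, ?, ?)",
    ("2012-08-30", "domestic violence", "District 4"),
)
conn.commit()

# Even this much structure supports basic follow-up questions.
for row in conn.execute("SELECT category, COUNT(*) FROM cases GROUP BY category"):
    print(row)
conn.close()
```

A local-language front-end can then be layered over a table like this; the unglamorous part, the structured record, is what makes the data usable later.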

5) Technology capacity doesn’t equal M&E capacity

One participant noted that her organization is working with a technology hub that has very good tech skills but lacks capacity in development and M&E. Their work over the past year has been less about using technology and more about helping the hub develop these other capacities: how to conduct focus groups, surveys and network analysis, and how to develop toolkits and guides. There’s often excitement on the ground – ‘We can get data in 48 hours! Wow! Let’s go!’ However, creating good M&E surveys to be used via technology tools is difficult. One participant noted that finding local expertise in this area is not easy, especially considering staff turnover. “We don’t always have M&E experts on the ground.” In addition, “there is an art to polls and survey trees, especially when trying to take them from English into other languages. How do you write a primer for staff to create meaningful questions?”
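One way to keep survey trees manageable across languages is to separate the question text from the branching logic. The Python sketch below shows that separation with a hypothetical two-question tree; translation then means swapping the text strings while the branching structure stays fixed.

```python
# A hypothetical branching survey 'tree': each node holds its question text
# and a map from answers to the next node (None ends the survey).
survey = {
    "q1": {"text": "Did you visit the clinic this month? (1=yes, 2=no)",
           "next": {"1": "q2", "2": None}},
    "q2": {"text": "Were you seen within an hour? (1=yes, 2=no)",
           "next": {"1": None, "2": None}},
}

def run_survey(tree, start="q1"):
    """Walk the tree, asking each question and following the branch chosen."""
    node = start
    answers = {}
    while node is not None:
        reply = input(tree[node]["text"] + " ").strip()
        answers[node] = reply
        node = tree[node]["next"].get(reply)  # unrecognized replies end the survey
    return answers

if __name__ == "__main__":
    print(run_survey(survey))
```

Keeping the logic in one place and the wording in another is also what makes it feasible to hand translation to non-programmers, one of the pain points raised above.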

6) Find the best level for ICTs to support the process

ICTs are not always the best tool at the community or district level, given issues of access, literacy, capacity, connectivity, electricity, etc., but participants mentioned working in blended ways, e.g., doing traditional data collection and using ICTs to analyze the data, compile it, produce localized reports, and work with the community to interpret the information for better decision-making. Others use hand-drawn maps, examine issues from the community angle, and then incorporate that into digital literacy and expression work, using new technology tools to tell and document the communities’ stories.

7) Discover the shadow systems and edge of network

One participant noted that people will comply and they will move data through the system as requested from on high, but they simultaneously develop their own ways of tracking information that are actually useful to them. By discovering these ‘shadow systems’, you can see what is really useful. This ‘edge of network’ is where people with whom headquarters doesn’t have contact live and work. We rely on much of their information to build M&E systems yet we don’t consult and work with them often enough. Understanding this ‘edge of network’ is critical to designing and developing good M&E systems and supporting local level M&E for better information and decision-making.

8) The devil is in the details

There are many M&E tools to choose from, and each has its pros and cons. Participants mentioned KoBo, RapidSMS, Nokia Data Gathering, FrontlineSMS and EpiSurveyor. While there is a benefit to getting cleaner data and getting it in real time, there will always be post-processing tasks. The data can, however, be put on a dashboard for better decision-making. Challenges exist nonetheless. For example, in Haiti, as one participant commented, there is a 10% electrification rate, so solar is required. “It’s difficult to get a local number with Clickatell [an SMS gateway]; you can only get an international number. But getting a local number is very complicated. If you go that route, you need a project coordinator. And if you are using SMS, how do you top up beneficiaries’ airtime so that they can reply? The few pennies it costs for people to reply are a deterrent. Yet working with telecom providers is very time-consuming and expensive in any country. Training local staff is an issue – trying to train everyone on the ICT package that you are giving them. You can’t take anything for granted. People usually don’t have experience with these systems.” Literacy is another stumbling block, so some organizations are looking at Interactive Voice Response (IVR) and trying to build a way for it to be rapidly deployed.
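For context on what “working with an SMS gateway” involves at the code level, here is a deliberately generic Python sketch of posting a message to an HTTP gateway. The endpoint URL and parameter names are entirely hypothetical placeholders, not Clickatell’s or any real provider’s API; each gateway documents its own endpoint, credentials and parameters, and that per-provider variation is part of the setup cost described above.

```python
import urllib.parse
import urllib.request

def send_sms(gateway_url, api_key, to_number, message):
    """POST a message to a hypothetical HTTP SMS gateway.

    All parameter names below are illustrative placeholders; consult
    your actual provider's documentation for the real ones.
    """
    data = urllib.parse.urlencode({
        "api_key": api_key,   # hypothetical credential parameter
        "to": to_number,      # hypothetical recipient parameter
        "text": message,      # hypothetical message parameter
    }).encode("utf-8")
    with urllib.request.urlopen(gateway_url, data=data) as resp:
        return resp.read().decode("utf-8")

# Example usage with placeholder values (not a real endpoint):
# print(send_sms("https://sms-gateway.example.com/send",
#                "MY_KEY", "+50912345678", "Mesi anpil"))
```

The code itself is trivial; the real work, as the Haiti example shows, is in numbers, airtime, training and provider relationships.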

9) Who is the M&E for?

Results are one thing, but as one participant noted, “part of results measuring means engaging communities in saying whether the results are good for them.” Another participant commented that Ushahidi maps are great and donors love them. But in the Central African Republic (CAR), for example, there is 1% internet penetration and maybe 9% of people text. “If you are creating a crisis map about the incidence of violence, your humanitarian actors may access it, and it may improve service delivery, but it is in no way useful for people on the ground. There is reliance on technology, but how to make it useful for local communities is still the big question…. It’s hard to talk about citizen engagement and citizen awareness if you are not reaching citizens because they don’t have access to technology.” And “what about the opportunity cost for the poor?” asked one participant. “Time is restricted. CSOs push things down to the people least able to use the time for participation. There is a cost to participation, yet we assume participation is a global good. The poorest are really scraping for time and resources. ‘Who is the data for?’ is still a huge question. Often it’s ‘here’s what we’re going to do for you’ rather than meeting with people first, asking what’s wrong, then listening and asking what they would like to do about it, and listening some more.”

10) Reaching the ‘unreachable’

Reaching and engaging the poorest is still difficult, and the truly unreached will require very different approaches. “We’re really very much ‘spoke to hub’,” said one participant. “This is not enough. How can we innovate and resolve this?” Another emphasized the need to find out who’s not part of the conversation – who is left out or not present when these community discussions take place. “You might find out that adolescent girls with mobility issues are not there. You can ask those with whom you are consulting if they know of someone who is not at the meeting. You need to figure out how to reach the invisible members of the community.” However, as noted, “we also have to protect them. Sometimes identifying people can expose them. There is no clear answer.”

11) Innovation or building on what’s already there?

So will INGOs and donors continue to try to adapt old survey ideas to new technology tools? And will this approach survive much longer? “Aren’t we mostly looking for information that we can act on? Are we going to keep sending teams out all the time, or will we begin to work with information we can access differently? Can we release ourselves from that dependence on survey teams?” Some felt that ‘data exhaust’ might be one way of getting information differently – for example, a model like Google Flu Trends. But others noted the difficulty of getting information from non-online populations, who are the majority. In addition, with these new ICT-based methods there is still a question about representativeness and coverage. Integrated approaches, where ICTs are married with traditional methods, seem to be the key. This raises the question: “Is innovation really better than building up what’s already there?” as one participant commented. “We need to ask – does it add value? Is it better than what is already there? If it does add perceived value locally, then how do we ensure that it leads to some kind of result? We need to keep our eye on the results we want to achieve. We need to be more results-oriented and do reality checks. We need to constantly ask ourselves: Are we listening to folks?”

In conclusion

There is much to think about in this emerging area of ICTs and monitoring and evaluation. Join us for the third Salon in the series on October 17, where we’ll continue these discussions. If you are not yet on the Technology Salon mailing list, you can sign up here. A summary of the first Salon in the series is here. (A summary of the October 17th Salon is here.)

Salons are run under the Chatham House Rule, thus no attribution has been made.

Read Full Post »
