
Over the past 4 years I’ve had the opportunity to look more closely at the role of ICTs in Monitoring and Evaluation practice (and the privilege of working with Michael Bamberger and Nancy MacPherson in this area). When we started out, we wanted to better understand how evaluators were using ICTs in general, how organizations were using ICTs internally for monitoring, and what was happening overall in the space. A few years into that work we published the Emerging Opportunities paper that aimed to be somewhat of a landscape document or base report upon which to build additional explorations.

As a result of this work, in late April I had the pleasure of talking with the OECD-DAC Evaluation Network about the use of ICTs in Evaluation. I drew from a new paper on The Role of New ICTs in Equity-Focused Evaluation: Opportunities and Challenges that Michael, Veronica Olazabal and I developed for the Evaluation Journal. The core points of the talk are below.

*****

In the past two decades there have been three main explosions that impact M&E: a device explosion (mobiles, tablets, laptops, sensors, dashboards, satellite maps, Internet of Things, etc.); a social media explosion (digital photos, online ratings, blogs, Twitter, Facebook, discussion forums, WhatsApp groups, co-creation and collaboration platforms, and more); and a data explosion (big data, real-time data, data science and analytics moving into the field of development, capacity to process huge data sets, etc.). This new ecosystem is something that M&E practitioners should be tapping into and understanding.

In addition to these ‘explosions,’ there’s been a growing emphasis on documenting the use of ICTs in Evaluation, alongside a greater thirst for understanding how, when, where and why to use ICTs for M&E. We’ve held and attended large gatherings on ICTs and Monitoring, Evaluation, Research and Learning (MERL Tech). And in the past year or two, it seems the development and humanitarian fields can’t stop talking about the potential of “data” – small data, big data, inclusive data, real-time data for the SDGs, etc. – and the possible roles for ICTs in collecting, analyzing, visualizing, and sharing that data.

The field has advanced in many ways. But as the tools and approaches develop and shift, so do our understandings of the challenges. Concerns about the growth of data and “open data,” and the privacy risks inherent in them, have caught up with the enthusiasm about the possibilities of new technologies in this space. Likewise, there is more in-depth discussion about methodological challenges, bias and unintended consequences when new ICT tools are used in Evaluation.

Why should evaluators care about ICT?

There are 2 core reasons that evaluators should care about ICTs. Reason number one is practical. ICTs help address real world challenges in M&E: insufficient time, insufficient resources and poor quality data. And let’s be honest – ICTs are not going away, and evaluators need to accept that reality at a practical level as well.

Reason number two is both professional and personal. If evaluators want to stay abreast of their field, they need to be aware of ICTs. If they want to improve evaluation practice and influence better development, they need to know if, where, how and why ICTs may (or may not) be of use. Evaluation commissioners need to have the skills and capacities to know which new ICT-enabled approaches are appropriate for the type of evaluation they are soliciting and whether the methods being proposed are going to lead to quality evaluations and useful learnings.

One trick to using ICTs in M&E is understanding who already has access to which tools, devices and platforms, and what kind of information or data is needed to answer which questions or to communicate which kinds of information. There is quite a science to this, and one size does not fit all. Evaluators, because of their critical thinking skills and social science backgrounds, are very well placed to take a more critical view of the role of ICTs in Evaluation and in the worlds of aid and development overall, and to help temper expectations with reality.

Though ICTs are being used along all phases of the program cycle (research/diagnosis and consultation, design and planning, implementation and monitoring, evaluation, reporting/sharing/learning) there is plenty of hype in this space.


There is certainly a place for ICTs in M&E, if introduced with caution and clear analysis about where, when and why they are appropriate and useful, and evaluators are well placed to take a lead in identifying and trialing what ICTs can offer to evaluation. If they don’t, others are going to do it for them!

Promising areas

There are four key areas (I’ll save the nuance for another time…) where I see a lot of promise for ICTs in Evaluation:

1. Data collection. Here I’d divide it into three kinds of data collection, noting that the latter two normally also provide ‘real-time’ data:

  • Structured data gathering – where enumerators or evaluators go out with mobile devices to collect specific types of data (whether quantitative or qualitative).
  • Decentralized data gathering – where the focus is on self-reporting or ‘feedback’ from program participants or research subjects.
  • Data ‘harvesting’ – where data is gathered from existing online sources like social media sites, WhatsApp groups, etc.

‘Real-time’ data aims to provide data in a much shorter time frame – normally for monitoring – but these data sets may be useful for evaluators as well.

2. New and mixed methods. These are areas that Michael Bamberger has been looking at quite closely. New ICT tools and data sources can contribute to more traditional methods. But triangulation still matters.

  • Improving construct validity – enabling a greater number of data sources at various levels that can contribute to better understanding of multi-dimensional indicators (for example, looking at changes in the volume of withdrawals from ATMs, records of electronic purchases of agricultural inputs, satellite images showing lorries traveling to and from markets, and the frequency of Tweets that contain the words ‘hunger’ or ‘sickness’).
  • Evaluating complex development programs – tracking complex and non-linear causal paths and implementation processes by combining multiple data sources and types (for example, participant feedback plus structured qualitative and quantitative data, big data sets/records, census data, social media trends and input from remote sensors).
  • Mixed methods approaches and triangulation – using traditional and new data sources (for example, using real-time data visualization to provide clues on where additional focus group discussions might need to be done to better understand the situation or improve data interpretation).
  • Capturing wide-scale behavior change – using social media data harvesting and sentiment analysis to better understand wide-spread, wide-scale changes in perceptions, attitudes, stated behaviors and analyzing changes in these.
  • Combining big data and real-time data – these emerging approaches may become valuable for identifying potential problems and emergencies that need further exploration using traditional M&E approaches.
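As a very rough illustration of the social media ‘sentiment’ idea mentioned above, here is a minimal lexicon-based sketch. The word lists and example posts are invented for illustration only; real analyses would use a trained sentiment model, proper tokenization, and licensed access to platform data:

```python
# Minimal lexicon-based sentiment sketch (illustrative only).
# The word lists below are hypothetical; split() does not strip
# punctuation, so a real version would tokenize properly.

NEGATIVE = {"hunger", "sick", "sickness", "drought", "shortage"}
POSITIVE = {"harvest", "recovered", "improved", "plenty"}

def score(post: str) -> int:
    """Return +1 (positive), -1 (negative), or 0 (neutral) for one post."""
    words = set(post.lower().split())
    neg = len(words & NEGATIVE)
    pos = len(words & POSITIVE)
    return (pos > neg) - (neg > pos)

def negative_share(posts: list) -> float:
    """Fraction of posts scored negative -- a crude trend indicator."""
    return sum(score(p) == -1 for p in posts) / len(posts)

posts = [
    "hunger is getting worse in our village",
    "good harvest this season, plenty of maize",
    "my child is sick again",
]
print(negative_share(posts))  # ~0.67 -- two of three posts flagged negative
```

Even a toy version like this makes the methodological stakes concrete: the result depends entirely on which words are in the lexicon and on who is posting in the first place.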

3. Data Analysis and Visualization. This is an area that is less advanced than the data collection area – often it seems we’re collecting more and more data but still not really using it! Some interesting things here include:

  • Big data and data science approaches – there’s a growing body of work exploring how to use predictive analytics to help define what programs might work best in which contexts and with which kinds of people. How this connects to evaluation is still being worked out, and there are plenty of ethical aspects to think about here too – most of us don’t like the idea of predictive policing, for example, and predictive approaches can end up producing outcomes quite different from what was intended. There is also a difference in starting points: with big data, you often go looking for patterns in huge data sets and generate hypotheses from them, whereas with evaluation you normally start with particular questions and design a methodology to answer them. It’s interesting to think about how these two approaches are going to combine.
  • Data Dashboards – these are becoming very popular as people try to work out how to do a better job of using the data that is coming into their organizations for decision making. There are some efforts at pulling data from community level all the way up to UN representatives, for example, the global level consultations that were done for the SDGs or using “near real-time data” to share with board members. Other efforts are more focused on providing frontline managers with tools to better tweak their programs during implementation.
  • Meta-evaluation – some organizations are working on ways to better draw conclusions from what we are learning from evaluation around the world and to better visualize these conclusions to inform investments and decision-making.

4. Equity-focused Evaluation. As digital devices and tools become more widespread, there is hope that they can enable greater inclusion and broader voice and participation in the development process. There are still huge gaps, however – in some parts of the world, 23% fewer women than men have access to mobile phones – and when you talk about Internet access the gap is much, much bigger. But there are cases where greater participation in evaluation processes is being sought through mobile. When this is balanced with other methods to ensure that we’re not excluding the very poorest or those without access to a mobile phone, it can help to broaden the pool of voices we are hearing from. Some examples are:

  • Equity-focused evaluation / participatory evaluation methods – some evaluators are seeking to incorporate more real-time (or near real-time) feedback loops where participants provide direct feedback via SMS or voice recordings.
  • Using mobile to directly access participants through mobile-based surveys.
  • Enhancing data visualization for returning results back to the community and supporting community participation in data interpretation and decision-making.

Challenges

Alongside all the potential, of course, there are also challenges. I’d divide these into three main areas:

1. Operational/institutional

Some of the biggest challenges to improving the use of ICTs in evaluation are institutional or related to institutional change processes. In focus groups I’ve done with different evaluators in different regions, this was emphasized as a huge issue. Specifically:

  • Potentially heavy up-front investment costs, training efforts, and/or maintenance costs if adopting/designing a new system at wide scale.
  • Tech or tool-driven M&E processes – often these are also donor driven. This happens because tech is perceived as cheaper, easier, at scale, objective. It also happens because people and management are under a lot of pressure to “be innovative.” Sometimes this ends up leading to an over-reliance on digital data and remote data collection and time spent developing tools and looking at data sets on a laptop rather than spending time ‘on the ground’ to observe and engage with local organizations and populations.
  • Little attention to institutional change processes, organizational readiness, and the capacity needed to incorporate new ICT tools, platforms, systems and processes.
  • Bureaucracy levels may mean that decisions happen far from the ground, and there is little capacity to make quick decisions, even if real-time data is available or the data and analysis are provided frequently to decision-makers sitting at a headquarters or to local staff who do not have decision-making power in their own hands and must wait on orders from on high to adapt or change their program approaches and methods.
  • Swinging too far towards digital due to a lack of awareness that digital most often needs to be combined with human approaches. Digital technology works better when combined with human interventions (such as visits to prepare people to use the technology, or making sure that gatekeepers – e.g., a husband or mother-in-law, in the case of women – are on board). A main message from the World Bank’s 2016 World Development Report, “Digital Dividends,” is that digital technology must always be combined with what the Bank calls “analog” (a.k.a. “human”) approaches.

2. Methodological

Some of the areas that Michael and I have been looking at relate to how the introduction of ICTs could address issues of bias, rigor, and validity — yet how, at the same time, ICT-heavy methods may actually just change the nature of those issues or create new issues, as noted below:

  • Selection and sample bias – you may be reaching more people, but you’re still going to be leaving some people out. Who is left out of mobile phone or ICT access/use? Typical respondents are male, educated, urban. How representative are these respondents of all ICT users and of the total target population?
  • Data quality and rigor – you may have an over-reliance on self-reporting via mobile surveys; lack of quality control ‘on the ground’ because it’s all being done remotely; enumerators may game the system if there is no personal supervision; there may be errors and bias in algorithms and logic in big data sets or analysis because of non-representative data or hidden assumptions.
  • Validity challenges – if there is a push to use a specific ICT-enabled evaluation method or tool without it being the right one, the design of the evaluation may not pass the validity challenge.
  • Fallacy of large numbers (in cases of national level self-reporting/surveying) — you may think that because a lot of people said something that it’s more valid, but you might just be reinforcing the viewpoints of a particular group. This has been shown clearly in research by the World Bank on public participation processes that use ICTs.
  • ICTs often favor extractive processes that do not involve local people and local organizations or provide benefit to participants/local agencies — data is gathered and sent ‘up the chain’ rather than shared or analyzed in a participatory way with local people or organizations. Not only is this disempowering, it may impact on data quality if people don’t see any point in providing it as it is not seen to be of any benefit.
  • There’s often a failure to identify unintended consequences or biases arising from use of ICTs in evaluation — What happens when you introduce tablets for data collection? What happens when you collect GPS information on your beneficiaries? What risks might you be introducing or how might people react to you when you are carrying around some kind of device?
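One standard mitigation for the selection bias noted above is post-stratification: reweighting mobile-survey responses so that over-represented groups (say, urban respondents) count less and under-represented groups count more, using known population shares from a census. A toy sketch with entirely invented numbers:

```python
# Toy post-stratification sketch: reweight a mobile survey so its
# urban/rural mix matches the target population. All numbers invented.

# Share of each group in the target population (e.g. from a census)
population = {"urban": 0.40, "rural": 0.60}

# Share of each group among mobile-survey respondents (urban over-represented)
sample = {"urban": 0.70, "rural": 0.30}

# Mean satisfaction score reported by each group in the survey
group_means = {"urban": 4.0, "rural": 2.0}

# Unweighted mean just reflects who happened to answer the phone survey
unweighted = sum(sample[g] * group_means[g] for g in sample)

# Post-stratification weight: population share / sample share per group
weights = {g: population[g] / sample[g] for g in sample}
weighted = sum(sample[g] * weights[g] * group_means[g] for g in sample)

print(unweighted)  # ~3.4 -- pulled upward by urban over-representation
print(weighted)    # ~2.8 -- matches the population mix
```

Reweighting only corrects for groups you can observe and measure; it cannot conjure up the views of people who have no phone at all, which is why mixing in face-to-face methods still matters.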

3. Ethical and Legal

This is an area that I’m very interested in – especially as some donors have started asking for the raw data sets from any research, studies or evaluations that they are funding, and when these kinds of data sets are ‘opened’ there are all sorts of ramifications. There is quite a lot of heated discussion happening here. I was happy to see that DFID has just conducted a review of ethics in evaluation. Some of the core issues include:

  • Changing nature of privacy risks – issues here include privacy and protection of data; changing informed consent needs for digital data/open data; new risks of data leaks; and lack of institutional policies with regard to digital data.
  • Data rights and ownership – issues here include proprietary data sets; data ownership when there are public-private partnerships; the idea of ‘data philanthropy’ when it’s not clear whose data is being donated; personal data ‘for the public good’; open data, open evaluation and transparency; poor care taken when vulnerable people provide personally identifiable information; household data sets ending up in the hands of those who might abuse them; and the increasing impossibility of data anonymization, given that crossing data sets often means that re-identification is easier than imagined.
  • Moving decisions and interpretation of data away from ‘the ground’ and upwards to the head office/the donor.
  • Little funding for trialing/testing the validity of new approaches that use ICTs and documenting what is working/not working/where/why/how to develop good practice for new ICTs in evaluation approaches.
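The re-identification risk mentioned above is easy to demonstrate: even after names are stripped, joining an ‘anonymized’ data set with a public one on quasi-identifiers (village, sex, birth year) can single people out. A toy sketch with entirely fabricated records:

```python
# Toy linkage attack: 'anonymized' health records are re-identified by
# joining them to a public register on quasi-identifiers.
# All names and records here are fabricated.

anonymized_health = [
    {"village": "A", "sex": "F", "birth_year": 1985, "diagnosis": "TB"},
    {"village": "A", "sex": "M", "birth_year": 1990, "diagnosis": "flu"},
]

public_register = [
    {"name": "Amina", "village": "A", "sex": "F", "birth_year": 1985},
    {"name": "Brian", "village": "A", "sex": "M", "birth_year": 1990},
]

KEYS = ("village", "sex", "birth_year")

def reidentify(health_rows, register_rows):
    """Link records whose quasi-identifiers match exactly one person."""
    matches = []
    for h in health_rows:
        hits = [r for r in register_rows
                if all(r[k] == h[k] for k in KEYS)]
        if len(hits) == 1:  # a unique match means anonymity is broken
            matches.append((hits[0]["name"], h["diagnosis"]))
    return matches

print(reidentify(anonymized_health, public_register))
# [('Amina', 'TB'), ('Brian', 'flu')]
```

This is why dropping names is not the same as anonymization: any combination of attributes that is unique in both data sets acts as an identifier.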

Recommendations: 12 tips for better use of ICTs in M&E

Despite the rapid changes in the field in the 2 years since we first wrote our initial paper on ICTs in M&E, most of our tips for doing it better still hold true.

  1. Start with a high-quality M&E plan (not with the tech).
    • But also learn about the new tech-related possibilities that are out there so that you’re not missing out on something useful!
  2. Ensure design validity.
  3. Determine whether and how new ICTs can add value to your M&E plan.
    • It can be useful to bring in a trusted tech expert in this early phase so that you can find out if what you’re thinking is possible and affordable – but don’t let them talk you into something that’s not right for the evaluation purpose and design.
  4. Select or assemble the right combination of ICT and M&E tools.
    • You may find one off the shelf, or you may need to adapt or build one. This is a really tough decision, which can take a very long time if you’re not careful!
  5. Adapt and test the process with different audiences and stakeholders.
  6. Be aware of different levels of access and inclusion.
  7. Understand motivation to participate, incentivize in careful ways.
    • This includes motivation for both program participants and for organizations where a new tech-enabled tool/process might be resisted.
  8. Review/ensure privacy and protection measures, risk analysis.
  9. Try to identify unintended consequences of using ICTs in the evaluation.
  10. Build in ways for the ICT-enabled evaluation process to strengthen local capacity.
  11. Measure what matters – not what a cool ICT tool allows you to measure.
  12. Use and share the evaluation learnings effectively, including through social media.

 

 

*****

Since I started looking at the role of ICTs in monitoring and evaluation a few years back, one concern that has consistently come up is: “Are we getting too focused on quantitative M&E because ICTs are more suited to gathering quantitative data? Are we forgetting the importance of qualitative data and information? How can we use ICTs for qualitative M&E?”

So it’s great to see that Insight Share (in collaboration with UNICEF) has just put out a new guide for facilitators on using Participatory Video (PV) and the Most Significant Change (MSC) methodologies together.

 

The Most Significant Change methodology is a qualitative method developed by Rick Davies and Jess Dart (and documented in a guide in 2005).


Participatory Video methodologies have also been around for quite a while, and they are nicely laid out in Insight Share’s Participatory Video Handbook, which I’ve relied on in the past to guide youth participatory video work. With mobile video becoming more and more common, and editing tools getting increasingly simple, it’s now easier to integrate video into community processes than it has been in the past.


The new toolkit combines these two methods and provides guidance for evaluators, development workers, facilitators, participatory video practitioners, M&E staff and others who are interested in learning how to use participatory video as a tool for qualitative evaluation via MSC. The toolkit takes users through a nicely designed, step-by-step process to planning, implementing, interpreting and sharing results.

I highly recommend taking a quick look at the toolkit to see if it might be a useful method of qualitative M&E — enhanced and livened up a bit with video!

*****

At our November 18th Technology Salon, we discussed how different organizations are developing their ICT for development (ICT4D) strategies. We shared learning on strategy development and buy-in, talked about whether organizations should create special teams or labs for ICT- and innovation-related work or mainstream the ICT4D function, and thought about how organizations can define and find the skill sets needed for taking their ICT-enabled work forward. Population Council’s Stan Mierzwa, Oxfam America’s Neal McCarthy, and Cycle Technologies’ Leslie Heyer joined as lead discussants, and we heard from Salon participants about their experiences too.

Participating organizations were at various stages of ICT4D work, but most had experienced similar challenges and frustrations with taking their work forward. Even organizations that had created ICT4D strategies a couple of years ago said that implementation was slow.

Some of the key elements mentioned by our first discussant as important for managing and strategically moving ICT forward in an organization included:

  • being more informed about where different offices and staff were using ICTs for programmatic work;
  • establishing a standard set of technology tools for organizational use;
  • improving knowledge management about ICTs;
  • publishing on how ICTs were being used in programs (to help with credibility);
  • engaging with different teams and leadership to secure support and resources; and
  • working more closely with human resources teams, who often do not understand ICT4D-related job descriptions and the profiles needed.

Our second discussant said that his organization developed an ICT4D strategy in order to secure resources and greater support for moving ICT4D forward. It was also starting to be unwieldy to manage all of the different ideas and tools being used across the organization, and it seemed that greater harmonization would allow for improved IT support for more established tools as well as establishment of other ways to support new innovations.

In this case, the organization looked at ICTs as two categories: technology for development workers and technology for development outcomes. They used Gartner’s ‘pace layered’ model (which characterizes systems of innovation, systems of differentiation, and systems of record) as a way of analyzing the support roles of different departments.

One of the initial actions taken by this organization was establishing a small tech incubation fund that different offices could apply for in order to try something new with ICTs in their programs and campaigns. Another action was to take 10 staff to the Catholic Relief Services (CRS) ICT4D conference to learn more about ICT4D and to see what their peers from similar organizations were doing. In return for attending the conference, staff were required to submit a proposal for the tech incubation fund.

For the development of the strategy document and action plan, the ICT4D strategy team worked with a wider group of staff to develop a list of current ICT-enabled initiatives and a visual heat map of actions and activities across the organization. This formed the basis for discussions on where lots of ICT4D activities were happening and where there was nothing going on with ICTs. The team then discussed what the organization should do strategically to support and potentially consolidate existing activities and what should be done about areas where there were few ICT-related activities – should those areas be left alone or was there a reason to look at them to see if ICT should be incorporated?

Having done that, the organization adapted Nethope’s Organizational Guide to ICT4D to fit its own structure and culture, and used it as a framework for ICT4D strategy discussions with key staff from different teams. The Nethope guide suggests five key functions for strategic, organization-wide ICT4D: lead organizational change, drive knowledge exchange, build a portfolio, manage processes, and develop an advisory service (see below). The aforementioned activities were also clustered according to which of these 5 areas they fell into.


(Table of contents from Nethope’s Guide.)

The organization felt it was also important to change the image of the IT team. ‘We had to show that we were not going to tie people up with formal committees and approvals if they wanted to try something new and innovative. Being more approachable is necessary or staff will bypass the IT team and go to consultants, and then we open ourselves up to data privacy risks and we also lose institutional knowledge.’

Salon participants agreed that it was important to know how to “sell” an ICT4D-related idea to frontline staff, management and leadership. Some ways to do this include demonstrating the value-add of ICTs in terms of longer-term cost and time efficiencies, showing the benefit of real-time data for decision-making, and demonstrating what peer organizations are doing. Organizations often also need someone at the top who is pushing for change and modernization.

Our third discussant said that her company has been shifting from a commercial product developer to a full-fledged technology company. She outlined the need for strategic thinking along that journey. Initially, the company outsourced activities such as research and data collection. With time, it started to pull key functions in house since systems maintenance and technology has become a core part of the business.

“As a small company, we can be flexible and change easily,” she said. “ICT is embedded into our culture and everyone thinks about it.” One challenge that many ICT4D initiatives face – whether they are happening in a non-profit or a for-profit – is sustainability. “People are often fine with paying for a physical product, but when it comes to the web, they are accustomed to getting everything for free, which makes long-term sustainability difficult.”

In order to continuously evolve their strategies, organizations need time and space to step back regularly and think about their underlying values and where they see themselves in 5 or 10 years. A more proactive relationship with donors is also important. Although Salon participants felt that the ICT4D Principles and related processes were promising, they also felt that donors do not have a clear idea of what they are looking for, what exists already, what needs to be created, and what evidence base exists for different tools or kinds of ICT4D. One Salon participant felt that “donor agencies don’t know what kinds of tech are effective, so it’s up to you as an implementer to bring the evidence to the table. It’s critical to have the ICT4D support staff at the table with you, because if not, these more detailed conversations about the tech don’t happen with donors and you’ll find all kinds of duplication of efforts.”

Another challenge with thinking about ICT4D in a strategic way is that donors normally don’t want to fund capacity building, said another Salon participant. They prefer to fund concrete projects or innovation challenges rather than supporting organizations to create an environment that gives rise to innovation. In addition, funding beyond the program cycle is a big challenge. ‘We need to be thinking about enterprise systems, layered on systems, national systems,’ said one person. ‘And systems really struggle here to scale and grow if you can’t claim ownership for the whole.’

Salon participants highlighted hiring and human resources departments as a big barrier when it comes to ICT4D. It is often not clear what kinds of skills are needed to implement ICT4D programs, and human resources teams often screen for the wrong skill sets because they do not understand the nature of ICT4D. ‘I always make them give me all the CVs and screen them myself,’ said one person. ‘If not, some of the best people will not make it to the short list.’ Engaging with human resources and sharing the ICT4D strategy is one way to help with better hiring and matching of job needs with skill sets that are out there and potentially difficult to find.

In conclusion, whether the ICT4D strategy is to mainstream, to isolate and create a ‘lab,’ or to combine approaches, it seems that most organizations are struggling a bit to develop and/or implement ICT4D strategies due to the multiple pain points of slow organizational change and the need for more capacity and resources. Some are making headway, however, and developing clearer thinking and action plans that are paying off in the short term, and that may set the organizations up for eventual ICT4D success.

Thanks to Population Council for hosting this Salon! If you’d like to join discussions like this one, sign up at Technology Salon.

Salons are held under Chatham House Rule. No attribution has been made in this post.

*****

The July 7th Technology Salon in New York City focused on the role of Information and Communication Technologies (ICTs) in Public Consultation. Our lead discussants were Tiago Peixoto, Team Lead, World Bank Digital Engagement Unit; Michele Brandt, Interpeace’s Director of Constitution-Making for Peace; and Ravi Karkara, Co-Chair, Policy Strategy Group, World We Want Post-2015 Consultation. Discussants covered the spectrum of local, national and global public consultation.

We started off by delving into the elements of a high-quality public consultation. Then we moved into whether, when, and how ICTs can help achieve those elements, and what the evidence base has to say about different approaches.

Elements and principles of high quality public participation

Our first discussant started by listing elements that need to be considered, whether a public consultation process is local, national or global, and regardless of whether it incorporates ICTs:

  • Sufficient planning
  • Realistic time frames
  • Education for citizens to participate in the process
  • Sufficient time and budget to gather views via different mechanisms
  • Interest in analyzing and considering the views
  • Provision of feedback about what is done with the consultation results

Principles underlying public consultation processes are that they should be:

  • Inclusive
  • Representative
  • Transparent
  • Accountable

Public consultation processes should also be accompanied by widespread public education processes to ensure that people are a) prepared to provide their opinions and b) aware of the wider context in which the consultation takes place, she said. Tech and media can be helpful for spreading the news that the consultation is taking place, creating the narrative around it, and encouraging participation of groups who are traditionally excluded, such as girls and women or certain political, ethnic, economic or religious groups, a Salon participant added.

Technology increases scale but limits opportunities for empathy, listening and learning

When thinking about integrating technologies into national public consultation processes, we need to ask ourselves why we want to encourage participation and consultation, what we want to achieve by it, and how we can best achieve it. It’s critical to set goals and purpose for a national consultation, rather than to conduct one just to tick a box, continued the discussant.

The pros and cons of incorporating technology into public consultations are contextual. Technology can be useful for bringing more views into the consultation process, however face-to-face consultation is critical for stimulating empathy in decision makers. When people in positions of power actually sit down and listen to their constituencies, it can send a very powerful message to people across the nation that their ideas and voices matter. National consultation also helps to build consensus and capacity to compromise. If done according to the above-mentioned principles, public consultation can legitimize national processes and improve buy-in. When leaders are open to listening, it also transforms them, she said.

At times, however, those in leadership or positions of power do not believe that people can participate; they do not believe that the people have the capacity to hold an opinion about a complicated political process, for example the creation of a new constitution. For this reason there is often resistance to national-level consultations from multilateral or bilateral donors, politicians, the elites of a society, large or urban non-governmental organizations, and political leaders. Often when public consultation is suggested as part of a constitution-making process, it is rejected because it can slow the process down. External donors may want a quick process for political reasons, and they may impose deadlines on national leaders that do not leave sufficient time for a quality consultation process.

Polls often end up being one-off snapshots or popularity contests

One method that is seen as a quick way to conduct a national consultation is polling. Yet, as Salon participants discussed, polls may end up being more like a popularity contest than a consultation process. Polls offer limited space for deeper dialogue or for preparing those who have never been listened to before to make their voices heard. Polling may also raise expectations that whatever "wins" will be acted on, yet often there are various elements to consider when making decisions. So it's important to manage expectations about what will be done with people's responses and how much influence they will have on decision-making. Additionally, polls generally offer a snapshot of how people feel at a distinct point in time, but it may be important to understand what people are thinking at various moments throughout a longer-term national process, such as constitution making.

In addition to the above, opinion polls often reinforce the voices of those who have traditionally had a say, whereas those who have been suffering or marginalized for years, especially in conflict situations, may have a lot to say and a need to be listened to more deeply, explained the discussant. “We need to compress the vertical space between the elites and the grassroots, and to be sure we are not just giving people a one-time chance to participate. What we should be doing is helping to open space for dialogue that continues over time. This should be aimed at setting a precedent that citizen engagement is important and that it will continue even after a goal, such as constitution writing, is achieved,” said the discussant.

In the rush to use new technologies, we often forget about more traditional ones like radio, added one Salon participant, who shared an example of using radio and face-to-face meetings to consult with boys and girls on the Afghan constitution. Another participant suggested we broaden our concept of technology. "A plaza or a public park is actually a technology," he noted, and these spaces can be conducive to dialogue and conversation. It was highlighted that processes of dialogue between a) national government and the international community and b) national government and citizens normally happen in parallel and at odds with one another. "National consultations have historically been organized by a centralized unit, but now these kinds of conversations are happening all the time on various channels. How can those conversations be considered part of a national level consultation?" wondered one participant.

Aggregation vs deliberation

There is plenty of research on aggregation versus deliberation, our next discussant pointed out, and we know that the worst way to determine how many beans are in a jar is to deliberate. Aggregation (“crowd sourcing”) is a better way to find that answer. But for a trial, it’s not a good idea to have people vote on whether someone is guilty or not. “Between the jar and the jury trial, however,” he said, “we don’t know much about what kinds of policy issues lend themselves better to aggregation or to deliberation.”

For constitution making, deliberation is probably better, he said. But for budget allocation, it may be that aggregation is better. Research conducted across 132 countries indicated that “technology systematically privileges those who are better educated, male, and wealthier, even if you account for the technology access gaps.” This discussant mentioned that in participatory budgeting, people tend to just give up and let the educated “win” whereas maybe if it were done by a simple vote it would be more inclusive.

One Salon participant noted that it's possible to combine deliberation and aggregation. "We normally only put things out for a vote after they've been identified through a deliberative process," he said, "and we make sure that there is ongoing consultation." Others lamented that decision makers often only want to see numbers – how many voted for what – and do not accept more qualitative consultation results because these usually involve fewer people participating. "Congress just wants to see numbers."

Use of technology biases participation towards the elite

Some groups are using alternative methods for participatory democracy work, but the technology space has not thought much about this and relies mostly on self-selection, said the discussant, and results end up being biased towards wealthier, urban, more educated males. Technology allows us to examine behaviors by looking at data registered in systems and to conduct experiments; however, those running these experiments need to be more responsible, and those who do not understand how to conduct research using technology should be more cautious about the empirical claims they make. "It's a unique moment to build on what we've learned in the past 100 years about participation," he said. Unfortunately, many working in the field of technology-enabled consultation have not done their research.

These biases towards wealthier, educated, urban males are very visible in Europe and North America, because there is so much connectivity, yet whether online or offline, less educated people participate less in the political process. In ‘developing’ countries, the poor usually participate more than the wealthy, however. So when you start using technology for consultation, you often twist that tendency and end up skewing participation toward the elite. This is seen even when there are efforts to proactively reach out to the poor.

Internal advocacy – an individual's sense that he or she is capable of making a judgment or influencing an outcome – is key for participation, and this is closely related to education, time spent in school and access to cultural assets. Among those who are traditionally marginalized, these internal assets are less developed and people are less confident. In order to increase participation in consultations, it's critical to build these internal skills among more marginalized groups.

Combining online and offline public consultations

Our last discussant described how a global public consultation was conducted on a small budget for the Sustainable Development Goals, reaching an incredible 7.5 million people worldwide. Two clear goals of the consultation were that it be inclusive and non-discriminatory. In the end, 49% who voted identified as female, 50% as male and 1% as another gender. Though technology played a huge part in the process, the majority of people who voted used a paper ballot. Others participated using SMS, in locally-run community consultation processes, or via the website. Results from the voting were visualized on a data dashboard/data curation website so that it would be easier to analyze them, promote them, and encourage high-level decision makers to take them into account.

One of the successful elements of this online/offline process was transparency. The consultation technology was created as open source so that those wishing to run their own consultations could take it, modify it, and repackage it to suit their local context. Each local partner could manage their own URL and track their own work, and this was motivating for them.

Other key lessons were that a conscious effort has to be made to bring in the voices of minority groups; that investment in training and capacity development is critical for those running local consultations; that honesty and transparency about the process (in other words, careful management of expectations) are essential; and that there will be highs and lows in the participation cycle, so organizers should be sensitive to people's own rhythms and available time to participate.

The importance of accountability

Accountability was a key aspect for this process. Member states often did not have time to digest the results of the consultation, and those running it had to find ways to capture the results in short bursts and visually simple graphics so that the consultation results would be used for decision making. This required skill and capacity for not only gathering and generating data but also curating it for the decision-making audience.

It was also important to measure the impact of the consultation – were people’s voices included in the decision-making process and did it make a difference? And were those voices representative of a wide range of people? Was the process inclusive?

Going forward, in order to build on the consultation process and to support the principle of accountability, the initiative will shift focus to become a platform for public participation in monitoring and tracking the implementation of the Sustainable Development Goals.

Political will and responsiveness

A question came up about the interest of decision-makers in actually listening. "Leaders often are not at all interested in what people have to say. They are more concerned with holding onto their power, and if leaders have not agreed to a transparent and open process of consultation, it will not work. You can't make them listen if they don't want to. If there is no political will, then the whole consultation process will just be propaganda and window dressing," one discussant commented. Another Salon participant asked what can be done to help politicians see the value of listening. "In the US, for example, we have lobbyists, issues groups, PACs, etc., so our politicians are being pushed on and demanded from all sides. If consultation is going to matter, you need to look at the whole system." "How can we develop tools that can help governments sort through all these pressures and inputs to make good decisions?" wondered one participant.

Another person mentioned Rakesh Rajani's work, noting that participation is mainly about power. If participation is not part of a wider system change, part of changing power structures, then using technology for participation is just a new tool to do the same old thing. If the process is not transparent and accountable, or if you engage people and do not deliver anything based on the engagement, then they will lose interest in engaging in the future.

Responsiveness was also raised. How many of these tech-fueled participation processes have led to governments actually changing, doing something different? One discussant said that evidence of impact of ICT-enabled participation processes was found in only 25 cases, and of those only 5 could show any kind of impact. All the others had very unclear impact – it was ambiguous. Did using ICTs make a difference? There was really no evidence of any. Another commented that clearly technology will only help if government is willing and able to receive consultation input and act on it. We need to find ways to help governments to do that, noted another person.

As always, the conversation could have continued for quite some time, but our two hours were up. For more on ICTs and public consultations, here is a short list of resources that we compiled. Please add any others that would be useful! And as a little plug for a great read on technology and its potential in development and political work overall, I highly recommend checking out Geek Heresy: Rescuing Social Change from the Cult of Technology by Kentaro Toyama. Kentaro's "Law of Amplification" is quite relevant in the space of technology-enabled participation: technology amplifies existing human behaviors and tendencies, benefiting those who are already primed to benefit while excluding those who have been traditionally excluded. Hopefully we'll get Kentaro in for a Tech Salon in the Fall!

Thanks to our lead discussants, Michele, Tiago and Ravi, and to Thoughtworks for their generous hosting of the Salon! Salons are conducted under Chatham House Rule so no attribution has been made in this post. Sign up here if you’d like to receive Technology Salon invitations.

Read Full Post »

By Mala Kumar and Linda Raftree

Our April 21st NYC Technology Salon focused on issues related to the LGBT ICT4D community, including how LGBTQI issues are addressed in the context of stakeholders and ICT4D staff. We examined specific concerns that ICT4D practitioners who identify as LGBTQI have, as well as how LGBTQI stakeholders are (or are not) incorporated into ICT4D projects, programs and policies. Among the many issues covered in the Salon, the role of the Internet and mobile devices for both community building and surveillance/security concerns played a central part in much of the discussion.

To frame the discussion, participants were asked to think about how LGBTQI issues within ICT4D (and more broadly, development) are akin to gender. Mainstreaming gender in development starts with how organizations treat their own staff. Implementing programs, projects and policies with a focus on gender cannot happen if the implementers do not first understand how to treat staff, colleagues and those closest to them (i.e. family, friends). Likewise, without a proper understanding of LGBTQI colleagues and staff, programs that address LGBTQI stakeholders will be ineffective.

The lead discussants of the Salon were Mala Kumar, writer and former UN ICT4D staff, Tania Lee, current IRC ICT4D Program Officer, and Robert Valadéz, current UN ICT4D staff. Linda Raftree moderated the discussion.

Unpacking LGBTQI

The first discussant pointed out how we as ICT4D/development practitioners think of the acronym LGBTQI, particularly the T and I – transgender and intersex. Often, development work focuses on the sexual identity portion of the acronym (the LGBQ), and not what is considered in Western countries as transgenderism.

As one participant said, the very label of "transgender" is hard to convey in many countries where "third gender" and "two-spirit gender" exist. These disagreements over terminology have – in Bangladesh and Nepal, for example – created conflict and divisions of interest within LGBTQI communities. In other countries, such as Thailand and parts of the Middle East, "transgenderism" can be considered more "normal" or societally acceptable than homosexuality. Across Africa, Latin America, North America and Europe, homosexuality is a better understood – albeit sometimes severely criminalized and socially rejected – concept than transgenderism.

One participant noted from her first-hand work on services for lesbian, gay and bisexual people that in North America, transgender communities are often given lower priority in LGBTQI services. In many cases she saw in San Francisco, homeless youth would identify as anything in order to gain access to needed services. Only after the services were provided did the beneficiaries realize the consequences of self-reporting or incorrectly self-reporting.

Security concerns within Unpacking LGBTQI

For many people, the very notion of self-identifying as LGBTQI poses severe security risks. From a data collection standpoint, this results in large problems in accurate representation of populations. It also results in privacy concerns. As one discussant mentioned, development and ICT4D teams often do not have the technical capacity (i.e. statisticians, software engineers) to properly anonymize data and/or keep data on servers safe from hackers. On the other hand, the biggest threat to security may just be “your dad finding your phone and reading a text message,” as one person noted.
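To make the anonymization gap concrete, here is a minimal Python sketch – purely illustrative, not drawn from the Salon, with hypothetical record fields – of pseudonymizing direct identifiers in survey data using salted hashing. As the code comments note, hashing identifiers is a basic safeguard, not full anonymization: quasi-identifiers like district or age can still re-identify people in small populations.

```python
import hashlib
import os

def pseudonymize(record, fields=("name", "phone"), salt=None):
    """Replace direct identifiers with salted hashes so records can be
    linked across survey rounds without storing names or phone numbers.

    CAUTION: this is NOT full anonymization. Remaining quasi-identifiers
    (district, age, occupation) can still re-identify people, especially
    in small or marginalized populations.
    """
    if salt is None:
        salt = os.urandom(16)  # random salt; must be kept secret
    out = dict(record)
    for field in fields:
        if field in out:
            digest = hashlib.sha256(salt + str(out[field]).encode()).hexdigest()
            out[field] = digest[:12]  # truncated pseudonym for readability
    return out, salt

# Hypothetical survey record:
record = {"name": "Asha", "phone": "+254700000000", "district": "Nairobi"}
safe, salt = pseudonymize(record)
```

The salt is what prevents an attacker from simply hashing a list of known names and matching them; it must be stored securely, or destroyed once cross-round linkage is no longer needed.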

Being an LGBTQI staff in ICT4D

 Our second lead discussant spoke about being (and being perceived as) an LGBTQI staff member in ICT4D. She noted that many of the ICT4D hubs, labs, centers, etc. are in countries that are notoriously homophobic. Examples include Uganda (Kampala), Kenya (Nairobi), Nigeria (Abuja, Lagos), Kosovo and Ethiopia (Addis). This puts people who are interested in technology for development and are queer at a distinct disadvantage.

Some of the challenges she highlighted include that ICT4D attracts colleagues from around the world who are among the most adept at computers and Internet usage, and therefore more likely to seek out and find information about other staff and colleagues online. If those searching are homophobic, "evidence" against colleagues can be both easy to find and easy to disseminate. Along those lines, ICT4D practitioners are encouraged (and sometimes required) to blog, use social media, and keep an online presence. In fact, many people in ICT4D find posts and contracts this way. However, keeping professional and personal online presences completely separate is incredibly challenging. Since ICT4D practitioners work with the colleagues most likely to actually find them online, queer ICT4D practitioners face a unique dilemma.

ICT4D practitioners are arguably the people within development best positioned to use technology and programmatic knowledge to advocate for themselves as LGBT staff and for LGBT stakeholder inclusion. However, how are queer ICT4D staff supposed to balance safety concerns and limits on professional advancement when dealing with homophobic staff? This issue is further compounded (especially in the UN, as one participant noted) by the commonly used project-based contracts, which give staff little to no job security, bargaining power or general protection when working overseas.

Security concerns within being an LGBTQI staff in ICT4D

A participant who works in North America for a Kenyan-based company said that none of her colleagues ever mentioned her orientation, even though they must have found her publicly viewable blog on gender and she is not able to easily disguise her orientation. She talked about always finding and connecting to the local queer community wherever she goes, often through the Internet, and tries to support local organizations working on LGBT issues. Still, she and several other participants and discussants emphasized their need to segment online personal and professional lives to remain safe.

Another participant mentioned his time working in Ethiopia. The staff from the center he worked with made openly hostile remarks about gays, which reinforced his need to stay closeted. He noticed that the ICT staff of the organization made a concerted effort to research people online, and that Facebook made it difficult, if not impossible, to keep personal and private lives separate.

Another person reiterated this point by saying that as a gay Latino man, and the first person in his family to go to university, grad school and work in a professional job, he is a role model to many people in his community. He wants to offer guidance and support, and used to do so with a public online presence. However, at his current internationally focused job he feels the need to self-censor and has effectively curtailed his public online presence, because he often interacts with high-level officials who are hostile towards the LGBTQI community.

One discussant also echoed this idea, saying that she is becoming a voice for the queer South Asian community, which is important because much of LGBT media is very white. The tradeoff for becoming this voice is compromising her career in the field because she cannot accept a lot of posts because they do not offer adequate support and security.

Intersectionality

Several participants and discussants offered their own experiences of the various levels of hostility and danger involved with even being suspected of being gay. One (female) participant began a relationship with a woman while working in a very conservative country, and recalled being terrified of being killed over the relationship. Local colleagues began to suspect, and eventually physically intervened by showing up at her house. This participant cited her "light-skinned privilege" as one reason that she did not suffer serious consequences from her actions.

Another participant recounted his time with the US Peace Corps. After a year, he started coming out and dating people in the host country. When one relationship went awry and he was turned in to the police for being gay, nothing came of the charges. Meanwhile, he saw local gay men being thrown into – and sometimes dying in – jail on the same charges. He and some other participants noted their relative privilege in these situations because they are white. This participant said that as a white male, he felt a sense of invincibility.

In contrast, a participant from an African country described his experience growing up and using ICTs as an escape because any physical indication he was gay would have landed him in jail, or worse. He had to learn how to change his mannerisms to be more masculine, had to learn how to disengage from social situations in real life, and live in the shadows.

One of the discussants echoed these concerns, saying that as a queer woman of color, everything is compounded. She was recruited for a position at a UN Agency in Kenya, but turned the post down because of the hostility towards gays and lesbians there. However, she noted that some queer people she has met – all white men from the States or Europe – have had overall positive experiences being gay with the UN.

Perceived as predators

One person brought up the "predator" stereotype often associated with gay men. He and his partner have had to turn down media opportunities where they could have served as role models for the gay community – especially poor queer men of color, one of the most difficult socioeconomic groups to reach – out of fear that this stereotype might affect their being hired to work in organizations that serve children.

Monitoring and baiting by the government

One participant who grew up in Cameroon mentioned that queer communities in his country use the Internet cautiously, even though it’s the best resource to find other queer people. The reason for the caution is that government officials have been known to pose as queer people to bait real users for illegal gay activity.

Several other participants cited this same phenomenon in different forms. A recent article described Egypt using new online surveillance tactics to find LGBTQI people. Some believe this type of surveillance will also spread to Nigeria – a country notoriously hostile towards LGBTQI persons – and other places.

There was also discussion about which technology is safest for LGBTQI people. Internet use can be monitored and traced back to a specific user, but being able to connect from multiple access points with varying levels of security creates a degree of anonymity that phones cannot provide. A person also generally carries a phone with them, so if the government intercepts a message on either the originating or receiving device, the incriminating implications are immediate unless the user can convince authorities the device was stolen or used by someone else. On the other hand, phones are more easily disposable, and in several countries do not require registration (or a registered SIM card) tied to a specific person.

In Ethiopia, the government has control over the phone networks and can in theory monitor messages for LGBTQI activity. This poses a particular threat since there is already legal precedent for convictions based on text messages. In some countries, major telecom carriers are owned by a national government. In others, major telecom carriers are national subsidiaries of an international company.

Another major concern raised relates back to privacy. Many major international development organizations do not have the capacity or ability to retain necessary software engineers, ICT architects and system operators, statisticians and other technology people to properly prevent Internet hacks and surveillance. In some cases, this work is illegal by national government policy, and thus also requires legal advocacy. The mere collection of data and information can therefore pose a security threat to staff and stakeholders – LGBTQI and allies, alike.

The “queer divide”

One discussant asked the group for data or anecdotal information related to the "queer divide." Divides – between genders, urban and rural, rich and poor, the socially accepted and the socially marginalized – are a commonly understood problem in ICT4D work. Studies have also clearly demonstrated that people who are naturally extroverted and not shy benefit more from any given program or project. Is there, then, any data to support a "queer divide" between those who are LGBTQI and those who are not, he wondered. As demonstrated in the sections above, many queer people are forced to disengage socially and retreat from "normal" society to stay safe.

Success stories, key organizations and resources

Participants mentioned organizations and examples of more progressive policies for LGBTQI staff and stakeholders (this list is not comprehensive, nor does it suggest these organizations’ policies are foolproof), including:

We also compiled a much more extensive list of resources on the topic here as background reading, including organizations, articles and research. (Feel free to add to it!)

What can we do moving forward?

  • Engage relevant organizations, such as Out in Tech and Lesbians who Tech, with specific solutions, such as coding privacy protocols for online communities and helping grassroots organizations target ads to relevant stakeholders.
  • Lobby smartphone manufacturers to increase privacy protections on mobile devices.
  • Lobby US and other national governments to introduce "Right to be forgotten" laws, which allow Internet users to wipe all records of themselves and their personal activity.
  • Support organizations and services that offer legal counsel to those in need.
  • Demand better and more comprehensive protection for LGBTQI staff, consultants and interns in international organizations.

Key questions to work on…

  • In some countries, a government owns the telecom companies. In others, telecom companies are national subsidiaries of international corporations. In countries where the government is actively surveilling networks for LGBTQI activity, or planning to, how does the type of telecom company factor in?
  • What datasets do we need on LGBTQI people for better programming?
  • How do we properly anonymize data collected? What are the standards of best practices?
  • What policies need to be in place to better protect LGBTQI staff, consultants and interns? What kind of sensitizing activities, trainings and programming need to be done for local staff and less LGBTQI sensitive international staff in ICT4D organizations?
  • How much capacity have ICT4D/international organizations lost as a result of their policies for LGBTQI staff and stakeholders?
  • What are the roles and obligations of ICT4D/international organizations to their LGBTQI staff, now and in the future?
  • What are the ICT4D and international development programmatic links with LGBT stakeholders and staff? How do LGBT stakeholders intersect with water? Public health? Nutrition? Food security? Governance and transparency? Human rights? Humanitarian crises? How do LGBT staff intersect with capacity? Trainings? Programming?
  • How do we safely and responsibly increase the visibility of LGBTQI people around the world?
  • How do we engage tech companies that are pro-LGBTQI, including Google, to do more for those who cannot or do not engage with their services?
  • What are the economic costs of homophobia, and does this provide a compelling enough case for countries to stop systemic LGBTQI-phobic behavior?
  • How do we mainstream LGBTQI issues in bigger development conferences and discussions?

Thanks to the great folks at ThoughtWorks for hosting and providing a lovely breakfast to us! Technology Salons are carried out under Chatham House Rule, so no attribution has been made. If you’d like to join us for Technology Salons in future, sign up here!

Read Full Post »

Today as we jump into the M&E Tech conference in DC (we’ll also have a Deep Dive on the same topic in NYC next week), I’m excited to share a report I’ve been working on for the past year or so with Michael Bamberger: Emerging Opportunities in a Tech-Enabled World.

The past few years have seen dramatic advances in the use of hand-held devices (phones and tablets) for program monitoring and for survey data collection. Progress has been slower with respect to the application of ICT-enabled devices for program evaluation, but this is clearly the next frontier.

In the paper, we review how ICT-enabled technologies are already being applied in program monitoring and in survey research. We also review areas where ICTs are starting to be applied in program evaluation and identify new areas in which new technologies can potentially be applied. The technologies discussed include hand-held devices for quantitative and qualitative data collection and analysis, data quality control, GPS and mapping devices, environmental monitoring, satellite imaging and big data.

While the technological advances and the rapidly falling costs of data collection and analysis are opening up exciting new opportunities for monitoring and evaluation, the paper also cautions that more attention should be paid to basic quality control questions that evaluators normally ask about representativity of data and selection bias, data quality and construct validity. The ability to use techniques such as crowd sourcing to generate information and feedback from tens of thousands of respondents has so fascinated researchers that concerns about the representativity or quality of the responses have received less attention than is the case with conventional instruments for data collection and analysis.

Some of the challenges include the potential for selectivity bias in sample design; M&E processes driven by the requirements of the technology; over-reliance on simple quantitative data; low institutional capacity to introduce ICTs and resistance to change; and issues of privacy.

None of this is intended to discourage the introduction of these technologies, as we fully recognize their huge potential. One of the most exciting areas is the promotion of a more equitable society through simple and cost-effective monitoring and evaluation systems that give voice to previously excluded sectors of the target populations and offer opportunities for promoting gender equality in access to information. The application of these technologies, however, needs to be on a sound methodological footing.

The last section of the paper offers some tips and ideas on how to integrate ICTs into M&E practice and potential pitfalls to avoid. Many of these were drawn from Salons and discussions with practitioners, given that there is little solid documentation or evidence related to the use of ICTs for M&E.

Download the full paper here! 

Read Full Post »

Earlier this month I attended the African Evaluators’ Conference (AfrEA) in Cameroon as part of the Technology and Evaluation stream organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation.

A first post about ICTs and M&E at the AfrEA Conference went into some of the deliberations around using or not using ICTs and how we can learn and share more as institutions and evaluators. I’ve written previously about barriers and challenges with using ICTs in M&E of international development programs (see the list of posts at the bottom of this one). Many of these same conversations came up at AfrEA, so I won’t write about them again here. What I did want to capture and share were a few interesting design and implementation thoughts from the various ICT and M&E sessions. Here goes:

1) Asking questions via ICT may lead to more honest answers. Some populations are still not familiar with smartphones and tablets, and this makes some people shy and quiet, yet it makes others more curious and animated to participate. Some people worry that mobiles, laptops and tablets create distance between the enumerator and the person participating in a survey. On the other hand, I’m hearing more and more examples of cases where using ICTs for surveying actually allows for a greater sense of personal privacy and more honest answers. I first heard about this several years ago in relation to children and youth in the US and Canada seeking psychological or reproductive health counseling. They seemed to feel more comfortable asking questions about sensitive issues via online chats (as opposed to asking a counselor or doctor face-to-face) because they felt anonymous. The same is true for telephone inquiries.

In the case of evaluations, someone suggested that rather than a mobile or tablet creating distance, a device can actually create an opportunity for privacy. For example, if a sensitive question comes up in a survey, an enumerator can hand the person being interviewed the mobile phone and look away while they provide their answer and hit enter, in the same way that waiters in some countries will swipe your ATM card and politely look away while you enter your PIN. The key is building people’s trust in these methods so they can be sure their answers are secure.

At a Salon on Feb 28, I heard about mobile polling being used to ask men in the Democratic Republic of Congo about sexual assault against men. There was a higher recorded affirmative rate when the question was answered via a mobile survey than when it had been asked in other settings or through other means. This of course makes sense, considering that when a reporter or surveyor comes around asking whether men have been victims of rape, no one wants to say so publicly. In a situation of violence, it’s impossible to know whether a perpetrator might be standing in the crowd watching someone being interviewed, and clearly shame and stigma also prevent people from answering openly.

Another example, shared at the AfrEA Tech Salon, was a comparison study done by an organization in a slum area of Accra. Five enumerators who spoke local languages conducted Water, Sanitation and Hygiene (WASH) surveys by mobile phone using Open Data Kit (an open-source survey application), and the responses were compared with the same survey done on paper. When people were asked in person by enumerators whether they defecated outdoors, affirmative answers were very low. When people were asked the same question via a voice-based mobile phone survey, 26% of respondents reported open defecation.

2) Risk of collecting GPS coordinates. We had a short discussion on the plusses and minuses of using GPS and collecting geolocation data in monitoring and evaluation. One issue that came up was safety for enumerators who carry GPS devices. Some people highlighted that GPS devices can put staff/enumerators at risk of abuse from organized crime bands, military groups, or government authorities, especially in areas with high levels of conflict and violence. This makes me think that if geographic information is needed in these cases, it might be good to use a mobile phone application that collects GPS rather than a fancy smart phone or an actual GPS unit (for example, one could try out PoiMapper, which works on feature phones).

In addition, evaluators emphasized that we need to think through whether GPS data is really necessary at the household level. It is tempting to always collect all the information we possibly can, but we can never truly assure anyone that their information will not be de-anonymized somehow in the near or distant future, and in extremely high-risk areas this can be a huge risk. Many organizations do not have high-level security for their data, so it may be better to collect community- or district-level data than household locations. Some evaluators said they use ‘tricks’ to anonymize the geographical data, like pinpointing the location a few miles off, but others felt this was not nearly enough to guarantee anonymity.
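The ‘jitter’ trick mentioned above can be sketched in a few lines. The function name and parameters here are purely illustrative, and, as the evaluators cautioned, random displacement alone reduces precision but does not guarantee anonymity (repeated observations or auxiliary data can still re-identify a household):

```python
import math
import random

def jitter_point(lat, lon, max_offset_km=3.0, seed=None):
    """Offset a coordinate by a random distance and bearing, up to max_offset_km.

    Illustrative sketch only: this blurs a household location but is NOT a
    guarantee of anonymity.
    """
    rng = random.Random(seed)
    # sqrt makes the displacement uniform over the disc, not clustered at the center
    distance_km = max_offset_km * math.sqrt(rng.random())
    bearing = rng.uniform(0, 2 * math.pi)
    # roughly 111 km per degree of latitude; a degree of longitude
    # shrinks with cos(latitude)
    dlat = (distance_km * math.cos(bearing)) / 111.0
    dlon = (distance_km * math.sin(bearing)) / (111.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```

A usage note: running this over household points before sharing a dataset keeps community-level patterns visible while blurring exact locations, but aggregating to community or district level, as suggested above, is the safer option in high-risk areas.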

3) Devices can create unforeseen operational challenges at the micro-level. When doing a mobile survey by phone and asking people to press a number to select a particular answer to a question, one organization working in rural Ghana to collect feedback about government performance found that some phones were set to lock when a call was answered. People were pressing buttons to respond to phone surveys (press 1 for…), but their answers did not register because phones were locked, or answers registered incorrectly because the person was entering their PIN to unlock the phone. Others noted that when planning training for enumerators or community members who will use their own devices for data collection, we cannot forget that every model of phone is slightly different. This adds quite a lot of time to the training, as each model of phone needs to be explained to trainees. (There are a huge number of other challenges related to devices, but these were two that I had not thought of before.)

4) Motivation in the case of poor capacity to respond. An organization interested in tracking violence in a highly volatile area wanted to take reports of violence, but did not have a way to ensure that there would be a response from an INGO, humanitarian organization or government authority if/when violence was reported. This is a known issue — the difficulty of encouraging reporting when responsiveness is low. To keep people engaged, this organization thanks people immediately for reporting and then sends peace messages and encouragement 2-3 times per week. Participants in the program have appreciated these ongoing messages and participation has remained steady, even though immediate help has not been provided as a result of reporting.

5) Mirroring physical processes with tech. One way to help digital tools gain more acceptance and to make them more user-friendly is to design them to mirror paper processes or other physical processes that people are already familiar with. For example, one organization shared their design process for a mobile application for village savings and loan (VSL) groups. Because security is a big concern among VSL members, the groups typically keep cash in a box with 3 padlocks. Three elected members must be present and agree to open and remove money from the box in order to conduct any transaction. To mimic this, the VSL mobile application requires 3 PINs to access mobile money or make transactions, and what’s more, the app sends everyone in the VSL Group an SMS notification if the 3 people with the PINs carry out a transaction, meaning the mobile app is even more secure than the original physical lock-box, because everyone knows what is happening all the time with the money.
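The three-PIN design described above can be mirrored in a short sketch. All names, PINs and the in-memory notifications list are hypothetical stand-ins; a real mobile-money app would verify PINs server-side and broadcast via an SMS gateway:

```python
import hashlib

class VSLBox:
    """Sketch of a savings-group 'lock box' requiring three PIN holders.

    Mirrors the physical three-padlock box: no transaction without three
    distinct, valid approvals, and every member is notified afterwards.
    """
    REQUIRED_APPROVALS = 3

    def __init__(self, members, pin_holders):
        # pin_holders: {member_name: pin}; store only hashes, never raw PINs
        self.members = list(members)
        self._pin_hashes = {m: self._hash(m, pin) for m, pin in pin_holders.items()}
        self.notifications = []  # stand-in for an SMS gateway

    @staticmethod
    def _hash(member, pin):
        return hashlib.sha256(f"{member}:{pin}".encode()).hexdigest()

    def withdraw(self, amount, approvals):
        # approvals: {member_name: pin}; count only distinct valid PIN holders
        valid = {m for m, pin in approvals.items()
                 if self._pin_hashes.get(m) == self._hash(m, pin)}
        if len(valid) < self.REQUIRED_APPROVALS:
            raise PermissionError("three valid PIN holders required")
        # notify every group member, mirroring the SMS broadcast described above
        for m in self.members:
            self.notifications.append(
                (m, f"Withdrawal of {amount} approved by {sorted(valid)}"))
        return amount
```

The notification step is what makes the digital version stronger than the physical box: the approval itself requires three people, and the broadcast means the whole group learns of every transaction.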

****

As I mentioned in part 1 of this post, some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward.

Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that are missing!

Previous posts on ICTs and M&E on this blog:

Read Full Post »

I attended the African Evaluators’ Conference (AfrEA) in Cameroon last week as part of the Technology and Evaluation strand organized by Pact with financial support from The Rockefeller Foundation’s Evaluation Office and The MasterCard Foundation. The strand was a fantastic opportunity for learning, sharing and understanding more about the context, possibilities and realities of using ICTs in monitoring and evaluation (M&E). We heard from a variety of evaluators, development practitioners, researchers, tool-developers, donors, and private sector and government folks. Judging by the well-attended sessions, there is a huge amount of interest in ICTs and M&E.

Rather than repeat what I’ve written in other posts (see links at the bottom), I’ll focus here on some of the more relevant, interesting, and/or new information from the AfrEA discussions. This first post will go into institutional issues and the ‘field’ of ICTs and M&E. A second post will cover design and operational tips I learned or was reminded of at AfrEA.

1) We tend to get stuck on data collection –Like other areas (I’m looking at you, Open Data) conversations tend to revolve around collecting data. We need to get beyond that and think more about why we are collecting data and what we are going to do with it (and do we really need all this data?). The evaluation field also needs to explore all the other ways it could be using ICTs for M&E, going beyond mobile phones and surveys. Collecting data is clearly a necessary part of M&E, but those data still need to be analyzed. As a participant from a data visualization firm said, there are so many ways you can use ICTs – they help you make sense of things, you can tag sentiment, you can visualize data and make data-based decisions. Others mentioned that ICTs can help us to share data with various stakeholders, improve sampling in RCTs (Randomized Control Trials), conduct quality checks on massive data sets, and manage staff who are working on data collection. Using big data, we can do analyses we never could have imagined before. We can open and share our data, and stop collecting the same data from the same people multiple times. We can use ICTs to share back what we’ve learned with evaluation stakeholders, governments, the public, and donors. The range of uses of ICTs is huge, yet the discussion tends to get stuck on mobile surveys and data collection, and we need to start thinking beyond that.

2) ICTs are changing how programs are implemented and how M&E is done — When a program already uses ICTs, data collection can be built in through the digital device itself (e.g., tracking user behavior, cookies, and via tests and quizzes), as one evaluator working on tech and education programs noted. As more programs integrate digital tools, it may become easier to collect monitoring and evaluation data with less effort. Along those lines, an evaluator looking at a large-scale mobile-based agricultural information system asked about approaches to conducting M&E that do not rely on enumerators and traditional M&E approaches. In his program, because the farmers who signed up for the mobile information service do not live in the same geographical community, traditional M&E approaches do not seem plausible and ICT-based approaches look like a logical answer. There is little documentation within the international development evaluation community, however, on how an evaluator might design an evaluation in this type of a situation. (I am guessing there may be some insights from market research and possibly from the transparency and accountability sectors, and among people working on “feedback loops”).

3) Moving beyond one-off efforts — Some people noted that mobile data gathering is still done mostly at the project level. Efforts tend to be short-term and one-off. The data collected is not well-integrated into management information systems or national level processes. (Here we may reference the infamous map of mHealth pilots in Uganda, and note the possibility of ICT-enabled M&E in other sectors going this same route). Numerous small pilots may be problematic if the goal is to institutionalize mobile data gathering into M&E at the wider level and do a better job of supporting and strengthening large-scale systems.

4) Sometimes ICTs are not the answer, even if you want them to be – One presenter (who considered himself a tech enthusiast) went into careful detail about his organization’s process of deciding not to use tablets for a complex evaluation across 4 countries with multiple indicators. In the end, the evaluation itself was too complex, and the team was not able to find the right tool for the job. The organization looked at simple, mid-range and highly complex applications and tools and after testing them all, opted out. Each possible tool presented a set of challenges that meant the tool was not a vast improvement over paper-based data collection, and the up-front costs and training were too expensive and lengthy to make the switch to digital tools worthwhile. In addition, the team felt that face-to-face dynamics in the community and having access to notes and written observations in the margins of a paper survey would enable them to conduct a better evaluation. Some tablets are beginning to enable more interactivity and better design for surveys, but not yet in a way that made them a viable option for this evaluation. I liked how the organization went through a very thorough and in-depth process to make this decision.

Other colleagues also commented that the tech tools are still not quite ‘there’ yet for M&E. Even top of the line business solutions are generally found to be somewhat clunky. Million dollar models are not relevant for environments that development evaluators are working in; in addition to their high cost, they often have too many features or require too much training. There are some excellent mid-range tools that are designed for the environment, but many lack vital features such as availability in multiple languages. Simple tools that are more easily accessible and understandable without a lot of training are not sophisticated enough to conduct a large-scale data collection exercise. One person I talked with suggested that the private sector will eventually develop appropriate tools, and the not-for-profit sector will then adopt them. She felt that those of us who are interested in ICTs in M&E are slightly ahead of the curve and need to wait a few years until the tools are more widespread and common. Many people attending the Tech and M&E sessions at AfrEA made the point that use of ICTs in M&E would get easier and cheaper as the field develops, tools get more advanced/appropriate/user-friendly and widely tested, and networks/ platforms/ infrastructure improves in less-connected rural areas.

5) Need for documentation, evaluation and training on use of ICTs in M&E – Some evaluators felt that ICTs are only suitable for routine data collection as part of an ongoing program, but not good for large-scale evaluations. Others pointed out that the notions of ‘ICT for M&E’ and ‘mobile data collection/mobile surveys’ are often used interchangeably, and evaluation practitioners need to look at the multiple ways that ICTs can be used in the wider field of M&E. ICTs are not just useful for moving from paper surveys to mobile data gathering. An evaluator working on a number of RCTs mentioned that his group relies on ICTs for improving samples, reducing bias, and automatically checking data quality.

There was general agreement that M&E practitioners need resources, opportunities for more discussion, and capacity strengthening on the multiple ways that ICTs may be able to support M&E. One evaluator noted that civil society organizations have a tendency to rush into things, hit a brick wall, and then cross their arms and say, “well, this doesn’t work” (in this case, ICTs for M&E). With training and capacity, and as more experience and documentation is gained, he considered that ICTs could have a huge role in making M&E more efficient and effective.

One evaluator, however, questioned whether having better, cheaper, higher quality data is actually leading to better decisions and outcomes. Another evaluator asked for more evidence of what works, when, with whom and under what circumstances so that evaluators could make better decisions around use of ICTs in M&E. Some felt that a decision tree or list of considerations or key questions to think through when integrating ICTs into M&E would be helpful for practitioners. In general, it was agreed that ICTs can help overcome some of our old challenges, but that they inevitably bring new challenges. Rather than shy away from using ICTs, we should try to understand these new challenges and find ways to overcome/work around them. Though the mHealth field has done quite a bit of useful research, and documentation on digital data collection is growing, use of ICTs is still relatively unexplored in the wider evaluation space.

6) There is no simple answer. One of my takeaways from all the sessions was that many M&E specialists are carefully considering options, and thinking quite a lot about which ICTs for what, whom, when and where rather than deciding from the start that ICTs are ‘good and beneficial’ or ‘bad and not worth considering.’ This is really encouraging, and to be expected of a thoughtful group like this. I hope to participate in more discussions of this nature that dig into the nuances of introducing ICTs into M&E.

Some new resources and forthcoming documentation may help to further set the stage for better learning and application of ICTs in the M&E process. Pact has just released their Mobile Technology Toolkit, and Michael Bamberger and I are finishing up a paper on ICT-enabled M&E that might help provide a starting point and possible framework to move things forward. The “field” of ICTs in M&E is quite broad, however, and there are many ways to slice the cake. Here is the list of toolkits, blog posts and other links that we compiled for AfrEA – please add any that you think are missing!

(Part 2 of this post)

Previous posts on ICTs and M&E:

Read Full Post »

This is a guest post by Daniella Ben-Attar (@dbenattar) who consults for international development agencies, NGOs and corporations on areas relating to youth participation, governance, municipal capacity building, ICT4D and peace building.

by Daniella Ben-Attar

Youth in Mali with local authorities.

Youth in Mali with local authorities.

ICTs are increasingly being looked to as holding great promise for improving participatory governance and citizen engagement. Mobile phones have been a game-changer in this sphere, with nearly seven billion mobile-cellular subscriptions worldwide, including 89% penetration in the developing world. Youth are at the center of these developments, both as drivers and consumers of technological innovation. This is particularly true in developing countries, where the young generation is leading the way in using technology to overcome social, political and economic exclusion and begin driving positive change in their communities. The largest cohort in history, youth aged 15-24 number more than 1.2 billion worldwide, with an estimated 87% living in developing countries. They are almost twice as networked as the global population as a whole, with the ICT age gap more pronounced in least developed countries, where young people are often three times more likely to be online than the general population.

The combination of the “youth bulge” and “mobile miracle” has great potential to enable new responses to the longstanding challenge of youth engagement in governance across the developing world. Young citizens are utilizing simple mobile technology to innovate new platforms, tools and mechanisms aiming to amplify their voices and influence government. Youth are being proactive to play a greater role in governance through mobile-based communication avenues, user-generated information, tools tracking government accountability, anti-corruption platforms, crowd-sourcing and more. This is a dramatic shift from the days when the only way to gain the attention of a government official was through slow and cumbersome bureaucratic processes and official meetings in government offices.

A Growing Youth-Local Government Disconnect

Ironically, the impact of these efforts appears to be more pronounced at the national level than at the local level of government. Indeed, ICTs seem to be strengthening communications between youth and central government instead of enhancing connections with the closest level of governance where young citizens can be resources for community development. Applications and innovations in cooperation with government that address local issues have largely been the product of national government bodies. Most youth-led initiatives have not been successful in securing local government partnership, limiting impact. A communications gap has widened between young citizens and their local governments, which are often staffed by individuals with far less digital experience than their youthful constituents. As a result, youth and their local leaders often seem to be speaking in different languages through different media.  Local government deficits in capacity and resources continue to exist as barriers, as well as the need for sensitization to youth engagement as a priority outcome of adopting and shaping ICT-enabled practices.

Most young people using technology as a way to influence governance will tell you a similar story. When expressing themselves through social media outlets and ICT-enabled mechanisms, it is usually the national political figures that are more attuned and responsive. Local leaders are far behind their national counterparts in ICT capacity and usage. National ministers and officials often use Twitter accounts, blogs, SMS and websites to engage with their citizens, who by default are largely young. While this is a positive development, it also elicits frustration from young people who feel that their voices are ignored or unheard by elder leaders at the local level where chances are greatest for tangible impact in their day-to-day lives.

President Kagame of Rwanda is a stark example.  Youth have described how the president directly interacted with young citizens via Twitter and addressed concerns relating to many issues, from police violence towards youth to business ideas for urban tourism.  No such possibilities existed for these same youth to approach the local authority with these locally-based needs.  Even more significant, Kagame merged the national ministries of Youth and ICT in 2012 and appointed a Minister of Youth and ICT.  This is a groundbreaking move both in terms of ICT and youth, with youth ministries commonly grouped with sports or culture. However, these extraordinary national developments are not reflected in the policy and practice of local government in Rwanda.

Digital mapping initiatives have been in the spotlight as a new youth-driven tool drawing attention to local issues often overlooked by government officials.  While communities are benefitting from these processes, youth leaders report that these maps often do not gain the attention of city hall. For example, Kenyan NGO Map Kibera has seen its maps utilized by national ministry committees, better equipped with the capacity and mindset to absorb digital data, while city council has not been responsive to ICT-based approaches. Young leaders in Kandy City, Sri Lanka are working to bridge the “youth-local government ICT gap” which they have identified as a major barrier in engaging youth in local development. These young leaders are training municipal officials in computer skills and creating new ICT platforms for citizen-local government interaction as part of a UN-HABITAT supported youth-led training and education program run by YES – City of Youth.

Building Local Government Capacity for ICT & Youth Engagement

Partnership with local government is viewed by stakeholders as a key missing ingredient in enabling governance technology applications to have tangible results at the community level. The importance of “closing the engagement loop” and early local government buy-in is emphasized time and again by stakeholders in the field as a vital lesson learned through pilot programs. Youth organizations like Youth Agenda and Sisi ni Amani have achieved successful governance results by engaging local leaders as partners from the preliminary stages, highlighting the benefits they can gain through mobile solutions that increase civic engagement, enhance service delivery, fight corruption and bridge between local government and citizens.

Bridging the youth-local government gap will require sensitizing local officials and encouraging them to see the advantages of “listening” to youth ICT platforms, to bring them to where the majority of youth are voicing their opinions, and to enable them to take responsive action. National governments should be encouraged to help local governments become better equipped to address youthful concerns at the local level through capacity building for both youth engagement and ICT4Gov. This can be supported by integrating local ICT components in national ICT plans, or by increased “decentralization” and integration of both youth and ICT strategies, bolstered by budgetary allocations and devolution of authority. When seeking to utilize ICT to deliver positive governance outcomes for young people, “local gov” must be part of the “ICT4Gov” equation.

This blog post draws on findings from a UN-HABITAT Report entitled “ICT, Urban Governance and Youth” co-authored by Daniella Ben-Attar and Tim Campbell.

Read Full Post »

Our February 6th Technology Salon in New York City focused on the organizational challenges that development organizations face when trying to innovate or integrate ICTs into their programs and operations. We looked at the idea of “innovation” and different ways to approach it. We asked what “innovation” really means and why “technology” and “innovation” seem to always be used interchangeably. We shared ideas, challenges and good practice around supporting and encouraging staff, managers, and donors to experiment with new and better ways of doing things.

A huge thank you to Somto Fab-Ukozor and Rachana Kumar for their collaboration on writing the summary below!

Mika

Mika Valitalo, Plan Finland. (Photo by Somto Fab-Ukozor)

Our lead discussants were Jessica Heinzelman, DAI’s senior ICT specialist; Chris Fabian, UNICEF’s advisor to the Executive Director on innovation and co-lead of UNICEF’s innovation lab; and Mika Valitalo, Plan Finland’s program manager for ICT4D.

What is innovation?

Different organizations bring in different ideas and definitions of innovation. Is innovation always synonymous with technology? Does it always require technology? For some organizations, “innovation” means doing things faster, better and differently in a way that adds value and has a concrete impact.

One discussant noted that innovation is not necessarily disruptive in nature; it can be categorized into 3 main forms:

  • a totally new context, new problem, new solution
  • an existing solution that is improved
  • an existing solution that is adapted to a new context, country or sector

Another lead discussant pointed out that innovation is not necessarily something brand new; it can be something that existed but that is used in a different way or simply different processes or ways of thinking, and innovation does not have to be technology. The concept of innovation is often misunderstood, he said, because “someone can come up with 10 crappy ideas that are new but that does not make them innovative or useful.” He also cautioned that innovation should not only be about replication and scale, yet donors sometimes decide that an idea is innovative and encourage organizations to replicate the idea, without ensuring that it is having a real or relevant impact across different local contexts.

One discussant disagreed and said that there’s no innovation without technology; for example, 60% of kids are stunted in one of the greenest areas in the world because of the lack of an electrical grid, and the provision of electricity is technology. Without the electrical grid, the country will never reach any of its development goals. Technology enables the work to happen. A different viewpoint, as another discussant explained, was that the application of the technology is the innovative part, not the technology itself.

What fuels innovation?

A key part of the Salon discussion focused on whether having dedicated resources fueled innovation, or whether the presence of challenges and constraints forces innovation. Some Salon participants felt that when people are faced with challenges such as less time, fewer resources, no office space, etc., they may find themselves being more innovative in order to overcome constraints. Others found that staff often use the excuse of not having time and resources as a reason for not innovating or thinking outside the box. Some felt that innovation is difficult to achieve within large bureaucratic institutions due to their risk averse cultures, whereas others felt that one of the benefits of large-scale organizations is having resources to innovate and then test and scale innovations. Participants did agree that regardless of the outside setting, some people are more inclined to be innovative – these people are easy to identify almost everywhere, as they are always coming up with new ideas and trying/testing things out. The key is to find a way for organizational structures to support and reward innovators.

Encouraging innovation within large development organizations

Different organizations approach the innovation question in different ways. One discussant said that at his organization, the innovation team spends 60% of its time working on problems the organization is facing at the moment; 20% of its time looking towards the future (a 3-5 year horizon) for ideas that have an immediate direct impact on its work; and 20% of its time on organizational redesign, in other words, how to work with users to create solutions that are not top down and that take advantage of the existing ecosystem. His innovations team is only interested in finding/creating innovations that could reach very large scale, such as 10,000,000 people or more.

The innovation team created some guidelines for staff and allies with tips on how to defend one’s existence as someone working on innovation. The guide addresses questions like: Why innovation? Is it valuable to have an innovation unit? If so, why? And if so, prove it. Working through these questions led the innovation unit to develop metrics for innovation to justify staff positions focused on it. These guidelines can help people at other organizations who are trying something new to have a reference point; they allow innovation teams to say, “such-and-such organization is doing this, so we can do it too.”

Metrics for innovation

Having a set of metrics can help innovation labs, teams or persons charged with organizational innovation to measure whether they are actually achieving their goals, too. One organization defined the following metrics:

  • permission to fail, or to fail cheaply without fear
  • working with heterogeneous groups
  • sharing knowledge across countries and contexts

Working across organizational boundaries without “soul crushing bureaucracy” and having the real ability to work horizontally is one key to achieving these metrics.

Decentralizing the innovation function

Another lead discussant described the institutional changes and underlying understanding of people needed to improve and support innovation:

  • Identify the real incentives that someone has – individual or project – and the disincentives to innovating. It is important to look underneath the excuses people come up with such as time constraints and additional work, and find out what is driving them.
  • Hire realistic optimists – Sometimes in the ICT4D space, people gloss over the challenges and promote the technology. It is important to hire people who are grounded and have a good analytical sense, and who can think beyond gadgets and hype.
  • Build and share expertise within the organization – Creating a champions group of mid- to entry-level professionals within the organization who understand the power of new technology is another way to make innovation and ICT4D spread. Rather than keeping the expertise isolated within a specialist unit, find younger people who are hungry for knowledge and who see this kind of work as a way to further their careers and set themselves apart from their colleagues. The "innovation team" can then provide them with support and guidance. Participatory workshops on new tools and approaches can be organized in which these innovation champions are tasked with researching and exploring something and then presenting it. Equipped with tools and training, they will be better able to identify opportunities for innovation.
  • Get innovation into the plan early, and work with those who are putting proposals and RFPs together to make sure it is part of the metrics being measured from the beginning. It's hard to add new elements to a program later, because people will perceive them as additional work.

One Salon participant said that her organization disconnected "innovation" from its other programs to create space for trying new things and to reduce the fear of failing, which is "offloaded" to the innovation team. In this case, the unit is funded through private sources that support its experimentation. It still has to fight for its existence and show the impact and value of both its failures and its successes.

Ideas for taking innovation and ICT4D forward

Some ideas for moving ahead included:

  1. Flexibility in program planning – In reality, the plan usually changes during program implementation, and we have to figure out how to cope with that. The solution lies in the ability to quietly promote innovation and to influence donor organizations to embrace more flexible implementation.
  2. Integrating user-centered design – Ethnographic research can help to better understand how people use technology locally and what it means to them. It also helps identify existing patterns and ways of doing things that could be enhanced or, if they are working well, shared with other communities. Agile methodologies from the software world can be pulled into development programs to end the top-down approach of solving problems from afar with everything planned from the start. Focusing instead on small iterations and the impact of each deliverable can be a better approach.
  3. Collaboration with universities – Universities can be great places for working on and trying out new ideas. Links with universities can be used to find solutions, but even more so to "change the proteins" inside a traditional organization. Collaboration between staff and students gives staff opportunities to think about things differently and gives students a view into the real-world challenges development agencies face.
  4. Bridging the gap – Involving educators, health experts, child protection specialists and others who are not very interested in gadgets can bring about strong understanding of the real needs. Then connecting them with “techies” and ICTs in plain language and asking them to relate their own use of tech (they probably all use mobile phones in their personal lives, for example) to the ways that community members use tech can help to bring about solid, practical, sustainable and locally driven solutions.
  5. Provide a safe environment – Many humans are innovative by nature, said one discussant. Hierarchies and organizational processes are often what prevent people from doing new things. Giving feedback and psychological support can help those who are innovative to flourish within a difficult environment.
  6. The interdisciplinary approach – One Salon participant said that his organization had started to work with some senior staff to think and structure data in a way that would help them understand their challenges and programs better in order to innovate. This makes people more comfortable, and working across different teams with a variety of people and skill sets can help new ideas and solutions to bubble up.
  7. Information intermediaries – Infomediaries working at various levels can help connect people with technology, conduct training, and ensure that staff can acquire skills to use the technology themselves and in programs.
  8. Open source – Making project documents, budgets and concepts "open" online can make them more accessible, help enable sustainable projects, and prevent the issues and costs associated with proprietary tools, applications and content.
  9. Younger management – There's an age differential between the people who lead most large organizations and large-scale projects and those who are most interested in technology. One participant suggested it would be important to get younger people into positions where they can contribute ideas and make decisions without being blocked by higher-level people who may be "past their innovation prime." Another solution may be to hire more experienced people but ensure that they are open to working with younger people who bring in new ideas. (Some Salon participants, however, felt that age has nothing to do with innovation, and that it is more related to personality types and organizational environments.)

For additional resources on the Salon topic, look here – and add your own resources as well.

Salons are held under Chatham House Rule, therefore no attribution has been made in this post. Many thanks to our lead discussants and to ThoughtWorks for hosting and providing breakfast.

If you’d like to attend future Salons, sign up here!
