Traditional development evaluation has been characterized as ‘backward looking’ rather than forward looking, and as too focused on proving over improving. Some believe applying an ‘agile’ approach in development would be more useful — the assumption being that if you design a program properly and iterate rapidly and constantly based on user feedback and data analytics, you are more likely to achieve your goal or outcome without requiring expensive evaluations. The idea is that big data could eventually allow development agencies to collect enough passive data about program participants that there would no longer be a need to actively survey people or conduct a final evaluation, because obvious patterns would allow implementers to understand behaviors and improve programs along the way.

The above factors have made some evaluators and data scientists question whether big data and real-time availability of multiple big data sets, along with the technology that enables their collection and analysis, will make evaluation as we know it obsolete. Others have argued that it’s not the end of evaluation, but rather we will see a blending of real-time monitoring, predictive modeling, and impact evaluation, depending on the situation. Big questions remain, however, about the feasibility of big data in some contexts. For example, are big data approaches useful when it comes to people who are not producing very much digital data? How will the biases in big data be addressed to ensure that the poorest, least connected, and/or most marginalized are represented?

The Technology Salon on Big Data and Evaluation hosted during November’s American Evaluation Association Conference in Chicago opened these questions up for consideration by a roomful of evaluators and a few data scientists. We discussed the potential role of new kinds and quantities of data. We asked how to incorporate static and dynamic big data sources into development evaluation. We shared ideas on what tools, skills, and partnerships we might require if we aim to incorporate big data into evaluation practice. This rich and well-informed conversation was catalyzed by our lead discussants: Andrew Means, Associate Director of the Center for Data Science & Public Policy at the University of Chicago and Founder of Data Analysts for Social Good and The Impact Lab; Michael Bamberger, Independent Evaluator and co-author of Real World Evaluation; and Veronica Olazabal from The Rockefeller Foundation. The Salon was supported by ITAD via a Rockefeller Foundation grant.

What do we mean by ‘big data’?

The first task was to come up with a general working definition of what was understood by ‘big data.’ Very few of the organizations present at the Salon were actually using ‘big data’ and definitions varied. Some talked about ‘big data sets’ as those that could not be collected or analyzed by a human on a standard computer. Others mentioned that big data could include ‘static’ data sets (like government census data – if digitized – or cellphone record data) and ‘dynamic’ data sets that are constantly generated in real time (such as streaming data input from sensors or the ‘cookies’ and ‘crumbs’ generated through use of the Internet and social media). Others considered big data to be real-time, socially created and socially driven data that could be harvested without having to purposely collect it or budget for its collection. ‘It’s data that has a life of its own. Data that just exists out there.’ Yet others felt that for something to be ‘big data,’ multiple big data sets needed to be involved, for example, genetic molecular data crossed with clinical trial data and other large data sets, regardless of static or dynamic nature. Big data, most agreed, is data that doesn’t easily fit on a laptop and that requires a specialized skill set that most social scientists don’t have. ‘What is big data? It’s hard to define exactly, but I know it when I see it,’ concluded one discussant.

Why is big data a ‘thing’?

As one discussant outlined, recent changes in technology have given rise to big data. Data collection, data storage and analytical power are becoming cheaper and cheaper. ‘We live digitally now and we produce data all the time. A UPS truck has anywhere from 50-75 sensors on it to do everything from optimize routes to indicate how often it visits a mechanic,’ he said. ‘The analytic and computational power in my iPhone is greater than what the space shuttle had.’ In addition, we have ‘seamless data collection’ in the case of Internet-enabled products and services, meaning that a person creates data as they access products or services, and this can then be monetized, which is how companies like Google make their money. ‘There is not someone sitting at Google going — OK, Joe just searched for the nearest pizza place, let me enter that data into the system — Joe is creating the data about his search while he is searching, and this data is a constant stream.’

What does big data mean for development evaluation?

Evaluators are normally tasked with making a judgment about the merit of something, usually for accountability, learning and/or to improve service delivery, and usually looking back at what has already happened. In the wider sense, the learning from evaluation contributes to program theory, needs assessment, and many other parts of the program cycle.

This approach differs in some key ways from big data work, because most of the new analytical methods used by data scientists are good at prediction but not very good at understanding causality, which is what social scientists (and evaluators) are most often interested in. However, ‘we don’t just look at giant data sets and find random correlations,’ explained one discussant. ‘That’s not practical at all. Rather, we start with a hypothesis and make a mental model of how different things might be working together. We create regression models and see which performs better. This helps us to know if we are building the right hypothesis. And then we chisel away at that hypothesis.’
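As a rough, hypothetical illustration of the workflow the discussant describes (Python with scikit-learn and entirely synthetic data; nothing here comes from the Salon itself), one might fit two candidate regression models and compare their out-of-sample performance:

```python
# Minimal sketch: compare two hypothesis-driven regression models on
# held-out data, as a data scientist might when testing which mental
# model of a program fits better. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                           # four candidate predictors
y = 2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(size=500)   # unknown 'true' process

for name, model in [("linear", LinearRegression()),
                    ("forest", RandomForestRegressor(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {scores.mean():.2f}")
```

The point is not these particular models; it is the habit the discussant describes: start from a hypothesis, compare competing models against data, and ‘chisel away’ at the one that performs better.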

Some challenges come up when we think about big data for development evaluation because the social sector lacks the resources of the private sector. In addition, data collection in the world of international development is not often seamless because ‘we care about people who do not live in the digital world,’ as one person put it. Populations we work with often do not leave a digital trail. Moreover, we only have complete data about the entire population in some cases (for example, when it comes to education in the US), meaning that development evaluators need to figure out how to deal with bias and sampling.

Satellite imagery can bring in some data that was unavailable in the past, and this is useful for climate and environmental work, but we still do not have a lot of big data for other types of programming, one person said. What’s more, wholly machine-based learning, including the kind of ‘deep learning’ made possible by today’s computational power, is currently not very useful for development evaluation.

Evaluators often develop counterfactuals so that they can determine what would have happened without an intervention. They may use randomized controlled trials (RCTs), difference-in-differences models, and other statistics and economics research approaches to do this. One area where data science may provide some support is in helping to answer questions about counterfactuals.
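As a toy illustration of the counterfactual logic (the numbers below are invented, not from any evaluation discussed at the Salon), a difference-in-differences estimate treats the comparison group’s change over time as the stand-in for what would have happened to the treated group anyway:

```python
# Difference-in-differences on made-up group means: the estimated impact
# is the treated group's change minus the comparison group's change,
# which approximates 'what would have happened without the intervention.'
baseline_treated, endline_treated = 42.0, 55.0   # hypothetical outcome means
baseline_control, endline_control = 40.0, 47.0

impact = (endline_treated - baseline_treated) - (endline_control - baseline_control)
print(f"Estimated impact (difference-in-differences): {impact:.1f}")  # 13.0 - 7.0 = 6.0
```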

More access to big data (and open data) could also mean that development and humanitarian organizations stop duplicating data collection functions. Perhaps most interestingly, big data’s predictive capabilities could in the future be used in the planning phase to inform the kinds of programs that agencies run, where they should be run, and who should be let into them to achieve the greatest impact, said one discussant. Computer scientists and social scientists need to break down language barriers and come together more often so they can better learn from one another and determine where their approaches can overlap and be mutually supportive.

Are we all going to be using big data?

Not everyone needs to use big data. Not everyone has the capacity to use it, and it doesn’t exist for offline populations, so we need to be careful that we are not forcing it where it’s not the best approach. As one discussant emphasized, big data is not magic, and it’s not universally applicable. It’s good for some questions and not others, and it should be considered as another tool in the toolbox rather than the only tool. Big data can provide clues to what needs further examination using other methods, and thus most often it should be part of a mixed methods approach. Some participants felt that the discussion about big data was similar to the one 20 years ago on electronic medical records or to the debate in the evaluation community about quantitative versus qualitative methods.

What about groups of people who are digitally invisible?

There are serious limitations when it comes to the data we have access to in the poorest communities, where there are no tablets and fewer cellphones. We also need to be aware of ‘micro-exclusion’ (who within a community or household is left out of the digital revolution?) and intersectionality (how do different factors of exclusion combine to limit certain people’s digital access?) and consider how these affect the generation and interpretation of big data. There is also a question about the intensity of the digital footprint: How much data and at what frequency is it required for big data to be useful?

Some Salon participants felt that over time, everyone would have a digital presence and/or data trail, but others were skeptical. Some data scientists are experimenting with calibrating small amounts of data and comparing them to human-collected data in an attempt to make big data less biased, a discussant explained. Another person said that by digitizing and validating government data on thousands (in the case of India, millions) of villages, big data sets could be created for populations that are not using mobiles or generating digital data.

Another person pointed out that generating digital data is a process that involves much more than simple access to technology. ‘Joining the digital discussion’ also requires access to networks, local language content, and all kinds of other precursors, she said. We also need to be very aware of how these kinds of data collection processes affect people’s participation and input into data collection and analysis. ‘There’s a difference between a collective evaluation activity where people are sitting around together discussing things and someone sitting in an office far from the community getting sound bites from a large source of data.’

Where is big data most applicable in evaluation?

One discussant laid out areas where big data would likely be the most applicable to development evaluation:

[Image: chart outlining the areas where big data is most applicable to development evaluation]

It would appear that big data has huge potential in the evaluation of complex programs, he continued. ‘It’s fairly widely accepted that conventional designs don’t work well with multiple causality, multiple actors, multiple contextual variables, etc. People chug on valiantly, but it’s expected that you may get very misleading results. This is an interesting area because there are almost no evaluation designs for complexity, and big data might be a possibility here.’

In what scenarios might we use big data for development evaluation?

This discussant suggested that big data might be considered useful for evaluation in three areas:

  1. Supporting conventional evaluation design by adding new big data-generated variables. For example, one could add transaction data from ATMs to conventional survey-generated poverty indicators.
  2. Increasing the power of a conventional evaluation design by using big data to strengthen the sample selection methodology. For example, satellite images were combined with data collected on the ground, and propensity score matching was used to strengthen comparison group selection for an evaluation of the effects of interventions on protecting forest cover in Mexico (see the sketch after this list).
  3. Replacing a conventional design with a big data analytics design, for example by replacing regression-based models with systems analysis. One could use systems analysis to compare the effectiveness of 30 ongoing interventions that may reduce stunting in a sample of villages. Real-time observations could generate a time series that could help to estimate the effectiveness of each intervention in different contexts.
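To make the second scenario more concrete, here is a minimal, hypothetical sketch (scikit-learn, synthetic data; the covariate names are illustrative stand-ins, and this does not reproduce the Mexico evaluation itself) of using satellite-derived covariates to estimate propensity scores and select nearest-neighbor comparison units:

```python
# Sketch of propensity score matching with satellite-derived covariates.
# All data is synthetic; the covariates (e.g., slope, rainfall, baseline
# forest cover) are illustrative stand-ins for real measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))                         # slope, rainfall, baseline cover
treated = rng.integers(0, 2, size=n).astype(bool)   # villages with the intervention

# 1. Model the probability of treatment given the observed covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 2. Match each treated unit to the untreated unit with the closest score.
nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = np.flatnonzero(~treated)[idx.ravel()]
print(f"Matched {treated.sum()} treated units to comparison units.")
```

Outcomes for the treated units would then be compared against their matched counterparts rather than against the full untreated pool.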

It is important to remember construct validity too. ‘If big data is available, but it’s not quite answering the question that you want to ask, it might be easy to decide to do something with it, to run some correlations, and to think that maybe something will come out. But we should avoid this temptation,’ he cautioned. ‘We need to remember and respect construct validity and focus on measuring what we think we are measuring and what we want to measure, not get distracted by what a data set might offer us.’

What about bias in data sets?

We also need to be very aware that big data carries with it certain biases that need to be accounted for, commented several participants; notably, when working with low-connectivity populations and geographies or when using data from social media sites that cater to a particular segment of the population. One discussant shared an example where Twitter was used to identify patterns in food poisoning, and suddenly the upscale, hipster restaurants in the city seemed to be the problem. Obviously these restaurants were not the sole source of the food poisoning; rather, a particular kind of person tended to use Twitter, and the data reflected that skew.

‘People are often unclear about what’s magical and what’s really possible when it comes to big data. We want it to tell us impossible things and it can’t. We really need to engage human minds in this process; it’s not a question of everything being automated. We need to use our capacity for critical thinking and ask: Who’s creating the data? How’s it created? Where’s it coming from? Who might be left out? What could go wrong?’ emphasized one discussant. ‘Some of this information can come from the metadata, but that’s not always enough to make certain big data is a reliable source.’ Bias may also be introduced through the viewpoints and unconscious positions, values and frameworks of the data scientists themselves as they are developing algorithms and looking for/finding patterns in data.

What about the ethical and privacy implications?

Big data raises a host of ethical and privacy implications. Issues of consent and potential risk are critical considerations, especially when working with populations that are newly online and/or who may not have a good understanding of data privacy and how their data may be used by third parties who are collecting and/or selling it. However, one participant felt that a protectionist mentality is misguided. ‘We are pushing back and saying that social media and data tracking are bad. Instead, we should realize that having a digital life and being counted in the world is a right and it’s going to be inevitable in the future. We should be working with the people we serve to better understand digital privacy and help them to be more savvy digital citizens.’ It’s also imperative that aid and development agencies abandon our slow and antiquated data collection systems, she said, and use the new digital tools that are available to us.

How can we be more responsible with the data we gather and use?

Development and humanitarian agencies do need to be more responsible with data policies and practices, however. Big data approaches may contribute to negative data extraction tendencies if we mine data and deliver it to decision-makers far away from the source. It will be critical for evaluators and big data practitioners to find ways to engage people ‘on the ground’ and involve more communities in interpreting and querying their own big data. (For more on responsible data use, see the Responsible Development Data Book. Oxfam also has a responsible data policy that could serve as a reference. The author of this blog is working on a policy and practice guide for protecting girls’ digital safety, security and privacy as well.)

Who should be paying for big data sets to be made available?

One participant asked about costs and who should bear the expense of creating big data sets and/or opening them up to evaluators and/or data scientists. Others asked for examples of the private sector providing data to the social sector. This highlighted additional ethical and privacy issues. One participant gave an example from the healthcare space where there is lots of experience in accessing big data sets generated by government and the private sector. In this case, public and private data sets needed to be combined. There were strict requirements around anonymization and the effort ended up being very expensive, which made it difficult to build a business case for the work.

This can be a problem for the development sector, because it is difficult to generate resources for resolving social problems; there is normally only investment if there is some kind of commercial gain to be had. Some organizations are now creating ‘data philanthropist’ positions that help to negotiate these kinds of data relationships with the private sector. (Global Pulse has developed a set of big data privacy principles to guide these cases.)

So, is big data going to replace evaluation or not?

In conclusion, big data will not eliminate the need for evaluation. Rather, it’s likely that it will be integrated as another source of information for strengthening conventional evaluation design. ‘Big Data and the underlying methods of data science are opening up new opportunities to answer old questions in new ways, and ask new kinds of questions. But that doesn’t mean that we should turn to big data and its methods for everything,’ said one discussant. ‘We need to get past a blind faith in big data and get more practical about what it is, how to use it, and where it adds value to evaluation processes,’ said another.

Thanks again to all who participated in the discussion! If you’d like to join (or read about) conversations like this one, visit Technology Salon. Salons run under Chatham House Rule, so no attribution has been made in this summary post.

Last month I joined a panel hosted by the Guardian on the contribution of innovation and technology to the Sustainable Development Goals (SDGs). Luckily they said that it was fine to come from a position of ‘skeptical realism.’

To drum up some good skeptical realist thoughts, I did what every innovative person does – posted a question on Facebook. A great discussion among friends who work in development, innovation and technology ensued. (Some might accuse me of ‘crowdsourcing’ ideas for the panel, but I think of it as more of a group discussion enabled by the Internet.) In the end, I didn’t get to say most of what we discussed on Facebook while on the panel, so I’m summarizing here.

To start off, I tend to think that the most interesting thing about the SDGs is that they are not written for ‘those developing countries over there.’ Rather, all countries are supposed to meet them. (I’m still not sure how many people or politicians in the US are aware of this.)

Framing them as global goals forces recognition that we have global issues to deal with — inequality and exclusion happen within countries and among countries everywhere. This opens doors for a shift in the narrative and framing of ‘development.’ (See Goal 10: Reduce inequality within and among countries; and Goal 16: Promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels.)

These core elements of the SDGs — exclusion and inequality – are two things that we also need to be aware of when we talk about innovation and technology. And while innovation and technology can contribute to development and inclusion…by connecting people and providing more access to information; helping improve access to services; creating space for new voices to speak their minds; contributing in some ways to improved government and international agency accountability; improving income generation; and so on… it’s important to be aware of who is excluded from creating, accessing, using and benefiting from tech and tech-enabled processes and advances.

Who creates and/or controls the tech? Who is pushed off platforms because of abuse or violence? Who is taken advantage of through tech? Who is using tech to control others? Who is seen as ‘innovative’ and who is ignored? For whom are most systems and services designed? Who is an entrepreneur by choice vs. an informal worker by necessity? There are so many questions to ask at both macro and micro levels.

But that’s not the whole of it. Even if all the issues of access and use were resolved, there are still problems with framing innovation and technology as one of the main solutions to the world’s problems. A core weakness of the Millennium Development Goals (MDGs) was that they were heavy on quantifiable goals and weak on reaching the most vulnerable and on improving governance. Many innovation and technology solutions suffer the same problem.

Sometimes we try to solve the wrong problems with tech, or we try to solve the wrong problems altogether, without listening to and involving the people who best understand the nature of those problems, without looking at the structural changes needed for sustainable impact, and without addressing exclusion at the micro-level (within and among districts, communities, neighborhoods or households).

Often a technological solution is brought in for questionable reasons. There is too little analysis of the political economy in development work, as DE noted on the discussion thread. Too few people are asking who is pushing for a technology solution. Why technology? Who gains? What is the motivation? As Ory Okolloh asked recently, why are Africans expected to innovate and entrepreneur our way out of our problems? We need to get past our collective fascination with the invention of products and move onward to a more holistic understanding of innovation that involves sustainable implementation, change, and improvement over the longer term.

Innovation is a process, not a product. As MBC said on the discussion thread, “Don’t confuse doing it first with doing it best.” Innovation is not an event, a moment, a one-time challenge, a product, a simple solution. Innovation is technology agnostic, noted LS. So we need to get past the goal of creating and distributing more products. We need to think more about innovating and tweaking processes, developing new paradigms and adjusting and improving on ways of doing things that we already know work. Sometimes technology helps, but that is not always the case.

We need more practical innovation. We should be looking at old ideas in a new context (citing Steven Johnson’s Where Good Ideas Come From), said AM. “The problem is that we need systems change and no one wants to talk about that or do it because it’s boring and slow.”

The heretical IT dared suggest that there’s too much attention to high profile innovation. “We could do with more continual small innovation and improvements and adaptations with a strong focus on participants/end users. This doesn’t make big headlines but it does help us get to actual results,” he said.

Along with that, IW suggested we need more innovative thinking and listening, and less innovative technology. “This might mean senior aid officials spending half a day per week engaging with the people they are supposed to be helping.”

One innovative behavior change might be that of overcoming the ‘expert knowledge’ problem said DE. We need to ensure that the intended users or participants in an innovation or a technology or technological approach are involved and supported to frame the problem, and to define and shape the innovation over time. This means we also need to rely on existing knowledge – immediate and documented – on what has worked, how and when and where and why and what hasn’t, and to make the effort to examine how this knowledge might be relevant and useful for the current context and situation. As Robert Chambers said many years ago: the links of modern scientific knowledge with wealth, power, and prestige condition outsiders to despise and ignore rural peoples’ own knowledge. Rural people’s knowledge and modern scientific knowledge are complementary in their strengths and weaknesses.

Several people asked whether the most innovative thing in the current context is simply political will and seeing past an election cycle, a point that Kentaro Toyama often makes. We need a renewed focus on political will and capacity, and on people rather than generic tech solutions.

In addition, we need paradigm shifts and more work to make the current system inclusive and fit for purpose. Most of our existing institutions and systems, including that of ‘development,’ carry all of the old prejudices and ‘isms’. We need more questioning of these systems and more thinking about realistic alternatives – led and designed by people who have been traditionally excluded and pushed out. As a sector, we’ve focused a LOT on technocratic approaches over the past several years, and we’ve stopped being afraid to get technical. Now we need to stop being afraid to get political.

In summary, there is certainly a place for technology and for innovation in the SDGs, but the innovation narrative needs an overhaul. Just as we’ve seen with terms like ‘social good’ and ‘user centered design’ – we’ve collectively imbued these ideas and methods with properties that they don’t actually have and we’ve fetishized them. Reclaiming the term innovation, said HL, and taking it back to a real process with more realistic expectations might do us a lot of good.



Back in 2010, I wrote a post called “Where’s the ICT4D distance learning?” which led to some interesting discussions, including with the folks over at TechChange, who were just getting started out. We ended up co-hosting a Twitter chat (summarized here) and having some great discussions on the lack of opportunities for humanitarian and development practitioners to professionalize their understanding of ICTs in their work.

It’s pretty cool today, then, to see that in addition to having run a bunch of on-line short courses focused on technology and various aspects of development and social change work, TechChange is kicking off their first Diploma program focusing on using ICT for monitoring and evaluation — an area that has become increasingly critical over the past few years.

I’ve participated in a couple of these short courses, and what I like about them is that they are not boring one-way lectures. Though you are studying at a distance, you don’t feel like you’re alone. There are variations on the type and length of the educational materials including short and long readings, videos, live chats and discussions with fellow students and experts, and smaller working groups. The team and platform do a good job of providing varied pedagogical approaches for different learning styles.

The new Diploma in ICT and M&E program has tracks for working professionals (launching in September of 2015) and prospective Graduate Students (launching in January 2016). Both offer a combination of in-person workshops, weekly office hours, a library of interactive on-demand courses, access to an annual conference, and more. (Disclaimer – you might see some of my blog posts and publications there).

The graduate student track will also have a capstone project, portfolio development support, one-on-one mentorship, live simulations, and a job placement component. Both courses take 16 weeks of study, but these can be spread out over a whole year to provide maximum flexibility.

For many of us working in the humanitarian and development sectors, work schedules and frequent travel make it difficult to access formal higher-level schooling. Not to mention, few universities offer courses related to ICTs and development. The idea of incurring a huge debt is also off-putting for a lot of folks (including me!). I’m really happy to see good-quality, flexible options for on-line learning that can improve how we do our work and that also provide the additional motivation of a diploma certificate.

You can find out more about the Diploma program on the TechChange website (note: registration for the fall course ends September 11th).




I had the privilege (no pun intended) of participating in the Art-a-Hack program via ThoughtWorks this past couple of months. Art-a-Hack is a creative space for artists and hackers to get together for 4 Mondays in June and work together on projects that involve art, tech and hacking. There’s no funding involved, just encouragement, support, and a physical place to help you carve out some time for discovery and exploration.

I was paired up by the organizers with two others (Dmytri and Juan), and we embarked on a project. I had earlier submitted an idea of the core issues that I wanted to explore, and we mind-melded really well to come up with a plan to create something around them.

Here is our press release with links to the final product – WhiteSave.me. You can read our Artist Statement here and follow us on Twitter @whitesave.me. Feedback welcome, and please share if you think it’s worth sharing. Needless to say full responsibility for the project falls with the team, and it does not represent the views of any past, present or future employers or colleagues.


Announcing WhiteSave.me

WhiteSave.me is a revolutionary new platform that enables White Saviors to deliver privilege to non-Whites whenever and wherever they need it with the simple tap of a finger.

Today’s White guy is increasingly told “check your privilege.” He often asks himself “What am I supposed to do about my privilege? It’s not my fault I was born white! And really, I’m not a bad person!”

Until now, there has been no simple way for a White guy to be proactive in addressing the issue of his privilege. He’s been told that he benefits from biased institutions and that his privilege is related to historically entrenched power structures. He’s told to be an ally but advised to take a back seat and follow the lead from people of color. Unfortunately this is all complex and time consuming, and addressing privilege in this way is hard work.

We need to address the issue of White privilege now however – we can’t wait. Changing attitudes, institutions, policies and structures takes too damn long! What’s more, we can’t expect White men or our current systems to go through deep changes in order to address privilege and inequality at the roots. What we can do is leapfrog over what would normally require decades of grassroots social organizing, education, policy work, and behavior change and put the solution to White privilege directly into White men’s hands so that everyone can get back to enjoying the American dream.


WhiteSave.me – an innovative solution that enables White men to quickly and easily deliver privilege to the underprivileged, requiring only a few minutes of downtime, at their discretion and convenience.

Though not everyone realizes it, White privilege affects a large number of White people, regardless of their age or political persuasion. White liberals generally agree that they are privileged, but most are simply tired of hearing about it and having to deal with it. Conservative White men believe their privilege is all earned, but most also consider it possible to teach people of color about deep-seated American values and traditions and the notion of personal responsibility. All told, what most White people want is a simple, direct way to address their privilege once and for all. Our research has confirmed that most White people would be willing to spend a few minutes every now and then sharing their privilege, as long as it does not require too much effort.

WhiteSave.me is a revolutionary and innovative way of addressing this issue. (Read Our Story here to learn more about our discovery moments!) We’ve designed a simple web and mobile platform that enables White men to quickly and easily deliver a little bit of their excess privilege to non-Whites, all through a simple and streamlined digital interface. Liberal Whites can assuage guilt and concern about their own privilege with the tap of a finger. Conservatives can feel satisfied that they have passed along good values to non-Whites. Libertarians can prove through direct digital action that tech can resolve complex issues without government intervention and via the free market. And non-White people of any economic status, all over the world, will benefit from immediate access to White privilege directly through their devices. Everyone wins – with no messy disruption of the status quo!

How it Works

Visit our “how it works” page for more information, or simply “try it now” and your first privilege delivery session is on us! Our patented Facial Color Recognition Algorithm (™) will determine whether you qualify as a White Savior, based on your skin color. (Alternatively it will classify you as a non-White ‘Savee’). Once we determine your Whiteness, you’ll be automatically connected via live video with a Savee who is lacking in White privilege so that you can share some of your good sense and privileged counsel with him or her, or periodically alleviate your guilt by offering advice and a one-off session of helping someone who is less privileged.

Our smart business model guarantees WhiteSave.me will be around for as long as it’s needed, and that we can continue innovating with technology to iterate new solutions as technology advances. WhiteSave.me is free for White Saviors to deliver privilege, and non-Whites can choose from our Third World Freemium Model (free), our Basic Model ($9/month), or our Premium Model ($29/month). To generate additional revenue, our scientific analysis of non-White user data will enable us to place targeted advertisements that allow investors and partners to extract value from the Base of the Pyramid. Non-Profit partners are encouraged to engage WhiteSave.me as their tech partner for funding proposals, thereby appearing innovative and guaranteeing successful grant revenue.

See our FAQs for additional information and check out our Success Stories for more on how WhiteSave.me, in just its first few months, has helped thousands to deliver privilege all over the world.

Try It Now and you’ll be immediately on your way to delivering privilege through our quick and easy digital solution!

Contact help@whitesave.me for more information. And please help us spread the word. Addressing the issue of White privilege has never been so easy!


The July 7th Technology Salon in New York City focused on the role of Information and Communication Technologies (ICTs) in Public Consultation. Our lead discussants were Tiago Peixoto, Team Lead, World Bank Digital Engagement Unit; Michele Brandt, Interpeace’s Director of Constitution-Making for Peace; and Ravi Karkara, Co-Chair, Policy Strategy Group, World We Want Post-2015 Consultation. Discussants covered the spectrum of local, national and global public consultation.

We started off by delving into the elements of a high-quality public consultation. Then we moved into whether, when, and how ICTs can help achieve those elements, and what the evidence base has to say about different approaches.

Elements and principles of high quality public participation

Our first discussant started by listing elements that need to be considered, whether a public consultation process is local, national or global, and regardless of whether it incorporates technology:

  • Sufficient planning
  • Realistic time frames
  • Education for citizens to participate in the process
  • Sufficient time and budget to gather views via different mechanisms
  • Interest in analyzing and considering the views
  • Provision of feedback about what is done with the consultation results

Principles underlying public consultation processes are that they should be:

  • Inclusive
  • Representative
  • Transparent
  • Accountable

Public consultation processes should also be accompanied by widespread public education processes to ensure that people are a) prepared to provide their opinions and b) aware of the wider context in which the consultation takes place, she said. Tech and media can be helpful for spreading the news that the consultation is taking place, creating the narrative around it, and encouraging participation of groups who are traditionally excluded, such as girls and women or certain political, ethnic, economic or religious groups, a Salon participant added.

Technology increases scale but limits opportunities for empathy, listening and learning

When thinking about integrating technologies into national public consultation processes, we need to ask ourselves why we want to encourage participation and consultation, what we want to achieve by it, and how we can best achieve it. It’s critical to set goals and purpose for a national consultation, rather than to conduct one just to tick a box, continued the discussant.

The pros and cons of incorporating technology into public consultations are contextual. Technology can be useful for bringing more views into the consultation process, however face-to-face consultation is critical for stimulating empathy in decision makers. When people in positions of power actually sit down and listen to their constituencies, it can send a very powerful message to people across the nation that their ideas and voices matter. National consultation also helps to build consensus and capacity to compromise. If done according to the above-mentioned principles, public consultation can legitimize national processes and improve buy-in. When leaders are open to listening, it also transforms them, she said.

At times, however, those in leadership or positions of power do not believe that people can participate; they do not believe that the people have the capacity to have an opinion about a complicated political process, for example the creation of a new constitution. For this reason there is often resistance to national level consultations from multilateral or bilateral donors, politicians, the elites of a society, large or urban non-governmental organizations, and political leaders. Often when public consultation is suggested as part of a constitution making process, it is rejected because it can slow down the process. External donors may want a quick process for political reasons, and they may impose deadlines on national leaders that do not leave sufficient time for a quality consultation process.

Polls often end up being one-off snapshots or popularity contests

One method that is seen as a quick way to conduct a national consultation is polling. Yet, as Salon participants discussed, polls may end up being more like a popularity contest than a consultation process. Polls offer limited space for deeper dialogue or preparing those who have never been listened to before to make their voices heard. Polling may also raise expectations that whatever “wins” will be acted on, yet often there are various elements to consider when making decisions. So it’s important to manage expectations about what will be done with people’s responses and how much influence they will have on decision-making. Additionally, polls generally offer a snapshot of how people feel at a distinct point in time, but it may be important to understand what people are thinking at various moments throughout a longer-term national process, such as constitution making.

In addition to the above, opinion polls often reinforce the voices of those who have traditionally had a say, whereas those who have been suffering or marginalized for years, especially in conflict situations, may have a lot to say and a need to be listened to more deeply, explained the discussant. “We need to compress the vertical space between the elites and the grassroots, and to be sure we are not just giving people a one-time chance to participate. What we should be doing is helping to open space for dialogue that continues over time. This should be aimed at setting a precedent that citizen engagement is important and that it will continue even after a goal, such as constitution writing, is achieved,” said the discussant.

In the rush to use new technologies, we often forget about more traditional ones like radio, added one Salon participant, who shared an example of using radio and face-to-face meetings to consult with boys and girls on the Afghan constitution. Another participant suggested we broaden our concept of technology. “A plaza or a public park is actually a technology,” he noted, and these spaces can be conducive to dialogue and conversation. It was highlighted that processes of dialogue between a) national government and the international community and b) national government and citizens normally happen in parallel and at odds with one another. “National consultations have historically been organized by a centralized unit, but now these kinds of conversations are happening all the time on various channels. How can those conversations be considered part of a national level consultation?” wondered one participant.

Aggregation vs deliberation

There is plenty of research on aggregation versus deliberation, our next discussant pointed out, and we know that the worst way to determine how many beans are in a jar is to deliberate. Aggregation (“crowd sourcing”) is a better way to find that answer. But for a trial, it’s not a good idea to have people vote on whether someone is guilty or not. “Between the jar and the jury trial, however,” he said, “we don’t know much about what kinds of policy issues lend themselves better to aggregation or to deliberation.”
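The bean-jar point can be demonstrated with a toy simulation (a generic wisdom-of-crowds sketch with invented numbers, not something presented at the Salon):

```python
# Toy wisdom-of-crowds demo: the average of many independent, noisy
# guesses tends to land closer to the truth than most individual guesses.
import numpy as np

rng = np.random.default_rng(7)
true_beans = 850
guesses = rng.normal(true_beans, 250, size=1000).clip(min=0)  # 1000 guessers

crowd_error = abs(guesses.mean() - true_beans)
individual_errors = np.abs(guesses - true_beans)
share_beaten = (individual_errors < crowd_error).mean()
print(f"Crowd estimate off by {crowd_error:.0f} beans; "
      f"only {share_beaten:.1%} of individuals did better.")
```

Deliberation adds nothing to a question like this, while for value-laden judgments it is often indispensable; the open question the discussant raises is where different policy issues fall between those poles.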

For constitution making, deliberation is probably better, he said. But for budget allocation, it may be that aggregation is better. Research conducted across 132 countries indicated that “technology systematically privileges those who are better educated, male, and wealthier, even if you account for the technology access gaps.” This discussant mentioned that in participatory budgeting, people tend to just give up and let the educated “win” whereas maybe if it were done by a simple vote it would be more inclusive.

One Salon participant noted that it’s possible to combine deliberation and aggregation. “We normally only put things out for a vote after they’ve been identified through a deliberative process,” he said, “and we make sure that there is ongoing consultation.” Others lamented that decision makers often only want to see numbers – how many voted for what – and they do not accept more qualitative consultation results because they usually involve fewer people participating. “Congress just wants to see numbers.”

Use of technology biases participation towards the elite

Some groups are using alternative methods for participatory democracy work, but the technology space has not thought much about this and relies on self-selection for the most part, said the discussant, and results end up being biased towards wealthier, urban, more educated males. Technology allows us to examine behaviors by looking at data that is registered in systems and to conduct experiments, however those doing these experiments need to be more responsible, and those who do not understand how to conduct research using technology need to be less empirical. “It’s a unique moment to build on what we’ve learned in the past 100 years about participation,” he said. Unfortunately, many working in the field of technology-enabled consultation have not done their research.

These biases towards wealthier, educated, urban males are very visible in Europe and North America, because there is so much connectivity, yet whether online or offline, less educated people participate less in the political process. In ‘developing’ countries, the poor usually participate more than the wealthy, however. So when you start using technology for consultation, you often twist that tendency and end up skewing participation toward the elite. This is seen even when there are efforts to proactively reach out to the poor.

Internal advocacy and an individual’s sense that he or she is capable of making a judgment or influencing an outcome is key for participation, and this is very related to education, time spent in school and access to cultural assets. With those who are traditionally marginalized, these internal assets are less developed and people are less confident. In order to increase participation in consultations, it’s critical to build these internal skills among more marginalized groups.

Combining online and offline public consultations

Our last discussant described how a global public consultation was conducted on a small budget for the Sustainable Development Goals, reaching an incredible 7.5 million people worldwide. Two clear goals of the consultation were that it be inclusive and non-discriminatory. In the end, 49% who voted identified as female, 50% as male and 1% as another gender. Though technology played a huge part in the process, the majority of people who voted used a paper ballot. Others participated using SMS, in locally-run community consultation processes, or via the website. Results from the voting were visualized on a data dashboard/data curation website so that it would be easier to analyze them, promote them, and encourage high-level decision makers to take them into account.

One of the most successful elements of this online/offline process was transparency. The consultation technology was created as open source so that those wishing to run their own consultations could open it, modify it, and repackage it however they wanted to suit their local context. Each local partner could manage their own URL and track their own work, and this was motivating to them.

Other key learnings were that a conscious effort has to be made to bring in the voices of minority groups; that investment in training and capacity development is critical for those running local consultations; that honesty and transparency about the process (in other words, careful management of expectations) are essential; and that there will be highs and lows in the participation cycle (be sensitive to people’s own cycles and available time to participate).

The importance of accountability

Accountability was a key aspect for this process. Member states often did not have time to digest the results of the consultation, and those running it had to find ways to capture the results in short bursts and visually simple graphics so that the consultation results would be used for decision making. This required skill and capacity for not only gathering and generating data but also curating it for the decision-making audience.

It was also important to measure the impact of the consultation – were people’s voices included in the decision-making process and did it make a difference? And were those voices representative of a wide range of people? Was the process inclusive?

Going forward, in order to build on the consultation process and to support the principle of accountability, the initiative will shift focus to become a platform for public participation in monitoring and tracking the implementation of the Sustainable Development Goals.

Political will and responsiveness

A question came up about the interest of decision-makers in actually listening. “Leaders often are not at all interested in what people have to say. They are more concerned with holding onto their power, and if leaders have not agreed to a transparent and open process of consultation, it will not work. You can’t make them listen if they don’t want to. If there is no political will, then the whole consultation process will just be propaganda and window dressing,” one discussant commented. Another Salon participant asked what can be done to help politicians see the value of listening. “In the US, for example, we have lobbyists, issues groups, PACs, etc., so our politicians are being pushed on and demanded from all sides. If consultation is going to matter, you need to look at the whole system.” “How can we develop tools that can help governments sort through all these pressures and inputs to make good decisions?” wondered one participant.

Another person mentioned Rakesh Rajani’s work, noting that participation is mainly about power. If participation is not part of a wider system change, part of changing power structures, then using technology for participation is just a new tool to do the same old thing. If the process is not transparent and accountable, or if you engage and do not deliver anything based on the engagement, then people will lose interest in engaging in the future.

Responsiveness was also raised. How many of these tech-fueled participation processes have led to governments actually changing, doing something different? One discussant said that documented evidence was available for only 25 cases of ICT-enabled participation processes, and of those only 5 could show any kind of impact. For all the others, the impact was unclear or ambiguous. Did using ICTs make a difference? There was really no evidence of any. Another commented that clearly technology will only help if government is willing and able to receive consultation input and act on it. We need to find ways to help governments do that, noted another person.

As always, conversation could have continued on for quite some time but our 2 hours was up. For more on ICTs and public consultations, here is a short list of resources that we compiled. Please add any others that would be useful! And as a little plug for a great read on technology and its potential in development and political work overall, I highly recommend checking out Geek Heresy: Rescuing Social Change from the Cult of Technology from Kentaro Toyama. Kentaro’s “Law of Amplification” is quite relevant in the space of technology-enabled participation, in that technology amplifies existing human behaviors and tendencies, and benefits those who are already primed to benefit while excluding those who have been traditionally excluded. Hopefully we’ll get Kentaro in for a Tech Salon in the Fall!

Thanks to our lead discussants, Michele, Tiago and Ravi, and to Thoughtworks for their generous hosting of the Salon! Salons are conducted under Chatham House Rule so no attribution has been made in this post. Sign up here if you’d like to receive Technology Salon invitations.

I love Dr. Seuss. His books are creative and zany. He made great social commentary. “If I Ran the Zoo” is a story about innovation and re-invention*. The hero, Gerald McGrew, is a young boy who re-imagines the zoo. In his vision for the new zoo, he travels the world to find cool creatures that no one has ever seen. He brings them back to showcase in his “new zoo McGrew zoo,” which is dynamic, flashy and exciting.

McGrew’s new zoo looks a lot like today’s world of development sector innovation and “innovation for social good.” Great ideas and discoveries; fresh things to look at, play with and marvel at; but also quite laden with an adolescent boy’s special brand of ego and hubris.

See, most of our institutions have been basically like this for a while:

[Illustration from If I Ran the Zoo]

But over the past decade, we’ve been hearing quite a lot of this:

[Illustration from If I Ran the Zoo]

People inside and outside of the development and social sectors are innovating really hard to come up with new and cool things. Silicon Valley is putting in its two cents and inventing “life-changing solutions.” People are traveling all around and looking for “local” innovation, too. Some donors are even supporting what they like to call “reverse innovation.” It feels a bit like the days of colonization are rolling on and on.

We see people with resources exploring and looking for opportunities, amazing ideas, and places to invest in or extract out value (BOP anyone?). These new ideas and innovations are captured and showcased for donors, investors, and global development peers.

[Illustration from If I Ran the Zoo]

The most innovative are applauded and given more resources. Those who “win” at innovation are congratulated on TED stages, like McGrew is for his cool new flavor of exotic creatures.

[Illustration from If I Ran the Zoo]

But it’s fairly safe to say that one of the biggest problems in the world today is inequality. Many believe it’s the development model (in the small and the big sense) itself that’s the problem. Yet most of this “innovation for social good” is being stimulated by and developed within the capitalist, colonial, patriarchal models and structures that entrench inequality in the first place.

If I ran the zoo, I’d take innovation in a different direction. I’d try to figure out how to dismantle the zoo.


*Fun fact: Many people credit Dr. Seuss with coining the term ‘nerd’ in this book.

(Screenshots from: https://www.youtube.com/watch?t=20&v=BLQpqkbsrr0 and https://books.google.com/books?id=fdX3xUSbriIC&pg=PT57&source=gbs_selected_pages&cad=3#v=onepage&q&f=false. Book by Dr. Seuss from 1950)

The private sector has been using dashboards for quite some time, but international development organizations face challenges when it comes to identifying the right data dashboards and accompanying systems for decision-making.

Our May 29th, 2015, Technology Salon (sponsored by The Rockefeller Foundation) explored data dashboards and data visualization for improved decision making with lead discussants John DeRiggi, Senior Data Architect, DAI; Shawna Hoffman, Associate Manager, Evaluation and Learning at The MasterCard Foundation; and Stephanie Evergreen, Evergreen Data.

In short, we learned at the Salon that most organizations are struggling with the data dashboard process. There are a number of reasons that dashboards fail. They may never get off the ground, they may not deliver what was promised, they may deliver but no one uses them, or they may deliver but the data is poor and bad decisions are made. Using data for better decision-making is an ongoing process – not a task or product to complete and then relegate to automation. Just getting a dashboard up and running doesn’t guarantee that it’s a success – it’s critical to look deeper to see if the data and its visualization have actually improved decisions and how. Like with any ICT tool, user centered design and ongoing iteration are key. Successful dashboards are organized, useful, include targets, and have trends and predictions. Organizational culture and change management are critical in the process.

Points discussed in detail*:

1) Ask whether you actually need a dashboard

The first question to ask is whether a dashboard is needed or possible. One discussant, who specializes in data visualization, noted that she’s often brought in because someone wants to do data visualization, and she then needs to work backwards with the organization through a number of other preparatory steps before getting to the part on data visualization. It’s critical to have data dashboard discussions with different parts of the organization in order to understand real needs and expectations. Often people will say they need a dashboard because they want to make better decisions, noted another lead discussant. “But what kind of decisions, and what information is needed to make those decisions? Where does that information come from? Who will get it?”

2) Define the audience and type of dashboard

People often think that they can create one dashboard that will fulfill everyone’s needs. As one discussant put it, they will say the audience for the dashboard is “everyone – all decision makers at all levels!” In reality most organizations will need several dashboards for different levels of decision-making. It’s important to know who will own it, use it, keep it up, and collect the data. Will it be internal or externally facing? Discussing all of this is a key part of the process of thinking through the dashboard. As one discussant outlined, dashboards can be strategic, analytical or operational. But it’s difficult for them to be all three at once. So organizations need to come to a clear understanding of their data and decision-making needs. What information, if available, would help different teams at different levels with their decision making? One dashboard can’t be everything to everyone. Creating a charter that outlines what the dashboard project is and what it aims to do is a way to help avoid mission creep, said one discussant.

3) Work with users to develop your dashboard

To start off the process, it’s important to clearly identify the audience and find out what they need – don’t assume you know, recommended one discussant. But also, as a Salon participant pointed out, don’t assume that they know either. Have a conversation where their expertise and yours come together. “The higher up you go, the less people may understand about data. One idea is to just take the ‘data’ out of the conversation. Ask decision-makers what questions they are trying to answer, what problems they are trying to solve. Then find out how to collect and visualize the data that helps them answer their questions,” suggested another participant. Create ownership and accountability at all levels – with users, with staff who will input the data, with project managers, with grantees – you need cooperation from all levels, noted others. Clear buy-in will also help with data quality: if people see the results of their data coming out in a data visualization, they may be more inclined to provide quality data. One way to involve users is to gather different teams to talk about their data and to create ‘entity relationship models’ together. “People can get into the weeds, and then you can build a vocabulary for the organization. Then you can use that model to build the system and create commonality across it,” said one discussant. Another idea is to create paper prototypes of dashboards with users so that they can envision them better.
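To make the idea of a shared entity relationship model concrete, here is a minimal sketch in Python. The entities and fields (Program, Activity, Participant) are hypothetical examples, not a model discussed at the Salon; the point is simply that teams agree on what the entities are and how they relate before any system is built.

```python
# A hypothetical shared entity-relationship vocabulary, sketched as
# Python dataclasses. Entity and field names are invented for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class Program:
    program_id: str
    name: str
    country: str

@dataclass
class Activity:
    activity_id: str
    program_id: str      # each Activity belongs to exactly one Program
    activity_type: str   # drawn from an agreed organizational vocabulary
    held_on: date

@dataclass
class Participant:
    participant_id: str
    activity_id: str     # each Participant record links to one Activity
```

Paper prototypes plus a simple shared model like this can then anchor the conversation about what the dashboard should actually show.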

4) Dashboards help people engage with the data they’ve collected

A dashboard is a window into your data, said one participant. In some cases, seeing their data visualized can help staff to see that they have been providing poor quality data. “People didn’t realize how bad their data was until they saw their dashboard,” said one discussant. Another noted that people may disagree with what the data tells them in the dashboard and feel motivated to provide better data. On the other hand, they may realize that their data was actually good, and instead they need to improve ineffective programs. A danger is that putting a dashboard on top of bad data shines a light on the data, said one participant, and this might create an incentive for people to manipulate their data.

5) Don’t be over-ambitious

Align the dashboard with indicators that link to strategic goals and directions and stay focused, recommended one discussant. There is often a temptation to over-complicate with tons of data and visuals. But extraneous data leads to misinterpretation or distraction. Dashboards should make complex data available in an accessible way to users, she said. You can always make more visuals if needed, but you want a concise story told in the data and visuals that you’re depicting. Determine what is useful, productive and credible and leave out what is exciting but extraneous. “Don’t try to have 30 indicators.”

6) Be clear about your data categories and indicators

Rolling up data from a large number of different programs into a dashboard is a huge challenge, especially if different sites or programs are using different data models. For example, if one program is describing an activity as a ‘workshop’ and the other uses ‘training session,’ said one discussant, you have a problem. A Salon participant explained that her organization started with shallow but important common denominators across programs. Over time they aim to go deeper to begin looking at outcomes and impact.
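As a rough illustration of that harmonization step, the sketch below maps program-specific activity labels to one shared category before data is rolled up. All labels and categories are invented for illustration.

```python
# A minimal sketch of harmonizing activity labels across programs before
# rolling data up into a dashboard. All mappings here are invented examples.
CANONICAL_ACTIVITY = {
    "workshop": "training",
    "training session": "training",
    "training": "training",
    "clinic visit": "health_service",
}

def normalize_activity(raw_label: str) -> str:
    """Map a program-specific label to the shared category; flag unknowns."""
    return CANONICAL_ACTIVITY.get(raw_label.strip().lower(), "UNMAPPED")

print(normalize_activity("Workshop"))          # -> training
print(normalize_activity("Training Session"))  # -> training
print(normalize_activity("Field day"))         # -> UNMAPPED (needs review)
```

Flagging unmapped labels rather than silently dropping them gives teams a concrete list to review, which is how those shallow common denominators can deepen over time.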

7) Think through how you’ll sustain the dashboard and related system(s)

One discussant said that her organization established three different teams to work on the dashboard process: a) Metrics: Where do we have credible, representative data? Where do we have indicators but no data? b) Plumbing: Where are the data sources? How do they feed into each other? Who is responsible, and can this be aggregated up? c) Visualization: What visual would help different decision makers make their decisions? Depending on where the organization is in its stage of readiness and its existing staff capacities, different combinations of skill sets may be required to supplement existing ones. Data experts can help teams understand what is possible, yet program or management teams and other dashboard users also need to be involved so that they can identify the questions they are trying to answer with the data and the dashboard.

8) Don’t underestimate the time/resources needed for a functional dashboard

People may not realize that you can’t make a dashboard without data to support it, noted one participant. “It’s like a PowerPoint presentation… a PowerPoint doesn’t just appear out of nowhere. It’s a result of conversations, research, data, design and more. But for some reason, people think a dashboard will just magically create itself out of thin air.” People also seem to think you can create and launch a dashboard and then put it on autopilot, but that is not the case. The dashboard will need constant changes and iteration, and there will be continual work to keep it up. The questions being asked will also likely change over time, so the dashboard may need to shift to take this into consideration. Time will be required to get buy-in for the dashboard and its use. One Salon participant said that in her former organization, they met quarterly to present, use and discuss the dashboard, and it took about two years for it to become useful and for people to become invested in it. It’s very important, said one participant, to ensure that management knows that the dashboard is not a static thing – it will need ongoing attention and management.

9) Be selective when it comes to the technology

People tend to think that dashboards are just visual, said a Salon participant; they think of them as really cool business solution platforms. Often senior leadership has been pitched something really expensive and complicated, with all kinds of bells and whistles, and they may think that is what they need. It’s important to know where your organization is in terms of capacity before determining which technology would be the best fit, however, noted one discussant. She counseled organizations to use whatever they have on hand rather than bringing in new software that takes people six months to learn how to use. Simple Excel-based dashboards might be the best place to start, she said.
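In that spirit, a first “dashboard” can be as modest as one chart generated from a spreadsheet the team already maintains. The sketch below assumes a hypothetical Excel file (program_indicators.xlsx) with quarter, target, and actual columns; it uses pandas and matplotlib, though the same chart could be built in Excel itself.

```python
# A minimal "start with what you have" sketch: read indicators from an
# existing spreadsheet and save one simple chart. The file name and
# column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("program_indicators.xlsx")  # columns: quarter, target, actual

fig, ax = plt.subplots()
ax.plot(df["quarter"], df["actual"], marker="o", label="Actual")
ax.plot(df["quarter"], df["target"], linestyle="--", label="Target")
ax.set_title("Participants reached, by quarter")
ax.set_ylabel("Participants")
ax.legend()
fig.savefig("dashboard_panel.png", dpi=150)
```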

10) Legacy systems can be combined with new data viz capabilities

One discussant shared how his company’s information system, which was set up over 15 years ago, did not allow for the creation of APIs. This meant that the team could not build derivative software products from their massive existing database. It is too expensive to replace the entire system, and building modules to replace some of it would lead to fragmenting the user experience. So the team built a thin web service layer on top of the existing system. This exposed the data to friendly web formats from which developers could build interactive products.
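The discussant didn’t describe the implementation in detail, but the general pattern looks something like the sketch below: a small web application queries the existing database read-only and serves the results as JSON, leaving the legacy system untouched. Flask, SQLite, and the table and column names here are all stand-ins.

```python
# A hypothetical "thin web service layer" over a legacy database: expose
# existing records as JSON without modifying the underlying system.
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)
LEGACY_DB = "legacy_system.db"  # stand-in for the existing database

@app.route("/api/activities")
def activities():
    conn = sqlite3.connect(LEGACY_DB)
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        "SELECT activity_id, program_id, activity_type FROM activities"
    ).fetchall()
    conn.close()
    # Dashboard builders can now consume this endpoint instead of the
    # legacy system's internal formats.
    return jsonify([dict(row) for row in rows])

if __name__ == "__main__":
    app.run(port=5000)
```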

11) Be realistic about “real time” and “data quality”

One question that came up was around the level of evidence needed to make good decisions. Having perfect data served up into a perfect visualization is utopian, said one Salon participant. The idea is that we could have ‘real time’ data to inform our decisions, she explained, yet it’s hard to quality-check data so quickly. “So at what level can we say we’ll make decisions based on a level of certainty – is it when we feel 80% of the data is good quality? Do we need to lower that to 60% so that we have timely data? Is that too low?” Another question was around the kinds of decisions that require ‘real time’ data versus those that could be made based on data that is 3 to 6 months old. Salon participants said this will depend on the kind of program and the type of decision. The sector in which one is working may also determine the level of comfort with real-time data and with data quality – for example, the humanitarian sector may need more timely data and accept a lower level of verification, whereas the development sector may be the opposite.
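One way to make that threshold conversation concrete is to measure it: compute the share of records that pass basic validation and compare it to a level agreed in advance. The checks and the 80% figure below are illustrative, echoing the numbers floated in the discussion rather than any standard.

```python
# A minimal sketch of a data-quality gate for "real time" decisions.
# The required fields and the threshold are illustrative assumptions.
def share_valid(records: list[dict]) -> float:
    """Fraction of records with all required fields present."""
    required = ("program_id", "quarter", "value")
    if not records:
        return 0.0
    valid = sum(1 for r in records if all(r.get(k) is not None for k in required))
    return valid / len(records)

QUALITY_THRESHOLD = 0.8  # e.g. "we act once 80% of the data passes checks"

records = [
    {"program_id": "P1", "quarter": "2015-Q1", "value": 120},
    {"program_id": "P1", "quarter": "2015-Q2", "value": None},  # incomplete
]
rate = share_valid(records)
print(f"{rate:.0%} valid; decision-ready: {rate >= QUALITY_THRESHOLD}")
```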

Another point was that dashboards should include error bars and available metadata, as well as in some cases a link to raw data for those who want to dig into the data and understand what is behind the dashboard. Sometimes the dashboard process will highlight that there is simply not much quality data available for some programs in some countries. This can be an opportunity to work with staff on the ground to strengthen capacity to collect it.
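Showing uncertainty doesn’t require anything exotic; in matplotlib, for example, errorbar() draws each estimate with its margin of error. The values below are invented for illustration.

```python
# A minimal sketch of a dashboard panel with error bars. All numbers
# are invented for illustration.
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
estimates = [120, 135, 150, 160]
margins = [15, 12, 20, 10]  # e.g. survey margins of error

fig, ax = plt.subplots()
ax.errorbar(range(len(quarters)), estimates, yerr=margins, fmt="o-", capsize=4)
ax.set_xticks(range(len(quarters)))
ax.set_xticklabels(quarters)
ax.set_title("Estimated participants reached (with margins of error)")
fig.savefig("panel_with_error_bars.png", dpi=150)
```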

12) Relax

As one discussant said, “much of the concern about data quality is related to our own hang-ups as data nerds and what we feel comfortable putting out there for people to use to make decisions. We always say ‘we need more research.’” But here the context is different. “Stakeholders and management want the answer. We need to just put the data out there with some caveats to help them.” One way to offer more context for a dashboard is creating a dashboard report that provides some narrative alongside the visualization. Dashboards should also show trends, not only what has happened already, she said. People need to see trends towards the future so that decisions can be made. It was also pointed out that a dashboard shouldn’t be the only basis for decisions. Like a car dashboard – these data dashboards signal that something is changing but you still need to look under the hood to see what it is. The dashboard should trigger questions – it should be a launch pad for discussion.

13) Organizational culture is a huge part of this process

The internal culture and people’s attitudes towards data are embedded in how an organization operates, noted one Salon participant. This varies depending on the type of organization – an evaluation-focused organization vs. a development organization vs. a contractor vs. a humanitarian organization, for example. Outside consultants can help you build a dashboard, but it is critical to have someone managing organizational change on the inside who knows the current culture and where the organization is aiming to go with the dashboard process. The process is getting easier, however. Many organizations are thirsty for data now, noted one lead discussant. “Often the research or evaluation team creates a dashboard and sends it to the management team, and then everyone loves it and wants one. People are ready for it now.”

More resources on data dashboards and visualization.

Special thanks to our lead discussants and to our hosts for this Salon! If you’d like to join our Salon discussions in the future, sign up at the Technology Salon site.

*Salons run under Chatham House Rule, so no attribution has been made in this post.

