
Our December 2015 Technology Salon discussion in NYC focused on approaches to girls’ digital privacy, safety and security. By extension, the discussion included ways to reduce risk for other vulnerable populations. Our lead discussants were Ximena Benavente of Girl Effect Mobile (GEM) and Jonathan McKay of Praekelt Foundation. I also shared a draft Girls’ Digital Privacy, Safety and Security Policy and Toolkit I’ve been working on with both organizations over the past year.

Girls’ digital privacy, safety and security risks

Our first discussant highlighted why it’s important to think specifically about girls and digital security. In part, this is because different factors and vulnerabilities combine, exacerbating girls’ levels of risk. For example, girls living on less than $2 per day likely only have access to basic mobile phones, which are often borrowed from parents or siblings. The organization she works with always starts with deep research on aspects like ownership vs. borrowing and whether girls’ mobile usage is free/unlimited and unsupervised or controlled by gatekeepers such as parents, brothers, or other relatives. This helps them design better tools, services and platforms, and to design for safety and security, she said. “Gatekeepers are very restrictive in many cases, but parental oversight is not necessarily a bad thing. We always work with parents and other gatekeepers as well as with girls themselves when we design and test.” When girls are living in more traditional or conservative societies, she said, we also need to think about how content might affect girls both online and offline. For example, “is content sufficiently progressive in terms of girls’ rights, yet safe for girls to read, comment on or discuss with friends and family without severe retaliation?”

Research suggests that girls who are more vulnerable offline (due to poverty or other forms of marginalization) are likely also more vulnerable to certain risks online, so we design with that in mind, she said. “When we started off on this project, our team members were experts in digital, but we had less experience with the safety and privacy aspects when it comes to girls living under $2/day or who were otherwise vulnerable. Having additional guidance and developing a policy on this aspect has helped immensely – but it has also slowed our processes down and sometimes made them more expensive,” she noted. “We had to go back to everything and add additional layers of security to make it as safe as possible for girls. We have also made sure to work very closely with our local partners so that everyone involved in the project is aware of girls’ safety and security.”

Social media sites: Open, Closed, Private, Anonymous?

One issue that came up was safety for children and youth on social media networks. A Salon participant said his organization had considered developing this type of network several years back but decided in the end that the security risks outweighed the advantages. Participants discussed whether social media networks can ever be safe. One school of thought is that the more open a platform, the safer it is, because “there is no interaction in private spaces that cannot be constantly monitored or moderated.” Others worry about open sites, however, and instead set up smaller, closed, private groups that are closely monitored. “We work with victims of violence to share their stories and coping mechanisms, so, for us, private groups are a better option.”

Some suggested that anonymity on a social media site can protect girls and other vulnerable groups; however, there is also research showing that Internet anonymity contributes to an increase in activities such as bullying and harassment. Some Salon participants felt it was better to leverage existing platforms and try to use them safely. Others felt that no existing social media platform has enough security for girls or other vulnerable groups to use at an appropriate level of risk. “We sometimes recruit participants via existing social media platforms,” said one discussant, “but we move people off of those sites to our own more secure sites as soon as we can.”

Moderation and education on safety

Salon participants working with vulnerable populations said that they moderate their sites very closely and remove comments if users share personal information or use offensive language. “Some project budgets allow us to have a moderator check every 2 hours. For others, we sweep accounts once a day and remove offensive content within 24 hours.” One discussant uses moderation to educate the community. “We always post an explanation about why a comment was removed in order to educate the larger user base about appropriate ways to use the social network,” he said.

Close moderation becomes difficult and costly, however, as the user base grows and a platform scales. This means individual comments cannot be screened and pre-approved, because that would take too long and defeat the purpose of an engaging platform. “We need to acknowledge the very real tension between building a successful and engaging community and maintaining privacy and security,” said one Salon participant. “The more you lock it down and the more secure it is, the harder you find it is to create a real and active community.”

Another participant noted that they use their safe, closed youth platform to educate and reinforce messaging about what is safe and positive use of social media in hopes that young people will practice safe behaviors when they use other platforms. “We know that education and awareness raising can only go so far, however,” she said, “and we are not blind to that fact.” She expressed concern about risk for youth who speak out about political issues, because more and more governments are passing laws that punish critics and censor information. The organization, however, does not want to encourage youth to stop voicing opinions or participating politically.

Data breaches and project close-out

One Salon participant asked whether organizations had examples of actual data breaches, and how they had handled them. Though no one shared examples, it was recommended that every organization have a contingency plan in place for accidental data leaks, breaches, or hacks. “You need to assume that you will get hacked,” said one person, “and develop your systems with that as a given.”

In addition to the day-to-day security issues, we need to think about project close-out, said one person. “Most development interventions are funded for a short, specific period of time. When a project finishes, you get a report, you do your M&E, and you move on. However, the data lives on, and the effects of the data live on. We really need to think more about budgeting for proper project wind-down and ensure that we are accountable beyond the lifetime of a project.”

Data security, anonymization, consent

Another question related to using and keeping girls’ (and others’) data safe. “Consent to collect and use data on a website or via a mobile platform can be tricky, especially if we don’t know how to explain what we might do with the data,” said one Salon participant. Another suggested it would be better not to collect any data at all. “Why do we even need to collect this data? Who is it for?” he asked. Others countered that this data is often the only way to understand what people are doing on the site, to make adjustments, and to measure impact.

One scenario was shared where several partner organizations discussed opening up a country’s cell phone data records to help contain a massive public health epidemic, but the privacy and security risks were too great, so the idea was scrapped. “Some said we could anonymize the data, but you can never really and truly anonymize data. It would have been useful to have a policy or a rubric that would have guided us in making that decision.”
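The point that data can never be truly anonymized is worth unpacking: even with names stripped, rare combinations of “quasi-identifiers” (a cell tower plus an unusual hour, say) can single people out. A minimal k-anonymity check illustrates the idea; the field names and k value below are invented for illustration, not from any actual rubric discussed at the Salon.

```python
from collections import Counter

# Minimal k-anonymity check. A dataset is k-anonymous if every
# combination of quasi-identifier values is shared by at least k
# records; a rare combination can re-identify a person even in
# "anonymized" data.
def k_anonymous(records, quasi_ids, k=5):
    combos = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in combos.values())

# Five identical records blend into a crowd; one outlier does not.
crowd = [{"tower": "A", "hour": 9} for _ in range(5)]
outlier = {"tower": "B", "hour": 23}
```

Even passing such a check is no guarantee, which is the discussant’s point: linking a second data set can re-introduce uniqueness.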

Policy and Guidelines on Girls’ Privacy, Security and Safety

Policy guidelines do exist on responsible data for NGOs, data security, privacy, and other aspects of digital security in general. (Here are some that we compiled, along with some other resources.) Most IT departments also have strict guidelines when it comes to donor data (in the case of credit card and account information, for example). This does not always cross over to program-level ICT or M&E efforts that involve the populations NGOs are serving through their programming.

General awareness around digital security is increasing, in part due to recent major corporate data hacks (e.g., Target, Sony) and the Edward Snowden revelations from a few years back, but much more needs to be done to educate NGO staff and management on the type of privacy and security measures that need to be taken to protect the data and mitigate risk for those who participate in their programs.  There is an argument that NGOs should have specific digital privacy, safety and security policies that are tailored to their programming and that specifically focus on the types of digital risks that girls, women, children or other vulnerable people face when they are involved in humanitarian or development programs.

One such policy (focusing on vulnerable girls) and toolkit (its accompanying principles and values, guidelines, checklists and a risk matrix template) was shared at the Salon. (Disclosure: this policy toolkit is one that I am working on; it should be ready to share in early 2016.) The policy and toolkit take program implementers through a series of issues and questions to help them assess potential risks and tradeoffs in a particular context, and to document decisions and improve accountability. The toolkit covers:

  1. data privacy and security – using approaches like Privacy by Design, setting limits on the data that is collected, and achieving meaningful consent.
  2. platform content and design – ensuring that content produced for girls, or that girls produce or volunteer, does not put girls at risk.
  3. partnerships – vetting and managing partners who may provide online/offline services or who may join an initiative and want access to data, and the monetization of girls’ data.
  4. monitoring, evaluation, research and learning (MERL) – how program implementers will gather and store digital data, whether collecting it directly or through third parties, for organizational MERL purposes.
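As a concrete illustration of the first item, “setting limits on the data that is collected” can be as simple as an allowlist of the fields a MERL dashboard actually needs, so personally identifying details never enter the analytics store. This is a hypothetical sketch; the field names are invented, not taken from the toolkit.

```python
# Hypothetical data-minimization sketch: only allowlisted fields
# survive ingestion, so PII (names, phone numbers) never reaches
# the analytics store. Field names are invented for illustration.
NEEDED_FIELDS = {"age_band", "district", "sessions_per_week"}

def minimize(record):
    """Drop everything except the allowlisted fields."""
    return {k: v for k, v in record.items() if k in NEEDED_FIELDS}

raw = {"name": "A.", "phone": "0712...", "age_band": "15-17",
       "district": "D4", "sessions_per_week": 3}
safe = minimize(raw)  # name and phone are discarded
```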

Privacy, Security and Safety Implications

Our final discussant spoke about the implications of implementing the above-mentioned girls’ privacy, safety and security policy. He started out saying that the policy begins with a manifesto: We will not compromise a girl in any way, nor will we opt for solutions that cut corners in terms of cost, process or time at the expense of her safety. “I love having this as part of our project manifesto,” he said. “It’s really inspiring! On the flip side, however, it makes everything I do more difficult, time consuming and expensive!”

To demonstrate some of the trade-offs and decisions required when working with vulnerable girls, he contrasted the current project (implemented with girls’ privacy and security as a core principle) with a commercial social media platform and advertising campaign he had previously worked on. There, the main concern was the reputation of the corporation, not that of the platform’s users or the risks they might expose themselves to by using it.

Moderation

On the private sector platform, said the discussant, “we didn’t have the option of pre-moderating comments because of the budget and because we had 800,000 users. To meet the campaign goals, it was more important for users to be engaged than to ensure content was safe. We focused on removing pornographic photos within 24 hours, using algorithms based on how much skin tone was in the photo.” In the fields of marketing and social media, it’s a fairly well-known issue that heavy-handed moderation kills platform engagement. “The more we educated and informed users about comment moderation, or removed comments, the deader the community became. The more draconian the moderation, the lower the engagement.”
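A skin-tone filter of the kind the discussant mentions can be sketched roughly as follows. The RGB thresholds here are illustrative assumptions (real systems use trained classifiers and better color spaces, and still make mistakes), but they show why such filters are cheap enough to run at scale.

```python
# Rough sketch of a skin-tone filter: flag photos where too many
# pixels fall in a skin-tone RGB range. Thresholds are illustrative
# assumptions, not any production system's actual values.
def looks_like_skin(r, g, b):
    return (r > 95 and g > 40 and b > 20 and r > g and r > b
            and max(r, g, b) - min(r, g, b) > 15)

def skin_fraction(pixels):
    """Fraction of (r, g, b) pixels in the skin-tone range."""
    if not pixels:
        return 0.0
    return sum(looks_like_skin(*p) for p in pixels) / len(pixels)

def flag_for_review(pixels, threshold=0.4):
    # Above the threshold, the photo goes to a human review queue.
    return skin_fraction(pixels) >= threshold
```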

The discussant had also worked on a platform for youth to discuss and learn about sexual health and practices, where he said users responded angrily to moderators and to comments that restricted their participation. “We did expose our participants to certain dangers, but we also knew that social digital platforms are more successful when they provide their users with a sense of ownership and control. So we identified users who exhibited desirable behaviors and created a different tier of users (super users) who could take ownership by policing the platform, flagging comments as inappropriate, or temporarily banning users.” This cut the moderation workload by 25%. The organization discovered, however, that it had to be careful about how much power these super users had. “They ended up creating certain factions on the platform, and we then had to develop safeguards and additional mechanisms by which we moderated our super users!”
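A tiered scheme like the one described might weight flags from trusted super users more heavily, so content is hidden for staff review faster when a vetted user reports it. This is a hypothetical sketch; the weights and threshold are assumptions, not the project’s actual rules.

```python
from collections import defaultdict

# Hypothetical tiered-flagging sketch: flags from vetted "super
# users" carry more weight than flags from regular users, so
# trusted reports hide content for staff review sooner.
FLAG_WEIGHT = {"regular": 1, "super": 3}
HIDE_THRESHOLD = 3

flag_totals = defaultdict(int)

def flag(comment_id, user_tier):
    """Record a flag; return True if the comment should be hidden."""
    flag_totals[comment_id] += FLAG_WEIGHT[user_tier]
    return flag_totals[comment_id] >= HIDE_THRESHOLD
```

With these numbers, one super-user flag hides a comment immediately, while three separate regular users must agree; as the discussant found, the super users themselves may then need oversight.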

Direct Messages among users

In the private sector project example, engagement was measured by the number of direct or private messages sent between platform users. In the current scenario, however, said the discussant, “we have not allowed any direct messages between platform users because of the potential risks to girls of having places on the site that are hidden from moderators. So as you can see, we are removing some of our metrics by disallowing features because of risk. These activities are all things that would make the platform more engaging but there is a big fear that they could put girls at risk.”

Adopting a privacy, security, and safety policy

One discussant highlighted the importance of having privacy, safety and security policies before a project or program begins. “If you start thinking about it later on, you may have to go back and rebuild things from scratch because your security holes are in the design….” The way a database is set up to capture user data can make it difficult to query in the future or for users to have any control of what information is or is not being shared about them. “If you don’t set up the database with security and privacy in mind from the beginning, it might be impossible to make the platform safe for girls without starting from scratch all over again,” he said.

He also cautioned that when making more secure choices from the start, platform and tool development generally takes longer and costs more. It can be harder to budget because designers may not have experience with costing and developing the more secure options.

“A valuable lesson is that you have to make sure that what you’re trying to do is worth it in the first place if it’s going to be that expensive. Is it worth a girl’s while to use a platform if she first has to wade through five pages of terms and conditions on a small mobile phone screen? Are those terms and conditions even relevant to her personally or within her local context? Every click you ask a user to make will reduce their interest in reaching the platform. And if we don’t imagine that a girl will want to click through five screens of terms and conditions, the whole effort might not be worth it.” Clearly, aspects such as terms and conditions and consent processes need to be designed specifically to fit new contexts and new kinds of users.

Making responsible tradeoffs

The Girls’ Privacy, Security and Safety policy and toolkit shared at the Salon includes a risk matrix in which project implementers rank the intensity and probability of risks as high, medium or low. Based on how a situation, feature or other aspect is ranked, and on whether serious risks can be mitigated, decisions are made about whether to proceed. There will always be areas with a certain level of risk to the user. The key is in making decisions and trade-offs that balance the level of risk against the potential benefits or rewards of the tool, service, or platform. The toolkit can also help project designers imagine potential unintended consequences and mitigate the risks related to them. The policy also offers a way to systematically and proactively consider potential risks, decide how to handle them, and document decisions so that organizations and project implementers are accountable to girls, peers and partners, and organizational leadership.
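The mechanics of such a matrix can be sketched as a simple score of intensity times probability. The numeric thresholds and decision labels below are illustrative assumptions for the sake of the sketch, not the toolkit’s actual rubric.

```python
# Illustrative risk-matrix decision rule: score each risk by
# intensity (impact) x probability, both ranked low/medium/high.
# Thresholds and labels are assumptions for illustration only.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(intensity, probability):
    return LEVELS[intensity] * LEVELS[probability]

def decision(intensity, probability, mitigable=True):
    score = risk_score(intensity, probability)
    if score >= 6:  # serious risk
        return "mitigate then re-assess" if mitigable else "do not proceed"
    if score >= 3:  # moderate risk
        return "proceed with mitigation"
    return "proceed"
```

The value of writing the rule down, even this crudely, is that the decision and its inputs are documented and can be revisited, which is the accountability the policy aims for.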

“We’ve started to change how we talk about user data in our organization,” said one discussant. “We have stopped thinking about it as something WE create and own, and more as something GIRLS own. Banks don’t own people’s money – they borrow it for a short time. We are trying to think about data that way in the conversations we’re having about data, funding, business models, proposals and partnerships: you don’t get to own your users’ data, and we’re not going to share de-anonymized data with you. We’re also seeing legislation in some of the countries where we work moving in that direction, so it’s good to be thinking about this now and getting prepared.”

Take a look at our list of resources on the topic and add anything we may have missed!

 

Thanks to our friends at ThoughtWorks for hosting this Salon! If you’d like to join discussions like this one, sign up at Technology Salon. Salons are held under Chatham House Rule; therefore, no attribution has been made in this post.

Facebook and its Internet.org initiative (now called ‘Free Basics’), have faced their fair share of criticism, but I’m guessing that neither is going away anytime soon. So, here’s something that may be of interest to folks working with and/or designing mobile tools for lower income populations or those with lower end phones.

Praekelt Foundation is partnering with Facebook on an open source toolkit of technologies and strategies that will open the Free Basics platform to more organizations and/or tech developers to adapt existing services or create new ones for distribution through the web and the Free Basics platform.

Praekelt Foundation will be running this incubator for Free Basics. It will provide 100 social change organizations with tools, services and support worth a total of $200,000. The tools and lessons that emerge will be shared with the public in 2016.

Praekelt is working with a broad range of experts in international development, user experience, mobile technology and digital safety and security to create an independent panel that will be responsible for selecting the members of the incubator from an open call to developers, social enterprises and NGOs. Disclosure: I’ve been asked (and agreed) to join the selection panel and will be involved in reviewing applications. I have also provided input into the topic areas.

Applications are sought in the areas of health, education, agriculture, economic empowerment, gender equality, citizen engagement and others. The aim is to enhance information and services available via low-end phones for low-income communities, youth, women and girls, healthcare workers and/or other frontline staff, refugees and migrants, and/or the LGBTQI population. This might include provision of information about and/or access to things like financial services, medical services, advocacy initiatives, citizen engagement efforts, behavior change communications and support and/or counseling.

For questions, comments, or to find out more about the initiative, here’s Praekelt Foundation’s blog and a link to the call for proposals and the application form.

(I’m also interested in feedback to improve on the idea and process, etc., so feel free to get in touch with me also if you have comments.)

 

At our November 18th Technology Salon, we discussed how different organizations are developing their ICT for development (ICT4D) strategies. We shared learning on strategy development and buy-in, talked about whether organizations should create special teams or labs for ICT- and innovation-related work or mainstream the ICT4D function, and thought about how organizations can define and find the skill sets needed for taking their ICT-enabled work forward. Population Council’s Stan Mierzwa, Oxfam America’s Neal McCarthy, and Cycle Technologies’ Leslie Heyer joined as lead discussants, and we heard from Salon participants about their experiences too.

Participating organizations were at various stages of ICT4D work, but most had experienced similar challenges and frustrations with taking their work forward. Even organizations that had created ICT4D strategies a couple of years ago said that implementation was slow.

Some of the key elements mentioned by our first discussant as important for managing and strategically moving ICT forward in an organization included:

  • being more informed about where different offices and staff were using ICTs for programmatic work,
  • establishing a standard set of technology tools for organizational use,
  • improving knowledge management about ICTs,
  • publishing on how ICTs were being used in programs (to help with credibility),
  • engaging with different teams and leadership to secure support and resources, and
  • working more closely with human resources teams, who often do not understand ICT4D-related job descriptions and the profiles needed.

Our second discussant said that his organization developed an ICT4D strategy in order to secure resources and greater support for moving ICT4D forward. It was also starting to be unwieldy to manage all of the different ideas and tools being used across the organization, and it seemed that greater harmonization would allow for improved IT support for more established tools as well as establishment of other ways to support new innovations.

In this case, the organization looked at ICTs as two categories: technology for development workers and technology for development outcomes. They used Gartner’s ‘pace layered’ model (which characterizes systems of innovation, systems of differentiation, and systems of record) as a way of analyzing the support roles of different departments.

One of the initial actions taken by this organization was establishing a small tech incubation fund that different offices could apply for in order to try something new with ICTs in their programs and campaigns. Another action was to take 10 staff to the Catholic Relief Services (CRS) ICT4D conference to learn more about ICT4D and to see what their peers from similar organizations were doing. In return for attending the conference, staff were required to submit a proposal for the tech incubation fund.

For the development of the strategy document and action plan, the ICT4D strategy team worked with a wider group of staff to develop a list of current ICT-enabled initiatives and a visual heat map of actions and activities across the organization. This formed the basis for discussions on where lots of ICT4D activities were happening and where there was nothing going on with ICTs. The team then discussed what the organization should do strategically to support and potentially consolidate existing activities and what should be done about areas where there were few ICT-related activities – should those areas be left alone or was there a reason to look at them to see if ICT should be incorporated?

Having done that, the organization adapted Nethope’s Organizational Guide to ICT4D to fit its own structure and culture, and used it as a framework for ICT4D strategy discussions with key staff from different teams. The Nethope guide suggests five key functions for strategic, organization-wide ICT4D: lead organizational change, drive knowledge exchange, build a portfolio, manage processes, and develop an advisory service (see below). The aforementioned activities were also clustered according to which of these 5 areas they fell into.


(Table of contents from Nethope’s Guide.)

The organization felt it was also important to change the image of the IT team. ‘We had to show that we were not going to tie people up with formal committees and approvals if they wanted to try something new and innovative. Being more approachable is necessary or staff will bypass the IT team and go to consultants, and then we open ourselves up to data privacy risks and we also lose institutional knowledge.’

Salon participants agreed that it was important to know how to “sell” an ICT4D-related idea to frontline staff, management and leadership. Some ways to do this include demonstrating the value-add of ICTs in terms of longer-term cost and time efficiencies, showing the benefit of real-time data for decision-making, and demonstrating what peer organizations are doing. Organizations often also need someone at the top who is pushing for change and modernization.

Our third discussant said that her company has been shifting from a commercial product developer to a full-fledged technology company, and she outlined the need for strategic thinking along that journey. Initially, the company outsourced activities such as research and data collection. With time, it started to pull key functions in-house, since systems maintenance and technology have become a core part of the business.

“As a small company, we can be flexible and change easily,” she said. “ICT is embedded into our culture and everyone thinks about it.” One challenge that many ICT4D initiatives face – whether they are happening in a non-profit or a for-profit – is sustainability. “People are often fine with paying for a physical product, but when it comes to the web, they are accustomed to getting everything for free, which makes long-term sustainability difficult.”

In order to continuously evolve their strategies, organizations need time and space to step back and think about their underlying values and where they see themselves in 5 or 10 years. A more proactive relationship with donors is also important. Although Salon participants felt that the ICT4D Principles and related processes were promising, they also felt that donors do not have a clear idea of what they are looking for, what exists already, what needs to be created, and what evidence base exists for different tools or kinds of ICT4D. One Salon participant felt that ‘donor agencies don’t know what kinds of tech are effective, so it’s up to you as an implementer to bring the evidence to the table. It’s critical to have the ICT4D support staff at the table with you, because if not, these more detailed conversations about the tech don’t happen with donors and you’ll find all kinds of duplication of efforts.’

Another challenge with thinking about ICT4D in a strategic way is that donors normally don’t want to fund capacity building, said another Salon participant. They prefer to fund concrete projects or innovation challenges rather than supporting organizations to create an environment that gives rise to innovation. In addition, funding beyond the program cycle is a big challenge. ‘We need to be thinking about enterprise systems, layered on systems, national systems,’ said one person. ‘And systems really struggle here to scale and grow if you can’t claim ownership for the whole.’

Salon participants highlighted hiring and human resources departments as a big barrier when it comes to ICT4D. It is often not clear what kinds of skills are needed to implement ICT4D programs, and human resources teams often screen for the wrong skill sets because they do not understand the nature of ICT4D. ‘I always make them give me all the CVs and screen them myself,’ said one person. ‘If not, some of the best people will not make it to the short list.’ Engaging with human resources and sharing the ICT4D strategy is one way to help with better hiring and matching of job needs with skill sets that are out there and potentially difficult to find.

In conclusion, whether the ICT4D strategy is to mainstream, to isolate and create a ‘lab,’ or to combine approaches, it seems that most organizations are struggling a bit to develop and/or implement ICT4D strategies due to the multiple pain points of slow organizational change and the need for more capacity and resources. Some are making headway, however, and developing clearer thinking and action plans that are paying off in the short term, and that may set the organizations up for eventual ICT4D success.

Thanks to Population Council for hosting this Salon! If you’d like to join discussions like this one, sign up at Technology Salon.

Salons are held under Chatham House Rule. No attribution has been made in this post.

Traditional development evaluation has been characterized as ‘backward looking’ rather than forward looking, and too focused on proving over improving. Some believe applying an ‘agile’ approach in development would be more useful – the assumption being that if you design a program properly and iterate rapidly and constantly based on user feedback and data analytics, you are more likely to achieve your goal or outcome without requiring expensive evaluations. The idea is that big data could eventually allow development agencies to collect enough passive data about program participants that there would no longer be a need to actively survey people or conduct a final evaluation, because obvious patterns would allow implementers to understand behaviors and improve programs along the way.

The above factors have made some evaluators and data scientists question whether big data and real-time availability of multiple big data sets, along with the technology that enables their collection and analysis, will make evaluation as we know it obsolete. Others have argued that it’s not the end of evaluation, but rather we will see a blending of real-time monitoring, predictive modeling, and impact evaluation, depending on the situation. Big questions remain, however, about the feasibility of big data in some contexts. For example, are big data approaches useful when it comes to people who are not producing very much digital data? How will the biases in big data be addressed to ensure that the poorest, least connected, and/or most marginalized are represented?

The Technology Salon on Big Data and Evaluation, hosted during November’s American Evaluation Association Conference in Chicago, opened these questions up for consideration by a roomful of evaluators and a few data scientists. We discussed the potential role of new kinds and quantities of data. We asked how to incorporate static and dynamic big data sources into development evaluation. We shared ideas on what tools, skills, and partnerships we might require if we aim to incorporate big data into evaluation practice. This rich and well-informed conversation was catalyzed by our lead discussants: Andrew Means, Associate Director of the Center for Data Science & Public Policy at the University of Chicago and Founder of Data Analysts for Social Good and The Impact Lab; Michael Bamberger, Independent Evaluator and co-author of Real World Evaluation; and Veronica Olazabal from The Rockefeller Foundation. The Salon was supported by ITAD via a Rockefeller Foundation grant.

What do we mean by ‘big data’?

The first task was to come up with a general working definition of what was understood by ‘big data.’ Very few of the organizations present at the Salon were actually using ‘big data’ and definitions varied. Some talked about ‘big data sets’ as those that could not be collected or analyzed by a human on a standard computer. Others mentioned that big data could include ‘static’ data sets (like government census data – if digitized — or cellphone record data) and ‘dynamic’ data sets that are being constantly generated in real time (such as streaming data input from sensors or ‘cookies’ and ‘crumbs’ generated through use of the Internet and social media). Others considered big data to be real time, socially-created and socially-driven data that could be harvested without having to purposely collect it or budget for its collection. ‘It’s data that has a life of its own. Data that just exists out there.’ Yet others felt that for something to be ‘big data’ multiple big data sets needed to be involved, for example, genetic molecular data crossed with clinical trial data and other large data sets, regardless of static or dynamic nature. Big data, most agreed, is data that doesn’t easily fit on a laptop and that requires a specialized skill set that most social scientists don’t have. ‘What is big data? It’s hard to define exactly, but I know it when I see it,’ concluded one discussant.

Why is big data a ‘thing’?

As one discussant outlined, recent changes in technology have given rise to big data. Data collection, data storage and analytical power are becoming cheaper and cheaper. ‘We live digitally now and we produce data all the time. A UPS truck has anywhere from 50-75 sensors on it to do everything from optimize routes to indicate how often it visits a mechanic,’ he said. ‘The analytic and computational power in my iPhone is greater than what the space shuttle had.’ In addition, we have ‘seamless data collection’ in the case of Internet-enabled products and services, meaning that a person creates data as they access products or services, and this can then be monetized, which is how companies like Google make their money. ‘There is not someone sitting at Google going — OK, Joe just searched for the nearest pizza place, let me enter that data into the system — Joe is creating the data about his search while he is searching, and this data is a constant stream.’

What does big data mean for development evaluation?

Evaluators are normally tasked with making a judgment about the merit of something, usually for accountability, learning and/or to improve service delivery, and usually looking back at what has already happened. In the wider sense, the learning from evaluation contributes to program theory, needs assessment, and many other parts of the program cycle.

This approach differs in some key ways from big data work, because most of the new analytical methods used by data scientists are good at prediction but not very good at understanding causality, which is what social scientists (and evaluators) are most often interested in. ‘We don’t just look at giant data sets and find random correlations,’ however, explained one discussant. ‘That’s not practical at all. Rather, we start with a hypothesis and make a mental model of how different things might be working together. We create regression models and see which performs better. This helps us to know if we are building the right hypothesis. And then we chisel away at that hypothesis.’
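The hypothesis-driven workflow this discussant describes — build competing models that encode different hypotheses, then see which performs better — can be sketched in a few lines. This is an illustrative example only, not anything presented at the Salon: the data are synthetic, and plain NumPy least squares stands in for whatever tooling an actual data science team would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the outcome is in fact driven by x1, not x2.
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 * x1 + rng.normal(scale=0.5, size=n)

def r_squared(X, y):
    """Fit OLS with an intercept via least squares; return in-sample R^2."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Two candidate models encoding two different hypotheses.
r2_model_a = r_squared(x1.reshape(-1, 1), y)  # hypothesis: y depends on x1
r2_model_b = r_squared(x2.reshape(-1, 1), y)  # hypothesis: y depends on x2

print(f"y ~ x1: R^2 = {r2_model_a:.2f}")
print(f"y ~ x2: R^2 = {r2_model_b:.2f}")
```

The model built on the right hypothesis explains most of the variance, while the other explains almost none — which is the signal used to keep "chiseling away" at the better hypothesis rather than fishing for random correlations.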

Some challenges come up when we think about big data for development evaluation because the social sector lacks the resources of the private sector. In addition, data collection in the world of international development is not often seamless because ‘we care about people who do not live in the digital world,’ as one person put it. Populations we work with often do not leave a digital trail. Moreover, only in some cases (for example, education in the US) do we have complete data on an entire population, meaning that development evaluators need to figure out how to deal with bias and sampling.

Satellite imagery can bring in some data that was unavailable in the past, and this is useful for climate and environmental work, but we still do not have a lot of big data for other types of programming, one person said. What’s more, wholly machine-based learning, and the kind of ‘deep learning’ made possible by today’s computational power is currently not very useful for development evaluation.

Evaluators often develop counterfactuals so that they can determine what would have happened without an intervention. They may use randomized controlled trials (RCTs), difference-in-differences models, and other statistical and econometric approaches to do this. One area where data science may provide some support is in helping to answer questions about counterfactuals.

More access to big data (and open data) could also mean that development and humanitarian organizations stop duplicating data collection functions. Perhaps most interestingly, big data’s predictive capabilities could in the future be used in the planning phase to inform the kinds of programs that agencies run, where they should be run, and who should be let into them to achieve the greatest impact, said one discussant. Computer scientists and social scientists need to break down language barriers and come together more often so they can better learn from one another and determine where their approaches can overlap and be mutually supportive.

Are we all going to be using big data?

Not everyone needs to use big data. Not everyone has the capacity to use it, and it doesn’t exist for offline populations, so we need to be careful that we are not forcing it where it’s not the best approach. As one discussant emphasized, big data is not magic, and it’s not universally applicable. It’s good for some questions and not others, and it should be considered as another tool in the toolbox rather than the only tool. Big data can provide clues to what needs further examination using other methods, and thus most often it should be part of a mixed methods approach. Some participants felt that the discussion about big data was similar to the one 20 years ago on electronic medical records or to the debate in the evaluation community about quantitative versus qualitative methods.

What about groups of people who are digitally invisible?

There are serious limitations when it comes to the data we have access to in the poorest communities, where there are no tablets and fewer cellphones. We also need to be aware of ‘micro-exclusion’ (who within a community or household is left out of the digital revolution?) and intersectionality (how do different factors of exclusion combine to limit certain people’s digital access?) and consider how these affect the generation and interpretation of big data. There is also a question about the intensity of the digital footprint: How much data and at what frequency is it required for big data to be useful?

Some Salon participants felt that over time, everyone would have a digital presence and/or data trail, but others were skeptical. Some data scientists are experimenting with calibrating small amounts of data and comparing them to human-collected data in an attempt to make big data less biased, a discussant explained. Another person said that by digitizing and validating government data on thousands (in the case of India, millions) of villages, big data sets could be created for those that are not using mobiles or data.

Another person pointed out that generating digital data is a process that involves much more than simple access to technology. ‘Joining the digital discussion’ also requires access to networks, local language content, and all kinds of other precursors, she said. We also need to be very aware that these kinds of data collection processes impact on people’s participation and input into data collection and analysis. ‘There’s a difference between a collective evaluation activity where people are sitting around together discussing things and someone sitting in an office far from the community getting sound bites from a large source of data.’

Where is big data most applicable in evaluation?

One discussant laid out areas where big data would likely be the most applicable to development evaluation:

[Slide: areas where big data is likely most applicable to development evaluation]

It would appear that big data has huge potential in the evaluation of complex programs, he continued. ‘It’s fairly widely accepted that conventional designs don’t work well with multiple causality, multiple actors, multiple contextual variables, etc. People chug on valiantly, but it’s expected that you may get very misleading results. This is an interesting area because there are almost no evaluation designs for complexity, and big data might be a possibility here.’

In what scenarios might we use big data for development evaluation?

This discussant suggested that big data might be considered useful for evaluation in three areas:

  1. Supporting conventional evaluation design by adding new big-data-generated variables. For example, one could add transaction data from ATMs to conventional survey-generated poverty indicators.
  2. Increasing the power of a conventional evaluation design by using big data to strengthen the sample selection methodology. For example, satellite images were combined with data collected on the ground and propensity score matching was used to strengthen comparison group selection for an evaluation of the effects of interventions on protecting forest cover in Mexico.
  3. Replacing a conventional design with a big data analytics design by replacing regression based models with systems analysis. For example, one could use systems analysis to compare the effectiveness of 30 ongoing interventions that may reduce stunting in a sample of villages. Real-time observations could generate a time-series that could help to estimate the effectiveness of each intervention in different contexts.
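The second scenario — using remotely sensed data to strengthen comparison-group selection — can be illustrated with a toy matching routine. This is a hypothetical sketch, not the method from the Mexico forest-cover evaluation: a single made-up satellite-derived "forest cover" score stands in for a full propensity score model, and simple nearest-neighbour matching without replacement selects the comparison group.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical villages: 'cover' is a satellite-derived forest-cover index.
treated_cover = rng.uniform(0.4, 0.9, size=10)   # villages in the program
control_cover = rng.uniform(0.0, 1.0, size=50)   # candidate comparison villages

# Match each treated village to its closest unused control on the score.
available = list(range(len(control_cover)))
matches = {}
for i, score in enumerate(treated_cover):
    j = min(available, key=lambda k: abs(control_cover[k] - score))
    matches[i] = j
    available.remove(j)  # matching without replacement

# After matching, the comparison group resembles the treated group on the score.
gap = np.mean([abs(treated_cover[i] - control_cover[j]) for i, j in matches.items()])
print(f"mean score gap after matching: {gap:.3f}")
```

In a real evaluation the matching score would be a propensity score estimated from many covariates (satellite plus ground-collected data), but the mechanic is the same: the richer the data layers, the better the comparison group.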

It is important to remember construct validity too. ‘If big data is available, but it’s not quite answering the question that you want to ask, it might be easy to decide to do something with it, to run some correlations, and to think that maybe something will come out. But we should avoid this temptation,’ he cautioned. ‘We need to remember and respect construct validity and focus on measuring what we think we are measuring and what we want to measure, not get distracted by what a data set might offer us.’

What about bias in data sets?

We also need to be very aware that big data carries with it certain biases that need to be accounted for, commented several participants; notably, when working with low connectivity populations and geographies or when using data from social media sites that cater to a particular segment of the population. One discussant shared an example where Twitter was used to identify patterns in food poisoning, and suddenly the upscale, hipster restaurants in the city seemed to be the problem. Obviously these restaurants were not the sole source of the food poisoning, but rather there was a particular kind of person that tended to use Twitter.

‘People are often unclear about what’s magical and what’s really possible when it comes to big data. We want it to tell us impossible things and it can’t. We really need to engage human minds in this process; it’s not a question of everything being automated. We need to use our capacity for critical thinking and ask: Who’s creating the data? How’s it created? Where’s it coming from? Who might be left out? What could go wrong?’ emphasized one discussant. ‘Some of this information can come from the metadata, but that’s not always enough to make certain big data is a reliable source.’ Bias may also be introduced through the viewpoints and unconscious positions, values and frameworks of the data scientists themselves as they are developing algorithms and looking for/finding patterns in data.

What about the ethical and privacy implications?

Big data carries significant ethical and privacy implications. Issues of consent and potential risk are critical considerations, especially when working with populations that are newly online and/or who may not have a good understanding of data privacy and how their data may be used by third parties who are collecting and/or selling it. However, one participant felt that a protectionist mentality is misguided. ‘We are pushing back and saying that social media and data tracking are bad. Instead, we should realize that having a digital life and being counted in the world is a right and it’s going to be inevitable in the future. We should be working with the people we serve to better understand digital privacy and help them to be more savvy digital citizens.’ It’s also imperative that aid and development agencies abandon their slow and antiquated data collection systems, she said, and use the new digital tools that are available to us.

How can we be more responsible with the data we gather and use?

Development and humanitarian agencies do need to be more responsible with data policies and practices, however. Big data approaches may contribute to negative data extraction tendencies if we mine data and deliver it to decision-makers far away from the source. It will be critical for evaluators and big data practitioners to find ways to engage people ‘on the ground’ and involve more communities in interpreting and querying their own big data. (For more on responsible data use, see the Responsible Development Data Book. Oxfam also has a responsible data policy that could serve as a reference. The author of this blog is working on a policy and practice guide for protecting girls’ digital safety, security and privacy as well.)

Who should be paying for big data sets to be made available?

One participant asked about costs and who should bear the expense of creating big data sets and/or opening them up to evaluators and/or data scientists. Others asked for examples of the private sector providing data to the social sector. This highlighted additional ethical and privacy issues. One participant gave an example from the healthcare space where there is lots of experience in accessing big data sets generated by government and the private sector. In this case, public and private data sets needed to be combined. There were strict requirements around anonymization and the effort ended up being very expensive, which made it difficult to build a business case for the work.

This can be a problem for the development sector, because it is difficult to generate resources for resolving social problems; there is normally only investment if there is some kind of commercial gain to be had. Some organizations are now hiring ‘data philanthropist’ positions that help to negotiate these kinds of data relationships with the private sector. (Global Pulse has developed a set of big data privacy principles to guide these cases.)

So, is big data going to replace evaluation or not?

In conclusion, big data will not eliminate the need for evaluation. Rather, it’s likely that it will be integrated as another source of information for strengthening conventional evaluation design. ‘Big Data and the underlying methods of data science are opening up new opportunities to answer old questions in new ways, and ask new kinds of questions. But that doesn’t mean that we should turn to big data and its methods for everything,’ said one discussant. ‘We need to get past a blind faith in big data and get more practical about what it is, how to use it, and where it adds value to evaluation processes,’ said another.

Thanks again to all who participated in the discussion! If you’d like to join (or read about) conversations like this one, visit Technology Salon. Salons run under Chatham House Rule, so no attribution has been made in this summary post.

Last month I joined a panel hosted by the Guardian on the contribution of innovation and technology to the Sustainable Development Goals (SDGs). Luckily they said that it was fine to come from a position of ‘skeptical realism.’

To drum up some good skeptical realist thoughts, I did what every innovative person does – posted a question on Facebook. A great discussion among friends who work in development, innovation and technology ensued. (Some might accuse me of ‘crowdsourcing’ ideas for the panel, but I think of it as more of a group discussion enabled by the Internet.) In the end, I didn’t get to say most of what we discussed on Facebook while on the panel, so I’m summarizing here.

To start off, I tend to think that the most interesting thing about the SDGs is that they are not written for ‘those developing countries over there.’ Rather, all countries are supposed to meet them. (I’m still not sure how many people or politicians in the US are aware of this.)

Framing them as global goals forces recognition that we have global issues to deal with — inequality and exclusion happen within countries and among countries everywhere. This opens doors for a shift in the narrative and framing of ‘development.’ (See Goal 10: Reduce inequality within and among countries; and Goal 16: Promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels.)

These core elements of the SDGs — exclusion and inequality – are two things that we also need to be aware of when we talk about innovation and technology. And while innovation and technology can contribute to development and inclusion…by connecting people and providing more access to information; helping improve access to services; creating space for new voices to speak their minds; contributing in some ways to improved government and international agency accountability; improving income generation; and so on… it’s important to be aware of who is excluded from creating, accessing, using and benefiting from tech and tech-enabled processes and advances.

Who creates and/or controls the tech? Who is pushed off platforms because of abuse or violence? Who is taken advantage of through tech? Who is using tech to control others? Who is seen as ‘innovative’ and who is ignored? For whom are most systems and services designed? Who is an entrepreneur by choice vs. an informal worker by necessity? There are so many questions to ask at both macro and micro levels.

But that’s not the whole of it. Even if all the issues of access and use were resolved, there are still problems with framing innovation and technology as one of the main solutions to the world’s problems. A core weakness of the Millennium Development Goals (MDGs) was that they were heavy on quantifiable goals and weak on reaching the most vulnerable and on improving governance. Many innovation and technology solutions suffer the same problem.

Sometimes we try to solve the wrong problems with tech, or we try to solve the wrong problems altogether, without listening to and involving the people who best understand the nature of those problems, without looking at the structural changes needed for sustainable impact, and without addressing exclusion at the micro-level (within and among districts, communities, neighborhoods or households).

Often a technological solution is brought in for questionable reasons. There is too little analysis of the political economy in development work, as DE noted on the discussion thread. Too few people are asking who is pushing for a technology solution. Why technology? Who gains? What is the motivation? As Ory Okolloh asked recently, why are Africans expected to innovate and entrepreneur our way out of our problems? We need to get past our collective fascination with the invention of products and move onward to a more holistic understanding of innovation that involves sustainable implementation, change, and improvement over the longer term.

Innovation is a process, not a product. As MBC said on the discussion thread, “Don’t confuse doing it first with doing it best.” Innovation is not an event, a moment, a one-time challenge, a product, a simple solution. Innovation is technology agnostic, noted LS. So we need to get past the goal of creating and distributing more products. We need to think more about innovating and tweaking processes, developing new paradigms and adjusting and improving on ways of doing things that we already know work. Sometimes technology helps, but that is not always the case.

We need more practical innovation. We should be looking at old ideas in a new context, said AM, citing Steven Johnson’s Where Good Ideas Come From. “The problem is that we need systems change and no one wants to talk about that or do it because it’s boring and slow.”

The heretical IT dared suggest that there’s too much attention to high profile innovation. “We could do with more continual small innovation and improvements and adaptations with a strong focus on participants/end users. This doesn’t make big headlines but it does help us get to actual results,” he said.

Along with that, IW suggested we need more innovative thinking and listening, and less innovative technology. “This might mean senior aid officials spending a half a day per week engaging with the people they are supposed to be helping.”

One innovative behavior change might be that of overcoming the ‘expert knowledge’ problem said DE. We need to ensure that the intended users or participants in an innovation or a technology or technological approach are involved and supported to frame the problem, and to define and shape the innovation over time. This means we also need to rely on existing knowledge – immediate and documented – on what has worked, how and when and where and why and what hasn’t, and to make the effort to examine how this knowledge might be relevant and useful for the current context and situation. As Robert Chambers said many years ago: the links of modern scientific knowledge with wealth, power, and prestige condition outsiders to despise and ignore rural peoples’ own knowledge. Rural people’s knowledge and modern scientific knowledge are complementary in their strengths and weaknesses.

Several people asked whether the most innovative thing in the current context is simply political will and seeing past an election cycle, a point that Kentaro Toyama often makes. We need renewed focus on political will and capacity and a focus on people rather than generic tech solutions.

In addition, we need paradigm shifts and more work to make the current system inclusive and fit for purpose. Most of our existing institutions and systems, including that of ‘development’ carry all of the old prejudices and ‘isms’. We need more questioning of these systems and more thinking about realistic alternatives – led and designed by people who have been traditionally excluded and pushed out. As a sector, we’ve focused a LOT on technocratic approaches over the past several years, and we’ve stopped being afraid to get technical. Now we need to stop being afraid to get political.

In summary, there is certainly a place for technology and for innovation in the SDGs, but the innovation narrative needs an overhaul. Just as we’ve seen with terms like ‘social good’ and ‘user centered design’ – we’ve collectively imbued these ideas and methods with properties that they don’t actually have and we’ve fetishized them. Re-claiming the term innovation, said HL, and taking it back to a real process with more realistic expectations might do us a lot of good.


Back in 2010, I wrote a post called “Where’s the ICT4D distance learning?” which led to some interesting discussions, including with the folks over at TechChange, who were just getting started out. We ended up co-hosting a Twitter chat (summarized here) and having some great discussions on the lack of opportunities for humanitarian and development practitioners to professionalize their understanding of ICTs in their work.

It’s pretty cool today, then, to see that in addition to having run a bunch of on-line short courses focused on technology and various aspects of development and social change work, TechChange is kicking off their first Diploma program focusing on using ICT for monitoring and evaluation — an area that has become increasingly critical over the past few years.

I’ve participated in a couple of these short courses, and what I like about them is that they are not boring one-way lectures. Though you are studying at a distance, you don’t feel like you’re alone. There are variations on the type and length of the educational materials including short and long readings, videos, live chats and discussions with fellow students and experts, and smaller working groups. The team and platform do a good job of providing varied pedagogical approaches for different learning styles.

The new Diploma in ICT and M&E program has tracks for working professionals (launching in September of 2015) and prospective Graduate Students (launching in January 2016). Both offer a combination of in-person workshops, weekly office hours, a library of interactive on-demand courses, access to an annual conference, and more. (Disclaimer – you might see some of my blog posts and publications there).

The graduate student track will also have a capstone project, portfolio development support, one-on-one mentorship, live simulations, and a job placement component. Both courses take 16 weeks of study, but these can be spread out over a whole year to provide maximum flexibility.

For many of us working in the humanitarian and development sectors, work schedules and frequent travel make it difficult to access formal higher-level schooling. Not to mention, few universities offer courses related to ICTs and development. The idea of incurring a huge debt is also off-putting for a lot of folks (including me!). I’m really happy to see good quality, flexible options for on-line learning that can improve how we do our work and that also provides the additional motivation of a diploma certificate.

You can find out more about the Diploma program on the TechChange website (note: registration for the fall course ends September 11th).


I had the privilege (no pun intended) of participating in the Art-a-Hack program via ThoughtWorks over the past couple of months. Art-a-Hack is a creative space for artists and hackers to get together for 4 Mondays in June and work together on projects that involve art, tech and hacking. There’s no funding involved, just encouragement, support, and a physical place to help you carve out some time for discovery and exploration.

I was paired up by the organizers with two others (Dmytri and Juan), and we embarked on a project. I had earlier submitted an idea of the core issues that I wanted to explore, and we mind-melded really well to come up with a plan to create something around them.

Here is our press release with links to the final product – WhiteSave.me. You can read our Artist Statement here and follow us on Twitter @whitesave.me. Feedback welcome, and please share if you think it’s worth sharing. Needless to say full responsibility for the project falls with the team, and it does not represent the views of any past, present or future employers or colleagues.

*****

Announcing WhiteSave.me

WhiteSave.me is a revolutionary new platform that enables White Saviors to deliver privilege to non-Whites whenever and wherever they need it with the simple tap of a finger.

Today’s White guy is increasingly told “check your privilege.” He often asks himself “What am I supposed to do about my privilege? It’s not my fault I was born white! And really, I’m not a bad person!”

Until now, there has been no simple way for a White guy to be proactive in addressing the issue of his privilege. He’s been told that he benefits from biased institutions and that his privilege is related to historically entrenched power structures. He’s told to be an ally but advised to take a back seat and follow the lead from people of color. Unfortunately this is all complex and time consuming, and addressing privilege in this way is hard work.

We need to address the issue of White privilege now however – we can’t wait. Changing attitudes, institutions, policies and structures takes too damn long! What’s more, we can’t expect White men or our current systems to go through deep changes in order to address privilege and inequality at the roots. What we can do is leapfrog over what would normally require decades of grassroots social organizing, education, policy work, and behavior change and put the solution to White privilege directly into White men’s hands so that everyone can get back to enjoying the American dream.

WhiteSave.me – an innovative solution that enables White men to quickly and easily deliver privilege to the underprivileged, requiring only a few minutes of downtime, at their discretion and convenience.

Though not everyone realizes it, White privilege affects a large number of White people, regardless of their age or political persuasion. White liberals generally agree that they are privileged, but most are simply tired of hearing about it and having to deal with it. Conservative White men believe their privilege is all earned, but most also consider it possible to teach people of color about deep-seated American values and traditions and the notion of personal responsibility. All told, what most White people want is a simple, direct way to address their privilege once and for all. Our research has confirmed that most White people would be willing to spend a few minutes every now and then sharing their privilege, as long as it does not require too much effort.

WhiteSave.me is a revolutionary and innovative way of addressing this issue. (Read Our Story here to learn more about our discovery moments!) We’ve designed a simple web and mobile platform that enables White men to quickly and easily deliver a little bit of their excess privilege to non-Whites, all through a simple and streamlined digital interface. Liberal Whites can assuage guilt and concern about their own privilege with the tap of a finger. Conservatives can feel satisfied that they have passed along good values to non-Whites. Libertarians can prove through direct digital action that tech can resolve complex issues without government intervention and via the free market. And non-White people of any economic status, all over the world, will benefit from immediate access to White privilege directly through their devices. Everyone wins – with no messy disruption of the status quo!

How it Works

Visit our “how it works” page for more information, or simply “try it now” and your first privilege delivery session is on us! Our patented Facial Color Recognition Algorithm (™) will determine whether you qualify as a White Savior, based on your skin color. (Alternatively it will classify you as a non-White ‘Savee’). Once we determine your Whiteness, you’ll be automatically connected via live video with a Savee who is lacking in White privilege so that you can share some of your good sense and privileged counsel with him or her, or periodically alleviate your guilt by offering advice and a one-off session of helping someone who is less privileged.

Our smart business model guarantees WhiteSave.me will be around for as long as it’s needed, and that we can continue innovating with technology to iterate new solutions as technology advances. WhiteSave.me is free for White Saviors to deliver privilege, and non-Whites can choose from our Third World Freemium Model (free), our Basic Model ($9/month), or our Premium Model ($29/month). To generate additional revenue, our scientific analysis of non-White user data will enable us to place targeted advertisements that allow investors and partners to extract value from the Base of the Pyramid. Non-Profit partners are encouraged to engage WhiteSave.me as their tech partner for funding proposals, thereby appearing innovative and guaranteeing successful grant revenue.

See our FAQs for additional information and check out our Success Stories for more on how WhiteSave.me, in just its first few months, has helped thousands to deliver privilege all over the world.

Try It Now and you’ll be immediately on your way to delivering privilege through our quick and easy digital solution!

Contact help@whitesave.me for more information. And please help us spread the word. Addressing the issue of White privilege has never been so easy!

 
