
Archive for May, 2016

Over the past 4 years I’ve had the opportunity to look more closely at the role of ICTs in Monitoring and Evaluation practice (and the privilege of working with Michael Bamberger and Nancy MacPherson in this area). When we started out, we wanted to better understand how evaluators were using ICTs in general, how organizations were using ICTs internally for monitoring, and what was happening overall in the space. A few years into that work we published the Emerging Opportunities paper that aimed to be somewhat of a landscape document or base report upon which to build additional explorations.

As a result of this work, in late April I had the pleasure of talking with the OECD-DAC Evaluation Network about the use of ICTs in Evaluation. I drew from a new paper on The Role of New ICTs in Equity-Focused Evaluation: Opportunities and Challenges that Michael, Veronica Olazabal and I developed for the Evaluation Journal. The core points of the talk are below.

*****

In the past two decades there have been three main explosions that impact M&E: a device explosion (mobiles, tablets, laptops, sensors, dashboards, satellite maps, Internet of Things, etc.); a social media explosion (digital photos, online ratings, blogs, Twitter, Facebook, discussion forums, WhatsApp groups, co-creation and collaboration platforms, and more); and a data explosion (big data, real-time data, data science and analytics moving into the field of development, capacity to process huge data sets, etc.). This new ecosystem is something that M&E practitioners should be tapping into and understanding.

In addition to these ‘explosions,’ there’s been a growing emphasis on documenting the use of ICTs in evaluation, alongside a greater thirst for understanding how, when, where and why to use ICTs for M&E. We’ve held and attended large gatherings on ICTs and Monitoring, Evaluation, Research and Learning (MERL Tech). And in the past year or two, it seems the development and humanitarian fields can’t stop talking about the potential of “data” – small data, big data, inclusive data, real-time data for the SDGs, etc. – and the possible roles for ICTs in collecting, analyzing, visualizing, and sharing that data.

The field has advanced in many ways. But as the tools and approaches develop and shift, so do our understandings of the challenges. Concern about “more data” and “open data” and the inherent privacy risks has caught up with the enthusiasm about the possibilities of new technologies in this space. Likewise, there is more in-depth discussion about methodological challenges, bias and unintended consequences when new ICT tools are used in evaluation.

Why should evaluators care about ICT?

There are two core reasons that evaluators should care about ICTs. Reason number one is practical. ICTs help address real-world challenges in M&E: insufficient time, insufficient resources and poor-quality data. And let’s be honest – ICTs are not going away, and evaluators need to accept that reality at a practical level as well.

Reason number two is both professional and personal. If evaluators want to stay abreast of their field, they need to be aware of ICTs. If they want to improve evaluation practice and influence better development, they need to know if, where, how and why ICTs may (or may not) be of use. Evaluation commissioners need to have the skills and capacities to know which new ICT-enabled approaches are appropriate for the type of evaluation they are soliciting and whether the methods being proposed are going to lead to quality evaluations and useful learnings. One trick to using ICTs in M&E is understanding who has access to what tools, devices and platforms already, and what kind of information or data is needed to answer what kinds of questions or to communicate which kinds of information. There is quite a science to this and one size does not fit all. Evaluators, because of their critical thinking skills and social science backgrounds, are very well placed to take a more critical view of the role of ICTs in Evaluation and in the worlds of aid and development overall and help temper expectations with reality.

Though ICTs are being used across all phases of the program cycle (research/diagnosis and consultation, design and planning, implementation and monitoring, evaluation, reporting/sharing/learning), there is plenty of hype in this space.


There is certainly a place for ICTs in M&E, if introduced with caution and clear analysis about where, when and why they are appropriate and useful, and evaluators are well-placed to take a lead in identifying and trialing what ICTs can offer to evaluation. If they don’t, others are going to do it for them!

Promising areas

There are four key areas (I’ll save the nuance for another time…) where I see a lot of promise for ICTs in Evaluation:

1. Data collection. Here I’d divide it into three kinds of data collection, and note that the latter two normally also provide ‘real-time’ data:

  • Structured data gathering – where enumerators or evaluators go out with mobile devices to collect specific types of data (whether quantitative or qualitative); a minimal validation sketch follows this list.
  • Decentralized data gathering – where the focus is on self-reporting or ‘feedback’ from program participants or research subjects.
  • Data ‘harvesting’ – where data is gathered from existing online sources like social media sites, WhatsApp groups, etc.
  • Real-time data – which aims to provide data in a much shorter time frame, normally for monitoring, though these data sets may be useful for evaluators as well.
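To make the first category concrete, here is a minimal, hypothetical sketch (in Python, not tied to any particular platform such as ODK or KoBoToolbox) of the kind of automated quality checks a structured mobile data-gathering workflow might run on incoming survey records. All field names and thresholds are invented for illustration.

```python
# Minimal sketch: validating structured survey records as they arrive from
# enumerators' mobile devices. Field names and plausibility rules are
# hypothetical, not from any specific data collection platform.

from datetime import datetime

REQUIRED_FIELDS = ["respondent_id", "district", "household_size", "submitted_at"]

def validate_record(record: dict) -> list:
    """Return a list of data-quality problems found in one submission."""
    problems = []
    for field in REQUIRED_FIELDS:
        if field not in record or record[field] in ("", None):
            problems.append(f"missing field: {field}")
    size = record.get("household_size")
    if isinstance(size, int) and not (1 <= size <= 30):
        problems.append(f"household_size out of plausible range: {size}")
    try:
        datetime.fromisoformat(str(record.get("submitted_at")))
    except ValueError:
        problems.append("submitted_at is not a valid ISO timestamp")
    return problems

if __name__ == "__main__":
    submissions = [
        {"respondent_id": "A-001", "district": "North", "household_size": 6,
         "submitted_at": "2016-05-20T10:15:00"},
        {"respondent_id": "A-002", "district": "", "household_size": 250,
         "submitted_at": "yesterday"},
    ]
    for s in submissions:
        print(s["respondent_id"], validate_record(s) or "ok")
```

Checks like these are one of the practical reasons mobile collection can improve data quality relative to paper forms, provided someone reviews the flags.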

2. New and mixed methods. These are areas that Michael Bamberger has been looking at quite closely. New ICT tools and data sources can contribute to more traditional methods. But triangulation still matters.

  • Improving construct validity – enabling a greater number of data sources at various levels that can contribute to better understanding of multi-dimensional indicators (for example, looking at changes in the volume of withdrawals from ATMs, records of electronic purchases of agricultural inputs, satellite images showing lorries traveling to and from markets, and the frequency of Tweets that contain the words hunger or sickness).
  • Evaluating complex development programs – tracking complex and non-linear causal paths and implementation processes by combining multiple data sources and types (for example, participant feedback plus structured qualitative and quantitative data, big data sets/records, census data, social media trends and input from remote sensors).
  • Mixed methods approaches and triangulation – using traditional and new data sources (for example, using real-time data visualization to provide clues on where additional focus group discussions might need to be done to better understand the situation or improve data interpretation).
  • Capturing wide-scale behavior change – using social media data harvesting and sentiment analysis to better understand widespread changes in perceptions, attitudes and stated behaviors, and how these shift over time (a toy keyword-tallying sketch follows this list).
  • Combining big data and real-time data – these emerging approaches may become valuable for identifying potential problems and emergencies that need further exploration using traditional M&E approaches.
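As a toy illustration of the construct-validity and behavior-change points above (for example, tracking the frequency of posts that mention hunger or sickness), the sketch below simply counts weekly keyword mentions in a handful of invented posts. A real pipeline would harvest posts from a platform API or archive, with attention to consent and terms of use, and would need far more careful text processing; this only shows the shape of the idea.

```python
# Toy sketch: counting how often harvested social media posts mention
# "hunger" or "sickness" per ISO week. The posts below are invented.

from collections import Counter
from datetime import date

KEYWORDS = {"hunger", "sickness"}

posts = [
    (date(2016, 5, 2), "prices up again, real hunger in the villages"),
    (date(2016, 5, 3), "clinic queues longer, lots of sickness this week"),
    (date(2016, 5, 10), "harvest looking better, less talk of hunger"),
]

weekly_counts = Counter()
for posted_on, text in posts:
    week = posted_on.isocalendar()[1]      # ISO week number
    words = set(text.lower().split())
    weekly_counts[week] += len(KEYWORDS & words)

for week, mentions in sorted(weekly_counts.items()):
    print(f"week {week}: {mentions} keyword mention(s)")
```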

3. Data Analysis and Visualization. This is an area that is less advanced than the data collection area – often it seems we’re collecting more and more data but still not really using it! Some interesting things here include:

  • Big data and data science approaches – there’s a growing body of work exploring how to use predictive analytics to help define which programs might work best in which contexts and with which kinds of people. How this connects to evaluation is still being worked out, and there are plenty of ethical aspects to think about here too – most of us don’t like the idea of predictive policing, and in some ways you could end up in a situation that is not quite what was aimed at. With big data, you’ll often have a hypothesis and go looking for patterns in huge data sets, whereas with evaluation you normally have particular questions and you design a methodology to answer them – it’s interesting to think about how these two approaches are going to combine.
  • Data dashboards – these are becoming very popular as people try to work out how to do a better job of using the data that is coming into their organizations for decision-making. There are some efforts at pulling data from the community level all the way up to UN representatives – for example, the global-level consultations that were done for the SDGs, or using “near real-time data” to share with board members. Other efforts are more focused on providing frontline managers with tools to better tweak their programs during implementation (a small aggregation sketch follows this list).
  • Meta-evaluation – some organizations are working on ways to better draw conclusions from what we are learning from evaluation around the world and to better visualize these conclusions to inform investments and decision-making.
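For the dashboard point, here is a small, purely illustrative aggregation: rolling up invented near-real-time monitoring submissions into the kind of regional summary indicators a dashboard might show to frontline managers. It uses pandas; the column names and figures are made up.

```python
# Illustrative only: aggregating (invented) monitoring submissions into
# regional summary indicators for a simple dashboard view.

import pandas as pd

submissions = pd.DataFrame(
    {
        "region": ["North", "North", "South", "South", "South"],
        "clinic_visits": [120, 95, 210, 180, 160],
        "stockouts_reported": [1, 0, 3, 2, 1],
    }
)

dashboard = submissions.groupby("region").agg(
    total_visits=("clinic_visits", "sum"),
    avg_visits=("clinic_visits", "mean"),
    stockouts=("stockouts_reported", "sum"),
)
print(dashboard)
```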

4. Equity-focused Evaluation. As digital devices and tools become more widespread, there is hope that they can enable greater inclusion and broader voice and participation in the development process. There are still huge gaps, however – in some parts of the world 23% fewer women than men have access to mobile phones – and when you talk about Internet access the gap is much, much bigger. But there are cases where greater participation in evaluation processes is being sought through mobile. When this is balanced with other methods to ensure that we’re not excluding the very poorest or those without access to a mobile phone, it can help to broaden the pool of voices we are hearing from. Some examples are:

  • Equity-focused evaluation / participatory evaluation methods – some evaluators are seeking to incorporate more real-time (or near real-time) feedback loops where participants provide direct feedback via SMS or voice recordings (a small SMS-parsing sketch follows this list).
  • Using mobile-based surveys to reach participants directly.
  • Enhancing data visualization for returning results back to the community and supporting community participation in data interpretation and decision-making.
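The sketch below illustrates the SMS feedback-loop idea in the first bullet: parsing inbound messages from a hypothetical survey that asks participants to text a 1-5 rating plus an optional comment, and flagging low ratings for follow-up. The message format and threshold are assumptions for illustration only, not the behavior of any specific SMS platform.

```python
# Sketch of a participant feedback loop via SMS. Assumes a (hypothetical)
# survey format of "<rating 1-5> <optional comment>".

import re

def parse_feedback(message: str):
    """Return (rating, comment), or (None, message) if no rating was given."""
    match = re.match(r"\s*([1-5])\b\s*(.*)", message)
    if not match:
        return None, message.strip()
    return int(match.group(1)), match.group(2).strip()

inbox = [
    "4 very helpful training",
    "1 nobody came to the meeting",
    "we never received the seeds",
]

flagged = []
for sms in inbox:
    rating, comment = parse_feedback(sms)
    if rating is not None and rating <= 2:   # low ratings flagged for follow-up
        flagged.append((rating, comment))
    print(rating, "|", comment)

print("needs follow-up:", flagged)
```

Even in a toy like this, the unparseable third message is a reminder that free-text responses still need a human (or a more careful method) behind the loop.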

Challenges

Alongside all the potential, of course there are also challenges. I’d divide these into 3 main areas:

1. Operational/institutional

Some of the biggest challenges to improving the use of ICTs in evaluation are institutional or related to institutional change processes. In focus groups I’ve done with different evaluators in different regions, this was emphasized as a huge issue. Specifically:

  • Potentially heavy up-front investment costs, training efforts, and/or maintenance costs if adopting/designing a new system at wide scale.
  • Tech- or tool-driven M&E processes – often these are also donor-driven. This happens because tech is perceived as cheaper, easier to deploy at scale, and more objective. It also happens because people and management are under a lot of pressure to “be innovative.” Sometimes this ends up leading to an over-reliance on digital data and remote data collection, and to time spent developing tools and looking at data sets on a laptop rather than spending time ‘on the ground’ to observe and engage with local organizations and populations.
  • Little attention to institutional change processes, organizational readiness, and the capacity needed to incorporate new ICT tools, platforms, systems and processes.
  • Bureaucracy may mean that decisions happen far from the ground and that there is little capacity to make quick decisions, even if real-time data is available. Data and analysis may be provided frequently to decision-makers sitting at a headquarters, or to local staff who do not have decision-making power in their own hands and must wait on orders from on high to adapt or change their program approaches and methods.
  • Swinging too far towards digital due to a lack of awareness that digital most often needs to be combined with the human element. Digital technology works better when combined with human interventions (such as visits to prepare people for using the technology, or making sure that gatekeepers – e.g., a husband or mother-in-law, in the case of women participants – are on board). A main message of the World Bank’s 2016 World Development Report, “Digital Dividends,” is that digital technology must always be combined with what the Bank calls “analog” (a.k.a. “human”) approaches.

2. Methodological

Some of the areas that Michael and I have been looking at relate to how the introduction of ICTs could address issues of bias, rigor, and validity — yet how, at the same time, ICT-heavy methods may actually just change the nature of those issues or create new issues, as noted below:

  • Selection and sample bias – you may be reaching more people, but you’re still going to be leaving some people out. Who is left out of mobile phone or ICT access/use? Typical respondents are male, educated, urban. How representative are these respondents of all ICT users and of the total target population?
  • Data quality and rigor – you may have an over-reliance on self-reporting via mobile surveys; lack of quality control ‘on the ground’ because it’s all being done remotely; enumerators may game the system if there is no personal supervision; there may be errors and bias in algorithms and logic in big data sets or analysis because of non-representative data or hidden assumptions.
  • Validity challenges – if there is a push to use a specific ICT-enabled evaluation method or tool without it being the right one, the design of the evaluation may not pass the validity challenge.
  • Fallacy of large numbers (in cases of national level self-reporting/surveying) — you may think that because a lot of people said something that it’s more valid, but you might just be reinforcing the viewpoints of a particular group. This has been shown clearly in research by the World Bank on public participation processes that use ICTs.
  • ICTs often favor extractive processes that do not involve local people and local organizations or provide benefit to participants/local agencies – data is gathered and sent ‘up the chain’ rather than shared or analyzed in a participatory way with local people or organizations. Not only is this disempowering, it may also harm data quality if people see no point in providing data that brings them no benefit.
  • There’s often a failure to identify unintended consequences or biases arising from use of ICTs in evaluation — What happens when you introduce tablets for data collection? What happens when you collect GPS information on your beneficiaries? What risks might you be introducing or how might people react to you when you are carrying around some kind of device?

3. Ethical and Legal

This is an area that I’m very interested in – especially as some donors have started asking for the raw data sets from any research, studies or evaluations that they are funding, and when these kinds of data sets are ‘opened’ there are all sorts of ramifications. There is quite a lot of heated discussion happening here. I was happy to see that DFID has just conducted a review of ethics in evaluation. Some of the core issues include:

  • Changing nature of privacy risks – issues here include privacy and protection of data; changing informed consent needs for digital data/open data; new risks of data leaks; and lack of institutional policies with regard to digital data.
  • Data rights and ownership: here there are issues with proprietary data sets; data ownership when there are public-private partnerships; the idea of ‘data philanthropy’ when it’s not clear whose data is being donated; personal data ‘for the public good’; open data/open evaluation/transparency; poor care taken when vulnerable people provide personally identifiable information; household data sets ending up in the hands of those who might abuse them; and the increasing impossibility of data anonymization, given that crossing data sets often means that re-identification is easier than imagined.
  • Moving decisions and interpretation of data away from ‘the ground’ and upwards to the head office/the donor.
  • Little funding for trialing and testing the validity of new approaches that use ICTs, or for documenting what is working or not working, where, why and how, in order to develop good practice for using new ICTs in evaluation.

Recommendations: 12 tips for better use of ICTs in M&E

Despite the rapid changes in the field in the two years since we wrote our initial paper on ICTs in M&E, most of our tips for doing it better still hold true.

  1. Start with a high-quality M&E plan (not with the tech).
    • But also learn about the new tech-related possibilities that are out there so that you’re not missing out on something useful!
  2. Ensure design validity.
  3. Determine whether and how new ICTs can add value to your M&E plan.
    • It can be useful to bring in a trusted tech expert in this early phase so that you can find out if what you’re thinking is possible and affordable – but don’t let them talk you into something that’s not right for the evaluation purpose and design.
  4. Select or assemble the right combination of ICT and M&E tools.
    • You may find one off the shelf, or you may need to adapt or build one. This is a really tough decision, which can take a very long time if you’re not careful!
  5. Adapt and test the process with different audiences and stakeholders.
  6. Be aware of different levels of access and inclusion.
  7. Understand motivation to participate, incentivize in careful ways.
    • This includes motivation for both program participants and for organizations where a new tech-enabled tool/process might be resisted.
  8. Review/ensure privacy and protection measures, risk analysis.
  9. Try to identify unintended consequences of using ICTs in the evaluation.
  10. Build in ways for the ICT-enabled evaluation process to strengthen local capacity.
  11. Measure what matters – not what a cool ICT tool allows you to measure.
  12. Use and share the evaluation learnings effectively, including through social media.

 

 


I used to write blog posts two or three times a week, but things have been a little quiet here for the past couple of years. That’s partly because I’ve been ‘doing actual work’ (as we like to say) trying to implement the theoretical ‘good practices’ that I like soapboxing about. I’ve also been doing some writing in other places and in ways that I hope might be more rigorously critiqued and thus have a wider influence than just putting them up on a blog.

One of those bits of work that’s recently been released publicly is a first version of a monitoring and evaluation framework for SIMLab. We started discussing this at the first M&E Tech conference in 2014. Laura Walker McDonald (SIMLab CEO) outlines why in a blog post.

Evaluating the use of ICTs—which are used for a variety of projects, from legal services, coordinating responses to infectious diseases, and media reporting in repressive environments, to transferring money among the unbanked and voting—can hardly be reduced to a checklist. At SIMLab, our past nine years with FrontlineSMS has taught us that isolating and understanding the impact of technology on an intervention, in any sector, is complicated. ICTs change organizational processes and interpersonal relations. They can put vulnerable populations at risk, even while improving the efficiency of services delivered to others. ICTs break. Innovations fail to take hold, or prove to be unsustainable.

For these and many other reasons, it’s critical that we know which tools do and don’t work, and why. As M4D edges into another decade, we need to know what to invest in, which approaches to pursue and improve, and which approaches should be consigned to history. Even for widely-used platforms, adoption doesn’t automatically mean evidence of impact….

FrontlineSMS is a case in point: although the software has clocked up 200,000 downloads in 199 territories since October 2005, there are few truly robust studies of the way that the platform has impacted the project or organization it was implemented in. Evaluations rely on anecdotal data, or focus on the impact of the intervention, without isolating how the technology has affected it. Many do not consider whether the rollout of the software was well-designed, training effectively delivered, or the project sustainably planned.

As an organization that provides technology strategy and support to other organizations — both large and small — it is important for SIMLab to better understand the quality of that support and how it may translate into improvements, as well as how the introduction or improvement of information and communication technology contributes to impact at a broader scale.

This is a difficult proposition, given that isolating a single factor like technology is extremely tough, if not impossible. The Framework thus aims to get at the breadth of considerations that go into successful tech-enabled project design and implementation. It does not aim to attribute impact to a particular technology, but to better understand that technology’s contribution to the wider impact at various levels. We know this is incredibly complex, but thought it was worth a try.

As Laura notes in another blogpost,

One of our toughest challenges while writing the thing was to try to recognize the breadth of success factors that we see as contributing to success in a tech-enabled social change project, without accidentally trying to write a design manual for these types of projects. So we reoriented ourselves, and decided instead to put forward strong, values-based statements.* For this, we wanted to build on an existing frame that already had strong recognition among evaluators – the OECD-DAC criteria for the evaluation of development assistance. There was some precedent for this, as ALNAP adapted them in 2008 to make them better suited to humanitarian aid. We wanted our offering to simply extend and consider the criteria for technology-enabled social change projects.

Here are the adapted criteria, which you can read more about in the Framework. They were designed for internal use, but we hope they might be useful to evaluators of technology-enabled programming, commissioners of evaluations of these programs, and those who want to do in-house examination of their own technology-enabled efforts. We welcome your thoughts and feedback — the Framework is published in draft format in the hope that others working on similar challenges can help make it better, and so that they can pick up and use any and all of it that would be helpful to them. The document includes practical guidance on developing an M&E plan, a typical project cycle, and some methodologies that might be useful, as well as sample log frames and evaluator terms of reference.

Happy reading and we really look forward to any feedback and suggestions!!

*****

The Criteria

Criterion 1: Relevance

The extent to which the technology choice is appropriately suited to the priorities, capacities and context of the target group or organization.

Consider: are the activities and outputs of the project consistent with the goal and objectives? Was there a good context analysis and needs assessment, or another way for needs to inform design – particularly through participation by end users? Did the implementer have the capacity, knowledge and experience to implement the project? Was the right technology tool and channel selected for the context and the users? Was content localized appropriately?

Criterion 2: Effectiveness

A measure of the extent to which an information and communication channel, technology tool, technology platform, or a combination of these attains its objectives.

Consider: In a technology-enabled effort, there may be one tool or platform, or a set of tools and platforms may be designed to work together as a suite. Additionally, the selection of a particular communication channel (SMS, voice, etc.) matters in terms of cost and effectiveness. Was the project monitored, were early snags and breakdowns identified and fixed, and was there good user support? Did the tool and/or the channel meet the needs of the overall project? Note that this criterion should be examined at outcome level, not output level, and should examine how the objectives were formulated, by whom (did primary stakeholders participate?) and why.

Criterion 3: Efficiency

Efficiency measures the outputs – qualitative and quantitative – in relation to the inputs. It is an economic term which signifies that the project or program uses the least costly technology approach possible (including both the tech itself and what it takes to sustain and use it) in order to achieve the desired results. This generally requires comparing alternative approaches (technological or non-technological) to achieving the same outputs, to see whether the most efficient tools and processes have been adopted. SIMLab looks at the interplay of efficiency and effectiveness, and at the degree to which a new tool or platform can support a reduction in cost and time, along with an increase in the quality of data and/or services and in reach/scale.

Consider: Was the technology tool rollout carried out as planned and on time? If not, what were the deviations from the plan, and how were they handled? If a new channel or tool replaced an existing one, how do the communication, digitization, transportation and processing costs of the new system compare to the previous one? Would it have been cheaper to build features into an existing tool rather than create a whole new tool? To what extent were aspects such as cost of data, ease of working with mobile providers, total cost of ownership and upgrading of the tool or platform considered?

Criterion 4: Impact

Impact relates to the consequences of achieving or not achieving the outcomes. Impacts may take months or years to become apparent, and often cannot be established in an end-of-project evaluation. Identifying, documenting and/or proving attribution (as opposed to contribution) may be an issue here. ALNAP’s complex emergencies evaluation criteria include ‘coverage’ as well as impact: ‘the need to reach major population groups wherever they are.’ They note: ‘in determining why certain groups were covered or not, a central question is: “What were the main reasons that the intervention provided or failed to provide major population groups with assistance and protection, proportionate to their need?”’ This is very relevant for us.

For SIMLab, a lack of coverage in an inclusive technology project means not only failing to reach some groups, but also widening the gap between those who do and do not have access to the systems and services leveraging technology. We believe that this has the potential to actively cause harm. Evaluation of inclusive tech has dual priorities: evaluating the role and contribution of technology, but also evaluating the inclusive function or contribution of the technology. A platform might perform well, have high usage rates, and save costs for an institution while not actually increasing inclusion. Evaluating both impact and coverage requires an assessment of risk, both to targeted populations and to others, as well as attention to unintended consequences of the introduction of a technology component.

Consider: To what extent does the choice of communications channels or tools enable wider and/or higher quality participation of stakeholders? Which stakeholders? Does it exclude certain groups, such as women, people with disabilities, or people with low incomes? If so, was this exclusion mitigated with other approaches, such as face-to-face communication or special focus groups? How has the project evaluated and mitigated risks, for example to women, LGBTQI people, or other vulnerable populations, relating to the use and management of their data? To what extent were ethical and responsible data protocols incorporated into the platform or tool design? Did all stakeholders understand and consent to the use of their data, where relevant? Were security and privacy protocols put into place during program design and implementation/rollout? How were protocols specifically integrated to ensure protection for more vulnerable populations or groups? What risk-mitigation steps were taken in case of any security holes found or suspected? Were there any breaches? How were they addressed?

Criterion 5: Sustainability

Sustainability is concerned with measuring whether the benefits of a technology tool or platform are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable. For SIMLab, sustainability includes both the ongoing benefits of the initiatives and the literal ongoing functioning of the digital tool or platform.

Consider: If the project required financial or time contributions from stakeholders, are they sustainable, and for how long? How likely is it that the business plan will enable the tool or platform to continue functioning, including background architecture work, essential updates, and user support? If the tool is open source, is there sufficient capacity to continue to maintain changes and updates to it? If it is proprietary, has the project implementer considered how to cover ongoing maintenance and support costs? If the project is designed to scale vertically (e.g., a centralized model of tool or platform management that rolls out in several countries) or be replicated horizontally (e.g., a model where a tool or platform can be adopted and managed locally in a number of places), has the concept shown this to be realistic?

Criterion 6: Coherence

The OECD-DAC does not have a sixth criterion. However, we’ve riffed on ALNAP’s additional criterion of Coherence, which relates to the broader policy context (development, market, communication networks, data standards and interoperability mandates, national and international law) within which a technology was developed and implemented. We propose that evaluations of inclusive technology projects aim to critically assess the extent to which the technologies fit within the broader market at local, national and international levels. This includes compliance with national and international regulation and law.

Consider: Has the project considered interoperability of platforms (for example, ensured that APIs are available) and standard data formats (so that data export is possible) to support sustainability and use of the tool in an ecosystem of other products? Is the project team confident that the project is in compliance with existing legal and regulatory frameworks? Is it working in harmony with, or against, the wider context of other actions in the area? E.g., in an emergency situation, is it linking its information system with those that can feasibly provide support? Is it creating demand that cannot feasibly be met? Is it working with or against government or wider development policy shifts?
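As a small, hypothetical illustration of the interoperability and standard-data-format questions above, the sketch below exports invented project records as plain CSV with an explicitly documented column list, the sort of export that lets other tools in an ecosystem consume the data. Column names and records are assumptions for the example, not part of the Framework.

```python
# Hedged illustration: exporting project records in a plain, documented
# format (CSV) so other tools can consume them. All values are invented.

import csv
import io

EXPORT_COLUMNS = ["record_id", "date", "location", "service_delivered"]

records = [
    {"record_id": "r-101", "date": "2016-05-01", "location": "Kisumu",
     "service_delivered": "cash_transfer"},
    {"record_id": "r-102", "date": "2016-05-02", "location": "Kisumu",
     "service_delivered": "training"},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=EXPORT_COLUMNS)
writer.writeheader()
writer.writerows(records)

print(buffer.getvalue())   # in practice, written to a file or served via an API
```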


Crowdsourcing our Responsible Data questions, challenges and lessons. (Photo by Amy O’Donnell).

At Catholic Relief Services’ ICT4D Conference in May 2016, I worked with Amy O’Donnell (Oxfam GB) and Paul Perrin (CRS) to facilitate a participatory session that explored notions of Digital Privacy, Security and Safety. We had a full room, with a widely varied set of experiences and expertise.

The session kicked off with stories of privacy and security breaches. One person told of having personal data stolen when a federal government clearance database was compromised. We also shared how a researcher in Denmark scraped very personal data from the OK Cupid online dating site and opened it up to the public.

A comparison was made between the OK Cupid data situation and the work that we do as development professionals. When we collect very personal information from program participants, they may not expect that their household level income, health data or personal habits would be ‘opened’ at some point.

Our first task was to explore and compare the meaning of the terms: Privacy, Security and Safety as they relate to “digital” and “development.”

What do we mean by privacy?

The “privacy” group talked quite a bit about contextuality of data ownership. They noted that there are aspects of privacy that cut across different groups of people in different societies, and that some aspects of privacy may be culturally specific. Privacy is concerned with ownership of data and protection of one’s information, they said. It’s about who owns data and who collects and protects it and notions of to whom it belongs. Private information is that which may be known by some but not by all. Privacy is a temporal notion — private information should be protected indefinitely over time. In addition, privacy is constantly changing. Because we are using data on our mobile phones, said one person, “Safaricom knows we are all in this same space, but we don’t know that they know.”

Another said that in today’s world, “You assume others can’t know something about you, but things are actually known about you that you don’t even know that others can know. There are some facts about you that you don’t think anyone should know or be able to know, but they do.” The group mentioned website terms and conditions, corporate ownership of personal data and a lack of control of privacy now. Some felt that we are unable to maintain our privacy today, whereas others felt that one could opt out of social media and other technologies to remain in control of one’s own privacy. The group noted that “privacy is about the appropriate use of data for its intended purpose. If that purpose shifts and I haven’t consented, then it’s a violation of privacy.”

What do we mean by security?

The Security group considered security to relate to an individual’s information. “It’s your information, and security of it means that what you’re doing is protected, confidential, and access is only for authorized users.” Security was also related to the location of where a person’s information is hosted and the legal parameters. Other aspects were related to “a barrier – an anti-virus program or some kind of encryption software, something that protects you from harm…. It’s about setting roles and permissions on software and installing firewalls, role-based permissions for accessing data, and cloud security of individuals’ data.” A broader aspect of security was linked to the effects of hacking that lead to offline vulnerability, to a lack of emotional security or feeling intimidated in an online space. Lastly, the group noted that “we, not the systems, are the weakest link in security – what we click on, what we view, what we’ve done. We are our own worst enemies in terms of keeping ourselves and our data secure.”

What do we mean by safety?

The Safety group noted that it’s difficult to know the difference between safety and security. “Safety evokes something highly personal. Like privacy… it’s related to being free from harm personally, physically and emotionally.” The group raised examples of protecting children from harmful online content or from people seeking to harm vulnerable users of online tools. The aspect of keeping your online financial information safe, and feeling confident that a service was ‘safe’ to use was also raised. Safety was considered to be linked to the concept of risk. “Safety engenders a level of trust, which is at the heart of safety online,” said one person.

In the context of data collection with the communities we work with, safety was connected to data minimization concepts and linked with vulnerability – and a compounded vulnerability when it comes to online risk and safety. “If one person’s data is not safely maintained it puts others at risk,” noted the group. “And pieces of information that are innocuous on their own may become harmful when combined.” Lastly, the notion of safety as related to offline risk, or risk to an individual due to a specific online behavior or data breach, was raised.

It was noted that in all of these terms: privacy, security and safety, there is an element of power, and that in this type of work, a power relations analysis is critical.

The Digital Data Life Cycle

After unpacking the above terms, Amy took the group through an analysis of the data life cycle (courtesy of the Engine Room’s Responsible Data website) in order to highlight the different moments where the three concepts (privacy, security and safety) come into play.


  • Plan/Design
  • Collect/Find/Acquire
  • Store
  • Transmit
  • Access
  • Share
  • Analyze/use
  • Retention
  • Disposal
  • Afterlife

Participants added additional stages in the data life cycle that they passed through in their work (coordinate, monitor the process, monitor compliance with data privacy and security policies). We placed the points of the data life cycle on the wall, and invited participants to:

  • Place a pink sticky note under the stage in the data life cycle that resonates or interests them most and think about why.
  • Place a green sticky note under the stage that is the most challenging or troublesome for them or their organizations and think about why.
  • Place a blue sticky note under the stage where they have the most experience, and to share a particular experience or tip that might help others to better manage their data life cycle in a private, secure and safe way.

Challenges, concerns and lessons

Design, as well as policy, is important!

  • Design drives everything else. We often start from the point of collection, when really it’s at the design stage that we should think about the burden of data collection and define the minimum we can ask of people. How we design – even how we get consent – can inform how the whole process happens.
  • When we get part-way through the data life cycle, we often wish we’d have thought of the whole cycle at the beginning, during the design phase.
  • In addition to good design, coordination of data collection needs to be thought about early in the process so that duplication can be reduced. This can also reduce fatigue for people who are asked over and over for their data.
  • Informed consent is a critical issue that needs to be linked with the entire process of design for the whole data life cycle. How do you explain to people that you will be giving their data away, anonymizing it, separating it out, encrypting it? There are often flow-down clauses in contracts that shift responsibilities for data protection and security, and it’s not always clear who is responsible for those data processes. How can you be sure that they are being handled properly and painstakingly?
  • Anonymization is also an issue. It’s hard to know to what level to anonymize things like call data records – to the individual? Township? District level? And for how long will anonymization actually hold up? (A minimal pseudonymization sketch follows this list.)
  • The lack of good design and policy contributes to overlapping efforts and poor coordination of data collection efforts across agencies. We often collect too much data in poorly designed databases.
  • Policy is not enough – we need to do a much better job of monitoring compliance with policy.
  • Institutional Review Boards (IRBs) and compliance aspects need to be updated to the new digital data reality. At the same time, sometimes IRBs are not the right instrument for what we are aiming to achieve.
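To ground the anonymization point above, here is a minimal pseudonymization sketch: replacing a direct identifier (a phone number) with a salted hash and coarsening location to district level before sharing. As the discussion notes, this is not a guarantee of protection; combining data sets can still make re-identification possible. All values, and the salt-handling approach, are illustrative assumptions.

```python
# Minimal pseudonymization sketch. NOT a guarantee of anonymity:
# re-identification can still happen when data sets are combined.

import hashlib

SALT = b"store-and-rotate-this-secret-separately"   # assumption: kept outside the shared data

def pseudonymize(phone_number: str) -> str:
    """Replace a phone number with a short, salted hash."""
    digest = hashlib.sha256(SALT + phone_number.encode("utf-8"))
    return digest.hexdigest()[:12]

raw_records = [
    {"phone": "+254700000001", "village": "Village A", "district": "District X", "amount": 35},
    {"phone": "+254700000002", "village": "Village B", "district": "District X", "amount": 40},
]

# Drop the village (too identifying in small communities) and keep district only.
shareable = [
    {"participant": pseudonymize(r["phone"]), "district": r["district"], "amount": r["amount"]}
    for r in raw_records
]
print(shareable)
```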

Data collection needs more attention.

  • Data collection is the easy part – where institutions struggle is with analyzing and doing something with the data we collect.
  • Organizations often don’t have a well-structured or systematic process for data collection.
  • We need to be clearer about what type of information we are collecting and why.
  • We need to update our data protection policy.

Reasons for data sharing are not always clear.

  • How can we share data securely and efficiently without building duplicative systems? We should be thinking more during the design and collection phase about whether the data is going to be interoperable and who needs to access it.
  • How can we get the right balance in terms of data sharing? Some donors really push for information that can put people in real danger – like details of people who have participated in particular programs that would put them at risk with their home governments. Organizations really need to push back against this. It’s an education thing with donors. Middle management and intermediaries are often the ones that push for this type of data because they don’t really have a handle on the risk it represents. They are the weak points because of the demands they are putting on people. This is a challenge for open data policies – leaving decisions open to people can mean the laziest possible job is done of thinking through the potential risks for that data.
  • There are legal aspects of sharing too – such as the USAID open data policy, where those collecting data have to share it with the government. But we don’t have a clear understanding of what the international laws are about data sharing.
  • There are so many pressures to share data but they are not all fully thought through!

Data analysis and use of data are key weak spots for organizations.

  • We are just beginning to think through capturing lots of data.
  • Data is collected but not always used. Too often it’s extractive data collection. We don’t have the feedback loops in place, and when there are feedback loops we often don’t use the feedback to make changes.
  • We often forget to go back to the people who have provided us with data to share results back with them. It’s not often that we hold a consultation with the community to really involve them in how the data can be used.

Secure storage is a challenge.

  • We have hundreds of databases across the agency in various formats, hard drives and states of security, privacy and safety. Are we able to keep these secure?
  • We need to think more carefully about where we hold our data and who has access to it. Sometimes our data is held by external consultants. How should we be addressing that?

Disposing of data properly in a global context is hard!

  • It’s difficult to dispose of data when there are multiple versions of it and a data footprint.
  • Disposal is an issue. We’re doing a lot of server upgrades, and many of these are in remote locations. How do we ensure that the right disposal process is going on globally, short of physically seeing that hard drives are smashed up?
  • We need to do a better job of disposal on personal laptops. I’ve done a lot of data collection on my personal laptop – no one has ever followed up to see if I’ve deleted it. How are we handling data handover? How do you really dispose of data?
  • Our organization hasn’t even thought about this yet!

Tips and recommendations from participants

  • Organizations should be more deliberate about the tools they use. They should be using Pretty Good Privacy (PGP) encryption rather than relying on free or commercial tools like Google or Skype (a simplified encryption sketch follows this list).
  • People can be your weakest link if they are not aware or they don’t care about privacy and security. We send an email out to all staff on a weekly basis that talks about taking adequate measures. We share tips and stories. That helps to keep privacy and security front and center.
  • Even if you have a policy, the hard part is enforcement, accountability, and policy reform. If our organizations are not setting policy around best practices in this area, then it’s on us to be sure we understand what best practice is, and to advocate for it. Let’s do what we can before the policy catches up.
  • The Responsible Data Forum and Tactical Tech have a great set of resources.
  • Oxfam has a Responsible Data Policy and Girl Effect have developed a Girls’ Digital Privacy, Security and Safety Toolkit that can also offer some guidance.
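The first tip above recommends PGP, which in practice usually means GnuPG or a wrapper library. As a simpler stand-in that shows the underlying principle (encrypt sensitive exports before they sit on a laptop or travel by email), here is a sketch using the Python `cryptography` package's Fernet recipe; it is not PGP itself, and the key handling is deliberately simplified for illustration.

```python
# Stand-in for the PGP recommendation: symmetric encryption of a sensitive
# export using the `cryptography` package's Fernet recipe. In practice the
# key must be generated once and stored/shared separately from the data.

from cryptography.fernet import Fernet

key = Fernet.generate_key()          # illustration only: manage this key securely
fernet = Fernet(key)

sensitive_export = b"respondent_id,village,hiv_status\nA-001,Village A,positive\n"

token = fernet.encrypt(sensitive_export)   # safe to store or transmit
restored = fernet.decrypt(token)           # only possible with the key

assert restored == sensitive_export
print("encrypted payload starts with:", token[:40])
```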

In conclusion, participants agreed that development agencies and NGOs need to take privacy, security and safety seriously. They can no longer afford to implement security at a lower level than corporations. “Times are changing and hackers are no longer just interested in financial information. People’s data is very valuable. We need to change and take security as seriously as corporates do!” as one person said.

 

 
