
Archive for the ‘wait… what?’ Category

On November 14, Technology Salon NYC met to discuss the role of film and video in development and humanitarian work. Our lead discussants were Ambika Samarthya of Praekelt.org, Lina Srivastava of CIEL, and Rebekah Stutzman of Digital Green’s DC office.

How does film support aid and development work?

Lina proposed that there are three main reasons for using video, film, and/or immersive media (such as virtual reality or augmented reality) in humanitarian and development work:

  • Raising awareness about an issue or a brand and serving as an entry point or a way to frame further actions.
  • Community-led discussion/participatory media, where people take agency and ownership and express themselves through media.
  • Catalyzing movements themselves, where film, video, and other visual arts are used to feed social movements.

Each of the above is aimed at a different audience. “Raising awareness” often only scratches the surface of an issue and can have limited impact if done on its own, without additional actions. Community-led efforts tend to go deeper and focus on the learning and impact of the process (rather than the quality of the end product), but they usually reach fewer people (and thus have a higher cost per person and less scale). When using video for catalyzing movements, the goal is normally to bring people into a longer-term advocacy effort.

In all three instances, there are issues with who controls access to tools, platforms, and distribution channels. Though social media has changed this to an extent, there are still gatekeepers who shape who gets to be involved and whose voice and story are highlighted, funders who determine which work happens, and algorithms that dictate who will see the end products.

Participants suggested additional ways that video and film are used, including:

  • Social-emotional learning, where video is shown and then discussed to expand on new ideas and habits or to encourage behavior change.
  • Personal transformation through engaging with video.

Becky shared Digital Green’s approach, which is participatory: the organization supports community members in using video to help themselves and those around them. Community members film videos about their agricultural practices, and these are then taken to nearby communities to share and discuss. (More on Digital Green here.) Video doesn’t solve anyone’s development problem all by itself, Becky emphasized. If an agricultural extensionist is no good, having a video as part of their training materials won’t fix that. “If they have a top-down attitude, don’t engage, don’t answer questions, etc., or if people are not open to changing practices, video or no video, it won’t work.”

How can we improve impact measurement?

Questions arose from Salon participants around how to measure the impact of film in a project or wider effort. Overall, impact measurement in the world of film for development is weak, noted one discussant, because change takes a long time and is hard to track. We are often encouraged to focus on the wrong things, like “vanity measurements” such as “likes” and “clicks,” but these don’t speak to the longer-term, deeper impact of a film, and they are often inappropriate for the film’s actual audience (e.g., are we interested in the impact on the local audience living with the problem, or on the external audience being encouraged to care about it?).

Digital Green measures behavior change based on uptake of new agriculture practices. “After the agriculture extension worker shows a video to a group, they collect data on everyone that’s there. They record the questions that people ask, the feedback about why they can’t implement a particular practice, and in that way they know who is interested in trying a new practice.” The organization sets indicators for implementing the practice. “The extension worker returns to the community to see if the family has implemented a, b, c and if not, we try to find out why. So we have iterative improvement based on feedback from the video.” The organization does post their videos on YouTube but doesn’t know if the content there is having an impact. “We don’t even try to follow it up as we feel online video is much less relevant to our audience.” An organization that is working with social-emotional learning suggested that RCTs could be done to measure which videos are more effective. Others who work on a more individual or artistic level said that the immediate feedback and reactions from viewers were a way to gauge impact.
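
To make this iterative loop concrete: the records described above lend themselves to a very simple data structure. Below is a minimal sketch in Python, with hypothetical field names (not Digital Green’s actual system), of how screening attendance and follow-up visits might be captured so that adoption of a practice can be tracked over time.

    from dataclasses import dataclass, field

    @dataclass
    class ScreeningRecord:
        """One attendee at a video screening (hypothetical fields)."""
        household_id: str
        video_id: str
        questions_asked: list = field(default_factory=list)
        barriers_reported: list = field(default_factory=list)
        interested: bool = False

    @dataclass
    class FollowUpVisit:
        """One return visit to check whether a practice was implemented."""
        household_id: str
        practice_id: str
        adopted: bool
        reason_if_not: str = ""

    def adoption_rate(visits, practice_id):
        """Share of followed-up households that adopted a given practice."""
        relevant = [v for v in visits if v.practice_id == practice_id]
        return sum(v.adopted for v in relevant) / len(relevant) if relevant else None

An extension worker’s data collection tool could append one ScreeningRecord per attendee and one FollowUpVisit per return visit; the iterative improvement described above is then a matter of reviewing barriers_reported against adoption_rate over time.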

Donors often have different understandings of useful metrics. “What is a valuable metric? How can we gather it? How much do you want us to spend gathering it?” commented one person. Larger, longer-term partners who are not one-off donors will have a better sense of how to measure impact in reasonable ways. One person who formerly worked at a large public television station noted that it was common to have long conversations about measurement, goals, and aligning to the mission. “But we didn’t go by numbers, we focused on qualitative measurement.” She highlighted the importance of having these conversations with donors and asking them “why are you partnering with us?” Being able to say no to donors is important, she said. “If you are not sharing goals and objectives you shouldn’t be working together. Is gathering these stories a benefit to the community? If you can’t communicate your actual intent, it’s very complicated.”

The goal of participatory video is less about engaging external (international) audiences or branding and advocacy. Rather, it focuses on building skills and capacities through the process of video making. Here, impact measurement relates more to individual, often self-reported, skills such as confidence, finding your voice, public speaking, teamwork, leadership, critical thinking and media literacy. The quality of video production in these cases may be low, and the videos unsuitable for widespread circulation; however, the process and product can be catalysts for local-level change and locally-led advocacy on themes and topics that are important to the video-makers.

Participatory video suffers from low funding levels because it doesn’t reach the kind of scale that is desired by funders, though it can often contribute to deep, personal and community-level change. Some felt that even if community-created videos were of high production quality and translated to many languages, large-scale distribution is not always feasible because they are developed in and speak to/for hyper-local contexts, thus their relevance can be limited to smaller geographic areas. Expectation management with donors can go a long way towards shifting perspectives and understanding of what constitutes “impact.”

Should we re-think compensation?

Ambika noted that there are often challenges related to incentives and compensation when filming with communities for organizational purposes (such as branding or fundraising). Organizations are usually willing to pay people for their time in places such as New York City, but less inclined to do so when working with a rural community that is perceived to benefit from an organization’s services and projects. Perceptions by community members that a filmmaker is financially benefiting from video work can be hard to overcome, and this means that conflict may arise during non-profit filmmaking aimed at fundraising or building a brand. Even when individuals and communities are aware that they will not be compensated directly, there is still often some type of financial expectation, noted one Salon participant, such as the purchase of local goods and products.

Working closely with gatekeepers and community leaders can help to ease these tensions. When filmmaking takes several hours or days, however, participants may be visibly stressed or concerned about household or economic chores that are falling to the side during filming, and this can be challenging to navigate, noted one media professional. Filming in virtual reality can exacerbate this problem, since VR shoots are normally heavily scripted and repetitive in the effort to appear realistic.

One person suggested a change in how we approach incentives. “We spent about two years in a community filming a documentary about migration. This was part of a longer research project. We were not able to compensate the community, but we were able to invest directly in some of the local businesses and to raise funds for some community projects.” It’s difficult to understand why we would not compensate people for their time and their stories, she said. “This is basically their intellectual property, and we’re stealing it. We need a sector rethink.” Another person agreed, “in the US everyone gets paid and we have rules and standards for how that happens. We should be developing these for our work elsewhere.”

Participatory video tends to have less of a challenge with compensation. “People see the videos, the videos are for their neighbors. They are sharing good agricultural or nutrition approaches with people that they already know. They sometimes love being in the videos and that is partly its own reward. Helping people around them is also an incentive,” said one person.

There were several other rabbit holes to explore in relation to film and development, so look for more Salons in 2018!

To close out the year right, join us for ICT4Drinks on December 14th at Flatiron Hall from 7-9pm. If you’re signed up for Technology Salon emails, you’ll find the invitation in your inbox!

Salons run under Chatham House Rule so no attribution has been made in this post. If you’d like to attend a future Salon discussion, join the list at Technology Salon.

 

Read Full Post »

(Joint post from Linda Raftree, MERL Tech and Megan Colnar, Open Society Foundations)

The American Evaluation Association Conference happens once a year and offers literally hundreds of sessions. It can take a while to sort through them all, and with so much content and so many people, it’s easy to feel a bit lost in the crowds.

So, Megan Colnar (Open Society Foundations) and I thought we’d share some of the sessions that caught our eye.

I’m on the look-out for innovative tech applications, responsible and gender-sensitive data collection practices, and virtual or online/social media-focused evaluation techniques and methods. Megan plans to tune into sessions on policy change, complexity-aware techniques, and better MEL practices for funders. 

We both can’t wait to learn about evaluation in the post-truth and fake news era. Full disclosure, our sessions are also featured below.

Hope we see you there!

Wednesday, November 8th

3.15-4.15

4.30-6.00

We also think a lot of the ignite talks during this session in the Thurgood Salon South look interesting, like:

6.15-7.15

7.00-8.30

Tour of a few poster sessions before dinner. Highlights might include:

  • M&E for Journalism (51)
  • Measuring Advocacy (3)
  • Survey measures of corruption (53)
  • Theory of change in practice (186)
  • Using social networks as a decision-making tool (225)

 

Thursday, Nov 9th

8.00-9.00 – early risers are rewarded with some interesting options

9.15-10.15

10.30-11.15

12.15-1.15

1.15-2.00

2.15-3.00

3.15-4.15

4.30-5.15

 

Friday, Nov 10th

8.00-9.30 – early risers rewarded again!

11.00-11.45

1.45-3.15

3.30-4.15

4.30-5.15

5.30-6.15 – if you can hold out for one more on a Friday evening

6.30-7.15

 

Saturday, Nov 11th – you’re on your own! Let us know what treasures you discover.

Read Full Post »

For our Tuesday, July 27th Salon, we discussed partnerships and interoperability in global health systems. The room held a wide range of perspectives: small and large non-governmental organizations, donors and funders, software developers, designers, healthcare professionals, and students. Our lead discussants were Josh Nesbit, CEO at Medic Mobile; Jonathan McKay, Global Head of Partnerships and Director of the US Office of Praekelt.org; and Tiffany Lentz, Managing Director, Office of Social Change Initiatives at ThoughtWorks.

We started by hearing from our discussants on why they had decided to tackle issues in the area of health. The primary reason was that health systems were excluding people from care, and organizations wanted to find ways to make healthcare inclusive. As one discussant put it, “utilitarianism has infected global health. A lack of moral imagination is the top problem we’re facing.”

Other challenges include requests for small-scale pilots and customization/bespoke applications, lack of funding and extensive requirements for grant applications, and a disconnect between what is needed on the ground and what donors want to fund. “The amount of documentation to get a grant is ridiculous, and then the system that is requested to be built is not even the system that needs to be made,” commented one person. Another challenge is that everyone is under constant pressure to demonstrate that they are being innovative. [Sidenote: I’m reminded of this post from 2010….] “They want things that are not necessarily in the best interest of the project, but that are seen to be innovations. Funders are often dragged along by that,” noted another person.

The conversation most often touched on the unfulfilled potential of having a working ecosystem and a common infrastructure for health data as well as the problems and challenges that will most probably arise when trying to develop these.

“There are so many uncoordinated pilot projects in different districts, all doing different things,” said one person. “Governments are doing what they can, but they don’t have the funds,” added another, “and that’s why there are so many small pilots happening everywhere.” One company noted that it had started developing a platform for SMS but abandoned it in favor of working with an existing platform instead. “Can we create standards and protocols to tie some of this work together? There isn’t a common infrastructure that we can build on,” was the complaint. “We seem to always start from scratch. I hope donors and organizations get smart about applying pressure in the right areas. We need an infrastructure that allows us to build on it and do the work!” On the other hand, someone warned of the risks of pushing everyone to “jump on a mediocre software or platform just because we are told to by a large agency or donor.”

The benefits of collaboration and partnership are apparent: increased access to important information, more cooperation, less duplication, the ability to build on existing knowledge, and so on. However, though desirable, partnerships and interoperability are not easy to establish. “Is it too early for meaningful partnerships in mobile health? I was wondering if I could say that…” said one person. “I’m not even sure I’m actually comfortable saying it…. But if you’re providing essential basic services, collecting sensitive medical data from patients, there should be some kind of infrastructure apart from private sector services, shouldn’t there?” The question is who should own this type of mediator platform: governments? MNOs?

Beyond this, there are several issues related to control and ownership. Who would own the data? Is there a way to get to a point where the data would be owned by the patients and demonetized? If the common system is run by the private sector, there should be protections surrounding the patients’ sensitive information. Perhaps this should be a government-run system. Should it be open source?

Open source has its own challenges. “Well… yes. We’ve practiced ‘hopensource’,” said one person (to widespread chuckles).

Another explained that the way we’ve designed information systems has held back shifts in health systems. “When we’re comparing notes and how we are designing products, we need to be out ahead of the health systems and financing shifts. We need to focus on people-centered care. We need to gather information about a person over time and place. About the teams who are caring for them. Many governments we’re working with are powerless and moneyless. But even small organizations can do something. When we show up and treat a government as a systems owner that is responsible to deliver health care to their citizens, then we start to think about them as a partner, and they begin to think about how they could support their health systems.”

One potential model is to design a platform or system such that it can eventually be handed off to a government. This, of course, isn’t a simple idea in execution. Governments can be limited by their internal expertise. The personnel that a government has at the time of the handoff won’t necessarily be there years or months later. So while the handoff itself may be successful in the short term, there’s no firm guarantee that the system will be continually operational in the future. Additionally, governments may not be equipped with the knowledge to make the best decisions about software systems they purchase. Governments’ negotiating capacity must be expanded if they are to successfully run an interoperable system. “But if we can bring in a snazzy system that’s already interoperable, it may be more successful,” said one person.

Having a common data infrastructure is crucial. However, we must also spend some time thinking about what the data itself should look like. Can it be standardized? How can we ensure that it is legible to anyone with access to it?

These are only some of the relevant political issues, and at a more material level, one cannot ignore the technical challenges of maintaining a national scale system. For example, “just getting a successful outbound dialing rate is hard!” said one person. “If you are running servers in Nigeria it just won’t always be up! I think human centered design is important. But there is also a huge problem simply with making these things work at scale. The hardcore technical challenges are real. We can help governments to filter through some of the potential options. Like, can a system demonstrate that it can really operate at massive scale?” Another person highlighted that “it’s often non-profits who are helping to strengthen the capacity of governments to make better decisions. They don’t have money for large-scale systems and often don’t know how to judge what’s good or to be a strong negotiator. They are really in a bind.”

This is not to mention that “the computers have plastic over them half the time. Electricity, computers, literacy, there are all these issues. And the TelCo infrastructure! We have layers of capacity gaps to address,” said one person.

There are also donors to consider. They may come into a project with unrealistic expectations of what is normal and what can be accomplished. There is a delicate balance to be struck between inspiring donors to take up the project and managing expectations so that they are not disappointed. One strategy is to “start hopeful and steadily temper expectations.” This is true with other kinds of partnerships as well. “Building trust with organizations so that when things do go bad, you can try to manage it is crucial. Often it seems like you don’t want to be too real in the first conversation. I think, ‘if I lay this on them at the start it can be too real and feel overwhelming.’” Others recommended setting expectations about how everyone together is performing. “It’s more like, ‘together we are going to be looking at this, and we’ll be seeing together how we are going to work and perform together.’”

Creating an interoperable data system is costly and time-consuming, oftentimes more so than donors and other stakeholders imagine, but there are real benefits. Any step in the direction of interoperability must deal with challenges like those considered in this discussion. Problems abound. Solutions will be harder to come by, but not impossible.

So, what would practitioners like to see? “I would like to see one country that provides an incredible case study showing what good partnership and collaboration looks like with different partners working at different levels and having a massive impact and improved outcomes. Maybe in Uganda,” said one person. “I hope we see more of us rally around supporting and helping governments to be the system owners. We could focus on a metric or shared cause – I hope in the near future we have a view into the equity measure and not just the vast numbers. I’d love to see us use health equity as the rallying point,” added another. From a different angle, one person felt that “from a for-profit, we could see it differently. We could take on a country, a clinic or something as our own project. What if we could sponsor a government’s health care system?”

A participant summed the Salon up nicely: “I’d like to make a flip-side comment. I want to express gratitude to all the folks here as discussants. This is one of the most unforgiving and difficult environments to work in. It’s SO difficult. You have to be an organizational superhero. We’re among peers and feel it as normal to talk about challenges, but you’re really all contributing so much!”

Salons are run under Chatham House Rule so no attribution has been made in this post. If you’d like to attend a future Salon discussion, join the list at Technology Salon.

 

Read Full Post »

(I’ve been blogging a little bit over at MERLTech.org. Here’s a repost.)

It can be overwhelming to get your head around all the different kinds of data and the various approaches to collecting or finding data for development and humanitarian monitoring, evaluation, research and learning (MERL).

Though there are many ways of categorizing data, lately I find myself conceptually organizing data streams into four general buckets when thinking about MERL in the aid and development space:

  1. ‘Traditional’ data. How we’ve been doing things for (pretty much) ever. Researchers, evaluators and/or enumerators are in relative control of the process. They design a specific questionnaire or a data gathering process and go out and collect qualitative or quantitative data; they send out a survey and request feedback; they do focus group discussions or interviews; or they collect data on paper and eventually digitize the data for analysis and decision-making. Increasingly, we’re using digital tools for all of these processes, but they are still quite traditional approaches (and there is nothing wrong with traditional!).
  2. ‘Found’ data.  The Internet, digital data and open data have made it lots easier to find, share, and re-use datasets collected by others, whether this is internally in our own organizations, with partners or just in general. These tend to be datasets collected in traditional ways, such as government or agency data sets. In cases where the datasets are digitized and have proper descriptions, clear provenance, consent has been obtained for use/re-use, and care has been taken to de-identify them, they can eliminate the need to collect the same data over again. Data hubs are springing up that aim to collect and organize these data sets to make them easier to find and use.
  3. ‘Seamless’ data. Development and humanitarian agencies are increasingly using digital applications and platforms in their work — whether bespoke or commercially available ones. Data generated by users of these platforms can provide insights that help answer specific questions about their behaviors, and the data is not limited to quantitative data. This data is normally used to improve applications and platform experiences, interfaces, content, etc., but it can also provide clues into a host of other online and offline behaviors, including knowledge, attitudes, and practices. One cautionary note is that because this data is collected seamlessly, users of these tools and platforms may not realize that they are generating data or understand the degree to which their behaviors are being tracked and used for MERL purposes (even if they’ve checked “I agree” to the terms and conditions). This has big implications for privacy that organizations should think about, especially as new regulations are developed, such as the EU’s General Data Protection Regulation (GDPR). The commercial sector is great at this type of data analysis, but the development sector is only just starting to get more sophisticated at it.
  4. ‘Big’ data. In addition to data generated ‘seamlessly’ by platforms and applications, there are also ‘big data’ and data that exist on the Internet that can be ‘harvested’ if one only knows how. The term ‘big data’ describes the application of analytical techniques to search, aggregate, and cross-reference large data sets in order to develop intelligence and insights. (See this post for a good overview of big data and some of the associated challenges and concerns.) Data harvesting is a term used for the process of finding and turning ‘unstructured’ content (message boards, a webpage, a PDF file, Tweets, videos, comments) into ‘semi-structured’ data so that it can then be analyzed (see the toy sketch after this list). (Estimates are that 90 percent of the data on the Internet exists as unstructured content.) Currently, big data seems to be more apt for predictive modeling than for looking backward at how well a program performed or what impact it had. Development and humanitarian organizations (self included) are only just starting to better understand concepts around big data and how it might be used for MERL. (This is a useful primer.)
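
As a toy illustration of what ‘harvesting’ unstructured content into semi-structured data means in practice, here is a minimal Python sketch. Real harvesting pipelines involve scrapers, APIs and language processing, but the basic transformation (free text in, analyzable records out) looks like this; the page content and the ‘name: comment’ pattern are invented for the example.

    import json
    import re

    def harvest_comments(raw_page):
        """Pull 'name: comment' lines out of unstructured text and emit
        semi-structured records that can be counted, filtered and analyzed."""
        records = []
        for line in raw_page.splitlines():
            match = re.match(r"\s*(?P<author>[\w .'-]+):\s+(?P<text>.+)", line)
            if match:
                records.append({"author": match.group("author").strip(),
                                "text": match.group("text").strip(),
                                "length": len(match.group("text"))})
        return records

    page = "Amina: The clinic hours changed again.\nJose: Has anyone tried the new registration line?"
    print(json.dumps(harvest_comments(page), indent=2))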

Thinking about these four buckets of data can help MERL practitioners to identify data sources and how they might complement one another in a MERL plan. Categorizing them as such can also help to map out how the different kinds of data will be responsibly collected/found/harvested, stored, shared, used, and maintained/retained/destroyed. Each type of data also has certain implications in terms of privacy, consent and use/re-use and how it is stored and protected. Planning for the use of different data sources and types can also help organizations choose the data management systems needed and identify the resources, capacities and skill sets required (or needing to be acquired) for modern MERL.

Organizations and evaluators are increasingly comfortable using mobiles and/or tablets for traditional data gathering, but they often are not using ‘found’ datasets. This may be because these datasets are not very ‘find-able,’ organizations are not creating them, re-using data is not a common practice, the data are of questionable quality/integrity, there are no descriptors, or a variety of other reasons.

The use of ‘seamless’ data is something that development and humanitarian agencies might want to get better at. Even though large swaths of the populations that we work with are not yet online, this is changing. And if we are using digital tools and applications in our work, we shouldn’t let that data go to waste if it can help us improve our services or better understand the impact and value of the programs we are implementing. (At the very least, we had better understand what seamless data the tools, applications and platforms we’re using are collecting so that we can manage data privacy and security of our users and ensure they are not being violated by third parties!)

Big data is also new to the development sector, and there may be good reason it is not yet widely used. Many of the populations we are working with are not producing much data — though this is also changing as digital financial services and mobile phone use have become almost universal and the use of smartphones is on the rise. Normally organizations require new knowledge, skills, partnerships and tools to access and use existing big data sets or to do any data harvesting. Some say that big data, along with ‘seamless’ data, will one day replace our current form of MERL. As artificial intelligence and machine learning advance, who knows… (and it’s not only MERL practitioners who will be out of a job – but that’s a conversation for another time!)

Not every organization needs to be using all four of these kinds of data, but we should at least be aware that they are out there and consider whether they are of use to our MERL efforts, depending on what our programs look like, who we are working with, and what kind of MERL we are tasked with.

I’m curious how other people conceptualize their buckets of data, and where I’ve missed something or defined these buckets erroneously…. Thoughts?

Read Full Post »

Development, humanitarian and human rights organizations increasingly collect and use digital data at the various stages of their programming. This type of data has the potential to yield great benefit, but it can also increase individual and community exposure to harm and privacy risks. How can we as a sector better balance data collection and open data sharing with privacy and security, especially when it involves the most vulnerable?

A number of donors, humanitarian and development organizations (including Oxfam, CRS, UN bodies and others) have developed or are in the process of developing guidelines to help them to be more responsible about collection, use, sharing and retention of data from those who participate in their programs.

I’m part of a team (including mStar, Sonjara, Georgetown University, the USAID Global Development Lab, and an advisory committee that includes several shining stars from the ‘responsible data’ movement) that is conducting research on existing practices, policies, systems, and legal frameworks through which international development data is collected, used, shared, and released. Based on this research, we’ll develop ‘responsible data’ practice guidelines for USAID that aim to help:

  • Mitigate privacy and security risks for beneficiaries and others
  • Improve performance and development outcomes through use of data
  • Promote transparency, accountability and public good through open data

The plan is to develop draft guidelines and then to test their application on real programs.

We are looking for digital development projects to assess how our draft guidelines would work in real world settings. Once the projects are selected, members of the research team will visit them to better understand “on-the-ground” contexts and project needs. We’ll apply draft practice guidelines to each case with the goal of identifying what parts of the guidelines are useful/ applicable, and where the gaps are in the guidelines. We’ll also capture feedback from the project management team and partners on implications for project costs and timelines, and we’ll document existing digital data-related good practices and lessons. These findings will further refine USAID’s Responsible Data Practice guidelines.

What types of projects are we looking for?

  • Ongoing or recently concluded projects that are using digital technologies to collect, store, analyze, manage, use and share individuals’ data.
  • Cases where data collected is sensitive or may put project participants at risk.
  • The project should have informal or formal processes for privacy/security risk assessment and mitigation, especially with respect to field implementation of digital technologies (listed above) as part of their program. These may be implicit or explicit (i.e., documented or written). They potentially include formal review processes conducted by ethics review boards or institutional review boards (IRBs).
  • All sectors of international development and all geographies are welcome to submit case studies. We are looking for diversity in context and programming.
  • We prefer case studies from USAID-funded projects but are open to receiving case studies from other donor-supported projects.

If you have a project or an activity that falls into the above criteria, please let us know here. We welcome multiple submissions from one organization; just reuse the form for each proposed case study.

Please submit your projects by February 15, 2017.

And please share this call with others who may be interested in contributing case studies.

Click here to submit your case study.

Also feel free to get in touch with me if you have questions about the project or the call!

 

Read Full Post »

At the 2016 American Evaluation Association conference, I chaired a session on benefits and challenges with ICTs in Equity-Focused Evaluation. The session frame came from a 2016 paper on the same topic. Panelists Kecia Bertermann from Girl Effect and Herschel Sanders from RTI added fascinating insights on the methodological challenges to consider when using ICTs for evaluation purposes, and discussant Michael Bamberger closed out with critical points based on his 50+ years doing evaluations.

ICTs include a host of technology-based tools, applications, services, and platforms that are overtaking the world. We can think of them in three key areas: technological devices, social media/internet platforms and digital data.

An equity-focused evaluation implies ensuring space for the voices of excluded groups and avoiding the traditional top-down approach. It requires:

  • Identifying vulnerable groups
  • Opening up space for them to make their voices heard through channels that are culturally responsive, accessible and safe
  • Ensuring their views are communicated to decision makers

It is believed that ICTs, especially mobile phones, can help with inclusion in the implementation of development and humanitarian programming. Mobile phones are also held up as devices that can allow evaluators to reach isolated or marginalized groups and individuals who are not usually engaged in research and evaluation. Often, however, mobiles only overcome geographic exclusion. Evaluators need to think harder when it comes to other types of exclusion – such as that related to disability, gender, age, political status or views, ethnicity, literacy, or economic status – and we need to consider how these various types of exclusion can combine to exacerbate marginalization (e.g., “intersectionality”).

We are seeing increasing use of ICTs in evaluation of programs aimed at improving equity. Yet these tools also create new challenges. The way we design evaluations and how we apply ICT tools can make all the difference between including new voices and feedback loops or reinforcing existing exclusions or even creating new gaps and exclusions.

Some of the concerns with the use of ICTs in equity-based evaluation include:

Methodological aspects:

  • Are we falling victim to ‘elite capture’ — only hearing from higher educated, comparatively wealthy men, for example? How does that bias our information? How can we offset that bias or triangulate with other data and multi-methods rather than depending only on one tool-based method?
  • Are we relying too heavily on things that we can count or multiple-choice responses because that’s what most of these new ICT tools allow?
  • Are we spending all of our time on a device rather than in communities engaging with people and seeking to understand what’s happening there in person?
  • Is reliance on mobile devices or self-reporting through mobile surveys causing us to miss contextual clues that might help us better interpret the data?
  • Are we falling into the trap of fallacy in numbers – in other words, imagining that because lots of people are saying something, that it’s true for everyone, everywhere?

Organizational aspects:

  • Do digital tools require a costly, up-front investment that some organizations are not able to make?
  • How do fear and resistance to using digital tools impact on data gathering?
  • What kinds of organizational change processes are needed amongst staff or community members to address this?
  • What new skills and capacities are needed?

Ethical aspects:

  • How are researchers and evaluators managing informed consent considering the new challenges to privacy that come with digital data? (Also see: Rethinking Consent in the Digital Age)?
  • Are evaluators and non-profit organizations equipped to keep data safe?
  • Is it possible to anonymize data in the era of big data, given the capacity to cross data sets and re-identify people? (See the sketch after this list.)
  • What new risks might we be creating for community members? To local enumerators? To ourselves as evaluators? (See: Developing and Operationalizing Responsible Data Policies)
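
The re-identification question above can be made concrete with a k-anonymity check: group records by the attributes an outsider might already know (quasi-identifiers) and look at the smallest group. Below is a minimal sketch with made-up fields; a k of 1 means at least one person is unique in the dataset and potentially re-identifiable once the file is crossed with another.

    from collections import Counter

    def k_anonymity(records, quasi_identifiers):
        """Smallest group size when records are grouped by quasi-identifiers.
        A low k means individuals may be re-identified by crossing datasets."""
        groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
        return min(groups.values())

    survey = [
        {"age_band": "20-24", "district": "North", "sex": "F"},
        {"age_band": "20-24", "district": "North", "sex": "F"},
        {"age_band": "45-49", "district": "East", "sex": "M"},
    ]
    # The one man aged 45-49 in East district is unique, so k is 1:
    # 'anonymized' or not, he is exposed if this file meets another dataset.
    print(k_anonymity(survey, ["age_band", "district", "sex"]))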

Evaluation of Girl Effect’s online platform for girls

Kecia walked us through how Girl Effect has designed an evaluation of an online platform and applications for girls. The platform itself brings constraints because it only works on feature phones and smartphones. For this reason, Girl Effect decided to work with 14-16-year-old urban girls in megacities who have access to these types of devices yet still experience multiple vulnerabilities, such as gender-based violence and sexual violence, early pregnancy, low levels of school completion, poor health services and lack of reliable health information, and/or low self-esteem and self-confidence.

The big questions for this program include:

  • Is the content reaching the girls that Girl Effect set out to reach?
  • Is the content on the platform contributing to change?

Because the girl users are on the platform, Girl Effect can use features such as polls and surveys for self-reported change. However, because the girls are under 18, there are privacy and security concerns that sometimes limit the extent to which the organization feels comfortable tracking user behavior. In addition, the type of phones that the girls are using and the fact that they may be borrowing others’ phones to access the site adds another level of challenges. This means that Girl Effect must think very carefully about the kind of data that can be gleaned from the site itself, and how valid it is.

The organization is using a knowledge, attitudes and practices (KAP) framework and exploring ways that KAP can be measured through some of the exciting data capture options that come with an online platform. However, it’s hard to know if offline behavior is actually shifting, making it important to also gather information that helps interpret the self-reported behavior data.

Girl Effect is complementing traditional KAP indicators with web analytics (unique users, repeat visitors, dwell times, bounce rates, ways that users arrive at the site), push surveys that go out to users, and polls that appear after an article (“Was this information helpful? Was it new to you? Did it change your perceptions? Are you planning to do something different based on this information?”). Proxy indicators are also being developed to help interpret the data. For example, does an increase in the frequency of a particular user’s commenting on the site link with greater self-esteem or self-efficacy?
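
Computing a proxy like commenting frequency from platform event logs is the easy part; the interpretive leap from ‘comments more often’ to ‘greater self-efficacy’ is what needs the triangulation described below. A minimal sketch, with a hypothetical log format:

    from collections import defaultdict
    from datetime import date

    def monthly_comment_counts(events):
        """events: (user_id, date) pairs for comment actions.
        Returns per-user comment counts by month, a candidate proxy
        signal, not evidence of self-esteem on its own."""
        counts = defaultdict(lambda: defaultdict(int))
        for user, day in events:
            counts[user][(day.year, day.month)] += 1
        return {user: sorted(months.items()) for user, months in counts.items()}

    log = [("user_041", date(2016, 5, 2)),
           ("user_041", date(2016, 6, 9)),
           ("user_041", date(2016, 6, 30))]
    print(monthly_comment_counts(log))
    # {'user_041': [((2016, 5), 1), ((2016, 6), 2)]}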

However, there is only so much that can be gleaned from an online platform when it comes to behavior change, so the organization is complementing the online information with traditional, in-person, qualitative data gathering. The site is helpful there, however, for recruiting users for focus groups and in-depth interviews. Girl Effect wants to explore KAP and online platforms, yet also wants to be careful about making assumptions and using proxy indicators, so the traditional methods are incorporated into the evaluation as a way of triangulating the data. The evaluation approach is a careful balance of security considerations, attention to proxy indicators, digital data and traditional offline methods.

Using SMS surveys for evaluation: Who do they reach?

Herschel took us through a study conducted by RTI (Sanders, Lau, Lombaard, Baker, Eyerman, Thalji) in partnership with TNS about the use of SMS surveys for evaluation. She noted that the rapid growth of mobile phones, particularly in African countries, opens up new possibilities for data collection, and there has been an explosion in the use of SMS for national, population-based surveys.

Like most ICT-enabled MERL methods, use of SMS for general population surveys brings both promise:

  • High mobile penetration in many African countries means we can theoretically reach a large segment of the population.
  • These surveys are much faster and less expensive than traditional face-to-face surveys.
  • SMS surveys work on virtually any GSM phone.
  • SMS offers the promise of reach. We can reach a large and geographically dispersed population, including some areas that are excluded from face-to-face surveys because of security concerns.

And challenges:

  • Coverage: We cannot include illiterate people or those without access to a mobile phone. Also, some sample frames may not include the entire population with mobile phones.
  • Non-response: Response rates are expected to be low for a variety of reasons, including limited network connectivity or electricity; if two or more people share a phone, we may not reach all people associated with that phone; and people may feel a lack of confidence with technology. These factors might affect certain sub-groups differently, so we might underrepresent the poor, rural areas, or women.
  • Quality of measurement: We only have 160 characters for both the question and the response options (see the sketch after this list). Further, an interviewer is not present to clarify any questions.
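
The 160-character constraint can at least be checked mechanically at questionnaire design time. A minimal sketch, assuming the basic GSM 7-bit alphabet (characters outside it lower the single-message limit):

    GSM_SINGLE_SMS_LIMIT = 160  # GSM 7-bit alphabet, one message

    def fits_single_sms(question, options):
        """True if a question plus numbered response options fit in one SMS."""
        rendered = question + " " + " ".join(
            f"{i}.{opt}" for i, opt in enumerate(options, 1))
        return len(rendered) <= GSM_SINGLE_SMS_LIMIT

    question = "How many people, including you, usually share this phone?"
    print(fits_single_sms(question, ["Only me", "2", "3", "4 or more"]))  # True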

RTI’s research aimed to answer the question: How representative are general population SMS surveys and are there ways to improve representativeness?

Three core questions were explored via SMS invitations sent in Kenya, Ghana, Nigeria and Uganda:

  • Does the sample frame match the target population?
  • Does non-response have an impact on representativeness?
  • Can we improve quality of data by optimizing SMS designs?

One striking finding was the extent to which response rates may vary by country, Herschel said. In some cases this was affected by the agreements in place in each country; some required a stronger opt-in process. In Kenya and Uganda, where a higher percentage of users had already gone through an opt-in process and had already participated in SMS-based surveys, there was a higher rate of response.

[Screenshot: SMS survey response rates by country]

These response rates, especially in Ghana and Nigeria, are noticeably low, and their impact is evident in the data. In Nigeria, where researchers compared the SMS survey results against the face-to-face data, there was a clear skew away from older females and towards those with a higher level of education and those in full-time employment.

Additionally, 14% of the face-to-face sample, filtered on mobile users, had a post-secondary education, whereas in the SMS data this figure was 60%.

Compared to face-to-face data, SMS respondents were also:

  • More likely to have more than 1 SIM card
  • Less likely to share a SIM card
  • More likely to be aware of and use the Internet.

This sketches a portrait of a more technologically savvy respondent in the SMS surveys, said Herschel.

[Screenshot: comparison of SMS and face-to-face respondent profiles]

The team also explored incentives and found that a higher incentive had no meaningful impact, but adding reminders to the design of the SMS survey process helped achieve a wider slice of the sample and a more diverse profile.

Response order effects were explored, along with issues related to questionnaire designers trying to pack as much as possible onto the screen rather than asking yes/no questions. Herschel highlighted that when multiple-choice options were given, 76% of SMS survey respondents gave only 1 response, compared to 12% for the face-to-face data.

Lastly, the research found no meaningful difference in response rate between a survey with 8 questions and one with 16 questions, she said. This may go against common convention, which dictates that “the shorter, the better” for an SMS survey. There was no observable break-off rate based on survey length, giving confidence that longer surveys may be possible via SMS than initially thought.

Herschel noted that some conclusions can be drawn:

  • SMS excels for rapid response (e.g., Ebola)
  • SMS surveys have substantial non-response errors
  • SMS surveys overrepresent the younger, more educated, and more tech-savvy

These errors mean SMS cannot replace face-to-face surveys … yet. However, we can optimize SMS survey design now by:

  • Using reminders during data collection
  • Being aware of response order effects, and randomizing substantive response options to avoid bias (see the sketch after this list)
  • Avoiding “select all that apply” questions; it’s OK to have longer surveys instead
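
Randomizing response order is cheap to build into most survey logic. A minimal sketch (a hypothetical helper, not any particular SMS platform’s API) that shuffles substantive options per respondent while anchoring “Don’t know” at the end:

    import random

    def randomized_options(options, respondent_id, anchored_last=("Don't know",)):
        """Shuffle substantive options to spread response-order effects.
        Seeding by respondent_id keeps each person's order reproducible."""
        substantive = [o for o in options if o not in anchored_last]
        rng = random.Random(respondent_id)
        rng.shuffle(substantive)
        return substantive + [o for o in options if o in anchored_last]

    opts = ["Radio", "TV", "Mobile phone", "Neighbors", "Don't know"]
    print(randomized_options(opts, respondent_id="254700000001"))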

However, she also noted that the landscape is rapidly changing, so future research may shed light on how reactions change as familiarity with SMS and access to phones grow.

Summarizing the opportunities and challenges with ICTs in Equity-Focused Evaluation

Finally we heard some considerations from Michael, who said that people often get so excited about possibilities for ICT in monitoring, evaluation, research and learning that they neglect to address the challenges. He applauded Girl Effect and RTI for their careful thinking about the strengths and weaknesses in the methods they are using. “It’s very unusual to see the type of rigor shown in these two examples,” he said.

Michael commented that a clear message from both presenters and from other literature and experiences is the need for mixed methods. Some things can be done on a phone, but not all things. “When the data collection is remote, you can’t observe the context. For example, if it’s a teenage girl answering the voice or SMS survey, is the mother-in-law sitting there listening or watching? What are the contextual clues you are missing out on? In a face-to-face context an evaluator can see if someone is telling the girl how to respond.”

Additionally, “no survey framework will cover everyone,” he said. “There may be children who are not registered on the school attendance list that is being used to identify survey respondents. What about immigrants who are hiding from sight out of fear and not registered by the government?” He cautioned evaluators not to forget about folks in the community who are totally missed and skipped over, and how the use of new technology could make that problem even greater.

Another point Michael raised is that communicating through technology channels creates a different behavior dynamic. One is not better than the other, but evaluators need to be aware that they are different. “Everyone with teenagers knows that the kind of things we communicate online are very different than what we communicate in a face-to-face situation,” he said. “There is a style of how we communicate. You might be more frank and honest on an online platform. Or you may see other differences in just your own behavior dynamics on how you communicate via different kinds of tools,” he said.

He noted that a range of issues has been raised in connection with ICTs in evaluation, but that it’s been rare to see priority given to evaluation rigor. The study Herschel presented was one example of a focus on rigor and issues of bias, but people often get so excited that they forget to think about this. “Who has access? Are people sharing phones? What are the gender dynamics? Is a husband restricting what a woman is doing on the phone? There’s a range of selection bias issues that are ignored,” he said.

Quantitative bias and mono-methods are another issue in ICT-focused evaluation. The tool choice will determine what an evaluator can ask, and that in turn affects the quality of responses. This leads to issues with construct validity. If you are trying to measure complex ideas like girls’ empowerment and you reduce this to a proxy, there can often be a large jump in interpretation. This doesn’t happen only when using mobile phones for evaluation data collection purposes, but there are certain areas that may be exacerbated when the phone is the tool. So evaluators need to better understand behavior dynamics and how they relate to the technical constraints of a particular digital or mobile platform.

The aspect of information dissemination is another one worth raising, said Michael. “What are the dynamics? When we incorporate new tools, we tend to assume there is just one step between the information sharer and receiver, yet there is plenty of literature that shows this is normally at least two steps. Often people don’t get information directly; rather, they share and talk with someone else who helps them verify and interpret the information they get on a mobile phone. There are gatekeepers who control or interpret, and evaluators need to better understand those dynamics. Social network analysis can sometimes help with that – looking at who communicates with whom, who is part of the main influencer hub, who is marginalized. This could be exciting to explore more.”

Lastly, Michael reiterated the importance of mixed methods and needing to combine online information and communications with face-to-face methods and to be very aware of invisible groups. “Before you do an SMS survey, you may need to go out to the community to explain that this survey will be coming,” he said. “This might be necessary to encourage people to even receive the survey, to pay attention or to answer it.” The case studies in the paper “The Role of New ICTs in Equity-Focused Evaluation: Opportunities and Challenges” explore some of these aspects in good detail.

Read Full Post »

This post is co-authored by Emily Tomkys, Oxfam GB; Danna Ingleton, Amnesty International; and me (Linda Raftree, Independent)

At the MERL Tech conference in DC this month, we ran a breakout session on rethinking consent in the digital age. Most INGOs have not updated their consent forms and policies for many years, yet the growing use of technology in our work, for many different purposes, raises many questions and insecurities that are difficult to address. Our old ways of requesting and managing consent need to be modernized to meet the new realities of digital data and the changing nature of data. Is informed consent even possible when data is digital and/or opened? Do we have any way of controlling what happens with that data once it is digital? How often are organizations violating national and global data privacy laws? Can technology be part of the answer?

Let’s take a moment to clarify what kind of consent we are talking about in this post. Being clear on this point is important because there are many parallel conversations on consent in relation to technology. For example, there are people exploring the use of consent frameworks or rhetoric in ICT user agreements – asking whether signing such user agreements can really be considered consent. There are others exploring the issue of consent for content distribution online, in particular personal or sensitive content such as private videos and photographs. And while these (and other) consent debates are related and important to this post, what we are specifically talking about is how we, our organizations and projects, address the issue of consent when we are collecting and using data from those who participate in programs or in the monitoring, evaluation, research and learning (MERL) that we are implementing.

No matter how someone is engaging with the data, how they do so and the decisions they make will impact on what is disclosed to the data subject.

This is as timely as ever because introducing new technologies and kinds of data means we need to change how we build consent into project planning and implementation. In fact, it gives us an amazing opportunity to build consent into our projects in ways that our organizations may not have considered in the past. While it used to be that informed consent was the domain of frontline research staff, the reality is that getting informed consent – where there is disclosure, voluntariness, comprehension and competence of the data subject –  is the responsibility of anyone ‘touching’ the data.

Here we share examples from two organizations who have been exploring consent issues in their tech work.

Over the past two years, Girl Effect has been incorporating a number of mobile and digital tools into its programs. These include both the Girl Effect Mobile (GEM) and the Technology Enabled Girl Ambassadors (TEGA) programs.

Girl Effect Mobile is a global digital platform that is active in 49 countries and 26 languages. It is being developed in partnership with Facebook’s Free Basics initiative. GEM aims to provide a platform that connects girls to vital information, entertaining content and to each other. Girl Effect’s digital privacy, safety and security policy directs the organization to review and revise its terms and conditions to ensure that they are ‘girl-friendly’ and respond to local context and realities, and that in addition to protecting the organization (as many T&Cs are designed to do), they also protect girls and their rights. The GEM terms and conditions were initially a standard T&C. They were too long to expect girls to look at them on a mobile, the language was legalese, and they seemed one-sided. So the organization developed a new T&C with simplified language and removed some of the legal clauses that were irrelevant to the various contexts in which GEM operates. Consent language was added to cover polls and surveys, since Girl Effect uses the platform to conduct research and for its monitoring, evaluation and learning work. In addition, summary points are highlighted in a shorter version of the T&Cs with a link to the full T&Cs. Girl Effect also develops short articles about online safety, privacy and consent as part of the GEM content as a way of engaging girls with these ideas as well.

TEGA is a girl-operated mobile-enabled research tool currently operating in Northern Nigeria. It uses data-collection techniques and mobile technology to teach girls aged 18-24 how to collect meaningful, honest data about their world in real time. TEGA provides Girl Effect and partners with authentic peer-to-peer insights to inform their work. Because Girl Effect was concerned that girls being interviewed may not understand the consent they were providing during the research process, they used the mobile platform to expand on the consent process. They added a feature where the TEGA girl researchers play an audio clip that explains the consent process. Afterwards, girls who are being interviewed answer multiple choice follow up questions to show whether they have understood what they have agreed to. (Note: The TEGA team report that they have incorporated additional consent features into TEGA based on examples and questions shared in our session).
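
A comprehension check like TEGA’s maps naturally onto a small gating function: play the audio, ask a few multiple-choice questions, and only proceed if enough are answered correctly. A minimal sketch with invented questions (not Girl Effect’s actual instrument):

    CONSENT_QUIZ = [
        ("Will your name be shared outside the research team?", "No"),
        ("Can you stop the interview at any time?", "Yes"),
    ]

    def comprehension_ok(answers, quiz=CONSENT_QUIZ, pass_mark=1.0):
        """True only if enough quiz answers are correct to proceed."""
        correct = sum(given == expected
                      for given, (_question, expected) in zip(answers, quiz))
        return correct / len(quiz) >= pass_mark

    print(comprehension_ok(["No", "Yes"]))   # True: proceed with interview
    print(comprehension_ok(["Yes", "Yes"]))  # False: re-explain and re-check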

Oxfam, in addition to developing its Responsible Program Data Policy, has been exploring ways in which technology can help address contemporary consent challenges. The organization had doubts about how much its informed consent statement (which explains who the organization is, what the research is about and why Oxfam is collecting data, and asks whether the participant is willing to be interviewed) was understood, and whether informed consent is really possible in the digital age. All the same, the organization wanted to be sure that the consent information was being read out in full by enumerators (the interviewers). There were questions about how much this varies between enumerators as well as across different contexts and countries of operation. To explore whether communities were hearing the consent statement fully, Oxfam is using mobile data collection with audio recordings in the local language and using speed violations to know whether the time spent on the consent page is sufficient, given the length of the audio file played. This is by no means foolproof, but what Oxfam has found so far is that the audio file is often not played in full, or not at all.
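
The ‘speed violation’ check Oxfam describes is simple to express with survey paradata. A minimal sketch, with hypothetical field names, that flags interviews where the consent screen was dismissed before the audio statement could have played in full:

    def flag_consent_speeders(paradata, audio_length_s, tolerance=0.9):
        """paradata: (interview_id, seconds_on_consent_screen) tuples.
        Flags interviews where time on the consent screen was shorter than
        the consent audio (with some tolerance for timing noise)."""
        return [interview_id for interview_id, seconds in paradata
                if seconds < audio_length_s * tolerance]

    visits = [("INT-001", 95.0), ("INT-002", 12.5), ("INT-003", 88.0)]
    print(flag_consent_speeders(visits, audio_length_s=90))  # ['INT-002']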

Efforts like these are only the beginning, but they help to develop a resource base and stimulate more conversations that can help organizations and specific projects think through consent in the digital age.

Additional resources include this framework for Consent Policies developed at a Responsible Data Forum gathering.

Because of how quickly technology and data use is changing, one idea that was shared was that rather than using informed consent frameworks, organizations may want to consider defining and meeting a ‘duty of care’ around the use of the data they collect. This can be somewhat accomplished through the creation of organizational-level Responsible Data Policies. There are also interesting initiatives exploring new ways of enabling communities to define consent themselves – like this data licenses prototype.

The development and humanitarian sectors really need to take notice, adapt and update their thinking constantly to keep up with technology shifts. We should also be doing more sharing about these experiences. By working together on these types of wicked challenges, we can advance without duplicating our efforts.

Read Full Post »

Over the past 4 years I’ve had the opportunity to look more closely at the role of ICTs in Monitoring and Evaluation practice (and the privilege of working with Michael Bamberger and Nancy MacPherson in this area). When we started out, we wanted to better understand how evaluators were using ICTs in general, how organizations were using ICTs internally for monitoring, and what was happening overall in the space. A few years into that work we published the Emerging Opportunities paper that aimed to be somewhat of a landscape document or base report upon which to build additional explorations.

As a result of this work, in late April I had the pleasure of talking with the OECD-DAC Evaluation Network about the use of ICTs in Evaluation. I drew from a new paper on The Role of New ICTs in Equity-Focused Evaluation: Opportunities and Challenges that Michael, Veronica Olazabal and I developed for the Evaluation Journal. The core points of the talk are below.

*****

In the past two decades there have been 3 main explosions that impact M&E: a device explosion (mobiles, tablets, laptops, sensors, dashboards, satellite maps, Internet of Things, etc.); a social media explosion (digital photos, online ratings, blogs, Twitter, Facebook, discussion forums, WhatsApp groups, co-creation and collaboration platforms, and more); and a data explosion (big data, real-time data, data science and analytics moving into the field of development, capacity to process huge data sets, etc.). This new ecosystem is something that M&E practitioners should be tapping into and understanding.

In addition to these 'explosions,' there's been a growing emphasis on documenting the use of ICTs in evaluation, alongside a greater thirst for understanding how, when, where and why to use ICTs for M&E. We've held and attended large gatherings on ICTs and Monitoring, Evaluation, Research and Learning (MERL Tech). And in the past year or two, it seems the development and humanitarian fields can't stop talking about the potential of "data" – small data, big data, inclusive data, real-time data for the SDGs, etc. – and the possible roles for ICTs in collecting, analyzing, visualizing, and sharing that data.

The field has advanced in many ways. But as the tools and approaches develop and shift, so do our understandings of the challenges. Concerns about the privacy risks inherent in more data and "open data" have caught up with the enthusiasm about the possibilities of new technologies in this space. Likewise, there is more in-depth discussion about methodological challenges, bias and unintended consequences when new ICT tools are used in evaluation.

Why should evaluators care about ICT?

There are 2 core reasons that evaluators should care about ICTs. Reason number one is practical. ICTs help address real-world challenges in M&E: insufficient time, insufficient resources and poor-quality data. And let's be honest – ICTs are not going away, and evaluators need to accept that reality at a practical level as well.

Reason number two is both professional and personal. If evaluators want to stay abreast of their field, they need to be aware of ICTs. If they want to improve evaluation practice and influence better development, they need to know if, where, how and why ICTs may (or may not) be of use. Evaluation commissioners need to have the skills and capacities to know which new ICT-enabled approaches are appropriate for the type of evaluation they are soliciting and whether the methods being proposed are going to lead to quality evaluations and useful learnings. One trick to using ICTs in M&E is understanding who has access to what tools, devices and platforms already, and what kind of information or data is needed to answer what kinds of questions or to communicate which kinds of information. There is quite a science to this and one size does not fit all. Evaluators, because of their critical thinking skills and social science backgrounds, are very well placed to take a more critical view of the role of ICTs in Evaluation and in the worlds of aid and development overall and help temper expectations with reality.

Though ICTs are being used along all phases of the program cycle (research/diagnosis and consultation, design and planning, implementation and monitoring, evaluation, reporting/sharing/learning) there is plenty of hype in this space.

There is certainly a place for ICTs in M&E, if introduced with caution and clear analysis about where, when and why they are appropriate and useful, and evaluators are well-placed to take a lead in identifying and trialing what ICTs can offer to evaluation. If they don't, others are going to do it for them!

Promising areas

There are four key areas (I’ll save the nuance for another time…) where I see a lot of promise for ICTs in Evaluation:

1. Data collection. Here I'd divide it into 3 kinds of data collection, and note that the latter two normally also provide 'real-time' data:

  • Structured data gathering – where enumerators or evaluators go out with mobile devices to collect specific types of data (whether quantitative or qualitative).
  • Decentralized data gathering – where the focus is on self-reporting or 'feedback' from program participants or research subjects.
  • Data 'harvesting' – where data is gathered from existing online sources like social media sites, WhatsApp groups, etc.
  • Across all three, 'real-time' data collection aims to provide data in a much shorter time frame; this is normally for monitoring, but the resulting data sets may be useful for evaluators as well.

2. New and mixed methods. These are areas that Michael Bamberger has been looking at quite closely. New ICT tools and data sources can contribute to more traditional methods. But triangulation still matters.

  • Improving construct validity – enabling a greater number of data sources at various levels that can contribute to better understanding of multi-dimensional indicators (for example, looking at changes in the volume of withdrawals from ATMs, records of electronic purchases of agricultural inputs, satellite images showing lorries traveling to and from markets, and the frequency of Tweets that contain the words hunger or sickness).
  • Evaluating complex development programs – tracking complex and non-linear causal paths and implementation processes by combining multiple data sources and types (for example, participant feedback plus structured qualitative and quantitative data, big data sets/records, census data, social media trends and input from remote sensors).
  • Mixed methods approaches and triangulation – using traditional and new data sources (for example, using real-time data visualization to provide clues on where additional focus group discussions might be needed to better understand the situation or improve data interpretation; a rough sketch of this kind of comparison follows this list).
  • Capturing wide-scale behavior change – using social media data harvesting and sentiment analysis to better understand widespread, large-scale changes in perceptions, attitudes and stated behaviors, and to analyze how these shift over time.
  • Combining big data and real-time data – these emerging approaches may become valuable for identifying potential problems and emergencies that need further exploration using traditional M&E approaches.
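
As a rough illustration of the triangulation idea, the sketch below compares an assumed survey-based satisfaction indicator with an assumed social-media sentiment score by district, and flags large gaps as candidates for follow-up focus group discussions. The file names, column names and the 0.3 threshold are all invented for illustration.

```python
# Triangulate a survey indicator against harvested social-media sentiment.
# Districts where the two sources diverge are candidates for qualitative
# follow-up (e.g. focus group discussions).

import pandas as pd

survey = pd.read_csv("survey_indicators.csv")    # columns: district, satisfaction (0-1)
sentiment = pd.read_csv("social_sentiment.csv")  # columns: district, sentiment (-1..1)

merged = survey.merge(sentiment, on="district")
# Rescale sentiment to 0-1 so the two measures are roughly comparable.
merged["sentiment_scaled"] = (merged["sentiment"] + 1) / 2
merged["divergence"] = (merged["satisfaction"] - merged["sentiment_scaled"]).abs()

# Large gaps between what people report in surveys and what they express
# online suggest where qualitative work could improve interpretation.
for_fgd = merged[merged["divergence"] > 0.3].sort_values("divergence", ascending=False)
print(for_fgd[["district", "satisfaction", "sentiment_scaled", "divergence"]])
```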

3. Data Analysis and Visualization. This is an area that is less advanced than the data collection area – often it seems we’re collecting more and more data but still not really using it! Some interesting things here include:

  • Big data and data science approaches – there's a growing body of work exploring how to use predictive analytics to help define which programs might work best in which contexts and with which kinds of people. (How this connects to evaluation is still being worked out, and there are lots of ethical aspects to think about here too — most of us don't like the idea of predictive policing, for example, and it would be easy to end up somewhere that was never the aim.) With big data, you often don't start with a hypothesis — you go looking for patterns in huge data sets. With evaluation, you normally have particular questions and you design a methodology to answer them. It's interesting to think about how these two approaches are going to combine.
  • Data Dashboards – these are becoming very popular as people try to work out how to do a better job of using the data that is coming into their organizations for decision making. There are some efforts at pulling data from community level all the way up to UN representatives, for example, the global level consultations that were done for the SDGs or using “near real-time data” to share with board members. Other efforts are more focused on providing frontline managers with tools to better tweak their programs during implementation.
  • Meta-evaluation – some organizations are working on ways to better draw conclusions from what we are learning from evaluation around the world and to better visualize these conclusions to inform investments and decision-making.

4. Equity-focused Evaluation. As digital devices and tools become more widespread, there is hope that they can enable greater inclusion and broader voice and participation in the development process. There are still huge gaps, however — in some parts of the world women are 23% less likely than men to have access to a mobile phone — and when you talk about Internet access the gap is much, much bigger. But there are cases where greater participation in evaluation processes is being sought through mobile. When this is balanced with other methods to ensure that we're not excluding the very poorest or those without access to a mobile phone, it can help to broaden the pool of voices we are hearing from. Some examples are:

  • Equity-focused evaluation / participatory evaluation methods – some evaluators are seeking to incorporate more real-time (or near real-time) feedback loops where participants provide direct feedback via SMS or voice recordings; a rough sketch of summarizing such feedback follows this list.
  • Using mobile to directly access participants through mobile-based surveys.
  • Enhancing data visualization for returning results back to the community and supporting community participation in data interpretation and decision-making.
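
A feedback loop like the one in the first bullet can start very simply. The sketch below summarizes coded SMS replies that have been exported from an SMS gateway to a CSV file; the file name, column name and coding scheme are assumptions for illustration.

```python
# Summarize coded SMS feedback exported to a CSV file (one row per reply).

import csv
from collections import Counter

def summarize_sms_feedback(path):
    counts, total = Counter(), 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # e.g. participants reply 1 = satisfied, 2 = partly, 3 = not satisfied
            counts[row["response"].strip()] += 1
            total += 1
    return {code: n / total for code, n in counts.items()} if total else {}

print(summarize_sms_feedback("sms_responses.csv"))
```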

Challenges

Alongside all the potential, of course there are also challenges. I’d divide these into 3 main areas:

1. Operational/institutional

Some of the biggest challenges to improving the use of ICTs in evaluation are institutional or related to institutional change processes. In focus groups I’ve done with different evaluators in different regions, this was emphasized as a huge issue. Specifically:

  • Potentially heavy up-front investment costs, training efforts, and/or maintenance costs if adopting/designing a new system at wide scale.
  • Tech- or tool-driven M&E processes – often these are also donor-driven. This happens because tech is perceived as cheaper, easier, scalable and objective. It also happens because people and management are under a lot of pressure to "be innovative." Sometimes this ends up leading to an over-reliance on digital data and remote data collection, with time spent developing tools and looking at data sets on a laptop rather than 'on the ground' observing and engaging with local organizations and populations.
  • Little attention to institutional change processes, organizational readiness, and the capacity needed to incorporate new ICT tools, platforms, systems and processes.
  • Bureaucracy may mean that decisions happen far from the ground and that there is little capacity to make quick decisions, even when real-time data is available or analysis is provided frequently to decision-makers sitting at headquarters. Local staff often do not have decision-making power in their own hands and must wait on orders from on high before adapting or changing their program approaches and methods.
  • Swinging too far towards digital due to a lack of awareness that digital most often needs to be combined with the human. Digital technology always works better when combined with human interventions (such as visits to prepare people for using the technology, or making sure that gatekeepers, e.g., a husband or mother-in-law in the case of women, are on board). A main message from the World Bank's 2016 World Development Report "Digital Dividends" is that digital technology must always be combined with what the Bank calls "analog" (a.k.a. "human") approaches.

2. Methodological

Some of the areas that Michael and I have been looking at relate to how the introduction of ICTs could address issues of bias, rigor, and validity — yet how, at the same time, ICT-heavy methods may actually just change the nature of those issues or create new issues, as noted below:

  • Selection and sample bias – you may be reaching more people, but you're still going to be leaving some people out. Who is left out of mobile phone or ICT access and use? Typical respondents are male, educated and urban. How representative are these respondents of all ICT users, and of the total target population? (A rough reweighting sketch follows this list.)
  • Data quality and rigor – you may have an over-reliance on self-reporting via mobile surveys; lack of quality control ‘on the ground’ because it’s all being done remotely; enumerators may game the system if there is no personal supervision; there may be errors and bias in algorithms and logic in big data sets or analysis because of non-representative data or hidden assumptions.
  • Validity challenges – if there is a push to use a specific ICT-enabled evaluation method or tool without it being the right one, the design of the evaluation may not pass the validity challenge.
  • Fallacy of large numbers (in cases of national level self-reporting/surveying) — you may think that because a lot of people said something that it’s more valid, but you might just be reinforcing the viewpoints of a particular group. This has been shown clearly in research by the World Bank on public participation processes that use ICTs.
  • ICTs often favor extractive processes that do not involve local people and local organizations or provide benefit to participants/local agencies — data is gathered and sent 'up the chain' rather than shared or analyzed in a participatory way with local people or organizations. Not only is this disempowering, it may also harm data quality: people see little point in providing data that brings them no benefit.
  • There’s often a failure to identify unintended consequences or biases arising from use of ICTs in evaluation — What happens when you introduce tablets for data collection? What happens when you collect GPS information on your beneficiaries? What risks might you be introducing or how might people react to you when you are carrying around some kind of device?
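
One standard partial mitigation for the selection bias noted above is post-stratification: reweight respondents so that groups over-represented among phone owners count less and under-represented groups count more. The sketch below uses invented group shares and column names, and note that no reweighting can recover the views of people with no phone access at all.

```python
# Post-stratification of a mobile survey: reweight the achieved sample toward
# known population shares (e.g. from a census). Shares and columns are invented.

import pandas as pd

survey = pd.read_csv("mobile_survey.csv")  # columns: group, answer (0/1)

population_share = {"urban_male": 0.20, "urban_female": 0.20,
                    "rural_male": 0.30, "rural_female": 0.30}

sample_share = survey["group"].value_counts(normalize=True)
survey["weight"] = survey["group"].map(
    lambda g: population_share[g] / sample_share[g])

raw = survey["answer"].mean()
weighted = (survey["answer"] * survey["weight"]).sum() / survey["weight"].sum()
print(f"raw estimate: {raw:.2f}, reweighted estimate: {weighted:.2f}")
```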

3. Ethical and Legal

This is an area that I'm very interested in — especially as some donors have started asking for the raw data sets from any research, studies or evaluations that they are funding, and when these kinds of data sets are 'opened' there are all sorts of ramifications. There is quite a lot of heated discussion happening here. I was happy to see that DFID has just conducted a review of ethics in evaluation. Some of the core issues include:

  • Changing nature of privacy risks – issues here include privacy and protection of data; changing informed consent needs for digital data/open data; new risks of data leaks; and lack of institutional policies with regard to digital data.
  • Data rights and ownership: here there are issues with proprietary data sets; data ownership when there are public-private partnerships; the idea of 'data philanthropy' when it's not clear whose data is being donated; personal data 'for the public good'; open data/open evaluation/transparency; poor care taken when vulnerable people provide personally identifiable information; household data sets ending up in the hands of those who might abuse them; and the increasing impossibility of data anonymization, given that crossing data sets often means re-identification is easier than imagined.
  • Moving decisions and interpretation of data away from ‘the ground’ and upwards to the head office/the donor.
  • Little funding for trialing/testing the validity of new approaches that use ICTs and documenting what is working/not working/where/why/how to develop good practice for new ICTs in evaluation approaches.

Recommendations: 12 tips for better use of ICTs in M&E

Despite the rapid changes in the field in the 2 years since we first wrote our initial paper on ICTs in M&E, most of our tips for doing it better still hold true.

  1. Start with a high-quality M&E plan (not with the tech).
    • But also learn about the new tech-related possibilities that are out there so that you’re not missing out on something useful!
  2. Ensure design validity.
  3. Determine whether and how new ICTs can add value to your M&E plan.
    • It can be useful to bring in a trusted tech expert in this early phase so that you can find out if what you’re thinking is possible and affordable – but don’t let them talk you into something that’s not right for the evaluation purpose and design.
  4. Select or assemble the right combination of ICT and M&E tools.
    • You may find one off the shelf, or you may need to adapt or build one. This is a really tough decision, which can take a very long time if you’re not careful!
  5. Adapt and test the process with different audiences and stakeholders.
  6. Be aware of different levels of access and inclusion.
  7. Understand motivation to participate, incentivize in careful ways.
    • This includes motivation for both program participants and for organizations where a new tech-enabled tool/process might be resisted.
  8. Review/ensure privacy and protection measures, risk analysis.
  9. Try to identify unintended consequences of using ICTs in the evaluation.
  10. Build in ways for the ICT-enabled evaluation process to strengthen local capacity.
  11. Measure what matters – not what a cool ICT tool allows you to measure.
  12. Use and share the evaluation learnings effectively, including through social media.

Read Full Post »

Crowdsourcing our Responsible Data questions, challenges and lessons. (Photo by Amy O’Donnell).

At Catholic Relief Services' ICT4D Conference in May 2016, I worked with Amy O'Donnell (Oxfam GB) and Paul Perrin (CRS) to facilitate a participatory session that explored notions of Digital Privacy, Security and Safety. We had a full room, with a widely varied set of experiences and expertise.

The session kicked off with stories of privacy and security breaches. One person told of having personal data stolen when a federal government clearance database was compromised. We also shared how a researcher in Denmark scraped very personal data from the OK Cupid online dating site and opened it up to the public.

A comparison was made between the OK Cupid data situation and the work that we do as development professionals. When we collect very personal information from program participants, they may not expect that their household level income, health data or personal habits would be ‘opened’ at some point.

Our first task was to explore and compare the meaning of the terms: Privacy, Security and Safety as they relate to “digital” and “development.”

What do we mean by privacy?

The “privacy” group talked quite a bit about contextuality of data ownership. They noted that there are aspects of privacy that cut across different groups of people in different societies, and that some aspects of privacy may be culturally specific. Privacy is concerned with ownership of data and protection of one’s information, they said. It’s about who owns data and who collects and protects it and notions of to whom it belongs. Private information is that which may be known by some but not by all. Privacy is a temporal notion — private information should be protected indefinitely over time. In addition, privacy is constantly changing. Because we are using data on our mobile phones, said one person, “Safaricom knows we are all in this same space, but we don’t know that they know.”

Another said that in today’s world, “You assume others can’t know something about you, but things are actually known about you that you don’t even know that others can know. There are some facts about you that you don’t think anyone should know or be able to know, but they do.” The group mentioned website terms and conditions, corporate ownership of personal data and a lack of control of privacy now. Some felt that we are unable to maintain our privacy today, whereas others felt that one could opt out of social media and other technologies to remain in control of one’s own privacy. The group noted that “privacy is about the appropriate use of data for its intended purpose. If that purpose shifts and I haven’t consented, then it’s a violation of privacy.”

What do we mean by security?

The Security group considered security to relate to an individual’s information. “It’s your information, and security of it means that what you’re doing is protected, confidential, and access is only for authorized users.” Security was also related to the location of where a person’s information is hosted and the legal parameters. Other aspects were related to “a barrier – an anti-virus program or some kind of encryption software, something that protects you from harm…. It’s about setting roles and permissions on software and installing firewalls, role-based permissions for accessing data, and cloud security of individuals’ data.” A broader aspect of security was linked to the effects of hacking that lead to offline vulnerability, to a lack of emotional security or feeling intimidated in an online space. Lastly, the group noted that “we, not the systems, are the weakest link in security – what we click on, what we view, what we’ve done. We are our own worst enemies in terms of keeping ourselves and our data secure.”

What do we mean by safety?

The Safety group noted that it’s difficult to know the difference between safety and security. “Safety evokes something highly personal. Like privacy… it’s related to being free from harm personally, physically and emotionally.” The group raised examples of protecting children from harmful online content or from people seeking to harm vulnerable users of online tools. The aspect of keeping your online financial information safe, and feeling confident that a service was ‘safe’ to use was also raised. Safety was considered to be linked to the concept of risk. “Safety engenders a level of trust, which is at the heart of safety online,” said one person.

In the context of data collection for communities we work with – safety was connected to data minimization concepts and linked with vulnerability, and a compounded vulnerability when it comes to online risk and safety. “If one person’s data is not safely maintained it puts others at risk,” noted the group. “And pieces of information that are innocuous on their own may become harmful when combined.” Lastly, the notion of safety as related to offline risk or risk to an individual due to a specific online behavior or data breach was raised.

It was noted that in all of these terms: privacy, security and safety, there is an element of power, and that in this type of work, a power relations analysis is critical.

The Digital Data Life Cycle

After unpacking the above terms, Amy took the group through an analysis of the data life cycle (courtesy of the Engine Room’s Responsible Data website) in order to highlight the different moments where the three concepts (privacy, security and safety) come into play.

[Diagram: the digital data life cycle]

  • Plan/Design
  • Collect/Find/Acquire
  • Store
  • Transmit
  • Access
  • Share
  • Analyze/use
  • Retention
  • Disposal
  • Afterlife

Participants added additional stages in the data life cycle that they passed through in their work (coordinate, monitor the process, monitor compliance with data privacy and security policies). We placed the points of the data life cycle on the wall, and invited participants to:

  • Place a pink sticky note under the stage in the data life cycle that resonates or interests them most and think about why.
  • Place a green sticky note under the stage that is the most challenging or troublesome for them or their organizations and think about why.
  • Place a blue sticky note under the stage where they have the most experience, and to share a particular experience or tip that might help others to better manage their data life cycle in a private, secure and safe way.

Challenges, concerns and lessons

Design as well as policy are important!

  • Design drives everything else. We often start from the point of collection, when really it's at the design stage that we should think about the burden of data collection and define the minimum we can ask of people. How we design – even how we get consent – can inform how the whole process happens.
  • When we get part-way through the data life cycle, we often wish we’d have thought of the whole cycle at the beginning, during the design phase.
  • In addition to good design, coordination of data collection needs to be thought about early in the process so that duplication can be reduced. This can also reduce fatigue for people who are asked over and over for their data.
  • Informed consent is such a critical issue that it needs to be linked with the entire process of design for the whole data life cycle. How do you explain to people that you will be giving their data away, anonymizing it, separating it out, encrypting it? There are often flow-down clauses in contracts that shift responsibilities for data protection and security, and it's not always clear who is responsible for those data processes. How can you be sure that whoever holds that responsibility is doing it properly and painstakingly?
  • Anonymization is also an issue. It's hard to know to what level to anonymize things like call data records — to the individual? Township? District level? And for how long will anonymization actually hold up? (A rough k-anonymity check is sketched after this list.)
  • The lack of good design and policy contributes to overlapping efforts and poor coordination of data collection efforts across agencies. We often collect too much data in poorly designed databases.
  • Policy is not enough – we need to do a much better job of monitoring compliance with policy.
  • Institutional Review Boards (IRBs) and compliance aspects need to be updated to the new digital data reality. At the same time, sometimes IRBs are not the right instrument for what we are aiming to achieve.
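
For the anonymization question above, one rough empirical check is k-anonymity: the size of the smallest group of records that share the same quasi-identifiers. The sketch below assumes an anonymized call-data-record extract with invented column names; it illustrates the idea rather than any organization's actual pipeline.

```python
# Check k-anonymity at different aggregation levels of an anonymized extract.

import pandas as pd

df = pd.read_csv("cdr_extract.csv")  # columns: township, district, age_band, sex, ...

def smallest_group(df, quasi_identifiers):
    """Size of the smallest group sharing the same quasi-identifiers.
    The data set is k-anonymous for k equal to this value."""
    return df.groupby(quasi_identifiers).size().min()

# If k is small at township level, individuals may be re-identifiable and a
# coarser level (e.g. district) or wider age bands may be needed.
print("k at township level:", smallest_group(df, ["township", "age_band", "sex"]))
print("k at district level:", smallest_group(df, ["district", "age_band", "sex"]))
```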

Data collection needs more attention.

  • Data collection is the easy part – where institutions struggle is with analyzing and doing something with the data we collect.
  • Organizations often don’t have a well-structured or systematic process for data collection.
  • We need to be clearer about what type of information we are collecting and why.
  • We need to update our data protection policy.

Reasons for data sharing are not always clear.

  • How can we share data securely and efficiently without building duplicative systems? We should be thinking more during the design and collection phase about whether the data is going to be interoperable and who needs to access it.
  • How can we get the right balance in terms of data sharing? Some donors push for information that can put people in real danger – like details of people who have participated in particular programs, which would put them at risk with their home governments. Organizations really need to push back against this; it's an education process with donors. Middle management and intermediaries are often the ones who push for this type of data because they don't really have a handle on the risk it represents; they are the weak points because of the demands they put on people. This is a challenge for open data policies too – leaving decisions about openness to individuals invites the laziest possible job of thinking through the potential risks of that data.
  • There are legal aspects of sharing too – such as the USAID open data policy where those collecting data have to share with the government. But we don’t have a clear understanding of what the international laws are about data sharing.
  • There are so many pressures to share data but they are not all fully thought through!

Data analysis and use of data are key weak spots for organizations.

  • We are just beginning to think through capturing lots of data.
  • Data is collected but not always used. Too often it's extractive data collection. We don't have the feedback loops in place, and when there are feedback loops we often don't use the feedback to make changes.
  • We often forget to go back to the people who have provided us with data to share results back with them. It's rare that we hold a consultation with the community to really involve them in how the data can be used.

Secure storage is a challenge.

  • We have hundreds of databases across the agency in various formats, hard drives and states of security, privacy and safety. Are we able to keep these secure?
  • We need to think more carefully about where we hold our data and who has access to it. Sometimes our data is held by external consultants. How should we be addressing that?

Disposing of data properly in a global context is hard!

  • It's difficult to dispose of data when there are multiple versions of it and a wider data footprint.
  • Disposal is an issue. We're doing a lot of server upgrades, and many of these are in remote locations. How do we ensure that the right disposal process is happening globally, short of physically seeing that hard drives are smashed up?
  • We need to do a better job of disposal on personal laptops. I’ve done a lot of data collection on my personal laptop – no one has ever followed up to see if I’ve deleted it. How are we handling data handover? How do you really dispose of data?
  • Our organization hasn’t even thought about this yet!

Tips and recommendations from participants

  • Organizations should be using different tools. They should be using Pretty Good Privacy (PGP) techniques rather than relying on free or commercial tools like Google or Skype; a short encryption sketch follows this list.
  • People can be your weakest link if they are not aware or they don’t care about privacy and security. We send an email out to all staff on a weekly basis that talks about taking adequate measures. We share tips and stories. That helps to keep privacy and security front and center.
  • Even if you have a policy the hard part is enforcement, accountability, and policy reform. If our organizations are not doing direct policy around the formation of best practices in this area, then it’s on us to be sure we understand what is best practice, and to advocate for that. Let’s do what we can before the policy catches up.
  • The Responsible Data Forum and Tactical Tech have a great set of resources.
  • Oxfam has a Responsible Data Policy and Girl Effect has developed a Girls' Digital Privacy, Security and Safety Toolkit that can also offer some guidance.
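
In the spirit of the PGP tip above, here is a minimal sketch that encrypts a data export to a colleague's public key using GnuPG via the python-gnupg wrapper. It assumes GnuPG and python-gnupg are installed and the recipient's key is already in the local keyring; the file names and address are invented.

```python
# Encrypt a data export to a recipient's public key before sharing it.

import gnupg

gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

with open("household_survey.csv", "rb") as f:
    result = gpg.encrypt_file(
        f,
        recipients=["colleague@example.org"],
        output="household_survey.csv.gpg",
    )

if not result.ok:
    raise RuntimeError(f"encryption failed: {result.status}")
```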

In conclusion, participants agreed that development agencies and NGOs need to take privacy, security and safety seriously. They can no longer afford to implement security at a lower level than corporations. “Times are changing and hackers are no longer just interested in financial information. People’s data is very valuable. We need to change and take security as seriously as corporates do!” as one person said.

Read Full Post »

At our April 5th Salon in Washington, DC we had the opportunity to take a closer look at open data and privacy and discuss the intersection of the two in the framework of ‘responsible data’. Our lead discussants were Amy O’Donnell, Oxfam GB; Rob Baker, World Bank; Sean McDonald, FrontlineSMS. I had the pleasure of guest moderating.

What is Responsible Data?

We started out by defining 'responsible data' and discussing some of the challenges that arise when thinking about open data within a responsible data framework.

The Engine Room defines ‘responsible data’ as

the duty to ensure people’s rights to consent, privacy, security and ownership around the information processes of collection, analysis, storage, presentation and reuse of data, while respecting the values of transparency and openness.

Responsible Data can be like walking a tightrope, noted our first discussant; you need to find the right balance between opening data and sharing it, all the while being ethical and responsible. “Data is inherently related to power – it can create power, redistribute it, make the powerful more powerful or further marginalize the marginalized. Getting the right balance involves asking some key questions throughout the data lifecycle, from design of the data gathering all the way through to disposal of the data.”

How can organizations be more responsible?

If an organization wants to be responsible about data throughout the data life cycle, some questions to ask include:

  • In whose interest is it to collect the data? Is it extractive or empowering? Is there informed consent?
  • What and how much do you really need to know? Is the burden of collecting and the liability of storing the data worth it when balanced with the data’s ability to represent people and allow them to be counted and served? Do we know what we’ll actually be doing with the data?
  • How will the data be collected and treated? What are the new opportunities and risks of collecting and storing and using it?
  • Why are you collecting it in the first place? What will it be used for? Will it be shared or opened? Is there a data sharing MOU and has the right kind of consent been secured? Who are we opening the data for and who will be able to access and use it?
  • What is the sensitivity of the data and what needs to be stripped out in order to protect those who provided the data?

Oxfam has developed a data deposit framework to help assess the above questions and make decisions about when and whether data can be open or shared.

(The Engine Room’s Responsible Development Data handbook offers additional guidelines and things to consider)

(See: https://wiki.responsibledata.io/Data_in_the_project_lifecycle for more about the data lifecycle)

Is ‘responsible open data’ an oxymoron?

Responsible Data policies and practices don’t work against open data, our discussant noted. Responsible Data is about developing a framework so that data can be opened and used safely. It’s about respecting the time and privacy of those who have provided us with data and reducing the risk of that data being hacked. As more data is collected digitally and donors are beginning to require organizations to hand over data that has been collected with their funding, it’s critical to have practical resources and help staff to be more responsible about data.

Some disagreed that consent could be truly informed and that open data could ever be responsible since once data is open, all control over the data is lost. “If you can’t control the way the data is used, you can’t have informed people. It’s like saying ‘you gave us permission to open your data, so if something bad happens to you, oh well….” Informed consent is also difficult nowadays because data sets are being used together and in ways that were not possible when informed consent was initially obtained.

Others noted that standard informed consent practices are unhelpful, as people don’t understand what might be done with their data, especially when they have low data literacy. Involving local communities and individuals in defining what data they would like to have and use could make the process more manageable and useful for those whose data we are collecting, using and storing, they suggested.

One person said that if consent to open data was not secured initially, the data cannot be opened, say, 10 years later. Another felt that it was one thing to open data for a purpose and something entirely different to say “we’re going to open your data so people can do fun things with it, to play around with it.”

But just what data are we talking about?

USAID was questioned for requiring grantees to share data sets and for leaning towards de-identification rather than raising the standard to data anonymity. One person noted that at one point the agency had proposed a 22-step process for releasing data and even that was insufficient for protecting program participants in a risky geography because “it’s very easy to figure out who in a small community recently received 8 camels.” For this reason, exclusions are an important part of open data processes, he said.

It’s not black or white, said another. Responsible open data is possible, but openness happens along a spectrum. You have financial data on the one end, which should be very open as the public has a right to know how its tax dollars are being spent. Human subjects research is on the other end, and it should not be totally open. (Author’s note: The Open Knowledge Foundation definition of open data says: “A key point is that when opening up data, the focus is on non-personal data, that is, data which does not contain information about specific individuals.” The distinction between personal data, such as that in household level surveys, and financial data on agency or government activities seems to be blurred or blurring in current debates around open data and privacy.) “Open data will blow up in your face if it’s not done responsibly,” he noted. “But some of the open data published via IATI (the International Aid Transparency Initiative) has led to change.”

A participant followed this comment up by sharing information from a research project conducted on stakeholders’ use of IATI data in 3 countries. When people knew that the open data sets existed they were very excited, she said. “These are countries where there is no Freedom of Information Act (FOIA), and where people cannot access data because no one will give it to them. They trusted the US Government’s data more than their own government data, and there was a huge demand for IATI data. People were very interested in who was getting what funding. They wanted information for planning, coordination, line ministries and other logistical purposes. So let’s not underestimate open data. If having open data sets means that governments, health agencies or humanitarian organizations can do a better job of serving people, that may make for a different kind of analysis or decision.”

‘Open by default’ or ‘open by demand’?

Though there are plenty of good intentions and rationales for open data, said one discussant, ‘open by default’ is a mistake. We may have quick wins with a reduction in duplication of data collection, but our experiences thus far do not merit ‘open by default.’ We have not earned it. Instead, he felt that ‘open by demand’ is a better idea. “We can put out a public list of the data that’s available and see what demand for data comes in. If we are proactive on what is available and what can be made available, and we monitor requests, we can avoid putting out information that no one is interested in. This would lower the overhead on what we are releasing. It would also allow us to have a conversation about who needs this data and for what.”

One participant agreed, positing that often the only reason that we collect data is to provide proof and evidence that we’re doing our job, spending the money given to us, and tracking back. “We tend to think that the only way to provide this evidence is to collect data: do a survey, talk to people, look at website usage. But is anyone actually using this data, this evidence to make decisions?”

Is the open data honeymoon over?

“We need to do a better job of understanding the impact at a wider level,” said another participant, “and I think it’s pretty light. Talking about open data is too general. We need to be more service oriented and problem driven. The conversation is very different when you are using data to solve a particular problem and you can focus on something tangible like service delivery or efficiency. Open data is expensive and not sustainable in the current setup. We need to figure this out.”

Another person shared results from an informal study on the use of open data portals around the world. He found around 2,500 open data portals, and only 3.8% of them use https (the secure version of http). Most have very few visitors, possibly due to poor Internet access in the countries whose open data they are serving up, he said. Several exist in countries with a poor Freedom House ranking and/or in countries at the bottom end of the World Bank’s Digital Dividends report. “In other words, the portals have been built for people who can’t even use them. How responsible is this?” he asked, “And what is the purpose of putting all that data out there if people don’t have the means to access it and we continue to launch more and more portals? Where’s all this going?”
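
An informal check like this is easy to reproduce. Below is a minimal sketch that tests whether a list of portal URLs ends up being served over https; the portals.txt input file (one URL per line) is an assumption for illustration.

```python
# Check which open data portals are served over https after redirects.

import requests

def uses_https(url, timeout=10):
    """Follow redirects and report whether the final URL is served over https."""
    try:
        final_url = requests.get(url, timeout=timeout).url
        return final_url.startswith("https://")
    except requests.RequestException:
        return None  # unreachable

with open("portals.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

results = {url: uses_https(url) for url in urls}
https_count = sum(1 for ok in results.values() if ok)
print(f"{https_count}/{len(urls)} portals serve https")
```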

Are we conflating legal terms?

Legal frameworks around data ownership were debated. Some said that the data belonged to the person or agency that collected it or paid for the cost of collecting in terms of copyright and IP. Others said that the data belonged to the individual who provided it. (Author’s note: Participants may have been referring to different categories of data, e.g., financial data from government vs human subjects data.) The question was raised of whether informed consent for open data in the humanitarian space is basically a ‘contract of adhesion’ (a term for a legally binding agreement between two parties wherein one side has all the bargaining power and uses it to its advantage). Asking a person to hand over data in an emergency situation in order to enroll in a humanitarian aid program is akin to holding a gun to a person’s head in order to get them to sign a contract, said one person.

There’s a world of difference between ‘published data’ and ‘openly licensed data,’ commented our third discussant. “An open license is a complete lack of control, and you can’t be responsible with something you can’t control. There are ways to be responsible about the way you open something, but once it’s open, your responsibility has left the port.” ‘Use-based licensing’ is something else, and most IP is governed by how it’s used. For example, educational institutions get free access to data because they are educational institutions; others pay, and this subsidizes educational use of the data, he explained.

One person suggested that we could move from the idea of ‘open data’ to sub-categories related to how accessible the data would be and to whom and for what purposes. “We could think about categories like: completely open, licensed, for a fee, free, closed except for specific uses, etc.; and we could also specify for whom, whose data and for what purposes. If we use the term ‘accessible’ rather than ‘open’ perhaps we can attach some restrictions to it,” she said.

Is data an asset or a liability?

Our current framing is wrong, said one discussant. We should think of data as a toxic asset, since as soon as it’s in our books and systems it creates proactive costs and risks. Threat modeling is a good approach, he noted. Data can cause a lot of harm to an organization – it’s a liability, and if it’s not used or stored according to local laws, an agency could be sued. “We’re far under the bar. We are not compliant with ‘safe harbor’ or ECOWAS regulations. There are libel questions and property laws that our sector is ignorant of. Our good intentions mislead us in terms of how we are doing things.” There is plenty of room to build good practice here, he noted, for example through Civic Trusts. Another participant noted that insurance underwriters are already moving into this field, meaning that they see growing liability in this space.

How can we better engage communities and the grassroots?

Some participants shared examples of how they and their organizations have worked closely at the grassroots level to engage people and communities in protecting their own privacy and using open data for their own purposes. Threat modeling is an approach that helps improve data privacy and security, said one. “When we do threat modeling, we treat the data that we plan to collect as a potential asset. At each step of the collection, storage and sharing process we ask, ‘how will we protect those assets? What happens if we don’t share that data? If we don’t collect it? If we don’t delete it?’”

In one case, she worked with very vulnerable women working on human rights issues and together the group put together an action plan to protect its data from adversaries. The threats that they had predicted actually happened and the plan was put into action. Threat modeling also helps to “weed the garden once you plant it,” she said, meaning that it helps organizations and individuals keep an eye on their data, think about when to delete data, pay attention to what happens after data’s opened and dedicate some time for maintenance rather than putting all their attention on releasing and opening data.

More funding needs to be made available for data literacy for those whose data has been collected and/or opened. We also need to help people think about what data is of use to them. One person recalled hearing people involved in the creation of the Kenya Open Government Data portal say that the entire process was a waste of time because so little of the data was ever used. There are examples, however, of people using open data and verifying it at the community level. In one instance, high school students found data on all the so-called grocery stores in their community and went one by one checking on them, identifying that some were actually liquor stores selling potato chips rather than real grocery stores. Having this information and engaging with it can be powerful for local communities’ advocacy work.

Are we the failure here? What are we going to do about it?

One discussant felt that ‘data’ and ‘information’ are often and easily conflated. “Data alone is not power. Information is data that is contextualized into something that is useful.” This brings into question the value of having so many data portals, and so much risk, when so little is being done to turn data into information that is useful to the people our sector says it wants to support and empower.

He gave the example of the Weather Channel, a business built around open data sets that are packaged and broadcast, which just got purchased for $2 billion. Channels like radio that would have provided information to the poor were not purchased, only the web assets, meaning that those who benefit are not the disenfranchised. “Our organizations are actually just like the Weather Channel – we are intermediaries who are interested in taking and using open data for public good.”

As intermediaries, we can add value in the dissemination of this open data, he said. If we have the skills, the intention and the knowledge to use it responsibly, we have a huge opportunity here. “However our enlightened intent has not yet turned this data into information and knowledge that communities can use to improve their lives, so are we the failure here? And if so, what are we doing about it? We could immediately begin engaging communities and seeing what is useful to them.” (See this article for more discussion on how ‘open’ may disenfranchise the poor.)

Where to from here?

Some points raised that merit further discussion and attention include:

  • There is little demand or use of open data (such as government data and finances) and preparing and maintaining data sets is costly – ‘open by demand’ may be a more appropriate approach than ‘open by default.’
  • There is a good deal of disagreement about whether data can be opened responsibly. Some of this disagreement may stem from a lack of clarity about what kind of data we are talking about when we talk about open data.
  • Personal data and human subjects data that was never foreseen to be part of “open data” is potentially being opened, bringing with it risks for those who share it as well as for those who store it.
  • Informed consent for personal/human subject data is a tricky concept and it’s not clear whether it is even possible in the current scenario of personal data being ‘opened’ and the lack of control over how it may be used now or in the future, and the increasing ease of data re-identification.
  • We may want to look at data as a toxic asset rather than a beneficial one, because of the liabilities it brings.
  • Rather than a blanket “open” categorization, sub-categorizations that restrict data sets in different ways might be a possibility.
  • The sector needs to improve its understanding of the legal frameworks around data and data collection, storage and use or it may start to see lawsuits in the near future.
  • Work on data literacy and community involvement in defining what data is of interest and is collected, as well as threat modeling together with community groups is a way to reduce risk and improve data quality, demand and use; but it’s a high-touch activity that may not be possible for every kind of organization.
  • As data intermediaries, we need to do a much better job as a sector to see what we are doing with open data and how we are using it to provide services and contextualized information to the poor and disenfranchised. This is a huge opportunity and we have not done nearly enough here.

The Technology Salon is conducted under Chatham House Rule, so attribution has not been made in this post. If you’d like to attend future Salons, sign up here.

Read Full Post »
