

At the 2016 American Evaluation Association conference, I chaired a session on the benefits and challenges of ICTs in Equity-Focused Evaluation. The session frame came from a 2016 paper on the same topic. Panelists Kecia Bertermann from Girl Effect and Herschel Sanders from RTI added fascinating insights into the methodological challenges to consider when using ICTs for evaluation purposes, and discussant Michael Bamberger closed out with critical points based on his 50+ years doing evaluations.

ICTs include a host of technology-based tools, applications, services, and platforms that are overtaking the world. We can think of them in three key areas: technological devices, social media/internet platforms and digital data.

An equity-focused evaluation implies ensuring space for the voices of excluded groups and avoiding the traditional top-down approach. It requires:

  • Identifying vulnerable groups
  • Opening up space for them to make their voices heard through channels that are culturally responsive, accessible and safe
  • Ensuring their views are communicated to decision makers

It is believed that ICTs, especially mobile phones, can help with inclusion in the implementation of development and humanitarian programming. Mobile phones are also held up as devices that can allow evaluators to reach isolated or marginalized groups and individuals who are not usually engaged in research and evaluation. Often, however, mobiles only overcome geographic exclusion. Evaluators need to think harder when it comes to other types of exclusion – such as that related to disability, gender, age, political status or views, ethnicity, literacy, or economic status – and we need to consider how these various types of exclusion can combine to exacerbate marginalization (i.e., “intersectionality”).

We are seeing increasing use of ICTs in evaluation of programs aimed at improving equity. Yet these tools also create new challenges. The way we design evaluations and how we apply ICT tools can make all the difference between including new voices and feedback loops or reinforcing existing exclusions or even creating new gaps and exclusions.

Some of the concerns with the use of ICTs in equity-focused evaluation include:

Methodological aspects:

  • Are we falling victim to ‘elite capture’ — only hearing from more highly educated, comparatively wealthy men, for example? How does that bias our information? How can we offset that bias or triangulate with other data sources and multiple methods rather than depending only on one tool-based method?
  • Are we relying too heavily on things that we can count or multiple-choice responses because that’s what most of these new ICT tools allow?
  • Are we spending all of our time on a device rather than in communities engaging with people and seeking to understand what’s happening there in person?
  • Is reliance on mobile devices or self-reporting through mobile surveys causing us to miss contextual clues that might help us better interpret the data?
  • Are we falling into the trap of fallacy in numbers – in other words, imagining that because lots of people are saying something, that it’s true for everyone, everywhere?

Organizational aspects:

  • Do digital tools require a costly, up-front investment that some organizations are not able to make?
  • How do fear of and resistance to using digital tools impact data gathering?
  • What kinds of organizational change processes are needed amongst staff or community members to address this?
  • What new skills and capacities are needed?

Ethical aspects:

  • How are researchers and evaluators managing informed consent given the new challenges to privacy that come with digital data? (Also see: Rethinking Consent in the Digital Age.)
  • Are evaluators and non-profit organizations equipped to keep data safe?
  • Is it possible to anonymize data in the era of big data given the capacity to cross data sets and re-identify people?
  • What new risks might we be creating for community members? For local enumerators? For ourselves as evaluators? (See: Developing and Operationalizing Responsible Data Policies.)

Evaluation of Girl Effect’s online platform for girls

Kecia walked us through how Girl Effect has designed an evaluation of an online platform and applications for girls. The online platform itself brings constraints because it only works on feature phones and smartphones. For this reason, the organization decided to work with urban girls aged 14-16 in megacities who have access to these types of devices yet still experience multiple vulnerabilities, such as gender-based violence and sexual violence, early pregnancy, low levels of school completion, poor health services and lack of reliable health information, and/or low self-esteem and self-confidence.

The big questions for this program include:

  • Is the content reaching the girls that Girl Effect set out to reach?
  • Is the content on the platform contributing to change?

Because the girl users are on the platform, Girl Effect can use features such as polls and surveys for self-reported change. However, because the girls are under 18, there are privacy and security concerns that sometimes limit the extent to which the organization feels comfortable tracking user behavior. In addition, the type of phones the girls are using, and the fact that they may be borrowing others’ phones to access the site, add another layer of challenge. This means that Girl Effect must think very carefully about the kind of data that can be gleaned from the site itself, and how valid it is.

The organization is using a knowledge, attitudes and practices (KAP) framework and exploring ways that KAP can be measured through some of the exciting data capture options that come with an online platform. However, it’s hard to know whether offline behavior is actually shifting, making it important to also gather information that helps interpret the self-reported behavior data.

Girl Effect is complementing traditional KAP indicators with web analytics (unique users, repeat visitors, dwell times, bounce rates, how users arrive at the site), with push surveys that go out to users, and with polls that appear after an article (“Was this information helpful? Was it new to you? Did it change your perceptions? Are you planning to do something different based on this information?”). Proxy indicators are also being developed to help interpret the data. For example, does an increase in how frequently a particular user comments on the site have a link with greater self-esteem or self-efficacy?
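To make the analytics side more concrete, here is a minimal sketch (not Girl Effect’s actual pipeline) of how indicators like bounce rate, repeat-visitor share, average dwell time and per-user commenting counts might be computed from a simple event log. The log schema and field names are assumptions for illustration only.

```python
from collections import defaultdict

# Hypothetical event log: (user_id, session_id, event_type, seconds_on_page).
# The schema is an assumption for illustration, not Girl Effect's actual data model.
events = [
    ("u1", "s1", "pageview", 40), ("u1", "s1", "pageview", 95),
    ("u1", "s2", "comment", 0),   ("u2", "s3", "pageview", 12),
    ("u3", "s4", "pageview", 30), ("u3", "s4", "comment", 0),
]

sessions = defaultdict(list)        # session_id -> list of (event_type, seconds)
user_sessions = defaultdict(set)    # user_id -> set of session_ids
comments_per_user = defaultdict(int)

for user, session, event, seconds in events:
    sessions[session].append((event, seconds))
    user_sessions[user].add(session)
    if event == "comment":
        comments_per_user[user] += 1

# Bounce rate: share of sessions consisting of a single page view and nothing else.
bounces = sum(1 for evs in sessions.values() if len(evs) == 1 and evs[0][0] == "pageview")
bounce_rate = bounces / len(sessions)

# Repeat-visitor share: users who came back for more than one session.
repeat_share = sum(1 for s in user_sessions.values() if len(s) > 1) / len(user_sessions)

# Average dwell time per page view, in seconds.
dwell = [sec for evs in sessions.values() for ev, sec in evs if ev == "pageview"]
avg_dwell = sum(dwell) / len(dwell)

print(f"bounce rate {bounce_rate:.0%}, repeat visitors {repeat_share:.0%}, "
      f"avg dwell {avg_dwell:.0f}s, comments per user {dict(comments_per_user)}")
```

In practice, a trend in a user’s commenting frequency would only become a credible proxy for self-esteem or self-efficacy after being checked against the offline, qualitative data described below.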

However, there is only so much that can be gleaned from an online platform when it comes to behavior change, so the organization is complementing the online information with traditional, in-person, qualitative data gathering. The site is helpful there, however, for recruiting users for focus groups and in-depth interviews. Girl Effect wants to explore KAP and online platforms, yet also wants to be careful about making assumptions and using proxy indicators, so the traditional methods are incorporated into the evaluation as a way of triangulating the data. The evaluation approach is a careful balance of security considerations, attention to proxy indicators, digital data and traditional offline methods.

Using SMS surveys for evaluation: Who do they reach?

Herschel took us through a study conducted by RTI (Sanders, Lau, Lombaard, Baker, Eyerman, Thalji) in partnership with TNS about the use of SMS surveys for evaluation. She noted that the rapid growth of mobile phones, particularly in African countries, opens up new possibilities for data collection, and that there has been an explosion in the use of SMS for national, population-based surveys.

Like most ICT-enabled MERL methods, use of SMS for general population surveys brings both promise:

  • High mobile penetration in many African countries means we can theoretically reach a large segment of the population.
  • These surveys are much faster and less expensive than traditional face-to-face surveys.
  • SMS surveys work on virtually any GSM phone.
  • SMS offers the promise of reach: we can reach a large and geographically dispersed population, including some areas that are excluded from face-to-face surveys because of security concerns.

And challenges:

  • Coverage: We cannot include illiterate people or those without access to a mobile phone. Also, some sample frames may not include the entire population with mobile phones.
  • Non-response: Response rates are expected to be low for a variety of reasons, including limited network connectivity or electricity; if two or more people share a phone, we may not reach all people associated with that phone; and people may feel a lack of confidence with technology. These factors might affect certain sub-groups differently, so we might underrepresent the poor, rural areas, or women.
  • Quality of measurement: We only have 160 characters for both the question and the response options, and an interviewer is not present to clarify any questions. (A quick length-check sketch follows this list.)
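Because the full question and its answer options must fit into a single message, it helps to validate questionnaire text against the limit before fielding. Below is a minimal, illustrative check; the numbered-option rendering convention is an assumption, and the 160-character figure applies to a single GSM 7-bit message.

```python
SMS_LIMIT = 160  # characters in a single GSM 7-bit text message

def fits_in_one_sms(question: str, options: list[str]) -> bool:
    """Check that a question plus its numbered response options fits in one SMS."""
    # Illustrative rendering convention: "Question 1) Opt 2) Opt ..."
    rendered = question + " " + " ".join(f"{i}) {opt}" for i, opt in enumerate(options, 1))
    return len(rendered) <= SMS_LIMIT

# Example usage
print(fits_in_one_sms(
    "In the last 7 days, did your household have enough to eat?",
    ["Yes", "No", "Don't know"]))  # True: the rendered text is well under 160 characters
```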

RTI’s research aimed to answer the question: How representative are general population SMS surveys and are there ways to improve representativeness?

Three core questions were explored via SMS invitations sent in Kenya, Ghana, Nigeria and Uganda:

  • Does the sample frame match the target population?
  • Does non-response have an impact on representativeness?
  • Can we improve quality of data by optimizing SMS designs?

One striking finding was the extent to which response rates may vary by country, Herschel said. In some cases this was affected by the agreements in place in each country; some required a stronger opt-in process. In Kenya and Uganda, where a higher percentage of users had already gone through an opt-in process and had already participated in SMS-based surveys, there was a higher rate of response.

[Screenshot: SMS survey response rates by country]

Response rates, especially in Ghana and Nigeria, were noticeably low, and the impact of these low response rates is evident in the data. In Nigeria, where researchers compared the SMS survey results against the face-to-face data, there was a clear skew away from older females and towards those with a higher level of education and those employed full-time.

Additionally, 14% of the face-to-face sample, filtered to mobile phone users, had a post-secondary education, whereas in the SMS data this figure was 60%.
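One rough way to quantify this kind of skew is to compare the share of a characteristic in the SMS sample against the face-to-face benchmark. The short sketch below uses the post-secondary education figures just cited; any other characteristic could be compared the same way.

```python
def overrepresentation(sms_share: float, benchmark_share: float) -> float:
    """Ratio of a group's share in the SMS sample to its share in the benchmark.

    Values above 1 mean the group is overrepresented among SMS respondents.
    """
    return sms_share / benchmark_share

# Post-secondary education: 60% of SMS respondents vs. 14% of mobile-owning
# face-to-face respondents (the figures reported above).
ratio = overrepresentation(0.60, 0.14)
print(f"Post-secondary educated respondents are about {ratio:.1f}x overrepresented")
```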

Compared to the face-to-face data, SMS respondents were also:

  • More likely to have more than 1 SIM card
  • Less likely to share a SIM card
  • More likely to be aware of and use the Internet.

This sketches a portrait of a more technologically savvy respondent in the SMS surveys, said Herschel.

[Screenshot: comparison of SMS and face-to-face respondent profiles]

The team also explored incentives and found that a higher incentive had no meaningful impact, but adding reminders to the design of the SMS survey process helped achieve a wider slice of the sample and a more diverse profile.

Response order effects were also explored, along with issues arising when questionnaire designers try to pack as much as possible onto the screen rather than asking yes/no questions. Herschel highlighted that when multiple-choice options were given, 76% of SMS survey respondents gave only one response, compared to 12% in the face-to-face data.

Lastly, the research found no meaningful difference in response rate between a survey with 8 questions and one with 16 questions, she said. This may go against the common convention that “the shorter, the better” for an SMS survey. There was no observable break-off based on survey length, giving confidence that longer surveys than initially thought may be possible via SMS.

Herschel noted that some conclusions can be drawn:

  • SMS excels for rapid response (e.g., Ebola)
  • SMS surveys have substantial non-response errors
  • SMS surveys overrepresent more educated, technologically savvy respondents

These errors mean SMS cannot replace face-to-face surveys … yet. However, we can optimize SMS survey design now by:

  • Using reminders during data collection
  • Randomizing substantive response options to avoid response order effects (see the sketch after this list)
  • Avoiding “select all that apply” questions; it’s OK to have longer surveys
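A minimal sketch of the randomization recommendation (the option wording here is purely illustrative): shuffle the substantive answer choices for each respondent while keeping non-substantive options such as “Don’t know” fixed at the end, so that primacy effects are spread across options rather than always favoring the first one listed.

```python
import random

def randomized_options(substantive: list[str], fixed_last: list[str],
                       rng: random.Random) -> list[str]:
    """Shuffle substantive options per respondent; keep options like 'Don't know' last."""
    shuffled = substantive[:]          # copy so the master questionnaire isn't mutated
    rng.shuffle(shuffled)
    return shuffled + fixed_last

# Example: seed the generator per respondent (e.g. from the respondent ID) so the
# order is reproducible for that respondent but varies across the sample.
rng = random.Random(42)
print(randomized_options(["Radio", "TV", "Newspaper", "Internet"], ["Don't know"], rng))
```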

However, she also noted that the landscape is rapidly changing and so future research may shed light on changing reactions as familiarity with SMS and greater access grow.

Summarizing the opportunities and challenges with ICTs in Equity-Focused Evaluation

Finally we heard some considerations from Michael, who said that people often get so excited about possibilities for ICT in monitoring, evaluation, research and learning that they neglect to address the challenges. He applauded Girl Effect and RTI for their careful thinking about the strengths and weaknesses in the methods they are using. “It’s very unusual to see the type of rigor shown in these two examples,” he said.

Michael commented that a clear message from both presenters and from other literature and experiences is the need for mixed methods. Some things can be done on a phone, but not all things. “When the data collection is remote, you can’t observe the context. For example, if it’s a teenage girl answering the voice or SMS survey, is the mother-in-law sitting there listening or watching? What are the contextual clues you are missing out on? In a face-to-face context an evaluator can see if someone is telling the girl how to respond.”

Additionally, “no survey framework will cover everyone,” he said. “There may be children who are not registered on the school attendance list that is being used to identify survey respondents. What about immigrants who are hiding from sight out of fear and not registered by the government?” He cautioned evaluators not to forget about folks in the community who are totally missed and skipped over, and how the use of new technology could make that problem even greater.

Another point Michael raised is that communicating through technology channels creates a different behavior dynamic. One is not better than the other, but evaluators need to be aware that they are different. “Everyone with teenagers knows that the kind of things we communicate online are very different than what we communicate in a face-to-face situation,” he said. “There is a style of how we communicate. You might be more frank and honest on an online platform. Or you may see other differences in just your own behavior dynamics on how you communicate via different kinds of tools,” he said.

He noted that a range of issues has been raised in connection with ICTs in evaluation, but that it’s been rare to see priority given to evaluation rigor. The study Herschel presented was one example of a focus on rigor and issues of bias, but people often get so excited that they forget to think about this. “Who has access? Are people sharing phones? What are the gender dynamics? Is a husband restricting what a woman is doing on the phone? There’s a range of selection bias issues that are ignored,” he said.

Quantitative bias and mono-methods are another issue in ICT-focused evaluation. The tool choice will determine what an evaluator can ask, and that in turn affects the quality of responses. This leads to issues with construct validity. If you are trying to measure complex ideas like girls’ empowerment and you reduce this to a proxy, there can often be a large jump in interpretation. This doesn’t happen only when using mobile phones for evaluation data collection purposes, but certain problems may be exacerbated when the phone is the tool. So evaluators need to better understand behavior dynamics and how they relate to the technical constraints of a particular digital or mobile platform.

The aspect of information dissemination is another one worth raising, said Michael. “What are the dynamics? When we incorporate new tools, we tend to assume there is just one step between the information sharer and receiver, yet there is plenty of literature that shows this is normally at least two steps. Often people don’t get information directly; rather, they share and talk with someone else who helps them verify and interpret the information they get on a mobile phone. There are gatekeepers who control or interpret, and evaluators need to better understand those dynamics. Social network analysis can help with that sometimes – looking at who communicates with whom, who is part of the main influencer hub, and who is marginalized. This could be exciting to explore more.”

Lastly, Michael reiterated the importance of mixed methods and needing to combine online information and communications with face-to-face methods and to be very aware of invisible groups. “Before you do an SMS survey, you may need to go out to the community to explain that this survey will be coming,” he said. “This might be necessary to encourage people to even receive the survey, to pay attention or to answer it.” The case studies in the paper “The Role of New ICTs in Equity-Focused Evaluation: Opportunities and Challenges” explore some of these aspects in good detail.


Coming from the viewpoint that accountability and transparency, citizen engagement and public debate are critical for good development, I posted yesterday on 5 ways that ICTs can support the MDGs. I got to thinking I would be remiss not to also post something on ways that poor or questionable use of ICTs (information and communication technologies) and social media can hinder development.

It’s not really the fault of the technology. ICTs are tools, and the real issues lie behind the tools — they lie with the people who create, market and use the tools. People cannot be separated from cultures and societies and power and money and politics. And those are the things that tend to hinder development, not really the ICTs themselves. However, the combination of human tendencies and the possibilities ICTs and social media offer can sometimes lead us down a shaky path to development, or actually cause harm to the people we are working with.

When do I start getting nervous about ICTs and social media for social good?

1) When the hype wins out over the real benefits of the technology. Sometimes the very idea of a cool and innovative technology wins out over an actual and realistic analysis of its impact and success. Here I pose the cases of the so-called Iran Twitter Revolution and One Laptop per Child (and I’ll throw in Play Pumps for good measure, though it’s not an ICT project, it’s an acknowledged hype and failure case). There are certainly other excellent examples. So many examples in fact that there are events called Fail Faires being organized to discuss these failures openly and learn how to avoid them in the future.

2) When it’s about the technology, not the information and communications needs. When you hear someone say “We want to do an mHealth project” or “We need to have a Facebook page” or “We have a donor who wants to give us a bunch of mobile phones – do you know of something we can do with them?” you can be pretty sure that you have things backwards and are going to run into trouble down the road, wasting resources and energy on programs that are resting on weak foundations. Again, we can cite the One Laptop per Child (OLPC) initiative, where you have a tool, but the context for using it isn’t there (connectivity, power, teacher training and content). There is debate whether OLPC was a total failure or whether it paved the way for netbooks, cheap computers and other technologies that we use today. I’ll still say that the grand plan of OLPC (a low-cost laptop for every child leading to development advances) had issues from the start because it was technology-led.

3) When technology is designed from afar and parachuted in. If you don’t regularly involve people who will use your new technology, in the context where you’re planning for it to be used, you’re probably going to find yourself in a bind. True for ICTs and for a lot of other types of innovations out there. There’s a great conversation on Humanitarian Design vs. Design Imperialism that says it all. ICTs are no different. Designing information and communication systems in one place and imposing them on people in another place can hinder their uptake and/or waste time and money that could be spent in other ways that would better contribute to achieving development goals.

4) When the technology is part of a larger hidden agenda. I came across two very thought-provoking posts this week: one on the US Government’s Internet Freedom agenda and another from youth activists in the Middle East and North Africa region who criticize foundations and other donors for censoring their work when it doesn’t comply with US foreign policy messages. Clearly there are hidden political agendas at work which can derail the use of ICTs for human rights work and build mistrust instead of democracy. Another example of a potential hidden agenda is the donation of proprietary hardware and software by large technology companies to NGOs and NGO consortia in order to lock in business (for example, mHealth or eHealth) and prevent free and open source tools from being used; these donated systems can end up being costly to maintain, upgrade and license in the long term.

5) When tech innovations put people and lives at risk. I’d encourage you to read this story about Haystack, software hyped as a way to circumvent government censorship of social media tools that activists use to organize. After the US government fast-tracked it for use in Iran, huge security holes were found that could put activists in great danger. In our desire to see things as cool and cutting edge, and perhaps to be seen as cool and cutting edge ourselves, those of us suggesting and promoting ICTs for reporting human rights abuses or in other sensitive areas of work can cause more harm than we might imagine. It’s dangerous to push new technologies that haven’t been properly piloted and evaluated. It’s very easy to get caught up in coolness and forget the nuts and bolts and the time it takes to develop and test something new.

6) When technologists and humanitarians work in silos. A clear example of this might be the Crisis Camps that sprang up immediately after the Haiti earthquake in 2010. The outpouring of good will was phenomenal, and there were some positive results. The tech community got together to see how to help, which is a good thing. However, the communication between the tech community and those working on the ground was not always conducive to developing tech solutions that were actually helpful. Here is an interesting overview by Ethan Zuckerman of some of the challenges the Crisis Commons faced. I remember attending a Crisis Camp and feeling confused about why one guy was building an iPhone application for local communities to gather data. Cool application for sure, but from what people I knew on the ground were saying, most people in local communities in Haiti don’t have iPhones. With better coordination among the sectors, people could put their talents and expertise to real use rather than to busy work that makes them feel good.

7) When short attention spans give rise to vigilante development interventions. Because most of us in the West no longer have a full attention span (self included here), we want bite-sized bits of information. But the reality of development is complicated, complex and deep. Social media has been heralded as a way to engage donors, supporters and youth; as a way to get people to help and to care. However, the story being told has not gotten any deeper or more realistic in most cases than the 30-second television commercials or Live Aid concerts that shaped perceptions of the developing world 25 years ago. The myth of the simple story and simple solution propagates perhaps even further because of how quickly the message spreads. This gives rise to the public perception that aid organizations are just giant bureaucracies (kind of true) and that a simple person with a simple idea could just go in and fix things without so much hullabaloo (not the case most of the time). The quick-fix culture, supported and enhanced by social media, can be detrimental to the public’s patience with development, giving rise to apathy or what I might call vigilante development interventions — whereby people in the West (cough, cough, Sean Penn) parachute into a developing country or disaster scene to take development into their own hands because they can’t understand why it’s not happening as fast as the media tells them it should.

8) When DIY disregards proven practice. In line with the above, there are serious concerns in the aid and development community about the ‘amateurization’ of humanitarian and development work. The Internet allows people to link and communicate globally very easily. Anyone can throw up a website or a Facebook page and start a non-profit that way, regardless of their understanding of the local dynamics or good development practices built through years of experience in this line of work. Many see criticism from development workers as a form of elitism rather than a call for caution when messing around in other people’s lives or trying to do work that you may not be prepared for or have enough understanding about. The greater awareness and desire to use ‘social media for social good’ may be a positive thing, but it may also lead to good intentions gone awry and again, a waste of time and resources for people in communities, or even harm. There’s probably no better example of this phenomenon than #1millionshirts, originally promoted by Mashable, and really a terrible idea. See Good Intents for discussion around this phenomenon and tools to help donors educate themselves.

9) When the goal is not development but brand building through social media. Cause campaigns have been all the rage for the past several years. They are seen as a way for for-profit companies and non-profits to join together for the greater good. Social media and new ICTs have helped this along by making cause campaigns cheap and easy to do. However many ‘social media for social good’ efforts are simply bad development and can end up actually doing ‘social harm’. Perhaps a main reason for some of the bad ideas is that most social media cause campaigns are not actually designed to do social good. As Mashable says, through this type of campaign, ‘small businesses can gain exposure without breaking the bank, and large companies can reach millions of consumers in a matter of hours.’ When ‘social good’ goals are secondary to the ‘exposure for my brand’ goals, I really question the benefits and contribution to development.

10) When new media increases voyeurism, sensationalism or risk. In their rush to be the most innovative or hard-hitting in the competition for scarce donor dollars, organizations sometimes expose communities to child protection risks or come up with cutesy or edgy social media ideas that invade and interrupt people’s lives; for example, ideas like putting a live web camera in a community so that donors can log on 24/7 and see what’s happening in a ‘real live community.’ (This reminds me a bit of the Procrastination Pit’s 8 Cutest and Weirdest Live Animal Cams). Or when opportunities for donors to chat with people in communities become gimmicks and interrupt people in communities from their daily lives and work. Even professional journalists sometimes engage in questionable new media practices that can endanger their sources or trivialize their stories. With the Internet, stories stick around a lot longer and travel a lot farther and reach their fingers back to where they started a lot more easily than they used to. Here I will suggest two cases: Nick Kristof’s naming and fully identifying a 9-year-old victim of rape in the DRC and @MacClelland’s ‘live tweeting’ for Mother Jones of a rape survivor’s visit to the doctor in Haiti.

Update: Feb 22, 2011 – adding a 10a!

10a) When new media and new technologies put human rights activists at risk of identification and persecution. New privacy and anonymity issues are coming up due to the increasing ubiquity of video for human rights documenting. This was clearly seen in the February 2011 uprisings in Egypt, Tunisia, Libya, Bahrain and elsewhere. From Sam Gregory’s excellent piece on privacy and anonymity in the digital age: “In the case of video (or photos), a largely unaddressed question arises. What about the rights to anonymity and privacy for those people who appear, intentionally or not, in visual recordings originating in sites of individual or mass human rights violations? Consider the persecution later faced by bystanders and people who stepped in to film or assist Neda Agha-Soltan as she lay dying during the election protests in Iran in 2009. People in video can be identified by old-fashioned investigative techniques, by crowd-sourcing (as with the Iran example noted above…) or by face detection/recognition software. The latter is now even built into consumer products like Facebook Photos, thus exposing activists using Facebook to a layer of risk largely beyond their control.”

11) When ICTs and new media turn activism to slacktivism. Quoting from Evgeny Morozov: “‘Slacktivism’ is the ideal type of activism for a lazy generation: why bother with sit-ins and the risk of arrest, police brutality, or torture if one can be as loud campaigning in the virtual space? Given the media’s fixation on all things digital — from blogging to social networking to Twitter — every click of your mouse is almost guaranteed to receive immediate media attention, as long as it’s geared towards the noble causes. That media attention doesn’t always translate into campaign effectiveness is only of secondary importance.” Nuff said.

I’ll leave you with this kick-ass Le Tigre video, “Get Off the Internet”… knowing full well that I’m probably the first one who needs to take that advice.

‘It feels so 80s… or early 90s…  to be political… where are my friends? GET OFF THE INTERNET, I’ll meet you in the streets….’


Related posts on Wait… What?

3 ways to integrate ICTs into development work

5 ways ICTs can support the MDGs

7 (or more) questions to ask before adding ICTs

Amateurs, professionals, innovations and smart aid

I and C, then T

MDGs through a child rights lens

