
Over the past four years I’ve had the opportunity to look more closely at the role of ICTs in Monitoring and Evaluation practice (and the privilege of working with Michael Bamberger and Nancy MacPherson in this area). When we started out, we wanted to better understand how evaluators were using ICTs in general, how organizations were using ICTs internally for monitoring, and what was happening overall in the space. A few years into that work we published the Emerging Opportunities paper, which aimed to serve as a landscape document or base report upon which to build additional explorations.

As a result of this work, in late April I had the pleasure of talking with the OECD-DAC Evaluation Network about the use of ICTs in Evaluation. I drew from a new paper on The Role of New ICTs in Equity-Focused Evaluation: Opportunities and Challenges that Michael, Veronica Olazabal and I developed for the Evaluation Journal. The core points of the talk are below.

*****

In the past two decades there have been three main explosions that impact M&E: a device explosion (mobiles, tablets, laptops, sensors, dashboards, satellite maps, Internet of Things, etc.); a social media explosion (digital photos, online ratings, blogs, Twitter, Facebook, discussion forums, WhatsApp groups, co-creation and collaboration platforms, and more); and a data explosion (big data, real-time data, data science and analytics moving into the field of development, capacity to process huge data sets, etc.). This new ecosystem is something that M&E practitioners should be tapping into and understanding.

In addition to these ‘explosions,’ there’s been a growing emphasis on documenting the use of ICTs in evaluation, alongside a greater thirst for understanding how, when, where and why to use ICTs for M&E. We’ve held and attended large gatherings on ICTs and Monitoring, Evaluation, Research and Learning (MERL Tech). And in the past year or two, it seems the development and humanitarian fields can’t stop talking about the potential of “data” – small data, big data, inclusive data, real-time data for the SDGs, etc. – and the possible roles for ICT in collecting, analyzing, visualizing, and sharing that data.

The field has advanced in many ways. But as the tools and approaches develop and shift, so do our understandings of the challenges. Concerns about more data and “open data,” and the privacy risks inherent in both, have caught up with the enthusiasm about the possibilities of new technologies in this space. Likewise, there is more in-depth discussion about methodological challenges, bias and unintended consequences when new ICT tools are used in evaluation.

Why should evaluators care about ICT?

There are two core reasons that evaluators should care about ICTs. Reason number one is practical. ICTs help address real-world challenges in M&E: insufficient time, insufficient resources and poor-quality data. And let’s be honest – ICTs are not going away, and evaluators need to accept that reality at a practical level as well.

Reason number two is both professional and personal. If evaluators want to stay abreast of their field, they need to be aware of ICTs. If they want to improve evaluation practice and influence better development, they need to know if, where, how and why ICTs may (or may not) be of use. Evaluation commissioners need the skills and capacities to know which new ICT-enabled approaches are appropriate for the type of evaluation they are soliciting, and whether the methods being proposed will lead to quality evaluations and useful learning. One trick to using ICTs in M&E is understanding who already has access to which tools, devices and platforms, and what kind of information or data is needed to answer which questions or to communicate which kinds of information. There is quite a science to this, and one size does not fit all. Evaluators, because of their critical thinking skills and social science backgrounds, are very well placed to take a more critical view of the role of ICTs in evaluation and in the worlds of aid and development overall, and to help temper expectations with reality.

Though ICTs are being used along all phases of the program cycle (research/diagnosis and consultation, design and planning, implementation and monitoring, evaluation, reporting/sharing/learning) there is plenty of hype in this space.


There is certainly a place for ICTs in M&E, if introduced with caution and clear analysis about where, when and why they are appropriate and useful, and evaluators are well placed to take a lead in identifying and trialing what ICTs can offer to evaluation. If they don’t, others are going to do it for them!

Promising areas

There are four key areas (I’ll save the nuance for another time…) where I see a lot of promise for ICTs in Evaluation:

1. Data collection. Here I’d distinguish three kinds of data collection, plus the cross-cutting category of ‘real-time’ data, which the latter two normally also provide:

  • Structured data gathering – where enumerators or evaluators go out with mobile devices to collect specific types of data (whether quantitative or qualitative).
  • Decentralized data gathering – where the focus is on self-reporting or ‘feedback’ from program participants or research subjects.
  • Data ‘harvesting’ – where data is gathered from existing online sources like social media sites, WhatsApp groups, etc.
  • Real-time data – which aims to provide data in a much shorter time frame, normally as monitoring, but these data sets may be useful for evaluators as well.
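A recurring worry with all of these collection modes is data quality, especially when enumerators or self-reporting participants submit records remotely. As a minimal sketch (the field names and schema here are purely illustrative, not from any particular platform), a basic quality check over mobile-collected records might look like this:

```python
from datetime import datetime

# Hypothetical field-survey record schema; names are illustrative only.
REQUIRED_FIELDS = {"respondent_id", "district", "timestamp", "response"}

def validate_record(record: dict) -> list:
    """Return a list of quality problems found in one mobile-collected record."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    ts = record.get("timestamp")
    if ts:
        try:
            datetime.fromisoformat(ts)
        except ValueError:
            problems.append(f"bad timestamp: {ts!r}")
    return problems

records = [
    {"respondent_id": "r1", "district": "North",
     "timestamp": "2016-05-01T10:30:00", "response": "yes"},
    {"respondent_id": "r2", "district": "North", "response": "no"},  # no timestamp
]
for r in records:
    issues = validate_record(r)
    if issues:
        print(r["respondent_id"], issues)
```

Checks like these are cheap to run as data arrives, which is part of what makes digital collection attractive – but they catch only mechanical errors, not the bias and gaming issues discussed later in this post.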

2. New and mixed methods. These are areas that Michael Bamberger has been looking at quite closely. New ICT tools and data sources can contribute to more traditional methods. But triangulation still matters.

  • Improving construct validity – enabling a greater number of data sources at various levels that can contribute to better understanding of multi-dimensional indicators (for example, looking at changes in the volume of withdrawals from ATMs, records of electronic purchases of agricultural inputs, satellite images showing lorries traveling to and from markets, and the frequency of Tweets that contain the words hunger or sickness).
  • Evaluating complex development programs – tracking complex and non-linear causal paths and implementation processes by combining multiple data sources and types (for example, participant feedback plus structured qualitative and quantitative data, big data sets/records, census data, social media trends and input from remote sensors).
  • Mixed methods approaches and triangulation – using traditional and new data sources (for example, using real-time data visualization to provide clues on where additional focus group discussions might need to be done to better understand the situation or improve data interpretation).
  • Capturing wide-scale behavior change – using social media data harvesting and sentiment analysis to better understand wide-spread, wide-scale changes in perceptions, attitudes, stated behaviors and analyzing changes in these.
  • Combining big data and real-time data – these emerging approaches may become valuable for identifying potential problems and emergencies that need further exploration using traditional M&E approaches.
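To make the social media harvesting idea concrete: a crude first pass at the “Tweets containing the words hunger or sickness” signal mentioned above can be as simple as counting keyword mentions per day. This is a toy sketch with fabricated posts (real work would pull from a platform API or export, and use proper sentiment analysis rather than keyword matching):

```python
from collections import Counter
from datetime import date

# Toy harvested posts; in practice these would come from a platform API or export.
posts = [
    {"day": date(2016, 5, 1), "text": "Prices up again, real hunger in the village"},
    {"day": date(2016, 5, 1), "text": "Market day went well"},
    {"day": date(2016, 5, 2), "text": "More sickness reported near the river"},
    {"day": date(2016, 5, 2), "text": "Clinic queues longer, more sickness and hunger"},
]

KEYWORDS = {"hunger", "sickness"}

def keyword_mentions_per_day(posts):
    """Count posts per day mentioning any watch-listed keyword."""
    counts = Counter()
    for p in posts:
        words = {w.strip(",.").lower() for w in p["text"].split()}
        if words & KEYWORDS:
            counts[p["day"]] += 1
    return counts

print(keyword_mentions_per_day(posts))
```

A rising count is not evidence of anything by itself – it is exactly the kind of clue that, as noted above, tells you where additional focus group discussions or structured data collection might be needed.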

3. Data Analysis and Visualization. This is an area that is less advanced than the data collection area – often it seems we’re collecting more and more data but still not really using it! Some interesting things here include:

  • Big data and data science approaches – there’s a growing body of work exploring how to use predictive analytics to help define which programs might work best in which contexts and with which kinds of people. (How this connects to evaluation is still being worked out, and there are lots of ethical aspects to think about here too – most of us don’t like the idea of predictive policing, and in some ways you could end up in a situation that is not quite what was aimed at.) With big data, you’ll often go looking for patterns in huge data sets without a predetermined hypothesis, whereas with evaluation you normally have particular questions and you design a methodology to answer them – it’s interesting to think about how these two approaches will combine.
  • Data Dashboards – these are becoming very popular as people try to work out how to do a better job of using the data that is coming into their organizations for decision making. There are some efforts at pulling data from community level all the way up to UN representatives, for example, the global level consultations that were done for the SDGs or using “near real-time data” to share with board members. Other efforts are more focused on providing frontline managers with tools to better tweak their programs during implementation.
  • Meta-evaluation – some organizations are working on ways to better draw conclusions from what we are learning from evaluation around the world and to better visualize these conclusions to inform investments and decision-making.
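At its core, the dashboard work described above is aggregation: rolling a raw monitoring stream up into the handful of per-region numbers a frontline manager can act on. A minimal sketch, with a fabricated indicator stream (real dashboards add time windows, thresholds and visualization on top of this):

```python
from statistics import mean

# Illustrative monitoring stream: (region, indicator value), e.g. daily clinic visits.
stream = [
    ("North", 42), ("South", 17), ("North", 51), ("East", 9), ("South", 22),
]

def dashboard_summary(stream):
    """Roll raw monitoring records up into a per-region summary table."""
    by_region = {}
    for region, value in stream:
        by_region.setdefault(region, []).append(value)
    return {r: {"n": len(v), "mean": mean(v), "max": max(v)}
            for r, v in by_region.items()}

for region, stats in dashboard_summary(stream).items():
    print(f"{region}: n={stats['n']} mean={stats['mean']:.1f} max={stats['max']}")
```

The hard part is rarely the aggregation itself – it is deciding which indicators deserve a place on the screen and whether anyone has the authority to act on them, which is the institutional problem discussed under Challenges below.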

4. Equity-focused Evaluation. As digital devices and tools become more widespread, there is hope that they can enable greater inclusion and broader voice and participation in the development process. There are still huge gaps, however – in some parts of the world, women are 23% less likely than men to have access to a mobile phone, and when you talk about Internet access the gap is much bigger. But there are cases where greater participation in evaluation processes is being sought through mobile. When this is balanced with other methods to ensure that we’re not excluding the very poorest or those without access to a mobile phone, it can help to broaden the pool of voices we are hearing from. Some examples are:

  • Equity-focused evaluation / participatory evaluation methods – some evaluators are seeking to incorporate more real-time (or near real-time) feedback loops where participants provide direct feedback via SMS or voice recordings.
  • Using mobile to directly access participants through mobile-based surveys.
  • Enhancing data visualization for returning results back to the community and supporting community participation in data interpretation and decision-making.

Challenges

Alongside all the potential, of course there are also challenges. I’d divide these into three main areas:

1. Operational/institutional

Some of the biggest challenges to improving the use of ICTs in evaluation are institutional or related to institutional change processes. In focus groups I’ve done with different evaluators in different regions, this was emphasized as a huge issue. Specifically:

  • Potentially heavy up-front investment costs, training efforts, and/or maintenance costs if adopting/designing a new system at wide scale.
  • Tech- or tool-driven M&E processes – often these are also donor-driven. This happens because tech is perceived as cheaper, easier, scalable, and objective. It also happens because people and management are under a lot of pressure to “be innovative.” Sometimes this ends up leading to an over-reliance on digital data and remote data collection, and to time spent developing tools and looking at data sets on a laptop rather than ‘on the ground’ observing and engaging with local organizations and populations.
  • Little attention to institutional change processes, organizational readiness, and the capacity needed to incorporate new ICT tools, platforms, systems and processes.
  • Bureaucracy levels may mean that decisions happen far from the ground, and there is little capacity to make quick decisions even if real-time data is available. Data and analysis may be provided frequently to decision-makers sitting at headquarters, or to local staff who lack decision-making power and must wait on orders from on high to adapt or change their program approaches and methods.
  • Swinging too far towards digital due to a lack of awareness that digital most often needs to be combined with human. Digital technology always works better when combined with human interventions (such as visits to prepare folks for using the technology, and making sure that gatekeepers – e.g., a husband or mother-in-law, in the case of women – are on board). A main message from the World Bank’s 2016 World Development Report “Digital Dividends” is that digital technology must always be combined with what the Bank calls “analog” (a.k.a. “human”) approaches.

2. Methodological

Some of the areas that Michael and I have been looking at relate to how the introduction of ICTs could address issues of bias, rigor, and validity — yet how, at the same time, ICT-heavy methods may actually just change the nature of those issues or create new issues, as noted below:

  • Selection and sample bias – you may be reaching more people, but you’re still going to be leaving some people out. Who is left out of mobile phone or ICT access/use? Typical respondents are male, educated, urban. How representative are these respondents of all ICT users and of the total target population?
  • Data quality and rigor – you may have an over-reliance on self-reporting via mobile surveys; lack of quality control ‘on the ground’ because it’s all being done remotely; enumerators may game the system if there is no personal supervision; there may be errors and bias in algorithms and logic in big data sets or analysis because of non-representative data or hidden assumptions.
  • Validity challenges – if there is a push to use a specific ICT-enabled evaluation method or tool without it being the right one, the design of the evaluation may not pass the validity challenge.
  • Fallacy of large numbers (in cases of national level self-reporting/surveying) — you may think that because a lot of people said something that it’s more valid, but you might just be reinforcing the viewpoints of a particular group. This has been shown clearly in research by the World Bank on public participation processes that use ICTs.
  • ICTs often favor extractive processes that do not involve local people and local organizations or provide benefit to participants/local agencies — data is gathered and sent ‘up the chain’ rather than shared or analyzed in a participatory way with local people or organizations. Not only is this disempowering, it may impact on data quality if people don’t see any point in providing it as it is not seen to be of any benefit.
  • There’s often a failure to identify unintended consequences or biases arising from use of ICTs in evaluation — What happens when you introduce tablets for data collection? What happens when you collect GPS information on your beneficiaries? What risks might you be introducing or how might people react to you when you are carrying around some kind of device?

3. Ethical and Legal

This is an area that I’m very interested in – especially as some donors have started asking for the raw data sets from any research, studies or evaluations that they are funding, and when these kinds of data sets are ‘opened’ there are all sorts of ramifications. There is quite a lot of heated discussion happening here. I was happy to see that DFID has just conducted a review of ethics in evaluation. Some of the core issues include:

  • Changing nature of privacy risks – issues here include privacy and protection of data; changing informed consent needs for digital data/open data; new risks of data leaks; and lack of institutional policies with regard to digital data.
  • Data rights and ownership: issues here include proprietary data sets; data ownership in public-private partnerships; the idea of ‘data philanthropy’ when it’s not clear whose data is being donated; personal data ‘for the public good’; open data, open evaluation and transparency; poor care taken when vulnerable people provide personally identifiable information; household data sets ending up in the hands of those who might abuse them; and the increasing impossibility of data anonymization, given that crossing data sets often makes re-identification easier than imagined.
  • Moving decisions and interpretation of data away from ‘the ground’ and upwards to the head office/the donor.
  • Little funding for trialing/testing the validity of new approaches that use ICTs and documenting what is working/not working/where/why/how to develop good practice for new ICTs in evaluation approaches.
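The re-identification point above is easy to underestimate, so here is a minimal sketch of how it happens. All names and records below are fabricated; the mechanism – linking an “anonymized” data set to a named one via shared quasi-identifiers like district, birth year and sex – is the standard one described in the privacy literature:

```python
# Two "anonymized" releases: direct identifiers removed, but both share
# quasi-identifiers (district, birth year, sex). All data is fabricated.
health_survey = [
    {"district": "Kibera", "birth_year": 1984, "sex": "F", "hiv_status": "positive"},
    {"district": "Kibera", "birth_year": 1990, "sex": "M", "hiv_status": "negative"},
]
voter_roll = [
    {"name": "Jane D.", "district": "Kibera", "birth_year": 1984, "sex": "F"},
    {"name": "John M.", "district": "Kibera", "birth_year": 1990, "sex": "M"},
]

QUASI_IDS = ("district", "birth_year", "sex")

def reidentify(anon_rows, named_rows):
    """Link records whenever the quasi-identifier combination is unique."""
    key = lambda row: tuple(row[q] for q in QUASI_IDS)
    named_by_key = {}
    for row in named_rows:
        named_by_key.setdefault(key(row), []).append(row)
    matches = []
    for row in anon_rows:
        candidates = named_by_key.get(key(row), [])
        if len(candidates) == 1:  # unique match -> re-identified
            matches.append((candidates[0]["name"], row["hiv_status"]))
    return matches

print(reidentify(health_survey, voter_roll))
```

In this toy example both ‘anonymous’ health records come back with names attached – which is why dropping the name column is not anonymization, and why ‘opened’ evaluation data sets deserve the caution argued for here.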

Recommendations: 12 tips for better use of ICTs in M&E

Despite the rapid changes in the field in the two years since we first wrote our initial paper on ICTs in M&E, most of our tips for doing it better still hold true.

  1. Start with a high-quality M&E plan (not with the tech).
    • But also learn about the new tech-related possibilities that are out there so that you’re not missing out on something useful!
  2. Ensure design validity.
  3. Determine whether and how new ICTs can add value to your M&E plan.
    • It can be useful to bring in a trusted tech expert in this early phase so that you can find out if what you’re thinking is possible and affordable – but don’t let them talk you into something that’s not right for the evaluation purpose and design.
  4. Select or assemble the right combination of ICT and M&E tools.
    • You may find one off the shelf, or you may need to adapt or build one. This is a really tough decision, which can take a very long time if you’re not careful!
  5. Adapt and test the process with different audiences and stakeholders.
  6. Be aware of different levels of access and inclusion.
  7. Understand motivation to participate, incentivize in careful ways.
    • This includes motivation for both program participants and for organizations where a new tech-enabled tool/process might be resisted.
  8. Review/ensure privacy and protection measures, risk analysis.
  9. Try to identify unintended consequences of using ICTs in the evaluation.
  10. Build in ways for the ICT-enabled evaluation process to strengthen local capacity.
  11. Measure what matters – not what a cool ICT tool allows you to measure.
  12. Use and share the evaluation learnings effectively, including through social media.



I used to write blog posts two or three times a week, but things have been a little quiet here for the past couple of years. That’s partly because I’ve been ‘doing actual work’ (as we like to say) trying to implement the theoretical ‘good practices’ that I like soapboxing about. I’ve also been doing some writing in other places and in ways that I hope might be more rigorously critiqued and thus have a wider influence than just putting them up on a blog.

One of those bits of work that’s recently been released publicly is a first version of a monitoring and evaluation framework for SIMLab. We started discussing this at the first M&E Tech conference in 2014. Laura Walker McDonald (SIMLab CEO) outlines why in a blog post.

Evaluating the use of ICTs – which are used for a variety of projects, from legal services and coordinating responses to infectious diseases, to media reporting in repressive environments, transferring money among the unbanked, and voting – can hardly be reduced to a check-list. At SIMLab, our past nine years with FrontlineSMS have taught us that isolating and understanding the impact of technology on an intervention, in any sector, is complicated. ICTs change organizational processes and interpersonal relations. They can put vulnerable populations at risk, even while improving the efficiency of services delivered to others. ICTs break. Innovations fail to take hold, or prove to be unsustainable.

For these and many other reasons, it’s critical that we know which tools do and don’t work, and why. As M4D edges into another decade, we need to know what to invest in, which approaches to pursue and improve, and which approaches should be consigned to history. Even for widely-used platforms, adoption doesn’t automatically mean evidence of impact….

FrontlineSMS is a case in point: although the software has clocked up 200,000 downloads in 199 territories since October 2005, there are few truly robust studies of the way that the platform has impacted the project or organization it was implemented in. Evaluations rely on anecdotal data, or focus on the impact of the intervention, without isolating how the technology has affected it. Many do not consider whether the rollout of the software was well-designed, training effectively delivered, or the project sustainably planned.

As an organization that provides technology strategy and support to other organizations — both large and small — it is important for SIMLab to better understand the quality of that support and how it may translate into improvements as well as how introduction or improvement of information and communication technology contributes to impact at the broader scale.

This is a difficult proposition, given that isolating a single factor like technology is extremely tough, if not impossible. The Framework thus aims to get at the breadth of considerations that go into successful tech-enabled project design and implementation. It does not aim to attribute impact to a particular technology, but to better understand that technology’s contribution to the wider impact at various levels. We know this is incredibly complex, but thought it was worth a try.

As Laura notes in another blogpost,

One of our toughest challenges while writing the thing was to try to recognize the breadth of success factors that we see as contributing to success in a tech-enabled social change project, without accidentally trying to write a design manual for these types of projects. So we reoriented ourselves, and decided instead to put forward strong, values-based statements.* For this, we wanted to build on an existing frame that already had strong recognition among evaluators – the OECD-DAC criteria for the evaluation of development assistance. There was some precedent for this, as ALNAP adapted them in 2008 to make them better suited to humanitarian aid. We wanted our offering to simply extend and consider the criteria for technology-enabled social change projects.

Here are the adapted criteria that you can read more about in the Framework. They were designed for internal use, but we hope they might be useful to evaluators of technology-enabled programming, commissioners of evaluations of these programs, and those who want to do in-house examination of their own technology-enabled efforts. We welcome your thoughts and feedback — The Framework is published in draft format in the hope that others working on similar challenges can help make it better, and so that they could pick up and use any and all of it that would be helpful to them. The document includes practical guidance on developing an M&E plan, a typical project cycle, and some methodologies that might be useful, as well as sample log frames and evaluator terms of reference.

Happy reading and we really look forward to any feedback and suggestions!!

*****

The Criteria

Criterion 1: Relevance

The extent to which the technology choice is appropriately suited to the priorities, capacities and context of the target group or organization.

Consider: are the activities and outputs of the project consistent with the goal and objectives? Was there a good context analysis and needs assessment, or another way for needs to inform design – particularly through participation by end users? Did the implementer have the capacity, knowledge and experience to implement the project? Was the right technology tool and channel selected for the context and the users? Was content localized appropriately?

Criterion 2: Effectiveness

A measure of the extent to which an information and communication channel, technology tool, technology platform, or a combination of these attains its objectives.

Consider: In a technology-enabled effort, there may be one tool or platform, or a set of tools and platforms may be designed to work together as a suite. Additionally, the selection of a particular communication channel (SMS, voice, etc) matters in terms of cost and effectiveness. Was the project monitored and early snags and breakdowns identified and fixed, was there good user support? Did the tool and/or the channel meet the needs of the overall project? Note that this criterion should be examined at outcome level, not output level, and should examine how the objectives were formulated, by whom (did primary stakeholders participate?) and why.

Criterion 3: Efficiency

Efficiency measures the outputs – qualitative and quantitative – in relation to the inputs. It is an economic term which signifies that the project or program uses the least costly technology approach (including both the tech itself, and what it takes to sustain and use it) possible in order to achieve the desired results. This generally requires comparing alternative approaches (technological or non-technological) to achieving the same outputs, to see whether the most efficient tools and processes have been adopted. SIMLab looks at the interplay of efficiency and effectiveness, and to what degree a new tool or platform can support a reduction in cost, time, along with an increase in quality of data and/or services and reach/scale.

Consider: Was the technology tool rollout carried out as planned and on time? If not, what were the deviations from the plan, and how were they handled? If a new channel or tool replaced an existing one, how do the communication, digitization, transportation and processing costs of the new system compare to the previous one? Would it have been cheaper to build features into an existing tool rather than create a whole new tool? To what extent were aspects such as cost of data, ease of working with mobile providers, total cost of ownership and upgrading of the tool or platform considered?

Criterion 4: Impact

Impact relates to consequences of achieving or not achieving the outcomes. Impacts may take months or years to become apparent, and often cannot be established in an end-of-project evaluation. Identifying, documenting and/or proving attribution (as opposed to contribution) may be an issue here. ALNAP’s complex emergencies evaluation criteria include ‘coverage’ as well as impact: ‘the need to reach major population groups wherever they are.’ They note: ‘in determining why certain groups were covered or not, a central question is: “What were the main reasons that the intervention provided or failed to provide major population groups with assistance and protection, proportionate to their need?”’ This is very relevant for us.

For SIMLab, a lack of coverage in an inclusive technology project means not only failing to reach some groups, but also widening the gap between those who do and do not have access to the systems and services leveraging technology. We believe that this has the potential to actively cause harm. Evaluation of inclusive tech has dual priorities: evaluating the role and contribution of technology, but also evaluating the inclusive function or contribution of the technology. A platform might perform well, have high usage rates, and save costs for an institution while not actually increasing inclusion. Evaluating both impact and coverage requires an assessment of risk, both to targeted populations and to others, as well as attention to unintended consequences of the introduction of a technology component.

Consider: To what extent does the choice of communications channels or tools enable wider and/or higher quality participation of stakeholders? Which stakeholders? Does it exclude certain groups, such as women, people with disabilities, or people with low incomes? If so, was this exclusion mitigated with other approaches, such as face-to-face communication or special focus groups? How has the project evaluated and mitigated risks, for example to women, LGBTQI people, or other vulnerable populations, relating to the use and management of their data? To what extent were ethical and responsible data protocols incorporated into the platform or tool design? Did all stakeholders understand and consent to the use of their data, where relevant? Were security and privacy protocols put into place during program design and implementation/rollout? How were protocols specifically integrated to ensure protection for more vulnerable populations or groups? What risk-mitigation steps were taken in case of any security holes found or suspected? Were there any breaches? How were they addressed?

Criterion 5: Sustainability

Sustainability is concerned with measuring whether the benefits of a technology tool or platform are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable. For SIMLab, sustainability includes both the ongoing benefits of the initiatives and the literal ongoing functioning of the digital tool or platform.

Consider: If the project required financial or time contributions from stakeholders, are they sustainable, and for how long? How likely is it that the business plan will enable the tool or platform to continue functioning, including background architecture work, essential updates, and user support? If the tool is open source, is there sufficient capacity to continue to maintain changes and updates to it? If it is proprietary, has the project implementer considered how to cover ongoing maintenance and support costs? If the project is designed to scale vertically (e.g., a centralized model of tool or platform management that rolls out in several countries) or be replicated horizontally (e.g., a model where a tool or platform can be adopted and managed locally in a number of places), has the concept shown this to be realistic?

Criterion 6: Coherence

The OECD-DAC does not have a sixth criterion. However, we’ve riffed on ALNAP’s additional criterion of Coherence, which relates to the broader policy context (development, market, communication networks, data standards and interoperability mandates, national and international law) within which a technology was developed and implemented. We propose that evaluations of inclusive technology projects aim to critically assess the extent to which the technologies fit within the broader market at local, national and international levels. This includes compliance with national and international regulation and law.

Consider: Has the project considered interoperability of platforms (for example, ensured that APIs are available) and standard data formats (so that data export is possible) to support sustainability and use of the tool in an ecosystem of other products? Is the project team confident that the project is in compliance with existing legal and regulatory frameworks? Is it working in harmony with, or against, the wider context of other actions in the area? E.g., in an emergency situation, is it linking its information system in with those that can feasibly provide support? Is it creating demand that cannot feasibly be met? Is it working with or against government or wider development policy shifts?


Our December 2015 Technology Salon discussion in NYC focused on approaches to girls’ digital privacy, safety and security. By extension, the discussion included ways to reduce risk for other vulnerable populations. Our lead discussants were Ximena Benavente, Girl Effect Mobile (GEM), and Jonathan McKay, Praekelt Foundation. I also shared a draft Girls’ Digital Privacy, Safety and Security Policy and Toolkit I’ve been working on with both organizations over the past year.

Girls’ digital privacy, safety and security risks

Our first discussant highlighted why it’s important to think specifically about girls and digital security. In part, this is because different factors and vulnerabilities combine, exacerbating girls’ levels of risk. For example, girls living on less than $2 per day likely only have access to basic mobile phones, which are often borrowed from parents or siblings. The organization she works with always starts with deep research on aspects like ownership vs. borrowship and whether girls’ mobile usage is free/unlimited and un-supervised or controlled by gatekeepers such as parents, brothers, or other relatives. This helps to design better tools, services and platforms and to design for safety and security, she said. “Gatekeepers are very restrictive in many cases, but parental oversight is not necessarily a bad thing. We always work with parents and other gatekeepers as well as with girls themselves when we design and test.” When girls are living in more traditional or conservative societies, she said, we also need to think about how content might affect girls both online and offline. For example, “is content sufficiently progressive in terms of girls’ rights, yet safe for girls to read, comment on or discuss with friends and family without severe retaliation?”

Research suggests that girls who are more vulnerable offline (due to poverty or other forms of marginalization) are likely also more vulnerable to certain risks online, so we design with that in mind, she said. “When we started off on this project, our team members were experts in digital, but we had less experience with the safety and privacy aspects when it comes to girls living under $2/day or who were otherwise vulnerable. Having additional guidance and developing a policy on this aspect has helped immensely – but it has also slowed our processes down and sometimes made them more expensive,” she noted. “We had to go back over everything and add additional layers of security to make it as safe as possible for girls. We have also made sure to work very closely with our local partners so that everyone involved in the project is aware of girls’ safety and security.”

Social media sites: Open, Closed, Private, Anonymous?

One issue that came up was safety for children and youth on social media networks. A Salon participant said his organization had thought about developing this type of network several years back but decided in the end that the security risks outweighed the advantages. Participants discussed whether social media networks can ever be safe. One school of thought is that the more open a platform, the safer it is, as “there is no interaction in private spaces that cannot be constantly monitored or moderated.” Others worry about open sites, however, and set up smaller, closed, private groups that are closely monitored. “We work with victims of violence to share their stories and coping mechanisms, so, for us, private groups are a better option.”

Some suggested that anonymity on a social media site can protect girls and other vulnerable groups, however there is also research showing that Internet anonymity contributes to an increase in activities such as bullying and harassment. Some Salon participants felt that it was better to leverage existing platforms and try to use them safely. Others felt that there are no existing social media platforms that have enough security for girls or other vulnerable groups to use with appropriate levels of risk. “We sometimes recruit participants via existing social media platforms,” said one discussant, “but we move people off of those sites to our own more secure sites as soon as we can.”

Moderation and education on safety

Salon participants working with vulnerable populations said that they moderate their sites very closely and remove comments if users share personal information or use offensive language. “Some project budgets allow us to have a moderator check every 2 hours. For others, we sweep accounts once a day and remove offensive content within 24 hours.” One discussant uses moderation to educate the community. “We always post an explanation about why a comment was removed in order to educate the larger user base about appropriate ways to use the social network,” he said.

Close moderation becomes difficult and costly, however, as the user base grows and a platform scales. This means individual comments cannot be screened and pre-approved, because that would take too long and defeat the purpose of an engaging platform. “We need to acknowledge the very real tension between building a successful and engaging community and maintaining privacy and security,” said one Salon participant. “The more you lock it down and the more secure it is, the harder you find it is to create a real and active community.”

Another participant noted that they use their safe, closed youth platform to educate and reinforce messaging about what is safe and positive use of social media in hopes that young people will practice safe behaviors when they use other platforms. “We know that education and awareness raising can only go so far, however,” she said, “and we are not blind to that fact.” She expressed concern about risk for youth who speak out about political issues, because more and more governments are passing laws that punish critics and censor information. The organization, however, does not want to encourage youth to stop voicing opinions or participating politically.

Data breaches and project close-out

One Salon participant asked if organizations had examples of actual data breaches, and how they had handled them. Though no one shared examples, it was recommended that every organization have a contingency plan in place for accidental data leaks or a data breach or data hack. “You need to assume that you will get hacked,” said one person, “and develop your systems with that as a given.”

In addition to the day-to-day security issues, we need to think about project close-out, said one person. “Most development interventions are funded for a short, specific period of time. When a project finishes, you get a report, you do your M&E, and you move on. However, the data lives on, and the effects of the data live on. We really need to think more about budgeting for proper project wind-down and ensure that we are accountable beyond the lifetime of a project.”

Data security, anonymization, consent

Another question was related to using and keeping girls’ (and others’) data safe. “Consent to collect and use data on a website or via a mobile platform can be tricky, especially if we don’t know how to explain what we might do with the data,” said one Salon participant. One participant suggested it would be better not to collect any data at all. “Why do we even need to collect this data? Who is it for?” he asked. Others countered that this data is often the only way to understand what people are doing on the site, to make adjustments and to measure impact.

One scenario was shared where several partner organizations discussed opening up a country’s cell phone data records to help contain a massive public health epidemic, but the privacy and security risks were too great, so the idea was scrapped. “Some said we could anonymize the data, but you can never really and truly anonymize data. It would have been useful to have a policy or a rubric that would have guided us in making that decision.”

Policy and Guidelines on Girls Privacy, Security and Safety

Policy guidelines related to aspects such as responsible data for NGOs, data security, privacy and other aspects of digital security in general do exist. (Here are some that we compiled along with some other resources). Most IT departments also have strict guidelines when it comes to donor data (in the case of credit card and account information, for example). This does not always cross over to program-level ICT or M&E efforts that involve the populations that NGOs are serving through their programming.

General awareness around digital security is increasing, in part due to recent major corporate data hacks (e.g., Target, Sony) and the Edward Snowden revelations from a few years back, but much more needs to be done to educate NGO staff and management on the type of privacy and security measures that need to be taken to protect the data and mitigate risk for those who participate in their programs.  There is an argument that NGOs should have specific digital privacy, safety and security policies that are tailored to their programming and that specifically focus on the types of digital risks that girls, women, children or other vulnerable people face when they are involved in humanitarian or development programs.

One such policy (focusing on vulnerable girls) and toolkit (with accompanying principles and values, guidelines, checklists and a risk matrix template) was shared at the Salon. (Disclosure: this policy toolkit is one that I am working on; it should be ready to share in early 2016.) The policy and toolkit take program implementers through a series of issues and questions to help them assess potential risks and tradeoffs in a particular context, and to document decisions and improve accountability. The toolkit covers:

  1. data privacy and security – using approaches like Privacy by Design, setting limits on the data that is collected, and achieving meaningful consent.
  2. platform content and design – ensuring that content produced for girls, or that girls produce or volunteer, does not put girls at risk.
  3. partnerships – vetting and managing partners who may provide online/offline services or who may partner on an initiative and want access to data, and the monetization of girls’ data.
  4. monitoring, evaluation, research and learning (MERL) – how program implementers will gather and store digital data when collecting it directly or through third parties for organizational MERL purposes.
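Two of the practices named in the first item above – limiting what is collected and achieving meaningful consent – can be sketched in a few lines. This is a minimal illustration, with hypothetical field names, of what “Privacy by Design” can mean at the code level: only an allow-listed set of fields is ever stored, and each record carries an explicit consent entry.

```python
from datetime import datetime, timezone

# Hypothetical allow-list: only fields the program actually needs.
ALLOWED_FIELDS = {"age_band", "district", "program_id"}

def minimize(raw):
    """Drop everything not on the allow-list before storage."""
    return {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}

def with_consent(record, consent_given, purpose):
    """Attach an explicit, timestamped consent entry to the record."""
    record["consent"] = {
        "given": consent_given,
        "purpose": purpose,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return record

raw = {"name": "…", "phone": "…", "gps": "…",
       "age_band": "15-17", "district": "East", "program_id": "p1"}
stored = with_consent(minimize(raw), True, "program M&E")
# 'stored' no longer contains name, phone or GPS coordinates.
```

Data that is never collected cannot leak in a breach or live on after project close-out, which is why minimization sits at the top of the toolkit’s list.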

Privacy, Security and Safety Implications

Our final discussant spoke about the implications of implementing the above-mentioned girls’ privacy, safety and security policy. He started out by saying that the policy opens with a manifesto: We will not compromise a girl in any way, nor will we opt for solutions that cut corners in terms of cost, process or time at the expense of her safety. “I love having this as part of our project manifesto,” he said. “It’s really inspiring! On the flip side, however, it makes everything I do more difficult, time-consuming and expensive!”

To demonstrate some of the trade-offs and decisions required when working with vulnerable girls, he gave examples of how the current project (implemented with girls’ privacy and security as a core principle) differed from a commercial social media platform and advertising campaign he had previously worked on (where the main concern was the reputation of the corporation, not that of the users of the platform or the potential risks they might face by using it).

Moderation

On the private sector platform, said the discussant, “we didn’t have the option of pre-moderating comments because of the budget and because we had 800,000 users. To meet the campaign goals, it was more important for users to be engaged than to ensure content was safe. We focused on removing pornographic photos within 24 hours, using algorithms based on how much skin tone was in the photo.” In the fields of marketing and social media, it’s a fairly well-known issue that heavy-handed moderation kills platform engagement. “The more we educated and informed users about comment moderation, or removed comments, the deader the community became. The more draconian the moderation, the lower the engagement.”
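The skin-tone heuristic the discussant describes can be re-created in a very crude form: count the fraction of pixels falling in an RGB range commonly used as a skin-tone approximation, and flag images above a threshold for human review. The thresholds and RGB rule below are illustrative only; production moderation systems use trained classifiers, not a rule this blunt.

```python
def skin_tone_ratio(pixels):
    """Fraction of (r, g, b) pixels falling in a rough 'skin tone'
    range. A crude illustration of the heuristic described; the
    cut-offs here are a commonly cited RGB rule, not a real system."""
    def is_skin(r, g, b):
        return (r > 95 and g > 40 and b > 20
                and r > g and r > b and (r - min(g, b)) > 15)
    skin = sum(1 for (r, g, b) in pixels if is_skin(r, g, b))
    return skin / len(pixels)

def flag_for_review(pixels, threshold=0.5):
    """Queue the image for human review if it is mostly skin-toned."""
    return skin_tone_ratio(pixels) > threshold

# Toy "image": 8 of 10 pixels in the skin range -> flagged.
mostly_skin = [(200, 140, 120)] * 8 + [(30, 60, 200)] * 2
print(flag_for_review(mostly_skin))  # True
```

Note that a heuristic like this only prioritizes images for review; the 24-hour removal the discussant mentions still depended on humans confirming the flag.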

The discussant had also worked on a platform for youth to discuss and learn about sexual health and practices, where he said that users responded angrily to moderators and comments that restricted their participation. “We did expose our participants to certain dangers, but we also knew that social digital platforms are more successful when they provide their users with a sense of ownership and control. So we identified users who exhibited desirable behaviors and created a different tier of users (super users) who could take ownership: policing the platform, flagging comments as inappropriate, or temporarily banning users.” This allowed a 25% decrease in moderation. The organization discovered, however, that they had to be careful about how much power these super users had. “They ended up creating certain factions on the platform, and we then had to develop safeguards and additional mechanisms by which we moderated our super users!”
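The tiered-moderation idea can be sketched simply: flags from vetted super users carry more weight than flags from regular users, and a comment is hidden pending staff review once its weighted flag score crosses a threshold. The weights and threshold below are illustrative assumptions, not values from the project described.

```python
# Illustrative weights: a super user's flag counts for three
# regular-user flags. Real values would be tuned per platform.
FLAG_WEIGHTS = {"regular": 1, "super_user": 3}
HIDE_THRESHOLD = 3

class Comment:
    def __init__(self, text):
        self.text = text
        self.score = 0
        self.hidden = False

    def flag(self, user_tier):
        """Add a weighted flag; hide the comment for staff review
        once the threshold is reached."""
        self.score += FLAG_WEIGHTS[user_tier]
        if self.score >= HIDE_THRESHOLD:
            self.hidden = True

c = Comment("example comment")
c.flag("regular")     # score 1, still visible
c.flag("super_user")  # score 4 -> hidden pending review
print(c.hidden)  # True
```

The faction problem the discussant mentions shows up naturally in a design like this: whoever holds the high-weight flags effectively controls visibility, which is why super users themselves ended up needing moderation.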

Direct Messages among users

In the private sector project example, engagement was measured by the number of direct or private messages sent between platform users. In the current scenario, however, said the discussant, “we have not allowed any direct messages between platform users because of the potential risks to girls of having places on the site that are hidden from moderators. So as you can see, we are removing some of our metrics by disallowing features because of risk. These activities are all things that would make the platform more engaging but there is a big fear that they could put girls at risk.”

Adopting a privacy, security, and safety policy

One discussant highlighted the importance of having privacy, safety and security policies before a project or program begins. “If you start thinking about it later on, you may have to go back and rebuild things from scratch because your security holes are in the design….” The way a database is set up to capture user data can make it difficult to query in the future or for users to have any control of what information is or is not being shared about them. “If you don’t set up the database with security and privacy in mind from the beginning, it might be impossible to make the platform safe for girls without starting from scratch all over again,” he said.

He also cautioned that when making more secure choices from the start, platform and tool development generally takes longer and costs more. It can be harder to budget because designers may not have experience with costing and developing the more secure options.

“A valuable lesson is that you have to make sure that what you’re trying to do in the first place is worth it if it’s going to be that expensive. Is it worth a girl’s while to use a platform if she first has to wade through 5 pages of terms and conditions on a small mobile phone screen? Are those terms and conditions even relevant to her personally or within her local context? Every click you ask a user to make will reduce their interest in reaching the platform. And if we don’t imagine that a girl will want to click through 5 screens of terms and conditions, the whole effort might not be worth it.” Clearly, aspects such as terms and conditions and consent processes need to be designed specifically to fit new contexts and new kinds of users.

Making responsible tradeoffs

The Girls Privacy, Security and Safety policy and toolkit shared at the Salon includes a risk matrix where project implementers rank the intensity and probability of risks as high, medium and low. Based on how a situation, feature or other potential aspect is ranked and the possibility to mitigate serious risks, decisions are made to proceed or not. There will always be areas with a certain level of risk to the user. The key is in making decisions and trade-offs that balance the level of risk with the potential benefits or rewards of the tool, service, or platform. The toolkit can also help project designers to imagine potential unintended consequences and mitigate risk related to them. The policy also offers a way to systematically and pro-actively consider potential risks, decide how to handle them, and document decisions so that organizations and project implementers are accountable to girls, peers and partners, and organizational leadership.
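The risk matrix described above maps naturally onto a small scoring function: each risk gets an intensity and probability rating, the combination drives a go/no-go decision, and the inputs are recorded so the reasoning stays accountable. The numeric weights and decision thresholds below are illustrative assumptions, not the toolkit’s actual values.

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def assess(risk, intensity, probability):
    """Score a risk by intensity x probability and record the
    decision so it can be revisited and accounted for later.
    Thresholds here are illustrative, not from the toolkit."""
    score = LEVELS[intensity] * LEVELS[probability]
    if score >= 6:
        decision = "do not proceed unless risk can be mitigated"
    elif score >= 3:
        decision = "proceed only with mitigation in place"
    else:
        decision = "proceed; monitor"
    return {"risk": risk, "intensity": intensity,
            "probability": probability, "score": score,
            "decision": decision}

print(assess("hidden direct messages between users", "high", "medium"))
```

The value of documenting the whole dict, not just the decision, is exactly the accountability point made above: partners and leadership can later see which tradeoff was made and why.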

“We’ve started to change how we talk about user data in our organization,” said one discussant. “We have stopped thinking about it as something WE create and own, but more as something GIRLS own. Banks don’t own people’s money – they borrow it for a short time. We are trying to think about data that way in the conversations we’re having about data, funding, business models, proposals and partnerships: you don’t get to own your users’ data, and we’re not going to share de-anonymized data with you. We’re seeing legislation in some of the countries where we work going that way also, so it’s good to be thinking about this now and getting prepared.”

Take a look at our list of resources on the topic and add anything we may have missed!

 

Thanks to our friends at ThoughtWorks for hosting this Salon! If you’d like to join discussions like this one, sign up at Technology Salon. Salons are held under the Chatham House Rule, so no attribution has been made in this post.

Read Full Post »

In line with my last post (10 myths about girls empowerment and mobile learning), I thought I’d also share what we covered during our panel on ‘Gender Sensitive Content and Pedagogy’ during UNESCO and UN Women’s Mobile Learning Week 2015. This year’s theme was ‘leveraging technology to empower women and girls.’ UN Women did a fantastic job of finding really smart women with varied backgrounds to join the panel, including: Sarah Jaffe, Worldreader; Andrea Bertone, FHI360; Hongjuan Liu, Beijing Royal School; Catherine King, Global Fund for Women; and Anne Githuku-Shongwe, Afroes. I had the pleasure of moderating the conversation, and here’s some of what we talked about. I’ll put up a few more posts after this one to share the full session.

First, what is ‘gender responsive content?’ Hongjuan sent over a general introduction to include in this post. To begin with, she said, simply having access to schools does not guarantee a proper education and a better future. “Outdated teaching materials silently reinforce girls’ sense of inferiority. Materials rarely picture women as managers, pilots, doctors or political leaders. The subconscious words neglect the contributions of girls and women to the modern economic world and show women as subordinate to men.” Even worse, she noted, “unless they are trained on gender sensitivity, most teachers and parents are not knowledgeable enough to banish gender bias. Silence in the face of discrimination is the equivalent of allowing lies and distorted facts to continue. And, such blindness is even more dangerous than the gender-biased content itself. As a result, these mistakenly delivered messages will denigrate girls and women from one generation to another.”

According to Hongjuan, teachers are a critical part of efforts to “dig out the seeds of gender-bias in our children’s heart” and they should be paying attention to both content and pedagogy. “Given that boys and girls learn differently, we need to employ diverse pedagogies in order to respond to different learning styles –from small group, individual, lecture, reading, experiences, laboratory work, etc. Diversity in pedagogy matters and increases the opportunities for all students to learn.”

Overturning gender stereotyping must be a collective and universal effort, she said. “Institutions must respond to the call to overturn gender bias and discrimination. Some citizens are too weak to resist the strong stereotypes present in their countries and religions. Life is too short to wait to base our actions on a collective worldwide outcry for a harmonious world where women and men are equally accepted, appreciated and treated. At the very least we should live by our words and deeds so that we are seen as desiring and fighting for equality. We should wish to be painted as believing in not only the potential of women and girls, but the rights they should have. That will inspire women to work to craft their own more promising future.”

Andrea noted that we should pay attention to gender responsive content and pedagogy because “if we don’t prioritize gender responsive content we see the consequences: girls and boys who stay disempowered and miss out on learning opportunities which challenge the unequal gender norms that they are socialized to believe.” In addition, she said, gender-responsive content offers rich tools that we can use to transform unequal gender norms — “those norms that dictate to girls what they can and can’t do, where they can or can’t go, or norms that encourage boys to engage in harmful behaviors against themselves and others.” We have the potential to link two extremely relevant and potentially transformative mechanisms — mobile and gender sensitive content and pedagogy — in the education space, “and that is quite exciting!” Andrea added.

Sarah agreed, noting that what we experience in media and literature shapes us, particularly as children.  “If a girl never sees an example of a woman neuroscientist, in either fiction or non-fiction, how will she know that is a possibility for her?”  We know life gives us all sorts of examples that challenge literary tropes, but “when we are inundated with one-note ideas of what it means to be a boy or a girl, these shape us in subconscious ways,” she said. “This example applies mainly to fiction, but of course, non-fiction and informational gender responsive content is also key.”

Hongjuan shared how she was influenced by gender stereotyping. “I chose to be a teacher, because this is the best thing I found in books. Women were never pictured in other roles. These subconscious words imply that a girl’s sweat is so cheap that it will never win them a higher social status,” she said. “We need to change these gender biases. These mistaken messages poison girls and woman from one generation to another.”

“We need to be a part of combating these persistent stereotypes,” continued Catherine. “A lack of representation and the misrepresentation of women and girls persist in mainstream media.” We see this as well in non-traditional sectors, including in the online environment, she noted. “As content developers, we have an opportunity – a responsibility – to disrupt pervasive stereotypical and counterproductive images.” Catherine explained that the Global Fund for Women has expanded its mission to prioritize raising the voices of women via digital storytelling and advocacy campaigns as an equal lever to grant making to create greater momentum for the change we all want to see in the long term.

Finally, Anne noted that “today, even in Africa, we live in a connected world that is more transparent, where oppression, harassment or discrimination are not cool and are in fact exposed because of our connectedness.” She referred to stories we’ve all become aware of — rape in India, pedophiles, the Arab Spring. “On the other hand, gendered relationships at home, at work and in public spaces have changed forever as women’s choices open up more and more.” In the meantime, however, “we old school parents and teachers continue to enforce old stereotypes that are close to dead to the world – confusing our young ones.” Anne emphasized that it is critical to equip young men and women – our future leaders – for a new reality. “In our work building motivated learning products on mobile — using games and gamification rules — we are at pains in our engaged user-based design and testing processes to challenge gender stereotypes and offer a platform to shape new ones. Gender-responsive content is not a nicety, it is imperative!!”

Tune in over the next week or two for summaries of the other areas covered on the panel, including: combating unconscious gender bias; the role of mobile in creation/implementation of gender-responsive content and pedagogy; challenges in the area of gender-sensitive mobile learning; and thoughts on where we can expect mobile technology and gender-responsive content and pedagogy to head in the future.

 

Read Full Post »

I had the chance to share some thoughts at UNESCO’s recent Mobile Learning Week. My presentation explored some myths about girls empowerment and mobile learning and offered suggestions of things to think about when designing and implementing programs. Ideas for the presentation were drawn from research and practitioner experiences (mine and those of others that I’ve talked with and worked with over the past few years). Here’s what I talked about below. Since realities are subjective and complex, and contexts differ immensely around the world, I’m putting these out mainly as discussion starters. Some seem super obvious and some contradict each other (which may speak to the point that there is no universal truth!), so I’m curious to know what other people think…

Myth 1: Mobile as a stand-alone solution.

Reality: The mobile phone is just one part of the informational and cultural ecosystem. There is a lot of hype about mobile. I think as a sector we are mostly past the idea of mobile as a stand-alone solution, but in case not, it’s the first myth I’d challenge. There is not a lot that a mobile phone can do as a stand-alone tool to empower girls or improve their education and learning. 

Things to consider: The mobile phone is the device that is most likely to already be in the hands of your target user — but the possibilities and channels don’t start and end with mobile phones. It’s important to think of the mobile phone as just one part of a much wider informational, social, cultural and educational ecosystem and see where it might fit in to support girls’ learning. It’s likely that mobile phones will be used more outside of the classroom than in – in my experience, I’ve found that schools often don’t allow mobiles to be brought into class. So, it’s more about integrating mobiles as a tool that supports rather than as the sole channel for learning and information sharing.

Myth 2: It’s the technology that’s mobile.

Reality: In most cases, the learner is mobile, too. This is one of the exciting things about technology and learning. It’s something I heard John Traxler say a few years ago, and I thought it was really smart. John said we should really be thinking about mobile learners, not just mobile technology. Learners access and share information in all kinds of ways, at different locations, using different devices or not using devices at all.

Things to consider: Rather than starting with the mobile phone, think about design based on a clear understanding of ’digital repertoires’ – in other words, user behaviors or patterns that span places and devices based on factors like data capacity, cost, purpose. These repertoires will differ according to culture, sex, economic status, and availability of information points and sources. For example, maybe some girls use Google search to do homework at an Internet café but use their own phone or a borrowed phone for quick, short text reminders or questions to friends about schoolwork. Maybe other girls are not allowed to go to Internet cafés or they feel uncomfortable doing so, and they rely more on their mobile phone and their friends. This was the case in one community near Jakarta that I was in last month. One of the girls talked about her 15-year-old friend:

 

“She’s too shy to go to the Internet shop…. Boys are always sitting out, calling you to ask ‘where are you going?’ or whistling. She feels too embarrassed to go into the shop because everyone will look at her.”

In a consultation conducted by Plan in 2011, girls in some countries said it was too dangerous to travel to the Internet café, especially at night. When men and boys watch porn and play video games in Internet cafes, girls tend to feel quite uncomfortable. Libraries, if available, may be places where girls go to access Internet because they feel safer. Girls may face reputation risk if they go too often to the Internet café. So in this case, girls may rely on phones. In some parts of East and West Africa, however, girls with mobile phones may be accused of having ‘sugar daddies’ or selling sex for airtime or nice phones, so the phone also carries reputation risk. All of these situations impact on girls’ communication repertoires, and program designers need to take them into consideration. And perhaps most importantly, ‘girls’ are not a homogeneous group so we always need to unpack which girls, where, when, what, at what age, living where, with what kinds of social or cultural restrictions, etc.

Myth 3: Vulnerable girls don’t have access to mobiles.

Reality: Many girls with phones are more vulnerable than we think, and more girls that we consider vulnerable are accessing mobiles. This is something that Colman Chamberlain from the Girl Effect’s mobile initiative pointed out. “We often hear that the most vulnerable girls don’t have access to mobile phones,” he says, “but this depends on how we understand and define vulnerability. Many girls with phones are vulnerable, and many vulnerable girls are starting to access mobile. This means we have a real chance to reach and engage with them.”

Things to consider: Age does normally play a role in access to mobiles. Younger girls from lower income families in most countries do not have their own mobile phones. Upper class children may, however, have phones. It really varies. Recent research (unpublished) found that it was common for 14- to 15-year-olds in Indonesia to have their own phones. In India and Bangladesh, that age was closer to 18. Girls who were no longer in school often had a mobile — some had even dropped out to get jobs in order to purchase one. Sometimes married girls’ husbands purchase them a phone, yet it may be primarily to control and monitor their whereabouts.

When designing programs, it’s really important to take the time to learn whether the girls you’d like to work with own or borrow mobile phones and whether their access is controlled by someone else or if they are free to use a mobile however they’d like. Design for different scenarios and ‘user repertoires’ based on girls’ access and use habits. Don’t make assumptions on which girls access mobiles for what and how based on perceived vulnerability, do the research and you may be surprised when you get into the weeds.

Myth 4: Cost is the biggest barrier to girls’ mobile phone access and use. 

Reality: Cost is a barrier, but perhaps not the biggest one. Clearly cost is still a big barrier for the poorest girls. But the unwillingness to invest in a girl’s access to mobile or to information and learning is linked to other aspects like a girl’s position in her family or society. Mobiles are also becoming cheaper, so the cost barrier has been reduced in some ways. Overall, compared to landlines, as Katie Ramsay at Plan Australia notes, mobile is cheaper and that opens up access to information for even the poorest families.

Research conducted this past year in India, Bangladesh and Indonesia, found that in some communities girls have much greater access than assumed, and cost was a lower barrier than originally thought. Parents and gatekeepers were actually a bigger barrier in some countries. For many of us this is a total no-brainer, but I still think it’s worth bringing up.

Things to consider: As already mentioned, the key when developing programs is to dig deep and talk with girls directly to understand and help them to overcome different barriers, whether those are personal, familial, economic, societal or institutional.

In order to help get past these barriers, mobile-enabled programming or product/service offerings need to have real value to girls as well as their gatekeepers, so that girls’ participation in programs and use of mobiles is seen by gatekeepers as positive. This was shown clearly in a UNESCO girls’ literacy program in Pakistan, where 87% of parents changed from a negative opinion about girls using a mobile phone to a positive perspective by the end of the program, because they saw the utility of the phone for girls’ literacy.

It’s important to do work on educating and changing behaviors of parents. Katie Ramsay also notes that in places where men own the tech, there is a huge opportunity for targeting them to gain their support for girls’ education. So it’s worth re-thinking the role of mobiles in girl-focused programs, especially where girls’ access to mobile is low or controlled. The best use of mobiles for learning may not be ‘delivering content’ to girls via a mobile device. Instead it might be using mobile and other media to target gatekeepers to change their behavior and beliefs around girls’ education and girls’ empowerment.

Myth 5: Girls share their phones.

Reality: Phone sharing brings with it a challenging social power dynamic. Many people in ‘the West’ hold the romantic notion that people in ‘developing countries’ like to share everything and live communally. Now, I’m not saying that girls are not generous, but when it comes to girls and phones, we have not really seen a great desire to share.

In some of the unpublished research conducted in Asia (and previously referenced in this post), girls without phones said that they do borrow phones, often from family members or friends, but they don’t necessarily like doing so. They said that borrowing here and there just isn’t enough to do anything substantial on a phone. Girls described girls who do not have mobile phones as sad and unpopular. They drew girls with phones as happy, popular, and successful. Some girls also described girls with phones as stuck up and selfish and said that girls who have phones don’t share them with girls that don’t have phones.


“A girl with a phone would look down on me, and show off what her phone does. She would let me hold it, but only because she would like to take it back from me again.” —Girl, 18, Dhaka

I was at a school in Cameroon last year, when a big fight broke out because one girl had taken another girl’s phone and thrown it in the toilet. The professor said that fighting over mobile phones was common among students. Phones had been prohibited at school in part to reduce conflicts, and sometimes students ratted each other out for having phones at school. This is not specifically a “mobile phone” problem, it’s a wealth or class or equity issue, but it manifests itself with phones because they are an asset that defines haves and have-nots. 

Things to consider: Don’t assume it’s easy for girls to borrow phones. If you find that many of your targeted users for a mobile-enabled initiative are borrowers, then it’s important to design short, to-the-point options for them, because they may have only a few minutes at a time with a mobile. Girls may not share their phones unless there is some kind of incentive for doing so. If you are designing for borrowers, think about rapid communication in bursts, and don’t communicate about anything that would put a girl at social or reputation risk if the person she borrows the phone from should see it.

Myth 6: All girls (& all youth) are tech savvy.

Reality: Many girls are indeed tech savvy, but some are still behind the curve. In many places, girls with phones are way more tech savvy than their parents. And most young people around the world are pretty quick to pick up on technology. But girls’ level of savvy will obviously depend on what they have access to.

Girls I talked with in the urban slum areas of Jakarta were quite tech-adept and had Internet-ready phones, but they still only used Facebook and Google. They also mixed up 'Facebook' and 'Google' with 'The Internet' and did not use email. They were unfamiliar with the concept of an "app". Girls knew how to search for jobs online (via Google), but they said they had trouble understanding how to fill out online forms to apply for those jobs. So regardless of a girl's level of tech savvy, in this case, she was still missing certain skills and relevant online content that would have helped her get to the next level of job-seeking.

Things to consider: It’s really important to do your research to understand what technologies and platforms girls are familiar with and be sure to plan for how to engage girls with those that they are unfamiliar with. Basic literacy might also still be a huge issue among adolescent girls in some places.

Basically, the message here again is to avoid making assumptions, to do your research, and to remember that girls are not a homogeneous group. Market research techniques can be helpful to really start understanding nuances regarding which girls do what, where and how on a mobile device.

Myth 7: Girls don’t have time to use mobile phones.

Reality: You might be surprised by which girls find time to spend on a mobile phone. This again really depends on which girls, and where! Girls find the time to use mobiles, even if not at the always-online levels we find in places like the US and Europe, notes Colman from Girl Effect. Spending time in the communities you're working with can help you find the times when girls have free and uncontrolled access. Jessica Heinzelman from DAI told us that in one project she worked on, they had assumed that girls in more traditional communities and rural geographies would have less access to mobiles. In reality, it was common for girls to be sent on errands with mobiles to places with connectivity to contact relatives on behalf of the family, leaving the girls with at least some alone time with the mobile.

Schoolgirls in the slum area of Jakarta that I worked in earlier this year said they checked Facebook every day. Out-of-school urban girls checked at least a few times per week, and rural out-of-school girls also usually managed to borrow a phone to check Facebook quickly now and then.

Things to consider: I’m beating the drum again here about the importance of on-the-ground research and user testing to find out what is happening in a particular context. Alexandra Tyers from GSMA points out that user testing is really a critical piece of any girls and mobile learning effort, and that it can actually be done for a reasonable price. She notes that in her case, “Bangladesh user testing cost $5,000 USD for fifty tests in five different locations around the country. And yet the return on investment by making those necessary changes is likely to be large because making sure the product is right will ensure easy adoption and maximum uptake.”

Myth 8: Mobile phones can’t address girls’ real needs.

Reality: Mobile phones can help address girls’ real needs, but probably not as stand-alone devices, and maybe not as ‘content delivery’ channels. There is a lot of hype around mobile learning and mEducation, and as some presenters talked about at Mobile Learning Week, there is little evidence to help us know how to integrate mobiles in ways that could scale (where appropriate) and offer real results. I sometimes think this is because we are expecting mobile and ICTs in general to do more than they feasibly can.

Depending on the context and situation, where I have seen the greatest opportunity for mobiles is:

  • enabling girls to connect with peers and information
  • allowing girls more opportunities to voice their opinions
  • linking girls to online support and services
  • linking girls with offline support and services
  • helping organizations to track and monitor their programs (and hopefully then do a better job of adapting them to girls' real needs)

Things to consider: It’s really important to think through what the best role for mobile is (if any role at all). Here is where you can (and should) be super creative. You may not get the biggest impact by involving girls as the end user. Rather, the best place might be aiming your mobile component at behavior change with gatekeepers. Or sending text messages that link a girl to a service or opportunity that lives offline. It might be getting feedback on the school system or using mobile to remind parents about school meetings.

Myth 9: Mobile phones are dangerous.

Reality: Many girls and women say a mobile helps them feel safer, more independent, and more successful. The 2011 Cherie Blair/GSMA study on women and mobiles noted that 93% of women said a mobile made them feel safer and 84% felt more independent. Tech can also offer a certain level of anonymity for girls that can be beneficial in some cases. “Tech is good for girls because they can be anonymous. If you go to the bank, everyone can see you’re a girl. But if you start a business online, they don’t know that you’re a girl, so you don’t have to deal with the stereotypes,” according to Tuulia Virha, formerly of Plan Finland. Parents may also see mobiles as a tool to help them keep their children safe.

Things to consider: Mobiles can bring an increased sense of security, safety and autonomy, depending on context and situation. However, as the next myth explores, mobiles also bring risks with them. Most girls we talked to for our research were aware of the obvious risks – meeting strangers, exposure to pornography, pedophiles and trafficking – but not so aware of other risks, like those to their privacy. They were also not very aware of how to reduce their risk levels. So in order to really reap the safety and empowerment rewards that mobiles can bring, initiatives need to find ways to improve girls' digital literacy and digital safety. Data security is another issue, and organizations should develop responsible data policies so that they are not contributing to putting girls at risk.

And that brings us to the other side of the coin – the myth that mobiles make girls safer.

Myth 10: Mobiles make girls safer.

Reality: Mobiles can put girls at risk. That sense of being safer with a mobile in hand can be a false one, as I noted above. Dirk Slater from Tactical Technology Collective noted, "A big issue of working with adolescent girls is their lack of awareness of how the information they share can be stored and used. It's important to educate girls. Look at how much information you find out about a person through social media, and think about what that means about how much information someone else can find out about them."

Things to consider: Institutions should aim to mitigate risks and help to improve girls’ digital security and safety.

Girls face safety risks on mobile at a number of levels, including:

  • Content
  • Contact
  • Data privacy and security
  • Legal and political risk (in some places they may face backlash simply for seeking out an education)
  • Financial risk (spam, hacking, spending money they don’t have on airtime)
  • Reputation risk (if they participate on social networks or speak out)

It’s also key for organizations working with girls and mobile to develop ethical policies and procedures to mitigate risks at various levels.

And that’s that for the top 10 myths! Curious to know what you think about those, and if there are other myths you find in your work with girls, mobile and learning….


There's a popular saying amongst the tech and development crowd that 10% of an ICT4D initiative is the tech and the rest is… well, the rest. I've recently heard a modified version that says 5% is the idea and 10% is the business model, and the other 85% is… well, the rest. The 'rest' is mostly made up of people, culture, context and the stuff of anthropologists.

At the Slush conference in Helsinki in November, I joined a short ‘Fireside Chat’ with Tanya Accone (UNICEF) and Mika Valitalo (Plan Finland) about the importance of that other 85-90%, which Tanya referred to as ‘peopleware’.

Tanya kicked off the panel by asking people to think about how much time they’d dedicated to the technology of their start-up idea or their tech solution – the hardware and the software – and to then ask themselves how much time they’d spent on the people component. “People are what will make or break your idea,” she said. When it comes to mobile adoption, for example, we are seeing an exponential adoption pattern all over the world, and people are driving that. “I bet every single one of you at SLUSH hopes to see that curve in your future.”

She went on to note that conventional wisdom holds that 'content is king'; however, a key takeaway from her work in the mobile and social entrepreneurship space is that content has been deposed by context. For example, when working with the U-Report project in Liberia, lessons from other countries where it had been rolled out were incorporated, but they had to be contextualized to make them work in Liberia. This involved talking and working directly with youth to ensure that the programming could be adapted properly.

Mika agreed that 'peopleware' is a critical consideration. "I've witnessed this 10:90% ratio several times when co-designing and supporting projects using technology for social impact in African countries," he said, and told the story of working on enhancing birth registration in Kenya, where the slow, manual flow of information between people and the government seemed to be a key challenge that could be tackled with mobiles, computers and applications.

“However, the deeper we dug the more varied the challenges seemed to be. We realized that people might be reluctant to register children when local practices were not in sync with the existing legislation. For example, if men are marrying girls under the age of 18, they might not like the idea of birth registration as it would prove a girl’s age. People living near the Kenya-Tanzania border might not want to be identified as being from one or the other country, because being unregistered may allow them to move back and forth across the border more easily and receive some type of benefit or commerce opportunity.

“Even with a functioning mobile phone and app in their hand, people will weigh multiple aspects based on their personal situation before taking action. So, spending enough time with end-users and trying to see the world through their eyes as much as possible is crucial, especially when working in places that are not familiar to you. This may sound self-evident, but I'd encourage everyone to keep it at the top of the list.”

I shared two of the key points from the Technology Salon earlier in the week on the topic of start-ups and social impact: a) the importance of partnership and collaboration (e.g., people), and b) knowing the local context — not just the technical landscape, but the people and culture.

These two aspects were really highlighted for me when I was working on a project in Cameroon that trained youth to use mobile phones to make short videos that they used to organize and advocate for change in their communities and more broadly. The donor was a large mobile phone manufacturer that assumed youth would use its higher-end phones to create the videos. The youth, however, were much more familiar with simple phones like the Nokia 1100. The phones we purchased in order to get good video quality had too many layers, folders and features. So we ended up getting some Flip cameras, because what we really needed was a point-and-shoot video camera, and this design was a better fit for low-income rural youth who had limited experience with technology.

We also realized that though the training was set up for youth, community adults were really interested in learning to make videos too. So we had to find ways to engage them so that they would not feel left out and so that we could ensure their continued support for the youth’s efforts. This meant we had to spread our resources out a little further than we had imagined, but we saw it as necessary. In all these processes we had to balance the context and reality on the ground, the expectations of the youth and community, expectations of our local partners, and those of the donor.

Tanya added that achieving success with social impact sometimes means rethinking your business model, because you’re in pursuit of the double dividend of financial return and social impact. She gave an example in Burundi where only 3% of the population has access to the electricity grid. “You would think it’s a market ripe for alternative energy solutions. But many businesses avoided it because their existing retail and distribution models simply would not work in that context. It took deconstructing and reconstructing business models to create something that does work — a network of microfinanced microfranchises operated by village-level entrepreneurs.” Now the families use robust, fast-charging LED lights recharged through a pedal-powered generator, a system that also recharges mobile phones. 

Another aspect is understanding the value proposition, she said. It would seem to be basic business, but all too often well-intended initiatives forget this and rush in with a cheaply-made solution. "In the process, they trample over the basic human dignity of their target consumer or beneficiary." She suggested keeping in mind that people with limited resources are among the most discerning consumers because they don't have disposable income. They are cost conscious, and equally, they are looking at value for money and return on investment in the durability, feature sets and total cost of ownership of everything they buy. This means that more energy-efficient chips, better battery technology, and robust handsets are important to economically challenged users.

Tanya also noted that ‘base of the pyramid’ users are no less style-conscious or aspirational than consumers in general, so “don’t disrespect them by skimping on the design and delivery of your solution. And like you and me, consumers in marginalized communities seek enjoyment and entertainment and fun too. Music has huge pull and potential… and don’t forget that pay-as-you-go comes with data!”

Mika shared an example where the technology that was introduced carried almost too much power with it. In this project, a mobile phone was loaded with videos and connected to a portable projector. Daycare workers and parents were able to watch good childcare practices from model early childhood care and development centers. "What we found out was that using new technology not seen before sometimes amplified the message so much that caregivers wanted to discard what they already knew and replace it with what they saw on the screen from the model daycare centers." Though the project showed the power of tech, unintended consequences may come up at the intersection of software, hardware and 'peopleware'.

Mika talked about another project in Uganda that supported parents' involvement in school activities. Plan realized that men were more willing to come to parent-teacher meetings once they introduced a mobile SMS service through which they sent invitations. The technology lowered the threshold for men to participate in issues they might have previously considered 'women's issues'. These subtle dynamics in the local context can have a big influence on how an innovation works, he noted.

Mika's takeaways for startups and innovators were that civil society organizations might offer good synergy for co-designing, testing and distributing products and services. "I've seen startups getting needs and ideas from the ground through NGOs, and then innovating products and services together. For example we produced a start-up mobile data gathering tool called Poimapper based on the needs coming from our frontline staff. We did on-the-ground pilots and product development in Kenya with actual end users who gave crucial feedback to make the service work well. Peopleware matters and partnering with NGOs can help startups to get it right," he said. "INGOs often have a wide presence around the world, and they are on the ground in communities and the surrounding society. They know quite a lot about peopleware, participatory methods, and community engagement. Then again, they don't necessarily have the same agility and fast innovation processes combined with new business models that startups are often good at. So, my advice to NGOs is to go and meet startups, and vice versa."

I added that it's important to understand who has access to and control of devices, and to ensure that a product or service is valuable to people in the long term. So first — who owns the phone? Who controls it? Often the story is that everyone has a phone, but you may find that some people own two phones while others have none. You may find that the people you least expect to have phones have them or can access them, and those you'd think would have a phone don't. This is especially critical when working with girls and women, who typically have lower access and control – and of course you should be sure the project is including girls and women!

Also, you may be working with people who have very little disposable cash – but if your application or idea saves time and money and meets a real need, they may be willing to move their resources from one thing to another. For example, using solar for light and charging up phones can save money and time as well as eliminate the health risks of kerosene lamps. However, you need to make sure that what you offer is a long-term and sustainable change. When people have limited resources, they'll be hesitant to invest in something new if they are not assured that it will be available, sustainable and cheaper in the long term.

Lastly, as Mika said, partnering with non-profits can offer start-ups a way to reach communities, because some non-profits are quite well-known and respected by the community (though of course, some are not!). But ethical non-profits will not risk their reputations on ideas that they do not believe in, that are unconvincing, or that seem to take advantage of the poor. Start-ups will need to have clear ideas and evidence that a proposition is solid, because most non-profits have a low tolerance for risk and failure and (one hopes) a higher ethical standard than a basic money-making operation.

Tanya closed us out by summing up the key points:

  1. People are your critical success factor. “People” include your end-user as well as those that you may be partnering with.
  2. Context is king! Understand the social dynamics, know who owns and controls the device, know what people spend money on.
  3. Build a better business model.
  4. Understand the value proposition — Figure out how your application/tool/innovation can help save precious $ and time.
  5. Understand your partners — Remember that brand and reputation are very important to non-profits, and they don’t like risk.

Thanks to Tanya and Mika for co-collaboration on the Fireside Chat and this blog post!


Technology Salon Helsinki kicked off as part of Slush, a fantastic start-up and technology event that takes place with about 10,000 people every Fall in the Finnish capital. Slush added a social impact stream for the first time this year, making it a good fit for Technology Salon. Plan Finland organized the Salon and Netlight hosted.

Our topic for this Salon was broad – how can technology increase social impact? – but lead discussants (Jussi Hinkkanen of Fuzu, René Parker from rLabs, and Mika Valitalo of Plan Finland) brought inspiring personal stories, fundamental questions, practical experiences, challenges and questions that made for an intimate and lively conversation that incorporated expertise from everyone in the room.

The discussion raised a number of key points for social impact start-ups and those working in the development space:

1. Making a direct contribution to social impact is a prime motivator. Most people in the room who considered themselves to be entrepreneurs or who felt they were working with a ‘start-up’ or ‘social innovation’ mentality had tried different pathways before landing on their current one, yet had found them unsatisfying due to bureaucracy, lack of agility, unsustainable efforts, systems not based on merit, and feelings of not being able to input into or control decisions. “Do I want a job where I’m comfortable, well-paid and getting accolades for the supposed social good I’m doing, but where I know I’m not having any real impact, or do I want to be somewhere that I’m paid less but I’m actually doing something worthwhile?” summed up one participant.

2. It’s not clear how to best achieve social impact at scale. There was some disagreement in the room regarding whether it was better to work outside of the system to avoid the above-noted problems with corporate social responsibility efforts, governments, multi-laterals and international development agencies, or whether it was imperative to work with those institutions in order to achieve longer-lasting impact at scale. Questions were also raised about what is meant by scale. If we help communities to demand better government services through some kind of innovative approach, that can also lead to a scaled impact and more resources and social good coming into a community, even though the scaled impact is not so directly attributable. The big question is how to achieve scale yet remain locally relevant and contextually sensitive.

3. Keeping a social impact focus is a challenge. It's critical to think about both social impact and sustainability from the very beginning, participants agreed. A social impact start-up, like any business, needs to pay salaries and other costs, so it needs a good business model that brings in enough revenue. "If you do not show revenue and growth, you will drive off investors," said others, "and then your start-up won't grow." Yet those in the lowest income bracket will not have the highest capacity to pay for services, and donors often have policies prohibiting them from funding profit-building entities, even if they start off as non-profits. Ensuring that investors have a social impact motivation so that the mission of the start-up does not skew as it grows can also be a challenge. 'Social impact investing' addresses this area somewhat; however, "as a start-up entrepreneur," said one participant, "you know that next-phase investors don't like it if you have an impact investor already on board, so that makes it difficult to get further funding." This all poses real challenges for start-ups.

4. Social good is in the eye of the beholder. Everyone will say that their company is values-based and that it’s ‘doing good’ but who decides on and judges the social function of a company? “Maybe one way is to see if it motivates Generation Y,” said one participant. Another pointed out that one company might be doing something that is perceived as ‘socially good’, but it might have a very small impact. Whereas another company might be doing something not perceived as ‘socially good’ (say, selling clothing) yet it has embedded strong values, good business ethics, pays workers well with good benefits, doesn’t pollute the environment and contributes to local economic growth in a large way. People won’t think of the second company as doing social good even if its social impact is greater than the first company. The idea of social impact is largely in the mind of the beholder, concluded one person, it’s in the psyche.

5. Staying true to social impact values in the long-term is difficult. As one discussant noted, keeping the social impact mindset requires constant consideration as to whether you are doing good with and for your employees, but you also need to ask the community that you are serving what they think. “It’s easy to say you are doing social good, but if you go directly to ask people in the community whether your initiative is doing what it says and if it’s having a good impact, you’ll see it’s not easy. When an investor comes along who wants to change things, you always have to go back to look at who you are, how you started, how a particular change will impact the organization, and how it will impact on the thousands of people who rely on you.”

6. A sustainable business model helps bring autonomy according to one discussant. A start-up can remain agile and make its own decisions if there are no donors or external funders. Having its own sustainable revenue stream will allow it to stay true to its vision and to community needs, or at least provide enough to cover staff and operations costs. However, partnership and collaboration are key. “You have to work with other people whether you like it or not. If you are working as a social impact start-up, you’ll need to partner with those already working in the community, and work with everyone to bring in their part. Just because there is a community out there somewhere, you can’t assume that they don’t know what is happening or that they don’t know anything. You need to partner with these local groups and work with the existing community context and structures.”

7. An innovative business model trumps innovative technology. Many of the places where non-profits are working and where people may think about ‘social good’ start-ups are those where the market doesn’t work and people have very few resources. Yet these are the very people we want to support the most in terms of social impact, said one discussant, so how can we do it? Targeting solutions and payment for different parts of the markets might be one way, for example, offering a solution to the segment of the market that can pay and in that way extending the services to those who cannot pay. “The most innovative thing here is the business model, not the technological solution,” advised another person. “And if you really listen to people and you build according to people’s needs, you may uncover needs as well as new markets and business models.” Your services will need to keep evolving over time, however, as people’s needs and the context changes. “You need to go there and spend time with people in order to deeply understand their needs, their contexts and their behaviors.”

8. People won’t think like you think. Another participant quoted activists in the disability movement “Nothing about us without us,” saying that start-ups should follow that mantra also. All the really bad examples of NGO, government, development or corporate failures have been when people are looking top-down or outside-in, she said. “When you think ‘since those people are poor, they have nothing, they will really want this thing I’m going to give them,’ you will fail,” she added. “People everywhere already have values, knowledge, relationships, things that they themselves value. This all impacts on what they want and what they are willing to receive. The biggest mistake is assuming that you know what is best, and thinking ‘these people would think like me if I were them.’ That is never the case.”

9. There is space for various approaches. You won’t want one single product or service to monopolize, said one person. “There are roles and limitations for different entities in any community. There are some non-income generating things that can and need to happen, and that is actually fine. It used to be a charity and welfare mentality, but now we think markets will solve everything. Neither extreme is correct. We need to have space for various partners and efforts.” At the same time, there needs to be space for different partners at different stages in time. It is important for the various partners to understand what their role is. Emergency support is good in an immediate post-conflict stage, for example, but then humanitarian organizations need to step aside and open space for other actors when a community or country moves to a more stable development and growth period.

10. It’s difficult to find investors for social impact in ‘the South.’ The perceived risk in investing in start-ups that want to ‘go South’ or start-ups already based in ‘the South’ makes it hard to find investors. “Finnish investors are myopic,” said one person. “Finland has already provided examples of how companies can access these new opportunities and also have a social impact. Spending power has skyrocketed in some countries. If investors looked properly, they would see the potential of making more money in some of these vast markets than they can in Europe or Finland,” noted another person. The risk is indeed greater due to various elements in some of these countries, added one person. “It’s like courtship – you can’t go after people who are not in your league or not right for you. But if you find the right investor who understands the risk as well as the significant potential returns, it can be a great marriage.”

11. NGOs and start-ups can be great partners. They can come up with ideas from scratch, or they can partner later in the process. NGOs can take advantage of start-up applications and services, whereas the start-ups can find new customers, build a portfolio, do field-testing and get feedback on what to improve with their idea. In addition the two have a lot to teach each other, said one discussant. “NGOs can learn a lot from start-ups about how to operate. They should be learning how to think about iterative improvements, pivoting and changing quickly, failing fast and learning fast.” Start-ups can also learn from NGOs. “Some NGOs are quite good at participatory practices, knowing the community well, collaborating at multiple levels with various stakeholders, communities and governments.” In addition, community-based organizations know the community very well and often work together well with start-ups and NGOs.

12. Pacing and timing can make collaboration tricky. The pacing of these different organizations and partners varies considerably, and that causes friction and frustration. Even so, large multilateral agencies can be helpful for start-ups that want to gain entry into different countries or communities, because they are well-known and can provide an ethical and legal framework that helps protect the start-up from making big mistakes due to a lack of understanding of these key elements. NGOs can also serve as a kind of infrastructure upon which to build start-up efforts. A lack of NGO and donor agility, however, sometimes causes efforts to fail, and hybrid funding models that can enable start-up-NGO collaboration are needed. One discussant emphasized that start-ups should generate their own funding while also seeking donor funds for some things, but should never do anything for a donor that falls outside the organization's core mission.

13. You need to lose the ego. In every sector, egos and brands get in the way of social impact. Start-up founders have egos too; the start-up personality is often one that wants the spotlight, or a start-up may need to act a certain way in order to obtain funding, and this can be detrimental. "For social impact work, we need to think about catalyzing something, not being the center of it. We need to help bring snowballs to the top of the hills, and then let them roll down on their own without branding," recommended one participant. "We hear that 60% of mHealth initiatives die before they thrive. They are isolated, with little connection and interface with one another. We need more platforms and sharing, fewer egos and brands."

Next Technology Salon Helsinki: Plan Finland is hoping to continue convening in Helsinki. If you are interested, sign up to get invitations at Technology Salon!

I'd also recommend attending Slush next year – especially if you like high energy, high-tech, Helsinki and lasers! I'm sure next year's impact stream will be as good as or even better than this year's.

Thanks again to Plan for convening and sponsoring the first Salon, to Slush for including it as part of their Social Impact Stream, and to Netlight for hosting at their beautiful offices!

