On November 14, Technology Salon NYC met to discuss the role of film and video in development and humanitarian work. Our lead discussants were Ambika Samarthya from Praekelt.org, Lina Srivastava of CIEL, and Rebekah Stutzman from Digital Green’s DC office.

How does film support aid and development work?

Lina proposed that there are three main reasons for using video, film, and/or immersive media (such as virtual reality or augmented reality) in humanitarian and development work:

  • Raising awareness about an issue or a brand and serving as an entry point or a way to frame further actions.
  • Community-led discussion/participatory media, where people take agency and ownership and express themselves through media.
  • Catalyzing movements themselves, where film, video, and other visual arts are used to feed social movements.

Each of the above is aimed at a different audience. “Raising awareness” often only scratches the surface of an issue and can have limited impact if done on its own, without additional actions. Community-led efforts tend to go deeper and focus on the learning and impact of the process (rather than the quality of the end product), but they usually reach fewer people (and thus have a higher cost per person and less scale). When using video to catalyze movements, the goal is normally to bring people into a longer-term advocacy effort.

In all three instances, there are issues with who controls access to tools/channels, platforms, and distribution channels. Though social media has changed this to an extent, there are still gatekeepers that impact who gets to be involved and whose voice/whose story is highlighted, funders who determine which work happens, and algorithms that dictate who will see the end products.

Participants suggested additional ways that video and film are used, including:

  • Social-emotional learning, where video is shown and then discussed to expand on new ideas and habits or to encourage behavior change.
  • Personal transformation through engaging with video.

Becky shared Digital Green’s participatory approach, in which community members use video to help themselves and those around them. The organization supports community members to film videos about their agricultural practices, and these are then taken to nearby communities to share and discuss. (More on Digital Green here.) Video doesn’t solve anyone’s development problem all by itself, Becky emphasized. If an agricultural extensionist is no good, having a video as part of their training materials won’t solve that. “If they have a top-down attitude, don’t engage, don’t answer questions, etc., or if people are not open to changing practices, video or no video, it won’t work.”

How can we improve impact measurement?

Questions arose from Salon participants around how to measure the impact of film in a project or wider effort. Overall, impact measurement in the world of film for development is weak, noted one discussant, because change takes a long time and is hard to track. We are often encouraged to focus on the wrong things, such as “vanity metrics” like “likes” and “clicks,” but these don’t speak to the longer-term, deeper impact of a film, and they are often inappropriate for the film’s actual audience (e.g., are we interested in the impact on the local audience affected by the problem, or on the external audience being encouraged to care about it?).

Digital Green measures behavior change based on uptake of new agriculture practices. “After the agriculture extension worker shows a video to a group, they collect data on everyone that’s there. They record the questions that people ask, the feedback about why they can’t implement a particular practice, and in that way they know who is interested in trying a new practice.” The organization sets indicators for implementing the practice. “The extension worker returns to the community to see if the family has implemented a, b, c and if not, we try to find out why. So we have iterative improvement based on feedback from the video.” The organization does post their videos on YouTube but doesn’t know if the content there is having an impact. “We don’t even try to follow it up as we feel online video is much less relevant to our audience.” An organization that is working with social-emotional learning suggested that RCTs could be done to measure which videos are more effective. Others who work on a more individual or artistic level said that the immediate feedback and reactions from viewers were a way to gauge impact.
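To make the feedback loop described above concrete, here is a minimal sketch of this kind of adoption tracking. The field names and data are invented for illustration; they are not Digital Green’s actual schema or tooling.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record of one household's response to a video screening.
@dataclass
class ScreeningRecord:
    household: str
    attended: bool
    adopted_practice: bool
    barrier: Optional[str]  # reason given for non-adoption, if any

def uptake_rate(records):
    """Share of attending households that adopted the promoted practice."""
    attended = [r for r in records if r.attended]
    if not attended:
        return 0.0
    return sum(r.adopted_practice for r in attended) / len(attended)

def common_barriers(records):
    """Tally reported barriers, to feed back into the next video iteration."""
    tally = {}
    for r in records:
        if r.attended and not r.adopted_practice and r.barrier:
            tally[r.barrier] = tally.get(r.barrier, 0) + 1
    return tally

records = [
    ScreeningRecord("hh-01", True, True, None),
    ScreeningRecord("hh-02", True, False, "no access to seed"),
    ScreeningRecord("hh-03", True, False, "no access to seed"),
    ScreeningRecord("hh-04", False, False, None),
]
print(uptake_rate(records))      # fraction of attendees who adopted
print(common_barriers(records))  # barriers to address in the next iteration
```

The point is the iterative loop, not the data structure: barriers surfaced at one screening shape the next round of videos and follow-up visits.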

Donors often have different understandings of useful metrics. “What is a valuable metric? How can we gather it? How much do you want us to spend gathering it?” commented one person. Larger, longer-term partners who are not one-off donors will have a better sense of how to measure impact in reasonable ways. One person who formerly worked at a large public television station noted that it was common to have long conversations about measurement, goals, and aligning to the mission. “But we didn’t go by numbers, we focused on qualitative measurement.” She highlighted the importance of having these conversations with donors and asking them “why are you partnering with us?” Being able to say no to donors is important, she said. “If you are not sharing goals and objectives you shouldn’t be working together. Is gathering these stories a benefit to the community? If you can’t communicate your actual intent, it’s very complicated.”

The goal of participatory video is less about engaging external (international) audiences or branding and advocacy. Rather, it focuses on building skills and capacities through the process of video making. Here, impact measurement relates more to individual, often self-reported, skills such as confidence, finding your voice, public speaking, teamwork, leadership, critical thinking, and media literacy. The quality of video production in these cases may be low, and the videos unsuitable for widespread circulation; however, the process and product can be catalysts for local-level change and locally-led advocacy on themes and topics that are important to the video-makers.

Participatory video suffers from low funding levels because it doesn’t reach the kind of scale that is desired by funders, though it can often contribute to deep, personal and community-level change. Some felt that even if community-created videos were of high production quality and translated to many languages, large-scale distribution is not always feasible because they are developed in and speak to/for hyper-local contexts, thus their relevance can be limited to smaller geographic areas. Expectation management with donors can go a long way towards shifting perspectives and understanding of what constitutes “impact.”

Should we re-think compensation?

Ambika noted that there are often challenges related to incentives and compensation when filming with communities for organizational purposes (such as branding or fundraising). Organizations are usually willing to pay people for their time in places such as New York City, and less inclined to do so when working with a rural community that is perceived to benefit from an organization’s services and projects. Perceptions by community members that a filmmaker is financially benefiting from video work can be hard to overcome, which means conflict may arise during non-profit filmmaking aimed at fundraising or brand building. Even when individuals and communities are aware that they will not be compensated directly, there is still often some type of financial expectation, noted one Salon participant, such as the purchase of local goods and products.

Working closely with gatekeepers and community leaders can help to ease these tensions. When filmmaking takes several hours or days, however, participants may be visibly stressed or concerned about household or economic chores that are falling to the side during filming, and this can be challenging to navigate, noted one media professional. Filming in virtual reality can exacerbate this problem, since VR filming is normally over-programmed and repetitive in an effort to appear realistic.

One person suggested a change in how we approach incentives. “We spent about two years in a community filming a documentary about migration. This was part of a longer research project. We were not able to compensate the community, but we were able to invest directly in some of the local businesses and to raise funds for some community projects.” It’s difficult to understand why we would not compensate people for their time and their stories, she said. “This is basically their intellectual property, and we’re stealing it. We need a sector rethink.” Another person agreed, “in the US everyone gets paid and we have rules and standards for how that happens. We should be developing these for our work elsewhere.”

Participatory video tends to have less of a challenge with compensation. “People see the videos, the videos are for their neighbors. They are sharing good agricultural or nutrition approaches with people that they already know. They sometimes love being in the videos and that is partly its own reward. Helping people around them is also an incentive,” said one person.

There were several other rabbit holes to explore in relation to film and development, so look for more Salons in 2018!

To close out the year right, join us for ICT4Drinks on December 14th at Flatiron Hall from 7-9pm. If you’re signed up for Technology Salon emails, you’ll find the invitation in your inbox!

Salons run under Chatham House Rule so no attribution has been made in this post. If you’d like to attend a future Salon discussion, join the list at Technology Salon.

 


I used to write blog posts two or three times a week, but things have been a little quiet here for the past couple of years. That’s partly because I’ve been ‘doing actual work’ (as we like to say) trying to implement the theoretical ‘good practices’ that I like soapboxing about. I’ve also been doing some writing in other places and in ways that I hope might be more rigorously critiqued and thus have a wider influence than just putting them up on a blog.

One of those bits of work that’s recently been released publicly is a first version of a monitoring and evaluation framework for SIMLab. We started discussing this at the first M&E Tech conference in 2014. Laura Walker McDonald (SIMLab CEO) outlines why in a blog post.

Evaluating the use of ICTs—which are used for a variety of projects, from legal services and coordinating responses to infectious diseases, to media reporting in repressive environments, transferring money among the unbanked, and voting—can hardly be reduced to a check-list. At SIMLab, our past nine years with FrontlineSMS have taught us that isolating and understanding the impact of technology on an intervention, in any sector, is complicated. ICTs change organizational processes and interpersonal relations. They can put vulnerable populations at risk, even while improving the efficiency of services delivered to others. ICTs break. Innovations fail to take hold, or prove to be unsustainable.

For these and many other reasons, it’s critical that we know which tools do and don’t work, and why. As M4D edges into another decade, we need to know what to invest in, which approaches to pursue and improve, and which approaches should be consigned to history. Even for widely-used platforms, adoption doesn’t automatically mean evidence of impact….

FrontlineSMS is a case in point: although the software has clocked up 200,000 downloads in 199 territories since October 2005, there are few truly robust studies of the way that the platform has impacted the project or organization it was implemented in. Evaluations rely on anecdotal data, or focus on the impact of the intervention, without isolating how the technology has affected it. Many do not consider whether the rollout of the software was well-designed, training effectively delivered, or the project sustainably planned.

As an organization that provides technology strategy and support to other organizations — both large and small — it is important for SIMLab to better understand the quality of that support and how it translates into improvements, as well as how the introduction or improvement of information and communication technology contributes to impact at a broader scale.

This is a difficult proposition, given that isolating a single factor like technology is extremely tough, if not impossible. The Framework thus aims to get at the breadth of considerations that go into successful tech-enabled project design and implementation. It does not aim to attribute impact to a particular technology, but to better understand that technology’s contribution to the wider impact at various levels. We know this is incredibly complex, but thought it was worth a try.

As Laura notes in another blogpost,

One of our toughest challenges while writing the thing was to try to recognize the breadth of success factors that we see as contributing to success in a tech-enabled social change project, without accidentally trying to write a design manual for these types of projects. So we reoriented ourselves, and decided instead to put forward strong, values-based statements.* For this, we wanted to build on an existing frame that already had strong recognition among evaluators – the OECD-DAC criteria for the evaluation of development assistance. There was some precedent for this, as ALNAP adapted them in 2008 to make them better suited to humanitarian aid. We wanted our offering to simply extend and consider the criteria for technology-enabled social change projects.

Here are the adapted criteria that you can read more about in the Framework. They were designed for internal use, but we hope they might be useful to evaluators of technology-enabled programming, commissioners of evaluations of these programs, and those who want to do in-house examination of their own technology-enabled efforts. We welcome your thoughts and feedback — The Framework is published in draft format in the hope that others working on similar challenges can help make it better, and so that they could pick up and use any and all of it that would be helpful to them. The document includes practical guidance on developing an M&E plan, a typical project cycle, and some methodologies that might be useful, as well as sample log frames and evaluator terms of reference.

Happy reading, and we really look forward to any feedback and suggestions!

*****

The Criteria

Criterion 1: Relevance

The extent to which the technology choice is appropriately suited to the priorities, capacities and context of the target group or organization.

Consider: are the activities and outputs of the project consistent with the goal and objectives? Was there a good context analysis and needs assessment, or another way for needs to inform design – particularly through participation by end users? Did the implementer have the capacity, knowledge and experience to implement the project? Was the right technology tool and channel selected for the context and the users? Was content localized appropriately?

Criterion 2: Effectiveness

A measure of the extent to which an information and communication channel, technology tool, technology platform, or a combination of these attains its objectives.

Consider: In a technology-enabled effort, there may be one tool or platform, or a set of tools and platforms designed to work together as a suite. Additionally, the selection of a particular communication channel (SMS, voice, etc.) matters in terms of cost and effectiveness. Was the project monitored, were early snags and breakdowns identified and fixed, and was there good user support? Did the tool and/or the channel meet the needs of the overall project? Note that this criterion should be examined at outcome level, not output level, and should examine how the objectives were formulated, by whom (did primary stakeholders participate?) and why.

Criterion 3: Efficiency

Efficiency measures the outputs – qualitative and quantitative – in relation to the inputs. It is an economic term which signifies that the project or program uses the least costly technology approach (including both the tech itself, and what it takes to sustain and use it) possible in order to achieve the desired results. This generally requires comparing alternative approaches (technological or non-technological) to achieving the same outputs, to see whether the most efficient tools and processes have been adopted. SIMLab looks at the interplay of efficiency and effectiveness, and to what degree a new tool or platform can support a reduction in cost, time, along with an increase in quality of data and/or services and reach/scale.

Consider: Was the technology tool rollout carried out as planned and on time? If not, what were the deviations from the plan, and how were they handled? If a new channel or tool replaced an existing one, how do the communication, digitization, transportation and processing costs of the new system compare to the previous one? Would it have been cheaper to build features into an existing tool rather than create a whole new tool? To what extent were aspects such as cost of data, ease of working with mobile providers, total cost of ownership and upgrading of the tool or platform considered?
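As a toy illustration of the comparison this criterion asks for, the sketch below computes total cost of ownership per output for two hypothetical channels (a paper-based system and an SMS-based replacement). All numbers are invented; the point is the shape of the calculation, not the figures.

```python
# Hypothetical total-cost-of-ownership comparison between two approaches
# delivering the same outputs (e.g., field reports filed over three years).

def cost_per_output(setup, recurring_per_year, years, outputs):
    """Total cost of ownership (setup plus recurring costs) per output delivered."""
    total = setup + recurring_per_year * years
    return total / outputs

# Invented figures: an existing paper workflow vs. a new SMS-based tool.
paper_system = cost_per_output(setup=0, recurring_per_year=12_000,
                               years=3, outputs=9_000)
sms_system = cost_per_output(setup=8_000, recurring_per_year=5_000,
                             years=3, outputs=9_000)

print(f"paper: {paper_system:.2f} per report, SMS: {sms_system:.2f} per report")
```

Note that the setup cost makes the new tool look worse over short horizons; the comparison only favors it once recurring savings outweigh the initial investment, which is exactly why the criterion asks about total cost of ownership rather than sticker price.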

Criterion 4: Impact

Impact relates to the consequences of achieving or not achieving the outcomes. Impacts may take months or years to become apparent, and often cannot be established in an end-of-project evaluation. Identifying, documenting and/or proving attribution (as opposed to contribution) may be an issue here. ALNAP’s complex emergencies evaluation criteria include ‘coverage’ as well as impact: ‘the need to reach major population groups wherever they are.’ They note: ‘in determining why certain groups were covered or not, a central question is: “What were the main reasons that the intervention provided or failed to provide major population groups with assistance and protection, proportionate to their need?”’ This is very relevant for us.

For SIMLab, a lack of coverage in an inclusive technology project means not only failing to reach some groups, but also widening the gap between those who do and do not have access to the systems and services leveraging technology. We believe that this has the potential to actively cause harm. Evaluation of inclusive tech has dual priorities: evaluating the role and contribution of technology, but also evaluating the inclusive function or contribution of the technology. A platform might perform well, have high usage rates, and save costs for an institution while not actually increasing inclusion. Evaluating both impact and coverage requires an assessment of risk, both to targeted populations and to others, as well as attention to unintended consequences of the introduction of a technology component.

Consider: To what extent does the choice of communications channels or tools enable wider and/or higher quality participation of stakeholders? Which stakeholders? Does it exclude certain groups, such as women, people with disabilities, or people with low incomes? If so, was this exclusion mitigated with other approaches, such as face-to-face communication or special focus groups? How has the project evaluated and mitigated risks, for example to women, LGBTQI people, or other vulnerable populations, relating to the use and management of their data? To what extent were ethical and responsible data protocols incorporated into the platform or tool design? Did all stakeholders understand and consent to the use of their data, where relevant? Were security and privacy protocols put into place during program design and implementation/rollout? How were protocols specifically integrated to ensure protection for more vulnerable populations or groups? What risk-mitigation steps were taken in case of any security holes found or suspected? Were there any breaches? How were they addressed?

Criterion 5: Sustainability

Sustainability is concerned with measuring whether the benefits of a technology tool or platform are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable. For SIMLab, sustainability includes both the ongoing benefits of the initiatives and the literal ongoing functioning of the digital tool or platform.

Consider: If the project required financial or time contributions from stakeholders, are they sustainable, and for how long? How likely is it that the business plan will enable the tool or platform to continue functioning, including background architecture work, essential updates, and user support? If the tool is open source, is there sufficient capacity to continue to maintain changes and updates to it? If it is proprietary, has the project implementer considered how to cover ongoing maintenance and support costs? If the project is designed to scale vertically (e.g., a centralized model of tool or platform management that rolls out in several countries) or be replicated horizontally (e.g., a model where a tool or platform can be adopted and managed locally in a number of places), has the concept shown this to be realistic?

Criterion 6: Coherence

The OECD-DAC does not have a sixth criterion. However, we’ve riffed on ALNAP’s additional criterion of Coherence, which relates to the broader policy context (development, market, communication networks, data standards and interoperability mandates, national and international law) within which a technology was developed and implemented. We propose that evaluations of inclusive technology projects aim to critically assess the extent to which the technologies fit within the broader market: local, national, and international. This includes compliance with national and international regulation and law.

Consider: Has the project considered interoperability of platforms (for example, ensured that APIs are available) and standard data formats (so that data export is possible) to support sustainability and use of the tool in an ecosystem of other products? Is the project team confident that the project is in compliance with existing legal and regulatory frameworks? Is it working in harmony with, or against, the wider context of other actions in the area? For example, in an emergency situation, is it linking its information system with those that can feasibly provide support? Is it creating demand that cannot feasibly be met? Is it working with or against government or wider development policy shifts?
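As a small illustration of the data-export point, the sketch below writes the same records to both CSV and JSON so that other tools could import them. The records and field names are invented for the example.

```python
import csv
import io
import json

# Invented records standing in for whatever a tool collects in the field.
records = [
    {"id": "r-001", "date": "2017-11-01", "site": "clinic-a", "value": 42},
    {"id": "r-002", "date": "2017-11-02", "site": "clinic-b", "value": 17},
]

def export_csv(rows):
    """Serialize rows to CSV, a format nearly any other system can ingest."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def export_json(rows):
    """Serialize rows to JSON, the typical payload for an API endpoint."""
    return json.dumps(rows, indent=2)

print(export_csv(records))
print(export_json(records))
```

Supporting plain, documented export formats like these is a cheap insurance policy: if the tool is abandoned, the data still moves to whatever replaces it.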



Photo: Duncan Edwards, IDS.

A 2010 review of impact and effectiveness of transparency and accountability initiatives, conducted by Rosie McGee and John Gaventa of the Institute of Development Studies (IDS), found a prevalence of untested assumptions and weak theories of change in projects, programs and strategies. This week IDS is publishing their latest Bulletin titled “Opening Governance,” which offers a compilation of evidence and contributions focusing specifically on Technology in Transparency and Accountability (Tech for T&A).

It has a good range of articles that delve into critical issues in the Tech for T&A and Open Government spaces; help to clarify concepts and design; explore gender inequity as related to information access; and unpack the ‘dark side’ of digital politics, algorithms and consent.

In the opening article, editors Duncan Edwards and Rosie McGee (both currently working with the IDS team that leads the Making All Voices Count Research, Learning and Evidence component) give a superb in-depth review of the history of Tech for T&A and outline some of the challenges that have stemmed from ambiguous or missing conceptual frameworks and a proliferation of “buzzwords and fuzzwords.”

They unpack the history of and links between concepts of “openness,” “open development,” “open government,” “open data,” “feedback loops,” “transparency,” “accountability,” and “ICT4D (ICT for Development)” and provide some examples of papers and evidence that could help to recalibrate expectations among scholars and practitioners (and amongst donors, governments and policy-making bodies, one hopes).

The editors note that conceptual ambiguity continues to plague the field of Tech for T&A, causing technical problems because it hinders attempts to demonstrate impact; and creating political problems “because it clouds the political and ideological differences between projects as different as open data and open governance.”

The authors hope to stoke debate and promote the existing evidence in order to tone down the buzz. Likewise, they aim to provide greater clarity to the Tech for T&A field by offering concrete conclusions stemming from the evidence that they have reviewed and digested.

Download the Opening Governance report here.


By Mala Kumar and Linda Raftree

Our April 21st NYC Technology Salon focused on issues related to the LGBT ICT4D community, including how LGBTQI issues are addressed in the context of stakeholders and ICT4D staff. We examined specific concerns that ICT4D practitioners who identify as LGBTQI have, as well as how LGBTQI stakeholders are (or are not) incorporated into ICT4D projects, programs and policies. Among the many issues covered in the Salon, the role of the Internet and mobile devices for both community building and surveillance/security concerns played a central part in much of the discussion.

To frame the discussion, participants were asked to think about how LGBTQI issues within ICT4D (and more broadly, development) are akin to gender. Mainstreaming gender in development starts with how organizations treat their own staff. Implementing programs, projects and policies with a focus on gender cannot happen if the implementers do not first understand how to treat staff, colleagues and those closest to them (i.e. family, friends). Likewise, without a proper understanding of LGBTQI colleagues and staff, programs that address LGBTQI stakeholders will be ineffective.

The lead discussants of the Salon were Mala Kumar, writer and former UN ICT4D staff, Tania Lee, current IRC ICT4D Program Officer, and Robert Valadéz, current UN ICT4D staff. Linda Raftree moderated the discussion.

Unpacking LGBTQI

The first discussant pointed out how we as ICT4D/development practitioners think of the acronym LGBTQI, particularly the T and I – transgender and intersex. Often, development work focuses on the sexual identity portion of the acronym (the LGBQ), and not what is considered in Western countries as transgenderism.

As one participant said, the very label of “transgender” is hard to convey in many countries where “third gender” and “two-spirit gender” exist. These disagreements in terminology have, in Bangladesh and Nepal for example, created conflict and divisions of interest within LGBTQI communities. In other countries, such as Thailand and parts of the Middle East, “transgenderism” can be considered more “normal” or societally acceptable than homosexuality. Across Africa, Latin America, North America and Europe, homosexuality is a better understood – albeit sometimes severely criminalized and socially rejected – concept than transgenderism.

One participant noted from her previous first-hand work on services for lesbian, gay and bisexual people that, in North America, transgender communities are often given lower priority in LGBTQI services. In many cases she saw in San Francisco, homeless youth would identify as anything in order to gain access to needed services. Only after the services were provided did the beneficiaries realize the consequences of self-reporting, or of incorrectly self-reporting.

Security concerns within Unpacking LGBTQI

For many people, the very notion of self-identifying as LGBTQI poses severe security risks. From a data collection standpoint, this creates large problems for accurately representing populations. It also raises privacy concerns. As one discussant mentioned, development and ICT4D teams often do not have the technical capacity (i.e., statisticians, software engineers) to properly anonymize data and/or keep data on servers safe from hackers. On the other hand, the biggest threat to security may just be “your dad finding your phone and reading a text message,” as one person noted.
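To illustrate one basic step such a team might take, here is a minimal sketch that replaces a direct identifier with a salted hash before data is stored or shared. This is an illustrative example with invented field names, not a complete protection scheme: quasi-identifiers (age, location, and so on) can still re-identify people and need separate treatment.

```python
import hashlib
import secrets

# The salt must be kept secret and never stored alongside the data;
# otherwise hashed phone numbers can be recovered by brute force.
SALT = secrets.token_bytes(16)

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, salted pseudonym."""
    digest = hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()
    return digest[:12]

# Invented survey record containing direct identifiers.
record = {"name": "A. Respondent", "phone": "+254700000000", "response": "yes"}

safe_record = {
    "respondent_id": pseudonymize(record["phone"]),  # same person, same pseudonym
    "response": record["response"],                  # direct identifiers dropped
}
print(safe_record)
```

The stable pseudonym lets the team link follow-up responses from the same person without ever storing the phone number itself.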

Being an LGBTQI staff in ICT4D

Our second lead discussant spoke about being (and being perceived as) an LGBTQI staff member in ICT4D. She noted that many ICT4D hubs, labs, and centers are in countries that are notoriously homophobic. Examples include Uganda (Kampala), Kenya (Nairobi), Nigeria (Abuja, Lagos), Kosovo and Ethiopia (Addis). This puts people who are interested in technology for development and are queer at a distinct disadvantage.

Some of the challenges she highlighted include that ICT4D attracts colleagues from around the world who are the most likely to be adept at computers and Internet usage, and therefore more likely to seek out and find information about other staff and colleagues online. If those who are searching are homophobic, “evidence” against colleagues can be both easy to find and easy to disseminate. Along those lines, ICT4D practitioners are encouraged (and sometimes required) to blog, use social media, and keep an online presence; in fact, many people in ICT4D find posts and contracts this way. However, keeping online professional and personal presences completely separate is incredibly challenging. Since ICT4D practitioners work with the colleagues most likely to actually find them online, queer ICT4D practitioners face a unique dilemma.

ICT4D practitioners are arguably the people within development best positioned to use technology and programmatic knowledge to self-advocate as LGBT staff and for LGBT stakeholder inclusion. But how are queer ICT4D staff supposed to balance safety concerns and professional advancement limitations when dealing with homophobic staff? This issue is further compounded (especially in the UN, as one participant noted) by the commonly used project-based contracts, which give staff little to no job security, bargaining power, or general protection when working overseas.

Security concerns within being an LGBTQI staff in ICT4D

A participant who works in North America for a Kenyan-based company said that none of her colleagues ever mentioned her orientation, even though they must have found her publicly viewable blog on gender and she is not able to easily disguise her orientation. She talked about always finding and connecting to the local queer community wherever she goes, often through the Internet, and tries to support local organizations working on LGBT issues. Still, she and several other participants and discussants emphasized their need to segment online personal and professional lives to remain safe.

Another participant mentioned his time working in Ethiopia. The staff from the center he worked with made openly hostile remarks about gays, which reinforced his need to stay closeted. He noticed that the ICT staff of the organization made a concerted effort to research people online, and that Facebook made it difficult, if not impossible, to keep personal and private lives separate.

Another person reiterated this point by saying that as a gay Latino man, and the first person in his family to go to university, grad school and work in a professional job, he is a role model to many people in his community. He wants to offer guidance and support, and used to do so with a public online presence. However, at his current internationally-focused job he feels the need to self-censor and has effectively limited talking about his public online presence, because he often interacts with high level officials who are hostile towards the LGBTQI community.

One discussant also echoed this idea, saying that she is becoming a voice for the queer South Asian community, which is important because much of LGBT media is very white. The trade-off is that becoming this voice compromises her career in the field: she has had to turn down many posts because they do not offer adequate support and security.

Intersectionality

Several participants and discussants offered their own experiences of the various levels of hostility and danger involved in even being suspected of being gay. One (female) participant began a relationship with a woman while working in a very conservative country, and recalled being terrified of being killed over the relationship. Local colleagues began to suspect, and eventually physically intervened by showing up at her house. This participant cited her “light-skinned privilege” as one reason she did not suffer serious consequences.

Another participant recounted his time with the US Peace Corps. After a year, he started coming out and dating people in his host country. When one relationship went awry and he was turned in to the police for being gay, nothing came of the charges. Meanwhile, he saw local gay men being thrown into – and sometimes dying in – jail on the same charges. He and some other participants noted their relative privilege in these situations because they are white; as a white male, he said, he felt a sense of invincibility.

In contrast, a participant from an African country described his experience growing up and using ICTs as an escape, because any physical indication that he was gay would have landed him in jail, or worse. He had to learn how to change his mannerisms to be more masculine, how to disengage from social situations in real life, and how to live in the shadows.

One of the discussants echoed these concerns, saying that as a queer woman of color, everything is compounded. She was recruited for a position at a UN Agency in Kenya, but turned the post down because of the hostility towards gays and lesbians there. However, she noted that some queer people she has met – all white men from the States or Europe – have had overall positive experiences being gay with the UN.

Perceived as predators

One person brought up the “predator” stereotype often associated with gay men. He and his partner have had to turn down media opportunities where they could have served as role models for the gay community, especially poor queer men of color (one of the most difficult groups to reach), out of fear that this stereotype might affect their being hired by organizations that serve children.

Monitoring and baiting by the government

One participant who grew up in Cameroon mentioned that queer communities in his country use the Internet cautiously, even though it’s the best resource to find other queer people. The reason for the caution is that government officials have been known to pose as queer people to bait real users for illegal gay activity.

Several other participants cited the same phenomenon in different forms. A recent article described Egypt’s use of new online surveillance tactics to find LGBTQI people. Some believe this type of surveillance will also spread to Nigeria, a country notoriously hostile towards LGBTQI persons, and to other places.

There was also discussion about which technologies are safest for LGBTQI people. While Internet use can be monitored and traced back to a specific user, being able to connect from multiple access points and with varying levels of security creates a degree of anonymity that phones cannot provide. A phone is also generally carried on the person, so if the government intercepts a message on either the originating or receiving device, the implication is immediate unless the user can convince authorities that the device was stolen or used by someone else. On the other hand, phones are more easily disposable, and in several countries do not require registration (or a registered SIM card) to a specific person.

In Ethiopia, the government controls the phone networks and can in theory monitor messages for LGBTQI activity. This poses a particular threat since there is already legal precedent for convictions based on text messages. In some countries, major telecom carriers are owned by the national government; in others, they are national subsidiaries of an international company.

Another major concern raised relates back to privacy. Many major international development organizations lack the capacity to retain the software engineers, ICT architects, system operators, statisticians and other technology staff needed to properly guard against hacks and surveillance. In some cases this work is even illegal under national government policy, and thus also requires legal advocacy. The mere collection of data and information can therefore pose a security threat to staff and stakeholders, LGBTQI and allies alike.

The “queer divide”

One discussant asked the group for data or anecdotal information related to the “queer divide.” Divides are a commonly understood problem in ICT4D work: between genders, urban and rural, rich and poor, socially accepted and socially marginalized. Studies have also clearly demonstrated that people who are naturally extroverted and not shy benefit more from any given program or project. Is there, then, any data to support a “queer divide” between those who are LGBTQI and those who are not, he wondered. As the sections above show, many queer people are forced to disengage socially and retreat from “normal” society to stay safe.

Success stories, key organizations and resources

Participants mentioned organizations and examples of more progressive policies for LGBTQI staff and stakeholders (this list is not comprehensive, nor does it suggest these organizations’ policies are foolproof), including:

We also compiled a much more extensive list of resources on the topic here as background reading, including organizations, articles and research. (Feel free to add to it!)

What can we do moving forward?

  • Engage relevant organizations, such as Out in Tech and Lesbians who Tech, with specific solutions, such as coding privacy protocols for online communities and helping grassroots organizations target ads to relevant stakeholders.
  • Lobby smartphone manufacturers to increase privacy protections on mobile devices.
  • Lobby the US and other national governments to introduce “right to be forgotten” laws, which allow Internet users to wipe records of themselves and their personal activity.
  • Support organizations and services that offer legal counsel to those in need.
  • Demand better and more comprehensive protection for LGBTQI staff, consultants and interns in international organizations.

Key questions to work on…

  • In some countries, a government owns telecom companies. In others, telecom companies are national subsidiaries of international corporations. In countries where the government is actively surveilling, or planning to surveil, networks for LGBTQI activity, how does the type of telecom company factor in?
  • What datasets do we need on LGBTQI people for better programming?
  • How do we properly anonymize data collected? What are the standards of best practices?
  • What policies need to be in place to better protect LGBTQI staff, consultants and interns? What kind of sensitizing activities, trainings and programming need to be done for local staff and less LGBTQI sensitive international staff in ICT4D organizations?
  • How much capacity have ICT4D/international organizations lost as a result of their policies for LGBTQI staff and stakeholders?
  • What are the roles and obligations of ICT4D/international organizations to their LGBTQI staff, now and in the future?
  • What are the ICT4D and international development programmatic links with LGBT stakeholders and staff? How do LGBT stakeholders intersect with water? Public health? Nutrition? Food security? Governance and transparency? Human rights? Humanitarian crises? How do LGBT staff intersect with capacity building? Trainings? Programming?
  • How do we safely and responsibly increase the visibility of LGBTQI people around the world?
  • How do we engage tech companies that are pro-LGBTQI, including Google, to do more for those who cannot or do not engage with their services?
  • What are the economic costs of homophobia, and does this provide a compelling enough case for countries to stop systemic LGBTQI-phobic behavior?
  • How do we mainstream LGBTQI issues in bigger development conferences and discussions?
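On the question of properly anonymizing collected data, one common building block is pseudonymization: replacing direct identifiers (names, phone numbers) with salted hashes before a dataset is stored or shared, with the salt kept secret and separate from the data. The sketch below is a hypothetical illustration of that one step, not a complete anonymization standard; on its own it does not protect against re-identification from the remaining attributes.

```python
import hashlib
import secrets

def make_salt() -> str:
    """Generate a random secret salt; store it separately from the dataset."""
    return secrets.token_hex(16)

def pseudonymize(identifier: str, salt: str) -> str:
    """Replace a direct identifier with a salted SHA-256 hash.

    Same identifier + same salt always yields the same pseudonym, so records
    can still be linked within the dataset without exposing the raw value.
    """
    return hashlib.sha256((salt + identifier).encode("utf-8")).hexdigest()

salt = make_salt()

# Hypothetical survey record containing a direct identifier.
record = {"phone": "+254700000000", "answers": {"q1": "yes"}}

# The stored record keeps only the pseudonym, never the raw phone number.
safe_record = {
    "id": pseudonymize(record["phone"], salt),
    "answers": record["answers"],
}
```

If an attacker obtains the dataset but not the salt, they cannot recompute pseudonyms from guessed phone numbers; this is why the salt must never be stored alongside the data.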

Thanks to the great folks at ThoughtWorks for hosting and providing a lovely breakfast to us! Technology Salons are carried out under Chatham House Rule, so no attribution has been made. If you’d like to join us for Technology Salons in future, sign up here!

Read Full Post »

by Hila Mehr and Linda Raftree

On March 31, 2015, nearly 40 participants, joined by lead discussants Robert Fabricant, Dalberg Design Team; Despina Papadopoulos, Principled Design; and Roop Pal, PicoSatellite eXploration Lab, came together for Technology Salon New York City to discuss the future of wearables in international development. What follows is a summary of our discussion.

While the future of wearables is uncertain, major international development stakeholders are already incorporating wearables into their programs. UNICEF Kid Power is introducing wearables into the fight against malnutrition, and is launching a Global Wearables Challenge. The MUAC (mid-upper arm circumference) band already exists in international health. Other participants present were working on startups using wearables to tackle global health and climate change.

As Kentaro Toyama often says, “technology is an amplifier of human intent,” and the Tech Salon discussion certainly resonated with that sentiment. The future of wearables in international development is one that we – the stakeholders as consumers, makers, and planners – will create. It’s important to recognize the history of technology interventions in international development: while wearables enable a new future, technology interventions are not new, and there is a documented history of failures and successes to learn from. Key takeaways from the Salon, described below, include reframing our concept of wearables, envisioning what’s possible, tackling behavior change, designing for context, and recognizing the tension between data and privacy.

Reframing our Concept of Wearables

Our first discussant shared historical and current examples of wearables, some from as far back as the Middle Ages, and encouraged participants to rethink the concept of wearables by moving beyond the Apple Watch and existing, primarily health-related, use cases. Intel, Arm, and Apple want to put chips on and in our bodies, and we tend to think of these as the first wearables; but glasses have always been wearable, and watches are wearables that changed our notions of time and space. In short, technology has always been wearable. If we stay focused on existing, primarily luxury, use cases like the FitBit and Apple Watch, we lose our creativity in imagining new use cases for varying scenarios, he said.

In many cases of technology introduction into a ‘developing world’ context, the technology adds a burden rather than contributing ease. We should be thinking about how wearables can capture data without requiring input, for example. There is also an intimacy with wearables that could eliminate or reframe some of the ingrained paradigms with existing technologies, he noted.

In the most common use cases of wearables and other technology in international development, data is gathered and sent up the chain. Participants should rethink this model and use of wearables and ensure that any data collected benefits people in the moment. This, said the discussant, can help justify the act of wearing something on the body. The information gathered must be better incorporated into a personal-level feedback loop. “The more intimate technology becomes, the greater responsibility you have for how you use it,” he concluded. 

In the discussion of reframing our notion of wearables, our second discussant offered a suggestion as to why people are so fascinated with wearables. “It’s about the human body connected to the human mind,” she explained. “What is it to be human? That’s why we’re so fascinated with wearables. They enlarge the notion of technology, and the relationship between machine, human, and animal.”

Envisioning What’s Possible

In discussing the prominent use of wearables for data collection, one participant asked, “What is possible to collect from the body? Are we tracking steps because that is what we want to track or because that is what’s possible? What are those indicators that we’ve chosen and why?”

We need to approach problems by thinking about both our priorities and what’s possible with wearable technology, was one reply. “As consumers, designers, and strategists, we need to push more on what we want to see happen. We have a 7-year window to create technology that we want to take root,” noted our lead discussant.

She then shared Google Glass as an example of makers forgetting what it is to be human. While Google Glass is a great use case for doctors in remote areas or operators of complex machinery, Google Glass at dinner parties and in other social interactions quickly became problematic, requiring Google to publish guidelines for social use cases. “It’s great that it’s out there as a blatant failure to teach other designers to take care of this space,” she said.

Another discussant felt that the greatest opportunity lies in the hybrid space between the specialized and the generalized. The specialized use cases for wearables carry high medical value; then there are the generalized cases. New, expensive technology becomes cheaper and more accessible as it meets the hybrid use cases in between, which justify its cost and sophistication. Developing far-out, futuristic ideas, such as one lead discussant’s idea for a mind-controlled satellite, can also offer opportunities for those working with and studying technology to unpack and ‘de-scaffold’ the layers between the wearable technology itself and the data and future it may bring with it.

Tackling Behavior Change

One of the common assumptions with wearables is that our brains work in a mechanical way, and that if we see a trend in our data, we will change our behavior. But wearables have proven that is not the case. 

The challenge with wearables in the international development context is making sure that the data collected serves a market and consumer need — what people want to know about themselves — and that wearables are not only focused on what development organizations and researchers want to know. Additionally, the data needs to be valuable and useful to individuals. For example, if a wearable tracks iron levels but the individual doesn’t understand the intricacies of nutrition, their fluctuations in iron levels will be of no use.

Nike Plus and its FuelBand have been among the most successful activity trackers to date, argued one discussant, because of the online community created around the device. “It wasn’t the wearable device that created behavior change, but the community sharing that went with it.” One participant trained in behavioral economics noted the huge potential for academic research and behavioral economists in the data collected from wearables. A program she had worked on looked closely at the test-taking behaviors of boys versus those of girls, and wearables were able to track and detect specific behaviors that were later analyzed and compared.

Designing for Context

Mainstream wearables are currently tailored for the consumer profile of the 35-year-old male fitness buff. But how do we think about the broader population, on the individual and community level? How might wearables serve the needs of those in emergency, low resource, or conflict settings? And what are some of the concerns with wearables?

One participant urged the group to think more creatively. “I’m having trouble envisioning this in the humanitarian space. 5-10 years out, what are concrete examples of someone in Mali, Chad, or Syria with a wearable? How is it valuable? And is there an opportunity to leapfrog with this technology?”

Humanitarian disaster contexts often involve massive chaos, low literacy rates, and unreliable Internet connectivity, if Internet exists at all. How can wearables be useful in these cases? One participant suggested they could support better coordination and organization – such as a wearable that signals a warning siren to individuals in warzones, or one that signals when water delivery arrives – while keeping real restrictions in mind. For example, there are fears today about vaccines and other development agency interventions, and these may escalate with wearable or edible tracking devices.

No amount of creativity, however, replaces the realistic and sustainable value of developing technology that addresses real needs in local contexts. That’s where human-centered design and participatory processes play a vital role. Wearable products cannot be built in isolation without users, as various participants highlighted.

As one lead discussant said, we too often look at technology as a magic bullet and we need to avoid doing this again when it comes to wearables. We can only know if wearable technology is an appropriate use case by analyzing the environment and understanding the human body. In Afghanistan, she noted, everyone has an iPhone now, and that’s powerful. But not everyone will have a FitBit, because there is no compelling use case.

Appropriate use cases can be discovered by involving the community of practice from day one, making no assumptions, and showing and sharing methodology and processes. Makers and planners should also be wary of importing resources and materials and thereby creating an entirely new ecosystem. If a foreign product breaks and there is no access to materials and training, it won’t be fixed or sustainable. Designing for context also means designing with local resources, tailored to what the community currently has access to. At the same time, international development efforts and wearable technology should be about empowering people, not infantilizing them.

The value of interdisciplinary teams and systems maps cannot be overlooked, participants added. Wearables highlight our individual-centric nature, while systems thinking and mapping shows how we relate with ourselves, our community, and the world. Thinking about all of these levels will be important if wearables are to contribute to development in a positive way.

Tensions around Privacy, Data, and Unethical Uses

Wearables exist in tension with identity, intimacy, and privacy. As consumers, users, makers, and planners of wearables, we have to think critically and deeply about how we want our data to be shared. One discussant emphasized that we need to involve VCs, industry, and politicians in discussion of the ethical implications of wearable technology products. The political implications and erosion of trust may be even more complex in developing world contexts, making consortia and standards even more necessary.

One participant noted the risks of medical wearable technology and the lack of HIPAA privacy requirements in other countries. The lack of HIPAA should not mean that privacy concerns are glossed over. The ethics of testing apply no matter the environment, and testing completely inappropriate technology in a developing context just for the captive audience is ethically questionable.

Likewise, other participants raised the issue of wearables and other types of technology being used for torture, mind control and other nefarious purposes, especially as the science of ‘mind hacking’ and the development of wearables and devices inserted under the skin becomes more sophisticated.

Participants noted the value in projects like the EU’s Ethics Inside and the pressure for a UN Representative on privacy rights. But there is still much headway to be made as data privacy and ethical concerns only grow.

The Future We Wear

The rapid evolution of technology urges us to think about how technology affects our relationships with our body, family, community, and society. What do we want those relationships to look like in the future? We have an opportunity, as consumers, makers and planners of wearables for the international context to view ourselves as stakeholders in building the future opportunities of this space. Wearables today are where the Internet was during its first five mainstream years. Now is the perfect time to put our stake in the ground and create the future we wish to exist in.

***

Our Wearables and Development background reading list is available here. Please add articles or other relevant resources or links.

Other posts about the Salon, from Eugenia Lee and Hila Mehr.

Many thanks to our lead discussants and participants for joining us, and a special thank you to ThoughtWorks for hosting us and providing breakfast!

Technology Salons run under Chatham House Rule, therefore no attribution has been made in this summary post. If you’d like to join future Salons to discuss these and related issues at the intersection of technology and development, sign up at Technology Salon.

Read Full Post »

Our Technology Salon in New York City on Oct 7 focused on the potential of games and gamification in international development work. Joining us as lead discussants were Asi Burak (Games for Change), Nick Martin (TechChange), and Craig Savel and Stan Mierzwa from Population Council.

TechChange is a social enterprise that uses principles of gamification on its online education platform to incentivize learners to participate and interact. Games for Change has been around for 10 years, working on convening events and festivals about games for social change, curating and sharing these kinds of games, producing games in an effort to mainstream social issues, and providing strategy support for games. Population Council is using personalized avatars (participants select skin color and tone, weight, dress, piercings, and sexuality) to encourage US youth in low-income settings to provide health data through online surveys.

The three discussants provided background on their programs, benefits of gaming and gamification, and challenges they are facing. Then we opened up to a wider discussion, whose main points are summarized here:

Public perception about games is mixed. Some people believe in the potential of games for a range of goals. Interest in gaming in the field of development is growing quickly – but many organizations rush into gaming as a ‘silver bullet’ or trend and do not fully understand how to go about integrating game theory or creating games. Others believe games are shallow and violent and cannot be used for serious work like education or development.

More nuanced discussion is needed to distill the various ways that games can support development. “The conversation needs to move from ‘why would you use games?’ to ‘how can you use games? How do you succeed? What can we learn from failure? What methodologies work and what are the good practices?’” Games for Change is working on a typology report that would outline what games can do, when, where and how. The organization is also working to define how to measure success for different types of games. A learning/educational game is very different from an awareness-raising or a behavior change game, and approaches need to reflect that fact.

Data extraction was one concern. “Are we just creating games so that we can extract and use people’s data? Or so that we can shift their behaviors from one thing to another?” asked a Salon participant. “Why not co-create and co-design games that provide benefits to the players and their communities rather than extracting information for our own purposes? Where are the games that encourage two-way conversations with authorities or NGOs? How can we use game theory to open up these discussions to make for more transparent conversations?”

Games by their very nature are more participatory than other media, others felt. “Traditional media are top down and linear, providing a scripted story that someone designed for us and where we have no choice as to the consequences. We can’t change the path, and there is one beginning and one end. But games are about participation and feedback. People surprise us by how they play our games. They get takeaways that we didn’t anticipate. At the same time, you can glean a lot of information about people from how they use, play, and move through games. So games are also very good for collecting information and evaluating people and their behaviors.”

(Co)-creation of games is indeed happening all over the world. Rather than create games in New York or DC, there are huge opportunities for working with local developers to create games that resonate with local audiences and communities. Some organizations use open source, free software such as Scratch, a game making software that even children are using to create games. “In Colombia, we used Scratch to engage kids who were going to Internet cafes and just doing emails and chatting. We trained the kids in 30 minutes and they learned to program in blocks and to do animations and make games,” reported one Salon participant.

Games and gamification resonate with all cultures because they stem from play, and all cultures play. There are cultural differences, however, that need to be considered, and even more important may be aspects like gender barriers, urban/rural differences, and access to the technology and platforms on which digital games are played. In rural Uganda, for example, one organization is using gaming in an online learning program with 3,000 pharmacists to train them on how to use a new rapid diagnostic test for treating malaria. The gaming was not a problem, as the group was well versed in gaming concepts; the challenge was teaching the group to use computers. It’s likely that fewer women and girls game in some contexts because of their lower access to technology, lack of time, and cultural/gender barriers.


Moving games to mobile may help reduce technology barriers. SMS gaming has been quite successful in some cases. One example cited at the Salon was DoSomething.org, which created an SMS-based game around teen pregnancy that girls and their boyfriends could sign up for. They would receive texts from “the baby” asking for a diaper change, food, and other things, and this helped them understand what having a baby might be like. The game was very social and engaged both males and females.

Building on existing viral trends was another suggestion for making better use of gaming. “The most successful and fast-spreading services over mobile seem to be horoscopes and tips. For example, viral spread of a game where someone records their voice, the app scrambles it, and they send it to their friend, is huge right now. How can we harness and try to better understand the potential of those types of threads in the communities where we are working?” asked one Salon participant. “In addition, there is an explosion of things like gambling games in China. What could development organizations do to tap into trends that have nothing to do with development, and piggyback on them?”

Balancing fun/games and learning/impact was another challenge for organizations using games. Some media houses are taking documentary films about serious issues and making them interactive through games so that people can engage more deeply with the story and information. One of these puts the viewer in the shoes of an investigative journalist working on a story about pirate fishing off the coast of Sierra Leone. “The challenge is balancing the need to tell a linear story with people’s desire to control the action, and how to land the player at the end point where the story actually should end.” The trade-offs between greater interaction and staying true to the story can be difficult to manage. Likewise, when using game theory in education, the challenge is balancing game play with the social issue. “We need to figure out how to reward for quality, not quantity, of participation when using gamification for learning,” said one Salon participant, “and figure out how to avoid making the game the whole point and losing the learning aspect.”

What about girls and women and gaming? The field of gaming has traditionally appealed to men and boys, noted some, but what is behind this? Some Salon participants felt this was in part because of biological differences, but others felt that we may be playing into stereotypes by thinking that men like gaming and women like social media and communication. One study (by Matthew Kam), for example, notes that when girls in India were given mobile phones on which to play a literacy game, their brothers would often take the phones away and parents would not intervene.

Women are gaming – a lot! Statistically speaking, in the US and Europe, women now make up around 50% of gamers. There is a big gap in terms of women game developers, however, and the gaming field is struggling intensely as the balance shifts toward a more equal one between men and women. This has erupted into a controversy called ‘GamerGate’ that is rocking the gaming industry and provoking a heated (and often ugly) conversation. “Women who game and who develop games face an incredible amount of harassment online and in person because they are a minority in their field,” commented one Salon participant.

Few game developers are women. “Only 10% of game developers are women,” noted one Salon participant. “There is a lot of discussion in the sector on misogyny. If you play any of the more popular games, the main characters are always men.  Outspoken female developers talking about more inclusive games (as opposed to shooting games) have been targeted and harassed.” (See the recent backlash against Anita Sarkeesian). There is a sense from some male game developers that women developers are entering into a space and taking something away from it. An upcoming film called “GTFO” documents the situation. Though there are changes happening in the independent gaming sector and women are playing a bigger and more vocal role, this has not yet hit the mainstream. Until that happens, said one Salon participant, we will continue to see a lack of representation and agency of women in games/gaming similar to that which we see in the Hollywood movie industry. (See the Geena Davis Institute’s research on gender bias in the film industry.)

Can games reach low income gamers in ‘developing countries’? Much of the gaming dynamic will need to adapt if we want to reach lower income populations. Likely this will happen through mobile, but in places like Kenya and India, for example, people have mobiles but don’t download games. “It may take 5-10 years before people everywhere are downloading and playing mobile games,” said one Salon participant. In addition, there is a lot of work to be done on business models and monetization so that these efforts are sustainable. “People love free things. And they are not used to paying for things in the digital space.” In the meantime, analog games can also be hugely beneficial, and many organizations at present are using more traditional games in their work.

What would help organizations to work more with games and gamification? Areas for further exploration include:

  • Stories of surprise wins and successes
  • More evidence and analysis of what works and where/why/how/when/with whom
  • Better understanding of free content and business models and sustainability
  • More evidence that games contribute to learning, literacy, numeracy, behavior change (e.g., studies like this one)
  • Better understanding by donors/organizations of the wider ecosystem that goes into creating a successful game
  • Funding that includes support for the wider ecosystem, not just funding to make a game
  • More work with local developers who are proficient at making games and who understand the local context and market

As a starting point, Games for Change’s website has a good set of resources that can help organizations begin to learn more. We’re also compiling resources here, so take a look and add yours!

Thanks to Nick, Asi, Stan and Craig for joining as lead discussants, and to Population Council for hosting the Salon!

If you’d like to attend future Salons, sign up here!

Read Full Post »

I spent last week in Berlin at the Open Knowledge Festival – a great place to talk ‘open’ everything and catch up on what is happening in this burgeoning area that crosses through the fields of data, science, education, art, transparency and accountability, governance, development, technology and more.

One session was on Power, politics, inclusion and voice, and it encouraged participants to dig deeper into those 4 aspects of open data and open knowledge. The organizers kicked things off by asking us to get into small groups and talk about power. Our group was assigned the topic of “feeling powerless” and we shared personal experiences of when we had felt powerless. There were several women in my group, many of whom, unsurprisingly, recounted experiences that felt gendered.

The concept of ‘mansplaining‘ came up. Mansplaining (according to Wikipedia) is a term that describes when a man speaks to a woman with the assumption that she knows less than he does about the topic being discussed because she is female. ‘Mansplaining is different from other forms of condescension because mansplaining is rooted in the assumption that, in general, a man is likely to be more knowledgeable than a woman.’

From there, we got into the tokenism we’d seen in development programs that say they want ‘participation’ but really don’t care to include the viewpoints of the participants. One member of our group talked about the feelings of powerlessness development workers create when they are dismissive of indigenous knowledge and assume they know more than the poor in general. “Like when they go out and explain climate change to people who have been farming their entire lives,” she said.

A lightbulb went off. It’s the same attitude as ‘mansplaining,’ but seen in development workers. It’s #devsplaining.

So I made a hashtag (of course) and tried to come up with a definition.

Devsplaining – when a development worker, academic, or someone who generally has more power within the ‘development industry’ speaks condescendingly to someone with less power. The devsplainer assumes that he/she knows more and has more right to an opinion because of his/her position and power within the industry. Devsplaining is rooted in the assumption that, in general, development workers are likely to be more knowledgeable about the lives and situations of the people who participate in their programs/research than the people themselves are.

What do people think? Any good examples?

Read Full Post »

Plan International’s Finnish office has just published a thorough, user-friendly guide to using ICTs in community programs. The guide has been in development for over a year, based on experiences and input from staff working on the ground with communities in Plan programs in several countries.

It was authored and facilitated by Hannah Beardon, who also wrote two other great ICT4D guides for Plan in the past: Mobiles for Development (2009) and ICT Enabled Development (2010).

The guide is written in plain language and comes from the perspective of folks working together with communities to integrate ICTs in a sustainable way.

It’s organized into 8 sections, each covering a stage of project planning, with additional practical ideas and guidance in the annexes at the end.

Chapters include:

1 Assessing the potential of ICTs

2 Assessing the social context for ICTs

3 Assessing the physical context for ICTs

4 Reviewing

5 Choosing the ICT

6 Planning for sustainability

7 Building capacity

8 Monitoring, evaluation and sharing learning

The sections are not set up as a linear process, and depending on each situation and the status of a project the whole guide can be used, or smaller sections can be pulled out to offer some guidance. Each section includes steps to follow and questions to ask. There are detailed orientations in the annexes as well, for example, how to conduct a participatory communications assessment at the community level, how to map information and communication flows and identify bottlenecks where ICTs might help, how to conduct a feasibility study, how to budget and consider ‘total cost of ownership.’

One thing I especially like about the guide is that it doesn’t push ICTs or particular ‘ICT solutions’ (I really hate that term for some reason!). Rather, it helps people to look at the information and communication needs in a particular situation and to work through a realistic and contextually appropriate process to resolve them, which may or may not involve digital technology. It also assumes that people in communities, district offices and country offices know the context best, and simply offers a framework for pulling that knowledge together and applying it.

99% of my hands-on experience using ICTs in development programming comes from my time at Plan International, much of it spent working alongside and learning from the knowledgeable folks who put this guide together. So I’m really happy to see that now other people can benefit from their expertise as well!

Let @vatamik know if you have questions, or if you have feedback for them and the team!

Download “A practical guide to using ICTs” here.

Read Full Post »

Debate and thinking around data, ethics, and ICTs have been growing and expanding a lot lately, which makes me very happy!

Coming up on May 22 in NYC, the engine room, Hivos, the Berkman Center for Internet and Society, and Kurante (my newish gig) are organizing the latest in a series of events as part of the Responsible Data Forum.

The event will be hosted at ThoughtWorks and it is in-person only. Space is limited, so if you’d like to join us, let us know soon by filling in this form. 

What’s it all about?

This particular Responsible Data Forum event is an effort to map the ethical, legal, privacy and security challenges surrounding the increased use and sharing of data in development programming. The Forum will aim to explore the ways in which these challenges are experienced in project design and implementation, as well as when project data is shared or published in an effort to strengthen accountability. The event will be a collaborative effort to begin developing concrete tools and strategies to address these challenges, which can be further tested and refined with end users at events in Amsterdam and Budapest.

We will explore the responsible data challenges faced by development practitioners in program design and implementation.

Some of the use cases we’ll consider include:

  • projects collecting data from marginalized populations, aspiring to respect a do no harm principle, but also to identify opportunities for informational empowerment
  • project design staff seeking to understand and manage the lifespan of project data from collection, through maintenance, utilization, and sharing or destruction.
  • project staff that are considering data sharing or joint data collection with government agencies or corporate actors
  • project staff who want to better understand how ICT4D will impact communities
  • projects exploring the potential of popular ICT-related mechanisms, such as hackathons, incubation labs or innovation hubs
  • projects wishing to use development data for research purposes, and crafting responsible ways to use personally identifiable data for academic purposes
  • projects working with children under the age of 18, struggling to balance the need for data to improve programming approaches, and demand higher levels of protection for children

By gathering a significant number of development practitioners grappling with these issues, the Forum aims to pose practical and critical questions about the use of data and ICTs in development programming. Through collaborative sessions and group work, the Forum will identify common pressing issues for which there might be practical and feasible solutions. The Forum will focus on prototyping specific tools and strategies to respond to these challenges.

What will be accomplished?

Some outputs from the event may include:

  • Tools and checklists for managing responsible data challenges for specific project modalities, such as sms surveys, constructing national databases, or social media scraping and engagement.
  • Best practices and ethical controls for data sharing agreements with governments, corporate actors, academia or civil society
  • Strategies for responsible program development
  • Guidelines for data-driven projects dealing with communities with limited representation or access to information
  • Heuristics and frameworks for understanding anonymity and re-identification of large development data sets
  • Potential policy interventions to create greater awareness and possibly consider minimum standards

Hope to see some of you on the 22nd! Sign up here if you’re interested in attending, and read more about the Responsible Data Forum here.

Read Full Post »

The NYC Technology Salon on February 28th examined the connection between bigger, better data and resilience. We held morning and afternoon Salons due to the high response rate for the topic. Jake Porway (DataKind), Emmanuel Letouzé (Harvard Humanitarian Initiative), and Elizabeth Eagen (Open Society Foundations) were our lead discussants for the morning. Max Shron (Data Strategy) joined Emmanuel and Elizabeth for the afternoon session.

This post summarizes key discussions from both Salons.

What the heck do we mean by ‘big data’?

The first question at the morning salon was: What precisely do we mean by the term ‘big data’? Participants and lead discussants had varying definitions. One way of thinking about big data is that it is comprised of small bits of unintentionally produced ‘data exhaust’ (website cookies, cellphone data records, etc.) that add up to a dataset. In this case, the term big data refers to the quality and nature of the data, and we think of non-sampled data that are messy, noisy and unstructured. The mindset that goes with big data is one of ‘turning mess into meaning.’

Some Salon participants understood big data as datasets that are too large to be stored, managed and analyzed via conventional database technologies or managed on normal computers. One person suggested dropping the adjective ‘big,’ forgetting about the size, and instead considering the impact of the contribution of the data to understanding. For example, if there were absolutely no data on something and 1000 data points were contributed, this might have a greater impact than adding another 10,000 data points to an existing set of 10 million.

The point here was that when the emphasis is on big (understood as size and/or volume), someone with a small data set (for example, one that fits into an Excel sheet) might feel inadequate, yet their data contribution may actually be ‘bigger’ than a physically larger data set (aha! it’s not the size of the paintbrush…). There was a suggestion that instead of talking about big data we should talk about smart data.
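The ‘impact over size’ argument can be made concrete with a rough back-of-the-envelope sketch: the standard error of an estimate shrinks with the square root of the sample size, so the first 1,000 data points buy far more understanding than another 10,000 added to a set of 10 million. (The numbers below are illustrative, not from the Salon discussion.)

```python
import math

def standard_error(n, sigma=1.0):
    """Standard error of a sample mean: sigma / sqrt(n)."""
    return sigma / math.sqrt(n)

# Going from no data to 1,000 points produces a usable estimate...
se_new = standard_error(1_000)

# ...while adding 10,000 points to an existing 10 million barely moves it.
se_before = standard_error(10_000_000)
se_after = standard_error(10_010_000)

print(f"SE with 1,000 points:      {se_new:.4f}")
print(f"SE with 10,000,000 points: {se_before:.6f}")
print(f"SE with 10,010,000 points: {se_after:.6f}")
print(f"Relative gain from the extra 10,000: {1 - se_after / se_before:.4%}")
```

In this toy framing, the ‘small’ contribution of 1,000 points where nothing existed before is worth far more than a 0.1% bump to a giant dataset: exactly the ‘smart data over big data’ point made at the Salon.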

How can big data support development?

Two frameworks were shared for thinking about big data in development. One from UN Global Pulse considers that big data can improve a) real-time awareness, b) early warning and c) real-time monitoring. Another looks at big data being used for three kinds of analysis: a) descriptive (providing a summary of something that has already happened), b) predictive (likelihood and probability of something occurring in the future), and c) diagnostic (causal inference and understanding of the world).

What’s the link between big data and resilience?

‘Resilience’ as a concept is contested, difficult to measure and complex. In its most simple definition, resilience can be thought of as the ability to bounce back or bounce forward. (For an interesting discussion on whether we should be talking about sustainability or resilience, see this piece). One discussant noted that global processes and structures are not working well for the poor, as evidenced by continuing cycles of poverty and glaring wealth inequalities. In this view, people are poor as a result of being more exposed and vulnerable to shocks; at the same time, their poverty increases their vulnerability, and it’s difficult to escape from the cycle where, over time, small and large shocks deplete assets. An assets-based model of resilience would help individuals, families and communities who are hit by a shock in one sphere — financial, human, social, legal and/or political — to draw on the assets within another sphere to bounce back or forward.

Big data could help this type of assets-based model of resilience by predicting, or helping poor and vulnerable people predict, when a shock might happen, and by supporting preparation for it. Big data analytics, if accessible to the poor, could help them to increase their chances of making better decisions now and for the future. Big data, then, should be made accessible and available to communities so that they can self-organize, decrease their own exposure to shocks and hazards, and increase their ability to bounce back and bounce forward. Big data could also help various actors to develop a better understanding of the human ecosystem and contribute to increasing resilience.

Can ivory tower big data approaches contribute to resilience?

The application of big data approaches to efforts that aim to increase resilience and better understand human ecosystems often comes at things from the wrong angle, according to one discussant. We are increasingly seeing situations where a decision is made at the top by people who know how to crunch data yet have no way of really understanding the meaning of the data in the local context. In these cases, the impact of data on resilience will be low, because resilience can only truly be created and supported at the local level. Instead of large organizations thinking about how they can use data from afar to ‘rescue’ or ‘help’ the poor, organizations should be working together with communities in crisis (or supporting local or nationally based intermediaries to facilitate this process) so that communities can discuss and pull meaning from the data, contextualize it and use it to help themselves. They can also become more informed about what data exist about them and more aware of how these data might be used.

For the Human Rights community, for example, the story is about how people successfully use data to advocate for their own rights, and there is less emphasis on large data sets. Rather, the goal is to get data to citizens and communities. It’s to support groups to define and use data locally and to think about what the data can tell them about the advocacy path they could take to achieve a particular goal.

Can data really empower people?

To better understand the opportunities and challenges of big data, we need to unpack questions related to empowerment. Who has the knowledge? The access? Who can use the data? Salon participants emphasized that change doesn’t come by merely having data. Rather it’s about using big data as an advocacy tool to tell the world to change processes and to put things normally left unsaid on the table for discussion and action. It is also about decisions and getting ‘big data’ to the ‘small world,’ e.g., the local level. According to some, this should be the priority of ‘big data for development’ actors over the next 5 years.

Though some participants at the Salon felt that data on their own do not empower individuals, others noted that knowing your credit score or tracking how much you are eating or exercising can indeed be empowering to individuals. In addition, the process of gathering data can help communities understand their own realities better, build their self-esteem and analytical capacities, and contribute to achieving a more level playing field when they are advocating for their rights or for a budget or service. As one Salon participant said, most communities have information but are not perceived to have data unless they collect it using ‘Western’ methods. Having data to support and back information, opinions and demands can serve communities in negotiations with entities that wield more power. (See the book “Who Counts? The Power of Participatory Statistics” on how to work with communities to create ‘data’ from participatory approaches).

On the other hand, data are not enough if there is no political will to make change to respond to the data and to the requests or demands being made based on the data. As one Salon participant said: “giving someone a data set doesn’t change politics.”

Should we all jump on the data bandwagon?

Both discussants and participants made a plea to ‘practice safe statistics!’ Human rights organizations wander in and out of statistics and don’t really understand how it works, said one person. ‘You wouldn’t go to court without a lawyer, so don’t try to use big data unless you can ensure it’s valid and you know how to manage it.’ If organizations plan to work with data, they should have statisticians and/or data scientists on staff or on call as partners and collaborators. Lack of basic statistical literacy is a huge issue amongst the general population and within many organizations, thought leaders, and journalists, and this can be dangerous.

As big data becomes more trendy, the risk of misinterpretation is growing, and we need to place more attention on the responsible use of statistics and data or we may end up harming people with bad decisions. These days, ‘everyone thinks they are experts who can handle statistics – bias, collection, correlation.’ And ‘as a general rule, no matter how many times you say the data show possible correlation, not causality, the public will understand that there is causality,’ commented one discussant. And generally, he noted, ‘when people look at data, they believe them as truth because they include numbers, statistics, science.’ Greater statistical literacy could help people to not just read or access data and information but to use them wisely, to understand and question how data are interpreted, and to detect political or other biases. What’s more, organizations today are asking questions about big data that have been on statisticians’ minds for a very long time, so reaching out to those who understand these issues can be useful to avoid repeating mistakes and re-learning lessons that have already been well-documented.
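The correlation-versus-causality trap that discussants warned about is easy to demonstrate with a toy example. The two variables below are invented and entirely unrelated; they correlate strongly only because both happen to trend upward over time:

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
years = range(20)
# Two hypothetical, unrelated quantities that both grow over time:
ice_cream_sales = [100 + 5 * t + random.gauss(0, 3) for t in years]
mobile_subscriptions = [50 + 4 * t + random.gauss(0, 3) for t in years]

r = pearson_r(ice_cream_sales, mobile_subscriptions)
print(f"Pearson r = {r:.2f}")  # strongly correlated, yet neither causes the other
```

A shared trend (here, time itself) is enough to produce a near-perfect correlation between series that have nothing to do with each other, which is precisely why reporting ‘the data show a strong relationship’ without causal analysis can mislead.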

This poor statistical literacy becomes a serious ethical issue when data are used to determine funding or actions that impact on people’s lives, or when they are shared openly, accidentally or in ways that are unethical. In addition, privacy and protection are critical elements in using and working with data about people, especially when the data involve vulnerable populations. Organizations can face legal action and liability suits if their data put people at harm, as one Salon participant noted. ‘An organization could even be accused of manslaughter… and I’m speaking from experience,’ she added.

What can we do to move forward?

Some potential actions for moving forward included:

  • Emphasis with donors that having big data does not mean that in order to cut costs, you should eliminate community level processes related to data collection, interpretation, analysis, and ownership;
  • Evaluations and literature/documentation on the effectiveness of different tools and methods, and when and in which contexts they might be applicable, including things like cost-benefit analyses of using big data and evaluation of its impact on development/on communities when combined with community level processes vs used alone/without community involvement — practitioner gut feelings are that big data without community involvement is irresponsible and ineffective in terms of resilience, and it would be good to have evidence to help validate or disprove this;
  • More and better tools and resources to support data collection, visualization and use and to help organizations with risk analysis, privacy impact assessments, strategies and planning around use of big data; case studies and a place to share and engage with peers, creation of a ‘cook book’ to help organizations understand the ingredients, tools, processes of using data/big data in their work;
  • ‘Normative conventions’ on how big data should be used to avoid falling into tech-driven dystopia;
  • Greater capacity for ‘safe statistics’ among organizations;
  • A community space where frank and open conversations around data/big data can occur in an ongoing way with the right range of people and cross-section of experiences and expertise from business, data, organizations, etc.

In conclusion?

We touched upon all types of data and various levels of data usage for a huge range of purposes at the two Salons. One closing thought was around the importance of having a solid idea of what questions we are trying to answer before moving on to collecting data, and then understanding which data collection methods are adequate for our purpose, which ICT tools are right for those data collection and interpretation methods, what will be done with the data and what the purpose of collecting them is, how we’ll interpret them, and how the data will be shared, with whom, and in what format.

See this growing list of resources related to Data and Resilience here and add yours!

Thanks to participants and lead discussants for the fantastic exchange, and a big thank you to ThoughtWorks for hosting us at their offices for this Salon. Thanks also to Hunter Goldman, Elizabeth Eagen and Emmanuel Letouzé for their support developing this Salon topic, and to Somto Fab-Ukozor for support with notes and the summary. Salons are held under Chatham House Rule, therefore no attribution has been made in this post. If you’d like to attend future Salons, sign up here!

Read Full Post »

Older Posts »