Western perspectives on technology tend to dominate the media, despite the fact that technology’s impacts on people’s lives are nuanced, diverse and contextually specific. At our March 8 Technology Salon NYC (hosted by Thoughtworks) we discussed how structural issues in journalism and technology lead to narrowed perspectives and reduced nuance in technology reporting.

Joining the discussion were folks from for-profit and non-profit US-based media houses with global reporting remits, including: Nabiha Syed, CEO, The Markup; Tekendra Parmar, Tech Features Editor, Business Insider; Andrew Deck, Reporter, Rest of World; and Vittoria Elliot, Reporter, WIRED. Salon participants working for other media outlets and in adjacent fields contributed to our discussion as well.

Power dynamics are at the center. English language technology media establishments tend to report as if tech stories begin and end in Silicon Valley. This affects who media talks and listens to, what stories are found and who is doing the finding, which angles and perspectives are centered, and who decides what is published. As one Salon participant said, “we came to the Salon for a conversation about tech journalism, but bigger issues are coming up. This is telling, because, no matter what type of journalism you’re doing, you’re reckoning with wider systemic issues in journalism… [like] how we pay for it, who the audiences are, how we shift the sense of who we’re reporting for, and all the existential questions in journalism.”

Some media outlets are making an intentional effort to better ground stories in place, cultural context, political context, and non-Western markets in order to challenge certain assumptions and biases in Silicon Valley. Their work aims to bring non-US-centric stories to a wider general audience in the US and abroad and to enter the media diet of Silicon Valley itself to change perspectives and expand world views using narrative, character, and storytelling that is not laced with US biases.

Challenges remain with building global audiences, however. Most publications have only a handful of people focusing on stories outside of their headquarters country. Yet “in addition to getting the stories – you also have to build global and local networks so that the stories get distributed,” as one person said. US media outlets don’t often invest in building relationships with local influencers and policy makers who could help to spread a story, react or act on it. This can mean there is little impact and low readership, leading decision makers at media outlets to say “see, we didn’t have good metrics, those kinds of stories don’t perform well.” This is not only the case for journalism in the US. An Indian reader may not be interested in reading about the Philippines and vice versa. So almost every story needs a different conceptualization of audience, which is difficult for publications to afford and achieve.

Ad-revenue business models are part of the problem. While the vision of a global audience with wide perspectives and nuance is lofty, the practicalities of implementation make it difficult. Business models based on ad revenue (clicks, likes, time spent on a page) tend to reinforce status quo content at the cost of excluding non-Western voices and other marginalized users of technology. Moving to alternative ways to measure impact can be hard for editors who have been working in the for-profit industry for several years. Even in non-profit media, “there is a shadow cast from these old metrics…. Donors will say, ‘okay, great, wonderful story, super glad that there was a regulatory change… but how many people saw it?’ And so there’s a lot of education that needs to happen.”

Identifying new approaches and metrics. Some Salon participants are looking at how to get beyond clicks to measure impact and journalism’s contribution to change without committing the sin of centering the story on the journalist. Some teams are testing “impact meetings,” with the reporting team looking at “who has power – Consumers? Regulators? Legislators? Civil society? Mapping that out, and figuring out what form the information needs to be in to get into audiences’ hands and heads… Cartoons? Instagram? An academic conversation? We identify who in the room has some power, get something into their hands, and then they do all the work.”

Another person talked about creating Listening Circles to develop participatory and grounded narratives that will have greater impact. In this case, journalists convene groups of experts and people with lived experiences on a particular topic to learn who are the power brokers, what key topics need to be raised, what is the media covering too much or too little of, and what stories or perspectives are missing from this coverage. This is similar to how a journalist normally works — talking with sources — except that the sources are in a group together and can sharpen each other’s ideas. In this sense, media works as a convener to better understand the issue and themes. It makes space for smaller more grounded organizations to join the conversations. It also helps media outlets identify key influencers and involve them from the start so that they are more interested in sharing the story when it’s ready to go. This can help catalyze ongoing movement on the theme or topic among these organizations.

These approaches resemble the advocacy, community development, communication for development, and social and behavior change communication approaches used in the development sector: they include an entryway, a plan for inclusion from the start, an off-ramp and handover, and an understanding that the media agency is not the center of the story but can feed extra energy into a topic to help it move forward.

The difference between journalism and advocacy has emerged as a concern as traditional approaches to reporting change. Participatory work is often viewed as being less “objective” and more like advocacy. “Should journalists be advocates or not?” is a key question. Yet, as noted during the Salon discussion, journalists have always interrogated the actions of powerful people – e.g., the Elon Musks of the world. “If we’re going to interrogate power, then it’s not a huge jump to say we want to inform people about the power they already have, and all we’re doing is being intentional about getting this information to where it needs to go,” one person commented.

Another Salon participant agreed. “If you break a story about a corrupt politician, you expect that corrupt politician to be hauled before whatever institutions exist or for them to lose their job. No one is hand wringing there about whether we’ve done our jobs well, right? It is when we start to take active interest in areas that are considered outside of traditional media, when you move from politics and the economy to technology or gender or any of these other areas considered ‘softer’, that there is a sense that you have shifted into activism and are less focused on hard-hitting journalism.” Another participant said, “there’s a real discomfort when activist organizations like our work… Even though the idea is that you’re supposed to be creating impact, you’re not supposed to want that activist label.”

Identity and objectivity came up in the discussion as well. “The people who are most precious about whether we are objective tend to be a cohort at the intersection of gender, race, and class. Upper middle class white guys are the ones who can go anywhere in the world and report any story and are still ‘objective’. But if you try and think about other communities reporting on themselves or working in different ways, the question is always, ‘wait, how can that be done objectively?’”

A 2022 Pew Research poll found that, overall, 76% of journalists in the US are white and 51% are male. Sixty percent of political reporters and 58% of tech journalists are men, and 77% of science and tech reporters are white, 7% Asian, 3% Black, and 3% Hispanic. Some Salon participants pointed out that this is a human resource and hiring problem that derives from structural issues both in journalism and the wider world. In tech reporting and the media space in general, those who tend to be hired are English speaking, highly educated, upper or upper middle class people from a major metropolitan area in their country. Very few media outlets bring in other perspectives.

Salon participants pointed to these statistics and noted that white, US-born journalists are considered able to “objectively” report on any story in any part of the world. They can “parachute in and cover anything they want.” Yet non-white and/or non-US-born and queer journalists are either shoehorned into being experts for their own race, gender, sexual orientation, ethnicity or national identity, or seen as unable to be objective because of their identities. “If you’re an English speaking, educated person from the motherland, [it’s assumed that] your responsibility is to tell the story of your people.”

In addition, the US flattens nuance in racism, classism, and other equity issues. Because the US is in an era of diversity, said one Salon participant, media outlets think it’s enough to find a Brown person and put them in leadership. They don’t often look at other issues like race, class, caste or colorism or how those play out within communities of color. “You also have to ask the question of, okay, which people from this place have the resources, the access to get the kind of education that makes them the people that institutions rely on to tell the stories of an entire country or region. How does the system reinforce, again, that internal class dynamic or that broader class and racial dynamic, even as it’s counting for ‘diversity’ on the internal side.”

Waiting for harm to happen. Another challenge raised with tech reporting is the tendency to wait until something terrible happens before a story or issue is covered. News outlets wait until a problem is acute and then write an article and say “look over here, this is happening, isn’t that awful, someone should do something,” as one Salon participant said. The mandate tends to be to “wait until harm is bad enough to be visible before reporting” rather than reducing or mitigating harm. “With technology, the speed of change is so rapid – there needs to be something beyond the horse-race journalism of ‘here’s some investment, here’s a new technology, here’s a hot take and here’s why that matters.’ There needs to be something more meaningful than that.”

Newsworthiness is sometimes weaponized to kill reporting on marginalized communities, said one person. Pitches are informed by the subjectivity and lived experiences of senior editors who may not have a nuanced understanding of how technologies and related issues affect queer communities and/or people of color. Reporters often have to find an additional “hook” to get approval to run a story about these groups or populations because the story itself is not considered newsworthy enough. The hook will often be something that ties it back to Silicon Valley — for example, a story deemed “not newsworthy” might suddenly become important when it can be linked to something that a powerful person in tech does. Reporters have to be creative to get buy-in for international stories whose importance is not fully grasped by editors; for example, by pitching how a story will bring in subscriptions, traffic, or an award, or by running a US-focused story that does well, and then pitching the international version of the story.

Reporting on structural challenges in tech. Media absolutely helps bring issues to the forefront, said one Salon participant, and there are lots of great examples recently of dynamic investigative reporting and layered, nuanced storytelling. It remains difficult, however, to report on structural issues or infrastructure. Many of the harms that happen due to technology need to be resolved at the policy, regulatory, or structural level. “This is the ‘boring’ part of the story, but it’s where everything is getting cemented in terms of what technology can do and what harms will result.”

One media outlet tackled this by conducting research to show structural barriers to equity in technology access. A project measured broadband speeds in different parts of cities across the US during COVID to show how inequalities in bandwidth affected people’s access to jobs, income and services. The team joined up with other media groups and shared the data so that it could reach different audiences through a variety of story lines, some national and some local.

The field is shifting, as one Salon participant concluded, and it’s all about owning the moment. “You must own the choices that you’re making…. I do not care if this thing called journalism and these people called journalists continue to exist in the way that they do now… We must rediscover the role of the storyteller who keeps us alive and gives meaning to our societies. This model [of journalism] was not built for someone like me to engage in it fully, to see myself reflected in it fully. Institutional journalism was not made for many of the people in this room. It was not made for us to imagine that we are leaders in it, bearers of it, creators of it, or anything other than just its subjects in some sort of ‘National Geographic’ way. And that means owning the moment that we’re in and the opportunities it’s bringing us.”

Technology Salons run under Chatham House Rule, so no attribution has been made in this post. If you’d like to join us for a Salon, sign up here. If you’d like to suggest a topic or provide funding support to Salons in NYC please get in touch!

(Reposting, original appears here)

Back in 2014, the humanitarian and development sectors were in the heyday of excitement over innovation and Information and Communication Technologies for Development (ICT4D). The role of ICTs specifically for monitoring, evaluation, research and learning (aka “MERL Tech“) had not been systematized (as far as I know), and it was unclear whether there actually was “a field.” I had the privilege of writing a discussion paper with Michael Bamberger to explore how and why new technologies were being tested and used in the different steps of a traditional planning, monitoring and evaluation cycle. (See graphic 1 below, from our paper).

[Graphic 1: ICTs across the planning, monitoring and evaluation cycle, from the 2014 discussion paper.]

The approaches highlighted in 2014 focused on mobile phones, for example: text messages (SMS), mobile data gathering, use of mobiles for photos and recording, mapping with specific handheld global positioning systems (GPS) devices or GPS installed in mobile phones. Promising technologies included tablets, which were only beginning to be used for M&E; “the cloud,” which enabled easier updating of software and applications; remote sensing and satellite imagery; dashboards; and online software that helped evaluators do their work more easily. Social media was also really taking off in 2014. It was seen as a potential way to monitor discussions among program participants and gather feedback from them, and was considered an underutilized tool for greater dissemination of evaluation results and learning. Real-time data, big data, and feedback loops were emerging as ways that program monitoring could be improved and quicker adaptation could happen.

In our paper, we outlined five main challenges for the use of ICTs for M&E: selectivity bias; technology- or tool-driven M&E processes; over-reliance on digital data and remotely collected data; low institutional capacity and resistance to change; and privacy and protection. We also suggested key areas to consider when integrating ICTs into M&E: quality M&E planning; design validity; value-add (or not) of ICTs; using the right combination of tools; adapting and testing new processes before roll-out; technology access and inclusion; motivation to use ICTs; privacy and protection; unintended consequences; local capacity; measuring what matters (not just what the tech allows you to measure); and effectively using and sharing M&E information and learning.

We concluded that:

  • The field of ICTs in M&E is emerging and activity is happening at multiple levels and with a wide range of tools and approaches and actors. 
  • The field needs more documentation on the utility and impact of ICTs for M&E. 
  • Pressure to show impact may open up space for testing new M&E approaches. 
  • A number of pitfalls need to be avoided when designing an evaluation plan that involves ICTs. 
  • Investment in the development, application and evaluation of new M&E methods could help evaluators and organizations adapt their approaches throughout the entire program cycle, making them more flexible and adjusted to the complex environments in which development initiatives and M&E take place.

Where are we now? MERL Tech in 2019

Much has happened globally over the past five years in the wider field of technology, communications, infrastructure, and society, and these changes have influenced the MERL Tech space. Our 2014 focus on basic mobile phones, SMS, mobile surveys, mapping, and crowdsourcing might now appear quaint, considering that worldwide access to smartphones and the Internet has expanded beyond the expectations of many. We know that access is not evenly distributed, but the fact that more and more people are getting online cannot be disputed. Some MERL practitioners are using advanced artificial intelligence, machine learning, biometrics, and sentiment analysis in their work. And as smartphone and Internet use continue to grow, more data will be produced by people around the world. The way that MERL practitioners access and use data will likely continue to shift, and the composition of MERL teams and their required skillsets will also change.

The excitement over innovation and new technologies seen in 2014 could also be seen as naive, however, considering some of the negative consequences that have emerged, for example social media inspired violence (such as that in Myanmar), election and political interference through the Internet, misinformation and disinformation, and the race to the bottom through the online “gig economy.”

In this changing context, a team of MERL Tech practitioners (both enthusiasts and skeptics) embarked on a second round of research in order to try to provide an updated “State of the Field” for MERL Tech that looks at changes in the space between 2014 and 2019.

Based on MERL Tech conferences and wider conversations in the MERL Tech space, we identified three general waves of technology emergence in MERL:

  • First wave: Tech for Traditional MERL: Use of technology (including mobile phones, satellites, and increasingly sophisticated databases) to do ‘what we’ve always done,’ with a focus on digital data collection and management. For these uses of “MERL Tech” there is a growing evidence base. 
  • Second wave: Big Data: Exploration of big data and data science for MERL purposes. While plenty has been written about big data for other sectors, the literature on the use of big data and data science for MERL is somewhat limited, and it is more focused on potential than actual use. 
  • Third wave: Emerging approaches: Technologies and approaches that generate new sources and forms of data; offer different modalities of data collection; provide ways to store and organize data; and provide new techniques for data processing and analysis. The potential of these has been explored, but there seems to be little evidence base on their actual use for MERL. 

We’ll be doing a few sessions at the American Evaluation Association conference this week to share what we’ve been finding in our research. Please join us if you’ll be attending the conference!

Session Details:

Thursday, Nov 14, 2.45-3.30pm: Room CC101D

Friday, Nov 15, 3.30-4.15pm: Room CC101D

Saturday, Nov 16, 10.15-11am. Room CC200DE

At our April Technology Salon we discussed the evidence and good practice base for blockchain and Distributed Ledger Technologies (DLTs) in the humanitarian sector. Our discussants were Larissa Fast (co-author with Giulio Coppi of the Global Alliance for Humanitarian Innovation/GAHI’s report on Humanitarian Blockchain, Senior Lecturer at HCRI, University of Manchester and Research Associate at the Humanitarian Policy Group) and Ariana Fowler (UNICEF Blockchain Strategist).

Though blockchain fans suggest DLTs can address common problems of humanitarian organizations, the extreme hype cycle has many skeptics who believe that blockchain and DLTs are simply overblown and for the most part useless for the sector. Until recently, evidence on the utility of blockchain/DLTs in the humanitarian sector has been slim to none, with some calling for the sector to step back and establish a measured approach and a learning agenda in order to determine if blockchain is worth spending time on. Others argue that evaluators misunderstand what to evaluate and how.

The GAHI report provides an excellent overview of blockchain and DLTs in the sector along with recommendations at the project, policy and system levels to address the challenges that would need to be overcome before DLTs can be ethically, safely, appropriately and effectively scaled in humanitarian contexts.

What’s blockchain? What’s a DLT?

We started with a basic explanation of DLTs and Blockchain and how they work. (See page 5 of the GAHI report for more detail).

The GAHI report aimed to get beyond the potential of Blockchain and DLTs to actual use cases — however, in the humanitarian sector there is still more potential than evidence. Although there were multiple use cases to choose from, the report authors chose to go in-depth on five, selected to provide a sense of the different ways that blockchain is specifically being used in the sector.

These use cases all currently have limited “nodes” (i.e., the places where the data is stored) and only a few “controlling entities” (which determine what information is stored or put on the chain). They are all “private” (as opposed to public) blockchains, meaning they are not taking advantage of the DLT potential for dispersed information, and they end up being more like “a very expensive database.”

What’s the deal with private vs public blockchains?

Private versus public blockchains are an ideological sticking point in “deep blockchain culture,” noted one Salon participant. “‘Cryptobros’ and blockchain fundamentalists think private blockchains are the Antichrist.” Private blockchains are considered an oxymoron and completely antithetical to the idea of blockchain.

So why are humanitarian organizations creating private blockchains? “They are being cautious about protecting data as they test out blockchain and DLTs. It’s a conscious choice to proceed in a controlled way, because once information is on the blockchain, it’s immutable — it cannot be removed.” When first trying out a DLT or blockchain, “Humanitarians tend to be cautious. They don’t want to play with the permanency of a public blockchain since they are working with vulnerable populations.”
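The immutability at issue here can be made concrete with a minimal sketch (a hypothetical illustration, not any agency’s actual implementation): each block commits to a hash of the previous block, so changing an earlier record invalidates everything that came after it.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents using a deterministic JSON serialization.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    # Each new block stores the hash of its predecessor, linking the chain.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def chain_is_valid(chain):
    # Verify that every block still matches the hash its successor committed to.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"tx": "disbursement", "amount": 100})
append_block(chain, {"tx": "disbursement", "amount": 250})
assert chain_is_valid(chain)

# Tampering with an earlier record breaks every subsequent link.
chain[0]["record"]["amount"] = 999
assert not chain_is_valid(chain)
```

The caution described above follows from this: on a public chain there is no “fix it later,” because any edit is detectable and the original entry persists across nodes.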

Because of the blockchain hype cycle, however, there is some skepticism about organizations using private blockchains. “Are they setting up a private blockchain with one node so that they can say that they’re using blockchain just to get funding?”

An issue with private blockchains is that they are not open and transparent. The code is developed behind closed doors, meaning that it’s difficult to make it interoperable, whereas “with a public chain, you can check the code and interact with it.”

Does the humanitarian sector have the capacity to use blockchain?

As one person pointed out, knowledge and capacity around blockchain in the humanitarian sector is very low. There are currently very few people who understand both humanitarian work and the private sector/technology side of blockchain. “We desperately need intermediaries because people in the two sectors talk past each other. They use the same words to mean very different things, and this leads to misunderstandings.” This is a perpetual issue in the “humanitarian tech” space, and it often leads to applications that are not in the best interest of those on the receiving end of humanitarian work.

Capacity challenges also come up with regard to managing partnerships that involve intellectual property. When cooperating with the private sector, organizations are normally required to sign an MOU that gives rights to the company. Often humanitarian agencies do not fully understand what they are signing up for. This can mean that the company uses the humanitarian collaboration to develop technologies that are later used in ways that the humanitarian agency considers unethical or disturbing. Having technology or blockchain expertise within an organization makes it possible to better negotiate those types of situations, but often only the larger INGOs can afford that type of expertise. Similarly, organizations lack expertise in the legal and regulatory space with regard to blockchain.

How will blockchain become locally owned? Should we wait for a user-friendly version?

Technology moves extremely fast, and organizations need a certain level of capacity to create it and maintain it. “I’m an engineer working in the humanitarian space,” said one Salon participant. “Blockchain is such a complex software solution that I’m very skeptical it will ever be at a stage where it could be locally owned and managed. Even with super basic SMS-based services we have maintenance issues and challenges handing off the tech. If in this room we are struggling to understand blockchain, how will this ever work in lower tech and lower resource areas?” Another participant asked a similar question with regard to handing off a blockchain solution to a local government.

Does the sector need to wait for a simplified and “user friendly” version of blockchain before humanitarians get into the space? Some said yes, but other participants said that the technology is moving quickly, and that it is critical for humanitarians to “get in there” to try to slow it down. “Sometimes blockchain is not the solution. Sometimes a database is just fine. We need people to pump the brakes before things get out of control.”

“How can people learn about blockchain? How could a grassroots organization begin to set one up?” asked one person. There is currently no “Square Space for Blockchain,” and the technology remains complicated, but those with a strong drive could learn, according to one person. But although “coders might be able to teach themselves ‘light blockchain,’ there is definitely a barrier to entry.” This is a challenge with the whole area of blockchain. “It skipped the education step. We need a ‘learning revolution’ if we want people to actually use it.”

Enabling environments for learning to use blockchain don’t exist in conflict zones. The knowledge is held by a few individuals, and this makes long-term support and maintenance of DLT and blockchain systems very difficult. How to localize and own the knowledge? How to ensure sustainability? The sector needs to think about what the “Blockchain 101” is. There needs to be more accompaniment, investment and support for the enabling environment if blockchain is to be useful and sustainable in the sector.

Are there any examples of humanitarian blockchain that are working?

The GAHI report talks about five cases in particular. Disberse was highlighted by one Salon participant as an example that seems to be working. Disberse is a private fin-tech company that uses blockchain, but it was started by former humanitarians. “This example works in part because there is a sense of commitment to the humanitarian sector alongside the technical expertise.”

In general, in the humanitarian space, the place where blockchain/ DLTs appear to be the most effective is in back-end use cases. In other words, blockchain is helpful for making behind-the-scenes transactions in humanitarian assistance more efficient. It can eliminate bank transaction fees, and this leads to savings. Agencies can also use blockchain to create efficiencies and benefits for record keeping and auditability. This situation is not unique to blockchain. A recent DIAL baseline study of the global ICT4D ecosystem also found that in the social sector, the main benefits of ICTs were going to organizations, not to vulnerable populations.

“This is all fine,” according to one Salon participant, “but one must be clear that the benefits accrue to the agencies, not the ‘beneficiaries,’ who may not even know that DLTs are being used.” On the one hand, having a seamless backend built on blockchain where users don’t even know that blockchain is involved sounds ideal. However, this can be somewhat problematic. “Are agencies getting meaningful and responsible consent for using blockchain? If executives don’t even understand what the blockchain is, how do you explain that to people more generally?”

Because there is not a simple, accessible way of developing blockchain solutions and there are not a lot of user-friendly interfaces for the general population, for at least the next few years, humanitarian applications of blockchain will likely only be useful for back-office operations. This means it is up to humanitarian organizations to re-invest any money saved by blockchain into program funding, so that “beneficiaries” are accruing the benefits.

What other “social” use cases are there for blockchain?

In the wider social sector and development sector, there are plenty of potential use cases, but again, very little documented evidence of their short- and long-term impacts. (Author’s note: I am not talking about financial and private sector use cases; I’m referring very specifically to social sectors and the international development and humanitarian sector). For example, Oxfam is tracing rice supply chains; however, this is a one-off pilot and it’s unclear whether it can scale. IBM has a variety of supply chain examples. Land registries and sustainable fishing are also being explored, as are digital ID, birth registration and civil registries.

According to one Salon participant, “supply chain is the low-hanging fruit of blockchain – just recording something, tracking it, and referencing it. It’s all basically a ledger, a spreadsheet. Even digital ID – it’s a supply chain of movement. Provenance is a good way to use a blockchain solution.” Other areas where blockchain is said to have potential are situations where election transparency is needed, and “smart contracts” where complex contracts are required and there is a lack of trust among the parties. In general, where there is a recurring need for anonymized, disaggregated data, blockchain could be a solution.
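The “it’s all basically a ledger” framing can be sketched in a few lines (hypothetical item and field names, with a plain Python list standing in for the distributed ledger): provenance is just an ordered, append-only record of custody.

```python
from datetime import date

# Hypothetical provenance ledger: an append-only list of custody records.
ledger = []

def record_transfer(item_id, custodian, location, when):
    # Append-only: entries are never edited or deleted, only added.
    ledger.append({"item": item_id, "custodian": custodian,
                   "location": location, "date": when.isoformat()})

def trace(item_id):
    # Provenance is simply the ordered list of entries for one item.
    return [e for e in ledger if e["item"] == item_id]

record_transfer("rice-lot-42", "Farm cooperative", "Battambang", date(2019, 3, 1))
record_transfer("rice-lot-42", "Exporter", "Phnom Penh", date(2019, 3, 10))
record_transfer("rice-lot-42", "Retailer", "Rotterdam", date(2019, 4, 2))

history = trace("rice-lot-42")
assert [e["custodian"] for e in history] == ["Farm cooperative", "Exporter", "Retailer"]
```

What a DLT adds over this spreadsheet-like structure is replication across nodes and tamper evidence; the data model itself, as the participant notes, is no more than a ledger.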

The important thing, however, is having a very clear definition of the problem before deciding that blockchain is the solution. “A lot of times people don’t know what their problem is, and the problem is not one that can be fixed with blockchain.” Additionally, accuracy (“garbage in, garbage out”) remains a problem that blockchain on its own cannot solve. “If the off-chain process isn’t accurate, you have a separate problem to figure out before thinking about blockchain. Maybe you’re looking at human rights abuses of migrant workers but everything is being fudged, your supply chain is blurry, or the information being put on the blockchain is not verified.”

What about ethics and consent and the Digital Principles?

One Salon participant asked whether the Digital Principles are being used to guide ethical, responsible and sustainable blockchain use in the humanitarian space. The general impression in the room was no. “Deep crypto in the private sector is a black hole in the blockchain space,” according to one person, and the gap between the world of blockchain in the private sector and the world of blockchain in the humanitarian sector is huge. (See this write-up for a taste of one segment of the crypto-world.) “The majority of private sector blockchain enthusiasts who are working on humanitarian issues have not heard of any principles. They are operating with no principles, and sometimes it’s largely for PR because the blockchain hype cycle means they will get a lot of good press from it. You get someone who read an article in Vice about a problem in a place they’ve never heard of, and they decide that blockchain is the solution…. They are often re-inventing the wheel, and fire, and also electricity — they think that no one has ever thought about this problem before.”

Most in the room considered that this type of uninformed application of blockchain is irresponsible, and that these parallel worlds and conversations need to come together. “The humanitarian space has decades of experience with things that have been tried and haven’t worked – but people on the tech side think no one has ever tried solving these problems. We need to improve the dialogue and communication. There is a wealth of knowledge to share, and a huge learning curve on both sides.”

Additionally, one Salon participant pointed out the importance of bringing ethics into the discussion. “It’s not about just using a blockchain. It’s about what the problem is that you’re trying to solve, and does blockchain help address that problem? There are a lot of problems that blockchain is not appropriate for. Do you have the technical capacity or an accessible online environment? That’s important.”

On top of that, “it’s important for people to know that their information is being used in a particular way by a particular technology. We need to grapple with that, or we end up experimenting on people who are already marginalized or vulnerable to begin with. How do we do that? It’s like the Facebook moment. That same thing for blockchain – if you don’t know what’s going on and how your information is being used, it’s problematic.”

A third point is the massive environmental cost of a public blockchain. Currently, the computing power used to verify and validate transactions that happen on public chains is immense. That is part of the ethical challenge related to blockchain. “You can’t get around the massive environmental aspect. And that makes it ironic for blockchain to be used to track carbon offsets.” (Note: there are blockchain companies that say they are working on reducing the environmental impact of blockchain, with “pilots coming very soon,” but it remains to be seen whether this is true or whether it’s another part of the hype cycle.)

What should donors be doing?

In addition to taking into consideration the ethical, intellectual property, environmental, sustainability, ownership, and consent aspects mentioned above and being guided by the Digital Principles, it was suggested that donors do their homework and conduct thorough due diligence on potential partners and grantees. “The vetting process needs to be heightened with blockchain because of all the hype around it. Companies come and go. They are here one day and disappear the next.” There was deep suspicion in the room of the many blockchain outfits that are hyped up, do not actually have the staff to do blockchain for humanitarian purposes, and use this angle just to attract investment.

“Before investing, it would be important to talk with someone like Larissa [our lead discussant], who has done vetting,” said one Salon participant. “Don’t fall for the marketing. Do a lot of due diligence and demand evidence. Show us the evidence or we’re not funding you. If you’re saying you want to work with a vulnerable or marginalized population, do you have contact with them right now? Do you know them right now? Or did you just read about them in Vice?”

Recommendations outlined in the GAHI report include providing multi-year financing to humanitarian organizations to allow for the possibility of scaling, and asking for interoperability requirements and guidelines around transparency to be met so that there are not multiple silos governing the sector.

So, are we there yet?

Nope. But at least we’re starting to talk about evidence and learning!

Resources

In addition to the GAHI report, the following resources may be useful:

Salons run under Chatham House Rule, so no attribution has been made in this post. Technology Salons happen in several cities around the world. If you’d like to join a discussion, sign up here. If you’d like to host a Salon, suggest a topic, or support us to keep doing Salons in NYC please get in touch with me! 🙂


Read Full Post »

Karen Palmer is a digital filmmaker and storyteller from London who’s doing a dual residency at ThoughtWorks in Manhattan and TED New York to further develop a project called RIOT, described as an ‘emotionally responsive, live-action film with 3D sound.’ The film uses artificial intelligence, machine learning, various biometric readings, and facial recognition to take a person on a personalized journey through a dangerous riot.

Karen Palmer, the future of immersive filmmaking, Future of Storytelling (FoST) 

Karen describes RIOT as ‘bespoke film that reflects your reality.’ As you watch the film, the film is also watching you and adapting to your experience of viewing it. Using a series of biometric readings (the team is experimenting with eye tracking, facial recognition, gait analysis, infrared to capture body temperature, and an emerging technology that tracks heart rate by monitoring the capillaries under a person’s eyes) the film shifts and changes. The biometrics and AI create a “choose your own adventure” type of immersive film experience, except that the choice is made by your body’s reactions to different scenarios. A unique aspect of Karen’s work is that the viewer doesn’t need to wear any type of gear for the experience. The idea is to make RIOT as seamless and immersive as possible. Read more about Karen’s ideas and how the film is shaping up in this Fast Company article and follow along with the project on the RIOT project blog.
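At its core, the “choose your own adventure” mechanic amounts to a state machine whose transitions are keyed by the viewer’s dominant detected emotion. The sketch below is purely hypothetical – the scene names, emotion labels and scores are all invented, not taken from RIOT itself:

```python
# Hypothetical branching table: for each scene, the viewer's dominant
# detected emotion selects the next scene.
SCENES = {
    "riot_start": {"calm": "approach_police", "fear": "hide_in_store", "anger": "confront"},
    "approach_police": {"calm": "deescalate", "fear": "run", "anger": "confront"},
}

def dominant_emotion(readings: dict) -> str:
    """Pick the emotion with the highest confidence score."""
    return max(readings, key=readings.get)

def next_scene(current: str, readings: dict) -> str:
    """Advance the film; stay put if no branch is defined for this state."""
    branches = SCENES.get(current, {})
    return branches.get(dominant_emotion(readings), current)

# A fearful viewer at the opening scene is routed to the "hide" branch.
assert next_scene("riot_start", {"calm": 0.2, "fear": 0.7, "anger": 0.1}) == "hide_in_store"
```

In the real film, the readings would come from the fused biometric pipeline (facial coding, gait, body temperature, heart rate) rather than a hand-written dictionary; the sketch only shows how bodily reactions, not conscious choices, drive the branch.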

When we talked about her project, the first thing I thought of was “The Feelies” in Aldous Huxley’s 1932 classic ‘Brave New World.’ Yet the feelies were pure escapism, while Karen’s work aims to draw people into a challenging experience where they face their own emotions.

On Friday, December 15, I had the opportunity to facilitate a Salon discussion with a number of people from related disciplines who are intrigued by RIOT and the various boundaries it tests and explores. We had perspectives from people working in the areas of digital storytelling and narrative, surveillance and activism, media and entertainment, emotional intelligence, digital and immersive theater, brand experience, 3D sound and immersive audio, agency and representation, conflict mediation and non-state actors, film, artificial intelligence, and interactive design.

Karen has been busy over the past month as interest in the project begins to swell. In mid-November, at Montreal’s Phi Centre’s Lucid Realities exhibit, she spoke about how digital storytelling is involving more and more of our senses, bringing an extra layer of power to the experience. This means that artists and creatives have an added layer of responsibility. (Research suggests, for example, that the brain has trouble distinguishing between virtual reality [VR] and actual reality, and that children under the age of 8 have had problems differentiating between a VR experience and actual memory.)

At a recent TED Talk, Karen described the essence of her work as creating experiences where the participant becomes aware of how their emotions affect the narrative of the film while they are in it, and this helps them to see how their emotions affect the narrative of their life. Can this help to create new neural pathways in the brain, she asks. Can it help a person to see how their own emotions are impacting on them but also how others are reading their emotions and reacting to those emotions in real life?

Race and sexuality are at the forefront in the US – and the Trump election further heightened the tensions. Karen believes it’s ever more important to explore different perspectives and fears in the current context, where the potential for unrest is growing. Karen hopes that RIOT can be ‘your own personal riot training tool – a way to become aware of your own reactions and of moving through your fear.’

Core themes that we discussed on Friday include:

How can we harness the power of emotion? Despite our lives being emotionally hyper-charged (especially right now in the US), we keep using facts and data to try to change hearts and minds. This approach is ineffective. In addition, people are less trusting of third-party sources because of the onslaught of misinformation, disinformation and false information. Can we use storytelling to help us get through this period? Can immersive storytelling and creative use of 3D sound help us to trust more, to engage and to witness? Can it help us to think about how we might react during certain events, like police violence? (See Tahera Aziz’ project [re]locate about the murder of Stephen Lawrence in South London in 1993). Can it help us to better understand various perspectives? The final version of RIOT aims to bring in footage from several angles, such as CCTV from a looted store, a police body cam, and someone’s mobile phone footage shot as they ran past, in an effort to show an array of perspectives that would help viewers see things in different lights.

How do we catch the questions that RIOT stirs up in people’s minds? As someone experiences RIOT, they will have all sorts of emotions and thoughts, and these will depend on their identity and lived experiences. At one showing of RIOT, a young white boy said he learned that if he’s feeling scared he should try to stay calm. He also said that when the cop yelled at him in the film, he assumed that he must have done something wrong. A black teenager might have had an entirely different reaction to the police. RIOT is bringing in scent, haze, 3D sound, and other elements which have started to affect people more profoundly. Some have been moved to tears or said that the film triggered anger and other strong emotions for them.

Does the artist have a responsibility to accompany people through the full emotional experience? In traditional VR experiences, a person waits in line, puts on a VR headset, experiences something profound (and potentially something triggering), then takes off the headset and is rushed out so that the next person can try it. Creators of these new and immersive media experiences are just now becoming fully aware of how to manage the emotional side of the experiences and they don’t yet have a good handle on what their responsibilities are toward those who are going through them. How do we debrief people afterwards? How do we give them space to process what has been triggered? How do we bring people into the co-creation process so that we better understand what it means to tell or experience these stories? The Columbia Digital Storytelling Lab is working on gaining a better understanding of all this and the impact it can have on people.

How do we create the grammar and frameworks for talking about this? The technologies and tactics for this type of digital immersive storytelling are entirely new and untested. Creators are only now becoming more aware of the consequences of the experiences that they are creating: ‘What am I making? Why? How will people go through it? How will they leave? What are the structures and how do I make it safe for them?’ The artist can open someone up to an intense experience, but then they are often just ushered out, reeling, and someone else is rushed in. It’s critical to build time for debriefing into the experience and to have some capacity for managing the emotions and reactions that could be triggered.

SAFE Lab, for example, works with students and the community in Chicago, Harlem, and Brooklyn on youth-driven solutions to de-escalation of violence. The project development starts with the human experience and the tech comes in later. Youth are part of the solution space, but along the way they learn hard and soft skills related to emerging tech. The Lab is testing a debriefing process also. The challenge is that this is a new space for everyone; and creation, testing and documentation are happening simultaneously. Rather than just thinking about a ‘user journey,’ creators need to think about the emotionality of the full experience. This means that as opposed to just doing an immersive film – neuroscience, sociology, behavioral psychology, and lots of other fields and research are included in the dialogue. It’s a convergence of industries and sectors.

What about algorithmic bias? It’s not possible to create an unbiased algorithm, because humans all have bias. Even if you could create an unbiased algorithm, as soon as you started inputting human information into it, it would become biased. Also, as algorithms become more complex, it becomes more and more difficult to understand how they arrive at decisions. This results in black boxes that are putting out decisions that even the humans who build them can’t understand. The RIOT team is working with Dr. Hongying Meng of Brunel University London, an expert in the creation of facial and emotion detection algorithms, to develop an open source algorithm for RIOT. Even if the algorithm itself isn’t neutral, the process by which it computes will be transparent.

Most algorithms are not open. Because the majority of private companies have financial goals rather than social goals in using or creating algorithms, they have little incentive for being transparent about how an algorithm works or what biases are inherent. Ad agencies want to track how a customer reacts to a product. Facebook wants to generate more ad revenue so it adjusts what news you see on your feed. The justice system wants to save money and time by using sentencing algorithms. Yet the biases in their algorithms can cause serious harm in multiple ways. (See this 2016 report from ProPublica). The problem with these commercial algorithms is that they are opaque and the biases in them are not shared. This lack of transparency is considered by some to be more problematic than the bias itself.

Should there be a greater push for regulation of algorithms? People who work in surveillance are often ignored because they are perceived as paranoid. Yet fears that AI will be totally controlled by the military, the private sector and tech companies in ways that are hidden and opaque are real and it’s imperative to find ways to bring the actual dangers home to people. This could be partly accomplished through narrative and stories. (See John Oliver’s interview with Edward Snowden) Could artists create projects that drive conversations around algorithmic bias, help the public see the risks, and push for greater regulation? (Also of note: the New York City government recently announced that it will start a task force to look more deeply into algorithmic bias).

How is the RIOT team developing its emotion recognition algorithm? The RIOT team is collecting data to feed into the algorithm by capturing facial emotions and labeling them. The challenge is that one person may think someone looks calm, scared, or angry and another person may read it a different way. They are also testing self-reported emotions to reduce bias. The purpose of the RIOT facial detection algorithm is to measure what the person is actually feeling and how others perceive that the person is feeling. For example, how would a police officer read your face? How would a fellow protester see you? The team is developing the algorithm with the specific bias that is needed for the narrative itself. The process will be documented in a peer-reviewed research paper that considers these issues from the angle of state control of citizens. Other angles to explore would be how algorithms and biometrics are used by societies of control and/or by non-state actors such as militia in the Middle East or by right wing and/or white supremacist groups in the US. (See this article on facial recognition tools being used to identify sexual orientation)
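The labeling challenge described above – one person reading a face as calm, another as scared or angry – is usually quantified with inter-annotator agreement. A small illustrative sketch (the annotators, clips and labels are all invented) using raw agreement and Cohen’s kappa, which corrects for agreement that would occur by chance:

```python
from collections import Counter

def percent_agreement(labels_a, labels_b):
    """Share of items on which two annotators gave the same label."""
    assert len(labels_a) == len(labels_b)
    return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)

def cohens_kappa(labels_a, labels_b):
    """Agreement corrected for chance: kappa = (po - pe) / (1 - pe)."""
    n = len(labels_a)
    po = percent_agreement(labels_a, labels_b)
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    # Expected chance agreement from each annotator's label distribution.
    pe = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (po - pe) / (1 - pe)

# Two (invented) annotators labeling the emotion in the same six video clips.
a = ["calm", "fear", "anger", "calm", "fear", "calm"]
b = ["calm", "anger", "anger", "calm", "fear", "fear"]
print(round(percent_agreement(a, b), 2))  # 0.67
print(round(cohens_kappa(a, b), 2))       # 0.5
```

Low kappa on emotion labels signals that the “ground truth” being fed to a facial detection algorithm is itself contested – which is exactly the bias question the team is grappling with.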

Stay tuned to hear more…. We’ll be meeting again in the new year to go more in-depth on topics such as responsibly guiding people through VR experiences; exploring potential unintended consequences of these technologies and experiences, especially for certain racial groups; commercial applications for sensory storytelling and elements of scale; global applications of these technologies; practical development and testing of algorithms; prototyping, ideation and foundational knowledge for algorithm development.

Garry Haywood of Kinicho also wrote up his thoughts from the day.

Read Full Post »

This post is co-authored by Emily Tomkys, Oxfam GB; Danna Ingleton, Amnesty International; and me (Linda Raftree, Independent)

At the MERL Tech conference in DC this month, we ran a breakout session on rethinking consent in the digital age. Most INGOs have not updated their consent forms and policies for many years, yet the growing use of technology in our work, for many different purposes, raises many questions and uncertainties that are difficult to address. Our old ways of requesting and managing consent need to be modernized to meet the new realities of digital data. Is informed consent even possible when data is digital and/or opened? Do we have any way of controlling what happens with that data once it is digital? How often are organizations violating national and global data privacy laws? Can technology be part of the answer?

Let’s take a moment to clarify what kind of consent we are talking about in this post. Being clear on this point is important because there are many synchronous conversations on consent in relation to technology. For example there are people exploring the use of the consent frameworks or rhetoric in ICT user agreements – asking whether signing such user agreements can really be considered consent. There are others exploring the issue of consent for content distribution online, in particular personal or sensitive content such as private videos and photographs. And while these (and other) consent debates are related and important to this post, what we are specifically talking about is how we, our organizations and projects, address the issue of consent when we are collecting and using data from those who participate in programs or monitoring, evaluation, research and learning (MERL) that we are implementing.

This diagram highlights that no matter how someone is engaging with the data, how they do so and the decisions they make will impact on what is disclosed to the data subject.

This is as timely as ever because introducing new technologies and kinds of data means we need to change how we build consent into project planning and implementation. In fact, it gives us an amazing opportunity to build consent into our projects in ways that our organizations may not have considered in the past. While it used to be that informed consent was the domain of frontline research staff, the reality is that getting informed consent – where there is disclosure, voluntariness, comprehension and competence of the data subject –  is the responsibility of anyone ‘touching’ the data.

Here we share examples from two organizations who have been exploring consent issues in their tech work.

Over the past two years, Girl Effect has been incorporating a number of mobile and digital tools into its programs. These include both the Girl Effect Mobile (GEM) and the Technology Enabled Girl Ambassadors (TEGA) programs.

Girl Effect Mobile is a global digital platform that is active in 49 countries and 26 languages. It is being developed in partnership with Facebook’s Free Basics initiative. GEM aims to provide a platform that connects girls to vital information, entertaining content and to each other. Girl Effect’s digital privacy, safety and security policy directs the organization to review and revise its terms and conditions to ensure that they are ‘girl-friendly’ and respond to local context and realities, and that in addition to protecting the organization (as many T&Cs are designed to do), they also protect girls and their rights. The GEM terms and conditions were initially a standard T&C. They were too long to expect girls to look at them on a mobile, the language was legalese, and they seemed one-sided. So the organization developed a new T&C with simplified language and removed some of the legal clauses that were irrelevant to the various contexts in which GEM operates. Consent language was added to cover polls and surveys, since Girl Effect uses the platform to conduct research and for its monitoring, evaluation and learning work. In addition, summary points are highlighted in a shorter version of the T&Cs with a link to the full T&Cs. Girl Effect also develops short articles about online safety, privacy and consent as part of the GEM content as a way of engaging girls with these ideas as well.

TEGA is a girl-operated mobile-enabled research tool currently operating in Northern Nigeria. It uses data-collection techniques and mobile technology to teach girls aged 18-24 how to collect meaningful, honest data about their world in real time. TEGA provides Girl Effect and partners with authentic peer-to-peer insights to inform their work. Because Girl Effect was concerned that girls being interviewed may not understand the consent they were providing during the research process, they used the mobile platform to expand on the consent process. They added a feature where the TEGA girl researchers play an audio clip that explains the consent process. Afterwards, girls who are being interviewed answer multiple choice follow up questions to show whether they have understood what they have agreed to. (Note: The TEGA team report that they have incorporated additional consent features into TEGA based on examples and questions shared in our session).

Oxfam, in addition to developing its Responsible Program Data Policy, has been exploring ways in which technology can help address contemporary consent challenges. The organization had doubts about how much its informed consent statement (which explains who the organization is, what the research is about and why Oxfam is collecting data, and asks whether the participant is willing to be interviewed) was understood, and whether informed consent is really possible in the digital age. All the same, the organization wanted to be sure that the consent information was being read out in full by enumerators (the interviewers). There were questions about how this might vary between enumerators as well as across different contexts and countries of operation. To explore whether communities were hearing the consent statement fully, Oxfam is using mobile data collection with audio recordings in the local language, and using speed violations to check whether the time spent on the consent page is sufficient, given the length of the audio file played. This is by no means foolproof, but what Oxfam has found so far is that the audio file is often not played in full, or not at all.
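The “speed violation” logic can be sketched in a few lines. This is an illustration only, not Oxfam’s actual implementation – the field names, interview IDs and the 45-second audio length are all invented:

```python
# Hypothetical check: flag interviews where the enumerator spent less time
# on the consent screen than the consent audio recording runs.
CONSENT_AUDIO_SECONDS = 45  # assumed length of the local-language recording

def flag_speed_violations(interviews, audio_seconds=CONSENT_AUDIO_SECONDS):
    """Return IDs of interviews where time on the consent page < audio length."""
    return [i["id"] for i in interviews if i["consent_screen_seconds"] < audio_seconds]

interviews = [
    {"id": "INT-001", "consent_screen_seconds": 52},
    {"id": "INT-002", "consent_screen_seconds": 9},   # audio likely skipped
    {"id": "INT-003", "consent_screen_seconds": 47},
]
print(flag_speed_violations(interviews))  # ['INT-002']
```

A timing check like this can only show that the audio *could not* have been played in full; it cannot confirm that it was played, heard, or understood, which is why it complements rather than replaces comprehension checks like TEGA’s follow-up questions.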

Efforts like these are only the beginning, but they help to develop a resource base and stimulate more conversations that can help organizations and specific projects think through consent in the digital age.

Additional resources include this framework for Consent Policies developed at a Responsible Data Forum gathering.

Because of how quickly technology and data use is changing, one idea that was shared was that rather than using informed consent frameworks, organizations may want to consider defining and meeting a ‘duty of care’ around the use of the data they collect. This can be somewhat accomplished through the creation of organizational-level Responsible Data Policies. There are also interesting initiatives exploring new ways of enabling communities to define consent themselves – like this data licenses prototype.

The development and humanitarian sectors really need to take notice, adapt and update their thinking constantly to keep up with technology shifts. We should also be doing more sharing about these experiences. By working together on these types of wicked challenges, we can advance without duplicating our efforts.

Read Full Post »

I used to write blog posts two or three times a week, but things have been a little quiet here for the past couple of years. That’s partly because I’ve been ‘doing actual work’ (as we like to say) trying to implement the theoretical ‘good practices’ that I like soapboxing about. I’ve also been doing some writing in other places and in ways that I hope might be more rigorously critiqued and thus have a wider influence than just putting them up on a blog.

One of those bits of work that’s recently been released publicly is a first version of a monitoring and evaluation framework for SIMLab. We started discussing this at the first M&E Tech conference in 2014. Laura Walker McDonald (SIMLab CEO) outlines why in a blog post.

Evaluating the use of ICTs—which are used for a variety of projects, from legal services, coordinating responses to infectious diseases, media reporting in repressive environments, and transferring money among the unbanked or voting—can hardly be reduced to a check-list. At SIMLab, our past nine years with FrontlineSMS has taught us that isolating and understanding the impact of technology on an intervention, in any sector, is complicated. ICTs change organizational processes and interpersonal relations. They can put vulnerable populations at risk, even while improving the efficiency of services delivered to others. ICTs break. Innovations fail to take hold, or prove to be unsustainable.

For these and many other reasons, it’s critical that we know which tools do and don’t work, and why. As M4D edges into another decade, we need to know what to invest in, which approaches to pursue and improve, and which approaches should be consigned to history. Even for widely-used platforms, adoption doesn’t automatically mean evidence of impact….

FrontlineSMS is a case in point: although the software has clocked up 200,000 downloads in 199 territories since October 2005, there are few truly robust studies of the way that the platform has impacted the project or organization it was implemented in. Evaluations rely on anecdotal data, or focus on the impact of the intervention, without isolating how the technology has affected it. Many do not consider whether the rollout of the software was well-designed, training effectively delivered, or the project sustainably planned.

As an organization that provides technology strategy and support to other organizations — both large and small — it is important for SIMLab to better understand the quality of that support and how it may translate into improvements, as well as how the introduction or improvement of information and communication technology contributes to impact at a broader scale.

This is a difficult proposition, given that isolating a single factor like technology is extremely tough, if not impossible. The Framework thus aims to get at the breadth of considerations that go into successful tech-enabled project design and implementation. It does not aim to attribute impact to a particular technology, but to better understand that technology’s contribution to the wider impact at various levels. We know this is incredibly complex, but thought it was worth a try.

As Laura notes in another blogpost,

One of our toughest challenges while writing the thing was to try to recognize the breadth of success factors that we see as contributing to success in a tech-enabled social change project, without accidentally trying to write a design manual for these types of projects. So we reoriented ourselves, and decided instead to put forward strong, values-based statements.* For this, we wanted to build on an existing frame that already had strong recognition among evaluators – the OECD-DAC criteria for the evaluation of development assistance. There was some precedent for this, as ALNAP adapted them in 2008 to make them better suited to humanitarian aid. We wanted our offering to simply extend and consider the criteria for technology-enabled social change projects.

Here are the adapted criteria that you can read more about in the Framework. They were designed for internal use, but we hope they might be useful to evaluators of technology-enabled programming, commissioners of evaluations of these programs, and those who want to do in-house examination of their own technology-enabled efforts. We welcome your thoughts and feedback — The Framework is published in draft format in the hope that others working on similar challenges can help make it better, and so that they could pick up and use any and all of it that would be helpful to them. The document includes practical guidance on developing an M&E plan, a typical project cycle, and some methodologies that might be useful, as well as sample log frames and evaluator terms of reference.

Happy reading and we really look forward to any feedback and suggestions!!

*****

The Criteria

Criterion 1: Relevance

The extent to which the technology choice is appropriately suited to the priorities, capacities and context of the target group or organization.

Consider: are the activities and outputs of the project consistent with the goal and objectives? Was there a good context analysis and needs assessment, or another way for needs to inform design – particularly through participation by end users? Did the implementer have the capacity, knowledge and experience to implement the project? Was the right technology tool and channel selected for the context and the users? Was content localized appropriately?

Criterion 2: Effectiveness

A measure of the extent to which an information and communication channel, technology tool, technology platform, or a combination of these attains its objectives.

Consider: In a technology-enabled effort, there may be one tool or platform, or a set of tools and platforms may be designed to work together as a suite. Additionally, the selection of a particular communication channel (SMS, voice, etc) matters in terms of cost and effectiveness. Was the project monitored and early snags and breakdowns identified and fixed, was there good user support? Did the tool and/or the channel meet the needs of the overall project? Note that this criterion should be examined at outcome level, not output level, and should examine how the objectives were formulated, by whom (did primary stakeholders participate?) and why.

Criterion 3: Efficiency

Efficiency measures the outputs – qualitative and quantitative – in relation to the inputs. It is an economic term which signifies that the project or program uses the least costly technology approach (including both the tech itself, and what it takes to sustain and use it) possible in order to achieve the desired results. This generally requires comparing alternative approaches (technological or non-technological) to achieving the same outputs, to see whether the most efficient tools and processes have been adopted. SIMLab looks at the interplay of efficiency and effectiveness, and at the degree to which a new tool or platform can support a reduction in cost and time, along with an increase in the quality of data and/or services and in reach/scale.

Consider: Was the technology tool rollout carried out as planned and on time? If not, what were the deviations from the plan, and how were they handled? If a new channel or tool replaced an existing one, how do the communication, digitization, transportation and processing costs of the new system compare to the previous one? Would it have been cheaper to build features into an existing tool rather than create a whole new tool? To what extent were aspects such as cost of data, ease of working with mobile providers, total cost of ownership and upgrading of the tool or platform considered?
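The kind of comparison the efficiency criterion asks for can be made concrete with a simple total-cost-of-ownership calculation. The sketch below is purely illustrative: the channels, figures and field names are hypothetical, not SIMLab data, and a real comparison would fold in the data, provider-relationship and upgrade costs mentioned above.

```python
# A minimal total-cost-of-ownership comparison between two hypothetical
# communication channels over one year. All numbers are invented examples.

def total_cost(setup, per_message, messages, support_per_month, months):
    """Total cost of ownership: one-time setup + usage + ongoing support."""
    return setup + per_message * messages + support_per_month * months

# Compare a hypothetical SMS channel against a voice (IVR) channel,
# assuming the same volume of 120,000 messages/calls in twelve months.
sms = total_cost(setup=2000, per_message=0.01, messages=120_000,
                 support_per_month=150, months=12)
ivr = total_cost(setup=5000, per_message=0.08, messages=120_000,
                 support_per_month=300, months=12)

print(f"SMS total cost of ownership: ${sms:,.2f}")
print(f"IVR total cost of ownership: ${ivr:,.2f}")
```

Even this toy version makes the point that per-message price alone is misleading: setup and ongoing support can dominate, and the cheaper channel is only "most efficient" if it also meets the project's effectiveness needs.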

Criterion 4: Impact

Impact relates to the consequences of achieving or not achieving the outcomes. Impacts may take months or years to become apparent, and often cannot be established in an end-of-project evaluation. Identifying, documenting and/or proving attribution (as opposed to contribution) may be an issue here. ALNAP's complex emergencies evaluation criteria include 'coverage' as well as impact: 'the need to reach major population groups wherever they are.' They note: 'in determining why certain groups were covered or not, a central question is: "What were the main reasons that the intervention provided or failed to provide major population groups with assistance and protection, proportionate to their need?"' This is very relevant for us.

For SIMLab, a lack of coverage in an inclusive technology project means not only failing to reach some groups, but also widening the gap between those who do and do not have access to the systems and services leveraging technology. We believe that this has the potential to actively cause harm. Evaluation of inclusive tech has dual priorities: evaluating the role and contribution of technology, but also evaluating the inclusive function or contribution of the technology. A platform might perform well, have high usage rates, and save costs for an institution while not actually increasing inclusion. Evaluating both impact and coverage requires an assessment of risk, both to targeted populations and to others, as well as attention to unintended consequences of the introduction of a technology component.

Consider: To what extent does the choice of communications channels or tools enable wider and/or higher quality participation of stakeholders? Which stakeholders? Does it exclude certain groups, such as women, people with disabilities, or people with low incomes? If so, was this exclusion mitigated with other approaches, such as face-to-face communication or special focus groups? How has the project evaluated and mitigated risks, for example to women, LGBTQI people, or other vulnerable populations, relating to the use and management of their data? To what extent were ethical and responsible data protocols incorporated into the platform or tool design? Did all stakeholders understand and consent to the use of their data, where relevant? Were security and privacy protocols put into place during program design and implementation/rollout? How were protocols specifically integrated to ensure protection for more vulnerable populations or groups? What risk-mitigation steps were taken in case of any security holes found or suspected? Were there any breaches? How were they addressed?

Criterion 5: Sustainability

Sustainability is concerned with measuring whether the benefits of a technology tool or platform are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable. For SIMLab, sustainability includes both the ongoing benefits of the initiatives and the literal ongoing functioning of the digital tool or platform.

Consider: If the project required financial or time contributions from stakeholders, are they sustainable, and for how long? How likely is it that the business plan will enable the tool or platform to continue functioning, including background architecture work, essential updates, and user support? If the tool is open source, is there sufficient capacity to continue to maintain changes and updates to it? If it is proprietary, has the project implementer considered how to cover ongoing maintenance and support costs? If the project is designed to scale vertically (e.g., a centralized model of tool or platform management that rolls out in several countries) or be replicated horizontally (e.g., a model where a tool or platform can be adopted and managed locally in a number of places), has the concept shown this to be realistic?

Criterion 6: Coherence

The OECD-DAC does not have a sixth criterion. However, we've riffed on ALNAP's additional criterion of Coherence, which relates to the broader policy context (development, market, communication networks, data standards and interoperability mandates, national and international law) within which a technology was developed and implemented. We propose that evaluations of inclusive technology projects aim to critically assess the extent to which the technologies fit within the broader market at local, national and international levels. This includes compliance with national and international regulation and law.

Consider: Has the project considered interoperability of platforms (for example, ensuring that APIs are available) and standard data formats (so that data export is possible) to support sustainability and use of the tool in an ecosystem of other products? Is the project team confident that the project is in compliance with existing legal and regulatory frameworks? Is it working in harmony with, or against, the wider context of other actions in the area? For example, in an emergency situation, is it linking its information system with those that can feasibly provide support? Is it creating demand that cannot feasibly be met? Is it working with or against government or wider development policy shifts?
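The data-export point above is worth making concrete: a tool supports the surrounding ecosystem when its records can leave the platform in standard formats rather than being locked into a proprietary store. Below is a minimal sketch, with invented field names, of exporting the same records as JSON (suitable for an API response) and CSV (openable in any spreadsheet tool).

```python
# A minimal sketch of standard-format data export. The records and field
# names are hypothetical examples, not any real project's data model.
import csv
import io
import json

records = [
    {"id": 1, "district": "North", "messages_sent": 340},
    {"id": 2, "district": "South", "messages_sent": 125},
]

# JSON export: a machine-readable format other tools can consume via an API.
json_export = json.dumps(records, indent=2)

# CSV export: a flat format any spreadsheet or analysis tool can open.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "district", "messages_sent"])
writer.writeheader()
writer.writerows(records)
csv_export = buf.getvalue()
```

The design choice being evaluated is not the serialization code itself but whether the platform exposes such exports at all; a tool that only renders data in its own interface fails this criterion even if it performs well otherwise.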

Read Full Post »


Photo: Duncan Edwards, IDS.

A 2010 review of impact and effectiveness of transparency and accountability initiatives, conducted by Rosie McGee and John Gaventa of the Institute of Development Studies (IDS), found a prevalence of untested assumptions and weak theories of change in projects, programs and strategies. This week IDS is publishing their latest Bulletin titled “Opening Governance,” which offers a compilation of evidence and contributions focusing specifically on Technology in Transparency and Accountability (Tech for T&A).

It has a good range of articles that delve into critical issues in the Tech for T&A and Open Government spaces; help to clarify concepts and design; explore gender inequity as related to information access; and unpack the ‘dark side’ of digital politics, algorithms and consent.

In the opening article, editors Duncan Edwards and Rosie McGee (both currently working with the IDS team that leads the Making All Voices Count Research, Learning and Evidence component) give a superb in-depth review of the history of Tech for T&A and outline some of the challenges that have stemmed from ambiguous or missing conceptual frameworks and a proliferation of “buzzwords and fuzzwords.”

They unpack the history of and links between concepts of “openness,” “open development,” “open government,” “open data,” “feedback loops,” “transparency,” “accountability,” and “ICT4D (ICT for Development)” and provide some examples of papers and evidence that could help to recalibrate expectations among scholars and practitioners (and amongst donors, governments and policy-making bodies, one hopes).

The editors note that conceptual ambiguity continues to plague the field of Tech for T&A, causing technical problems because it hinders attempts to demonstrate impact; and creating political problems “because it clouds the political and ideological differences between projects as different as open data and open governance.”

The authors hope to stoke debate and promote the existing evidence in order to tone down the buzz. Likewise, they aim to provide greater clarity to the Tech for T&A field by offering concrete conclusions stemming from the evidence that they have reviewed and digested.

Download the Opening Governance report here.


Read Full Post »

Since I started looking at the role of ICTs in monitoring and evaluation a few years back, one concern that has consistently come up is: “Are we getting too focused on quantitative M&E because ICTs are more suited to gather quantitative data? Are we forgetting the importance of qualitative data and information? How can we use ICTs for qualitative M&E?”

So it’s great to see that Insight Share (in collaboration with UNICEF) has just put out a new guide for facilitators on using Participatory Video (PV) and the Most Significant Change (MSC) methodologies together.


The Most Significant Change methodology is a qualitative method developed by Rick Davies and Jess Dart and documented in their 2005 guide.

Participatory Video methodologies have also been around for quite a while, and they are nicely laid out in Insight Share’s Participatory Video Handbook, which I’ve relied on in the past to guide youth participatory video work. With mobile video becoming more and more common, and editing tools getting increasingly simple, it’s now easier than ever to integrate video into community processes.


The new toolkit combines these two methods and provides guidance for evaluators, development workers, facilitators, participatory video practitioners, M&E staff and others who are interested in learning how to use participatory video as a tool for qualitative evaluation via MSC. The toolkit takes users through a nicely designed, step-by-step process to planning, implementing, interpreting and sharing results.

I highly recommend taking a quick look at the toolkit to see if it might be a useful method of qualitative M&E — enhanced and livened up a bit with video!

Read Full Post »

Last month I joined a panel hosted by the Guardian on the contribution of innovation and technology to the Sustainable Development Goals (SDGs). Luckily they said that it was fine to come from a position of ‘skeptical realism.’

To drum up some good skeptical realist thoughts, I did what every innovative person does – posted a question on Facebook. A great discussion among friends who work in development, innovation and technology ensued. (Some might accuse me of ‘crowdsourcing’ ideas for the panel, but I think of it as more of a group discussion enabled by the Internet.) In the end, I didn’t get to say most of what we discussed on Facebook while on the panel, so I’m summarizing here.

To start off, I tend to think that the most interesting thing about the SDGs is that they are not written for ‘those developing countries over there.’ Rather, all countries are supposed to meet them. (I’m still not sure how many people or politicians in the US are aware of this.)

Framing them as global goals forces recognition that we have global issues to deal with — inequality and exclusion happen within countries and among countries everywhere. This opens doors for a shift in the narrative and framing of ‘development.’ (See Goal 10: Reduce inequality within and among countries; and Goal 16: Promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels.)

These core elements of the SDGs, exclusion and inequality, are two things we also need to be aware of when we talk about innovation and technology. And while innovation and technology can contribute to development and inclusion (by connecting people and providing more access to information, helping improve access to services, creating space for new voices to speak their minds, contributing in some ways to improved government and international agency accountability, improving income generation, and so on), it’s important to be aware of who is excluded from creating, accessing, using and benefiting from tech and tech-enabled processes and advances.

Who creates and/or controls the tech? Who is pushed off platforms because of abuse or violence? Who is taken advantage of through tech? Who is using tech to control others? Who is seen as ‘innovative’ and who is ignored? For whom are most systems and services designed? Who is an entrepreneur by choice vs. an informal worker by necessity? There are so many questions to ask at both macro and micro levels.

But that’s not the whole of it. Even if all the issues of access and use were resolved, there are still problems with framing innovation and technology as one of the main solutions to the world’s problems. A core weakness of the Millennium Development Goals (MDGs) was that they were heavy on quantifiable goals and weak on reaching the most vulnerable and on improving governance. Many innovation and technology solutions suffer the same problem.

Sometimes we try to solve the wrong problems with tech, or we try to solve the wrong problems altogether, without listening to and involving the people who best understand the nature of those problems, without looking at the structural changes needed for sustainable impact, and without addressing exclusion at the micro-level (within and among districts, communities, neighborhoods or households).

Often a technological solution is brought in for questionable reasons. There is too little analysis of the political economy in development work, as DE noted on the discussion thread. Too few people are asking who is pushing for a technology solution. Why technology? Who gains? What is the motivation? As Ory Okolloh asked recently: Why are Africans expected to innovate and entrepreneur our way out of our problems? We need to get past our collective fascination with the invention of products and move on to a more holistic understanding of innovation that involves sustainable implementation, change, and improvement over the longer term.

Innovation is a process, not a product. As MBC said on the discussion thread, “Don’t confuse doing it first with doing it best.” Innovation is not an event, a moment, a one-time challenge, a product, a simple solution. Innovation is technology agnostic, noted LS. So we need to get past the goal of creating and distributing more products. We need to think more about innovating and tweaking processes, developing new paradigms and adjusting and improving on ways of doing things that we already know work. Sometimes technology helps, but that is not always the case.

We need more practical innovation. We should be looking at old ideas in a new context, said AM, citing Steven Johnson’s Where Good Ideas Come From. “The problem is that we need systems change and no one wants to talk about that or do it because it’s boring and slow.”

The heretical IT dared suggest that there’s too much attention to high profile innovation. “We could do with more continual small innovation and improvements and adaptations with a strong focus on participants/end users. This doesn’t make big headlines but it does help us get to actual results,” he said.

Along with that, IW suggested we need more innovative thinking and listening, and less innovative technology. “This might mean senior aid officials spending half a day per week engaging with the people they are supposed to be helping.”

One innovative behavior change might be that of overcoming the ‘expert knowledge’ problem said DE. We need to ensure that the intended users or participants in an innovation or a technology or technological approach are involved and supported to frame the problem, and to define and shape the innovation over time. This means we also need to rely on existing knowledge – immediate and documented – on what has worked, how and when and where and why and what hasn’t, and to make the effort to examine how this knowledge might be relevant and useful for the current context and situation. As Robert Chambers said many years ago: the links of modern scientific knowledge with wealth, power, and prestige condition outsiders to despise and ignore rural peoples’ own knowledge. Rural people’s knowledge and modern scientific knowledge are complementary in their strengths and weaknesses.

Several people asked whether the most innovative thing in the current context is simply political will and seeing past an election cycle, a point that Kentaro Toyama often makes. We need renewed focus on political will and capacity and a focus on people rather than generic tech solutions.

In addition, we need paradigm shifts and more work to make the current system inclusive and fit for purpose. Most of our existing institutions and systems, including ‘development’ itself, carry all of the old prejudices and ‘isms.’ We need more questioning of these systems and more thinking about realistic alternatives, led and designed by people who have been traditionally excluded and pushed out. As a sector, we’ve focused a LOT on technocratic approaches over the past several years, and we’ve stopped being afraid to get technical. Now we need to stop being afraid to get political.

In summary, there is certainly a place for technology and for innovation in the SDGs, but the innovation narrative needs an overhaul. Just as we’ve seen with terms like ‘social good’ and ‘user-centered design,’ we’ve collectively imbued these ideas and methods with properties that they don’t actually have, and we’ve fetishized them. Reclaiming the term innovation, said HL, and taking it back to a real process with more realistic expectations might do us a lot of good.


Read Full Post »

Back in 2010, I wrote a post called “Where’s the ICT4D distance learning?” which led to some interesting discussions, including with the folks over at TechChange, who were just getting started. We ended up co-hosting a Twitter chat (summarized here) and having some great discussions on the lack of opportunities for humanitarian and development practitioners to professionalize their understanding of ICTs in their work.

It’s pretty cool today, then, to see that in addition to having run a bunch of on-line short courses focused on technology and various aspects of development and social change work, TechChange is kicking off their first Diploma program focusing on using ICT for monitoring and evaluation — an area that has become increasingly critical over the past few years.

I’ve participated in a couple of these short courses, and what I like about them is that they are not boring one-way lectures. Though you are studying at a distance, you don’t feel like you’re alone. There are variations on the type and length of the educational materials including short and long readings, videos, live chats and discussions with fellow students and experts, and smaller working groups. The team and platform do a good job of providing varied pedagogical approaches for different learning styles.

The new Diploma in ICT and M&E program has tracks for working professionals (launching in September of 2015) and prospective Graduate Students (launching in January 2016). Both offer a combination of in-person workshops, weekly office hours, a library of interactive on-demand courses, access to an annual conference, and more. (Disclaimer – you might see some of my blog posts and publications there).

The graduate student track will also have a capstone project, portfolio development support, one-on-one mentorship, live simulations, and a job placement component. Both courses take 16 weeks of study, but these can be spread out over a whole year to provide maximum flexibility.

For many of us working in the humanitarian and development sectors, work schedules and frequent travel make it difficult to access formal higher-level schooling. Not to mention, few universities offer courses related to ICTs and development. The idea of incurring a huge debt is also off-putting for a lot of folks (including me!). I’m really happy to see good quality, flexible options for on-line learning that can improve how we do our work and that also provides the additional motivation of a diploma certificate.

You can find out more about the Diploma program on the TechChange website (note: registration for the fall course ends September 11th).


Read Full Post »

Older Posts »