
Karen Palmer is a digital filmmaker and storyteller from London who’s doing a dual residency at ThoughtWorks in Manhattan and TED New York to further develop a project called RIOT, described as an ‘emotionally responsive, live-action film with 3D sound.’ The film uses artificial intelligence, machine learning, various biometric readings, and facial recognition to take a person through a personalized journey during a dangerous riot.

Karen Palmer, the future of immersive filmmaking, Future of Storytelling (FoST) 

Karen describes RIOT as ‘bespoke film that reflects your reality.’ As you watch the film, the film is also watching you and adapting to your experience of viewing it. Using a series of biometric readings (the team is experimenting with eye tracking, facial recognition, gait analysis, infrared to capture body temperature, and an emerging technology that tracks heart rate by monitoring the capillaries under a person’s eyes) the film shifts and changes. The biometrics and AI create a “choose your own adventure” type of immersive film experience, except that the choice is made by your body’s reactions to different scenarios. A unique aspect of Karen’s work is that the viewer doesn’t need to wear any type of gear for the experience. The idea is to make RIOT as seamless and immersive as possible. Read more about Karen’s ideas and how the film is shaping up in this Fast Company article and follow along with the project on the RIOT project blog.
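The team hasn’t published how its branching logic works, but the mechanic described above — the next scene chosen by the body rather than a button press — can be sketched roughly as follows. All signal names, thresholds, and branch labels here are hypothetical illustrations, not details from the RIOT project.

```python
# Hypothetical sketch of biometric-driven branching: the viewer's
# physiological state, not a conscious choice, selects the next scene.
# Thresholds and labels are invented for illustration.

def choose_branch(heart_rate_bpm, face_emotion):
    """Select the next film segment from the viewer's current state.

    heart_rate_bpm: estimated heart rate (e.g. from capillary imaging)
    face_emotion:   label from a facial-expression classifier
                    ("calm", "fearful", "angry", ...)
    """
    aroused = heart_rate_bpm > 100  # illustrative threshold
    if face_emotion == "calm" and not aroused:
        return "de_escalation_path"
    if face_emotion == "fearful" or aroused:
        return "flight_path"
    if face_emotion == "angry":
        return "confrontation_path"
    return "neutral_path"

print(choose_branch(115, "fearful"))  # -> flight_path
```

In a real installation the inputs would stream continuously from the sensors, and the branch decision would be re-evaluated at each narrative fork rather than once.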

When we talked about her project, the first thing I thought of was “The Feelies” in Aldous Huxley’s 1932 classic ‘Brave New World.’ Yet the feelies were pure escapism, and Karen’s work aims to draw people into a challenging experience where they face their own emotions.

On Friday, December 15, I had the opportunity to facilitate a Salon discussion with a number of people from related disciplines who are intrigued by RIOT and the various boundaries it tests and explores. We had perspectives from people working in the areas of digital storytelling and narrative, surveillance and activism, media and entertainment, emotional intelligence, digital and immersive theater, brand experience, 3D sound and immersive audio, agency and representation, conflict mediation and non-state actors, film, artificial intelligence, and interactive design.

Karen has been busy over the past month as interest in the project begins to swell. In mid-November, at Montreal’s Phi Centre’s Lucid Realities exhibit, she spoke about how digital storytelling is involving more and more of our senses, bringing an extra layer of power to the experience. This means that artists and creatives have an added layer of responsibility. (Research suggests, for example, that the brain has trouble distinguishing between virtual reality [VR] and actual reality, and children under the age of 8 have had problems differentiating between a VR experience and actual memory.)

At a recent TED Talk, Karen described the essence of her work as creating experiences where the participant becomes aware of how their emotions affect the narrative of the film while they are in it, which in turn helps them see how their emotions affect the narrative of their own life. Can this, she asks, help create new neural pathways in the brain? Can it help a person see not only how their own emotions affect them, but also how others read those emotions and react to them in real life?

Race and sexuality are at the forefront in the US – and the election of Donald Trump further heightened tensions. Karen believes it’s ever more important to explore different perspectives and fears in the current context, where the potential for unrest is growing. Karen hopes that RIOT can be ‘your own personal riot training tool – a way to become aware of your own reactions and of moving through your fear.’

Core themes that we discussed on Friday include:

How can we harness the power of emotion? Despite our lives being emotionally hyper-charged (especially right now in the US), we keep using facts and data to try to change hearts and minds, and this approach is ineffective. In addition, people are less trusting of third-party sources because of the onslaught of misinformation, disinformation and false information. Can we use storytelling to help us get through this period? Can immersive storytelling and creative use of 3D sound help us to trust more, to engage and to witness? Can it help us to think about how we might react during certain events, like police violence? (See Tahera Aziz’s project [re]locate about the murder of Stephen Lawrence in South London in 1993.) Can it help us to better understand various perspectives? The final version of RIOT aims to bring in footage from several angles, such as CCTV from a looted store, a police body cam, and someone’s mobile phone footage shot as they ran past, in an effort to show an array of perspectives that would help viewers see things in different lights.

How do we catch the questions that RIOT stirs up in people’s minds? As someone experiences RIOT, they will have all sorts of emotions and thoughts, and these will depend on their identity and lived experiences. At one showing of RIOT, a young white boy said he learned that if he’s feeling scared he should try to stay calm. He also said that when the cop yelled at him in the film, he assumed that he must have done something wrong. A black teenager might have had an entirely different reaction to the police. RIOT is bringing in scent, haze, 3D sound, and other elements, which have started to affect people more profoundly. Some have been moved to tears or said that the film triggered anger and other strong emotions for them.

Does the artist have a responsibility to accompany people through the full emotional experience? In traditional VR experiences, a person waits in line, puts on a VR headset, experiences something profound (and potentially something triggering), then takes off the headset and is rushed out so that the next person can try it. Creators of these new and immersive media experiences are just now becoming fully aware of how to manage the emotional side of the experiences and they don’t yet have a good handle on what their responsibilities are toward those who are going through them. How do we debrief people afterwards? How do we give them space to process what has been triggered? How do we bring people into the co-creation process so that we better understand what it means to tell or experience these stories? The Columbia Digital Storytelling Lab is working on gaining a better understanding of all this and the impact it can have on people.

How do we create the grammar and frameworks for talking about this? The technologies and tactics for this type of digital immersive storytelling are entirely new and untested. Creators are only now becoming more aware of the consequences of the experiences that they are creating: ‘What am I making? Why? How will people go through it? How will they leave? What are the structures and how do I make it safe for them?’ The artist can open someone up to an intense experience, but then they are often just ushered out, reeling, and someone else is rushed in. It’s critical to build time for debriefing into the experience and to have some capacity for managing the emotions and reactions that could be triggered.

SAFE Lab, for example, works with students and the community in Chicago, Harlem, and Brooklyn on youth-driven solutions to de-escalation of violence. The project development starts with the human experience, and the tech comes in later. Youth are part of the solution space, but along the way they learn hard and soft skills related to emerging tech. The Lab is also testing a debriefing process. The challenge is that this is a new space for everyone, and creation, testing and documentation are happening simultaneously. Rather than just thinking about a ‘user journey,’ creators need to think about the emotionality of the full experience. This means that rather than just making an immersive film, creators bring neuroscience, sociology, behavioral psychology, and many other fields and bodies of research into the dialogue. It’s a convergence of industries and sectors.

What about algorithmic bias? It’s not possible to create an unbiased algorithm, because humans all have bias. Even if you could create an unbiased algorithm, as soon as you started inputting human information into it, it would become biased. Also, as algorithms become more complex, it becomes more and more difficult to understand how they arrive at decisions. This results in black boxes that put out decisions that even the humans who build them can’t understand. The RIOT team is working with Dr. Hongying Meng of Brunel University London, an expert in the creation of facial and emotion detection algorithms, to develop an open source algorithm for RIOT. Even if the algorithm itself isn’t neutral, the process by which it computes will be transparent.

Most algorithms are not open. Because the majority of private companies have financial goals rather than social goals in using or creating algorithms, they have little incentive for being transparent about how an algorithm works or what biases are inherent. Ad agencies want to track how a customer reacts to a product. Facebook wants to generate more ad revenue so it adjusts what news you see on your feed. The justice system wants to save money and time by using sentencing algorithms. Yet the biases in their algorithms can cause serious harm in multiple ways. (See this 2016 report from ProPublica). The problem with these commercial algorithms is that they are opaque and the biases in them are not shared. This lack of transparency is considered by some to be more problematic than the bias itself.

Should there be a greater push for regulation of algorithms? People who work in surveillance are often ignored because they are perceived as paranoid. Yet fears that AI will be totally controlled by the military, the private sector and tech companies in ways that are hidden and opaque are real, and it’s imperative to find ways to bring the actual dangers home to people. This could be partly accomplished through narrative and stories. (See John Oliver’s interview with Edward Snowden.) Could artists create projects that drive conversations around algorithmic bias, help the public see the risks, and push for greater regulation? (Also of note: the New York City government recently announced that it will start a task force to look more deeply into algorithmic bias.)

How is the RIOT team developing its emotion recognition algorithm? The RIOT team is collecting data to feed into the algorithm by capturing facial emotions and labeling them. The challenge is that one person may think someone looks calm, scared, or angry and another person may read it a different way. They are also testing self-reported emotions to reduce bias. The purpose of the RIOT facial detection algorithm is to measure what the person is actually feeling and how others perceive that the person is feeling. For example, how would a police officer read your face? How would a fellow protester see you? The team is developing the algorithm with the specific bias that is needed for the narrative itself. The process will be documented in a peer-reviewed research paper that considers these issues from the angle of state control of citizens. Other angles to explore would be how algorithms and biometrics are used by societies of control and/or by non-state actors such as militia in the Middle East or by right wing and/or white supremacist groups in the US. (See this article on facial recognition tools being used to identify sexual orientation)
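The labeling challenge described above — one rater reading a face as calm while another reads it as scared — can be made concrete with a small sketch that takes multiple raters’ labels per image, picks the majority label, and reports how strongly the raters agree. The images and labels below are invented for illustration, not RIOT data.

```python
# Illustrative sketch: quantifying rater disagreement when humans label
# the same facial images with emotions. All data here is invented.
from collections import Counter

labels_per_image = {
    "img_01": ["calm", "calm", "scared"],
    "img_02": ["angry", "angry", "angry"],
    "img_03": ["scared", "calm", "angry"],  # full disagreement
}

def majority_and_agreement(ratings):
    """Return the majority label and the share of raters who chose it."""
    label, count = Counter(ratings).most_common(1)[0]
    return label, count / len(ratings)

for image, ratings in labels_per_image.items():
    label, agreement = majority_and_agreement(ratings)
    print(image, label, round(agreement, 2))
```

Low-agreement images are exactly where a trained model’s “ground truth” is shakiest, which is one reason the team is also testing self-reported emotions as a check on rater bias.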

Stay tuned to hear more…. We’ll be meeting again in the new year to go more in-depth on topics such as responsibly guiding people through VR experiences; exploring potential unintended consequences of these technologies and experiences, especially for certain racial groups; commercial applications for sensory storytelling and elements of scale; global applications of these technologies; practical development and testing of algorithms; prototyping, ideation and foundational knowledge for algorithm development.

Garry Haywood of Kinicho also wrote up his thoughts from the day.



Our April 16th Technology Salon Brooklyn, co-hosted with the Brooklyn Community Foundation (BCF) and AfroLatin@ Project explored the issue of tenant rights within the wider context of structural discrimination. We aimed to think about how new technology and social media might be a tool for helping community organizations to support Brooklyn residents to know their rights and report violations. We were also curious about how better use of data (and ‘big data’) might help housing rights activists and community organizations to more successfully engage residents and advocate for change.

Our lead discussant was David Reiss from Brooklyn Law School, who provided an overview of the wider housing market and challenges in New York City as well as information on some applications that are helping landlords do a better job of keeping properties up to standard. We also heard from Tynesha McHarris (BCF) and Amilcar Priestly (AfroLatin@ Project).

Brooklyn: lots of cool, lots of inequality

Kicking off the Salon, one discussant talked about the essence of Brooklyn. “What do you think of when you hear ‘Brooklyn’?” she asked. “It’s incredibly ‘cool,’ yes. But it’s also incredibly inequitable, and there is incredible inequality, mainly for people of color.” Brooklyn is the hub of New York’s tech industry, yet it’s also where tenants are being displaced, harassed and finding it difficult to live. “We want to see how tech can be used as a tool for, not a tool against,” she said. “How can we support folks to understand, advocate and organize around their rights? How can we use tech to organize within as well as across communities? Because these issues don’t just affect some people, they affect all of us who live here.”

She noted that technology is a tool with potential, and donors could be funding projects that use tech to help organize and advocate on tenant rights, but there is insufficient evidence to know how to approach it. To date technology has not really been part of the bigger picture.

Another discussant talked about the housing market as a whole in New York City, citing that available affordable housing has not kept up with the huge influx of population over the past several years. “Technology will not fix the underlying problem,” he noted. “It can’t expand the supply of affordable housing.” The real potential for technology is more in helping protect the rights of current tenants.

Some examples of how tech is supporting housing rights include applications and portals aimed at improving communications between landlords and tenants, so that problems can be more easily reported by either side and a record is kept of complaints, he commented. Incentives for landlords include free advertising of their units on the site and some reduced legal fees for things like rent stabilization approval. An interesting aspect of these sites is that the data can be analyzed to see where the largest number of complaints are coming from, and in this way patterns can be identified – for example, who are the bad landlords? Other sites offer lots of data for those who are interested in purchasing units, and this same type of data could be repurposed and made more accessible for lower-income and less technologically savvy residents.
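The pattern-finding idea mentioned above — using complaint data to surface the bad landlords — amounts to a simple count over a complaints log. A minimal sketch, with invented field names and data:

```python
# Hypothetical sketch: aggregating a tenant-complaints log to find the
# landlords with the most complaints. Data and fields are invented.
from collections import Counter

complaints = [
    {"landlord": "A", "building": "123 Main St", "issue": "no heat"},
    {"landlord": "B", "building": "45 Elm Ave",  "issue": "leak"},
    {"landlord": "A", "building": "123 Main St", "issue": "no hot water"},
    {"landlord": "A", "building": "9 Oak Rd",    "issue": "mold"},
]

by_landlord = Counter(c["landlord"] for c in complaints)
worst = by_landlord.most_common(2)
print(worst)  # -> [('A', 3), ('B', 1)]
```

A real system would of course aggregate by building and time period as well, and the same queries could power a plain-language lookup tool for residents rather than just dashboards for investors.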

One participant noted that gentrification and policing are very connected. “As we talk about legal rights and landlord-to-tenant conversations,” she noted, “we need to also bring in aspects of policing and racial justice. These are closely linked.“ As neighborhoods gentrify, newer residents often call for a greater police presence, and this can lead to harassment of long-time residents.

What other roles could technology play in strengthening existing work?

Connecting people and organizations

Lots of community organizations are working on the issues of tenant rights and gentrification, and there is a desire to build a network across these organizations. Tech could help to bring them together and to support stronger advocacy and organization. People don’t always know where they can go for help. One idea was to map organizations in different neighborhoods where people can go for help on housing issues. People also may think that they are the only tenants in a building who are having trouble with a landlord. Improved communication via tech might help let residents know they are not alone and to reduce the fear of reporting and speaking out about housing violations. One idea was to use the new system of NYC neighborhood domains to provide local information, including housing rights and specific information on buildings and their histories.

Transferring tactics from one movement to another

We’ve seen the huge role that mobile video has played in raising awareness on the issue of police violence, noted one discussant. “Technology has become a very powerful tool for communication and accountability – look at the case of Walter Scott (who was shot and killed by a police officer in North Charleston, South Carolina). The young man who filmed Scott’s death knew just what to do. He pointed his camera and captured it. How can we transfer this kind of action over to the housing movement? How can we get people to use their cameras and record housing violations and landlord harassment?”

Offering new, potentially more effective ways to report housing violations

Tech can offer different dissemination channels for different people – for example, in Detroit the elderly are particularly vulnerable to housing violations, said one Salon participant. One organization encourages people to report housing harassment via SMS. They included a call-back option to cater to older people who did not feel comfortable with SMS. Stories are also an important part of campaigns and public awareness, noted another participant. Sandy Storyline created a way to share text plus a photo via SMS for those who wanted to communicate stories but who were offline. This type of application could serve as a way of keeping record of housing violations, when/where they are reported and what the outcomes are.

Tracking housing violations

One way that tech is already helping is by tracking whether public housing buildings have heat and water. Sensors are attached to the buildings, and the information is sent to journalists who then write stories when building violations happen, mentioned one Salon participant. This could be accompanied by text messages out to residents of these buildings to inform them of the status of their building. Knowing that they are not the only ones noticing problems could help residents feel more confident about speaking out and confronting bad landlords. “It’s information that says to someone: ‘this message is not only for you, it’s for everyone in your building, and here is the number you can call to get support or if you fear retribution for reporting.’” Media attention puts pressure on landlords and can help bring violations to light and make people feel safer reporting them.
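At its core, the sensor idea above is a comparison of readings against a legal minimum, followed by a drafted alert. A rough sketch with invented data and an assumed threshold (NYC’s actual heat rules vary by season, time of day, and outdoor temperature):

```python
# Illustrative sketch of the heat-sensor pipeline: flag readings below a
# legal minimum and draft an alert. The threshold and readings are
# invented; real heat-law requirements vary by time of day and season.
LEGAL_MIN_F = 68  # assumed daytime minimum indoor temperature

readings = [
    {"building": "100 Park Pl", "indoor_f": 55},
    {"building": "200 Court St", "indoor_f": 71},
]

def heat_violations(readings, minimum=LEGAL_MIN_F):
    """Return the readings that fall below the legal minimum."""
    return [r for r in readings if r["indoor_f"] < minimum]

for v in heat_violations(readings):
    print(f"Heat violation at {v['building']}: {v['indoor_f']}F. "
          "This message is for everyone in your building.")
```

The same flagged records could feed both the journalist tip line and the resident SMS blast described above, so the two audiences see the same evidence.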

Encouraging local politicians to get involved

A study in Kenya found that youth tend to bypass local politicians and pay more attention to national government and governance. Similar trends are found in the US where, although local political decisions may impact residents more directly, fewer people are involved in or aware of local political processes than national ones. Tech could play a role in helping connect residents to local representatives who could take action to support fair housing, address bad landlords, and support longer-term solutions as well. Some local political offices have been very open to integrating technology into their work, said one participant, and these offices might be good places to think about partnering on initiatives that use technology to better connect with their constituencies.

Tracking and predicting trends and population movements and displacement

Mapping and big data sets are providing investors with incredible amounts of information on where to purchase and invest. How can organizations and advocates better use this information, not just to identify movement and displacement and conduct research on it, but also to predict it, prepare for it, and fight it together with residents? How is information that data scientists and research institutes have, as well as open data sets on New York City used by local organizations, some wondered, and where could it be better brought to bear? “Rather than coming up with parallel studies, how can we advocate for more and better open data from New York City on housing?” asked one participant.

Other recommendations

Don’t forget about the legalities of videotaping and sharing

Some people and politicians are pushing to make things like police videotaping illegal. This happened recently in Spain with the so-called “Citizen Security” law that has made it illegal to videotape a police officer in some cases. One discussant mentioned that some US Senators are also trying to restrict the rights of citizens to film police, and that advocates of social justice need to fight to keep these rights to document authorities.

Use the right technology for the audience

One participant noted that you can create great apps with all kinds of data and patterns, but the question is more about who will access and use them, and who is benefiting from them. Wealthy white men and already-privileged people will likely find it very simple to find and use the information and these applications, giving them an advantage in terms of finding good apartments at lower prices, with good landlords. The best way to reach lower income people, he said, as personally experienced from working on political campaigns, is knocking on doors and reaching out personally and directly. “We need to see how to marry community organization and technology.”

Understand the landscape

In order to understand what tech tools might be useful, we need to understand the communication and technology landscape in which we are working. Though Salon participants mentioned the importance of certain print publications, community radio stations in various languages, and increasing use of smartphones by young people, no one was aware of any current and widespread information on the information and communication habits of residents of Brooklyn that could help to target particular outreach efforts to different groups who were at risk of housing violations.

SMS is not a silver bullet – and trust is key

SMS can be extremely accessible, and there are many examples where it has worked very well. But experience shows that SMS works best where there are already strong networks in place, and trust is hugely important. One participant cautioned, “People need to trust where the text message is coming from. They need to know who is sending the text.” SMS also has limits because it is hyperlocal. “You won’t find it working across an entire borough,” said one participant.

Local organizations are key

Along with the issue of trust is the critical component of local organizations. As one participant reminded us, “especially faith-based organizations – temples, churches, mosques. They know everyone in the neighborhood and what’s going on. They tend to know how to walk a fine line on local politics.”

Youth could play a role

Because youth around the world, including in Brooklyn, tend to be up on the latest technology, they could play a role in helping parents and grandparents with housing rights violations, especially in communities where older people are not comfortable with English or where they may fear the police due to undocumented status or other past experiences. One idea was bridging the technology and age gap by engaging young people, who could help older people find out about their rights, legal support services and where to find help. Some research has shown that young people are starting to rely on technology as an institution, said one participant, with technology and online institutions replacing physical ones for many of them.

Be careful about creating demand without adequate response capacity

As with any tech project, creating demand for services and informing people about the existence of those services is often an easier task than building and sustaining the capacity to provide quality support. Any efforts to generate greater demand need to be accompanied by capacity and funding so that people do not become apathetic or feel that they’ve been tricked if they report a violation and do not receive the support they expect or were promised. Previous experiences with service providers or legal institutions will also impact whether people trust these efforts, even if they come through new channels like technology.

Figure out how community organizations and technology partners can work together

An important thing to work out is what a relationship between community organizations and technology partners might look like. “Community organizations don’t need to become technology experts, we could partner and work together on resolving some of these challenges,” said one participant, “but we need to figure out what something like that would look like.” In some cases, community organizations in Brooklyn have low capacity and extremely poor infrastructure due to limited funding, commented one participant. “How can we reach out and engage with them and ask if they are interested in working with tech partners? How can we find out from them what tech would be supportive for them in their work?”

Think about short and long-term efforts

It will be important to look at both supporting residents and community organizations in the immediate term, and thinking about how to use data and information to help address the long term and the wider structural issues that are playing a role in housing rights violations and differential impacts of the housing situation on specific groups, for example, the elderly and people of color. It’s also important to try to address some of the root causes – for example, as one participant asked, “Who is funding predatory landlords? Who are the investors in these vulture funds?”

***

In conclusion, participants expressed their interest in continuing discussions and a desire for greater participation by community organizations in future Salons. The hope is that the Salon can help to connect community organizations and those in the tech space in order to work together to address some of the issues that Brooklyn residents face.

If you’d like to join us for our next Salon, sign up here.

Many thanks to the Brooklyn Community Foundation for their fabulous hosting and AfroLatin@ Project for helping make the Salon happen!


Last week’s Technology Salon New York City touched on ethics in technology for democracy initiatives. We heard from lead discussants Malavika Jayaram, Berkman Center for Internet and Society; Ivan Sigal, Global Voices; and Amilcar Priestley, Afrolatin@ Project. Though the topic was catalyzed by the Associated Press’ article on ‘Zunzuneo’ (a.k.a. ‘Cuban Twitter’) and subsequent discussions in the press and elsewhere, we aimed to cover some of the wider ethical issues encountered by people and organizations who implement technology for democracy programs.

Salons are off the record spaces, so no attribution is made in this post, but I’ve summarized the discussion points here:

First up: Zunzuneo

The media misinterpreted much of the Zunzuneo story. Zunzuneo was not a secret mission, according to one Salon participant, as it’s not in the remit of USAID to carry out covert operations. The AP article conflated a number of ideas regarding how USAID works and the contracting mechanisms that were involved in this case, he said. USAID and the Office of Transition Initiatives (OTI) frequently disguise members, organizations, and contractors that work for it on the ground for security reasons. (See USAID’s side of the story here). This may still be an ethical question, but it is not technically “spying.” The project was known within the OTI and development community, but on a ‘need to know’ basis. It was not a ‘fly by night’ operation; it was more a ‘quietly and not very effectively run project.’

There were likely ethics breaches in Zunzuneo, from a legal standpoint. It’s not clear whether the data and phone numbers collected from the Cuban public for the project were obtained in a legal or ethical way. Some reports say they were obtained through a mid-level employee (a “Cuban engineer who had gotten the phone list” according to the AP article). (Note: I spoke separately to someone close to the project who told me that user opt-in/opt-out and other standard privacy protocols were in place). It’s also not entirely clear whether, as the AP states, the user information collected was being categorized into segments who were loyal or disloyal to the Cuban government, information which could put users at risk if found out.

Zunzuneo took place in a broader historical and geo-political context. As one person put it, the project followed Secretary Clinton’s speeches on Internet Freedom. There was a rush to bring technology into the geopolitical space, and ‘the articulation of why technology was important collided with a bureaucratic process in USAID and the State Department (the ‘F process’) that absorbed USAID into the State Department and made development part of the State Department’s broader political agenda.’ This agenda had been in the works for quite some time, and was part of a wider strategy of quietly moving into development spaces and combining development, diplomacy, intelligence and military (defense), the so-called 3 D’s.

Implementers failed to think through good design, ethics and community aspects of the work. In a number of projects of this type, the idea was that if you give people technology, they will somehow create bottom up pressure for political social change. As one person noted, ‘in the Middle East, as a counter example, the tech was there to enable and assist people who had spent 8-10 years building networks. The idea that we can drop tech into a space and an uprising will just happen and it will coincidentally push the US geopolitical agenda is a fantasy.’ Often these kinds of programs start with a strategic communications goal that serves a political end of the US Government. They are designed with the idea that a particular input equals some kind of a specific result down the chain. The problem comes when the people doing the seeding of the ideas and inputs are not familiar with the context they will be operating in. They are injecting inputs into a space that they don’t understand. The bigger ethical question is: Why does this thought process prevail in development? Much of that answer is found in US domestic politics and the ways that initiatives get funded.

Zunzuneo was not a big surprise for Afrolatino organizations. According to one discussant, Afrolatino organizations were not surprised when the Zunzuneo article came out, given the geopolitical history and the ongoing presence of the US in Latin America. Zunzuneo was seen as a 21st Century version of what has been happening for decades. Though it was criticized, it was not seen as particularly detrimental. Furthermore, the Afrolatino community (within the wider Latino community) has had a variety of relationships with the US over time – for example, some Afrolatino groups supported the Contras. Many Afrolatino groups have felt that they were not benefiting overall from the mestizo governments who have held power. In addition, much of Latin America’s younger generation is less tainted by the Cold War mentality, and does not see US involvement in the region as necessarily bad. Programs like Zunzuneo come with a lot of money attached, so often wider concerns about their implications are not in the forefront because organizations need to access funding. Central American and Caribbean countries are only just entering into a phase of deeper analysis of digital citizenship, and views and perceptions on privacy are still being developed.

Perceptions of privacy

There are differences in perception when it comes to privacy and these perceptions are contextual. They vary within and across countries and communities based on age, race, gender, economic levels, comfort with digital devices, political perspective and past history. Some older people, for example, are worried about the privacy violation of having their voice or image recorded, because the voice, image and gaze hold spiritual value and power. These angles of privacy need to be considered as we think through what privacy means in different contexts and adapt our discourse accordingly.

Privacy is hard to explain, as one discussant said: ‘There are not enough dead bodies yet, so it’s hard to get people interested. People get mad when the media gets mad, and until an issue hits the media, it may go unnoticed. It’s very hard to conceptualize the potential harm from lack of privacy. There may be a chilling effect but it’s hard to measure. The digital divide comes in as well, and those with less exposure may have trouble understanding devices and technology. They will then have even greater trouble understanding beyond the device to data doubles, disembodied information and de-anonymization, which are about 7 levels removed from what people can immediately see. Caring a lot about privacy can get you labeled as paranoid or a crazy person in many places.’

Fatalism about privacy can also hamper efforts. In the developing world, many feel that everything is corrupt and inept, and that there is no point in worrying about privacy and security. ‘Nothing ever works anyway, so even if the government wanted to spy on us, they’d screw it up,’ is the feeling. This is often the attitude of human rights workers and others who could be at greatest risk from privacy breaches or data collection, such as that which was reportedly happening within Zunzuneo. Especially among populations and practitioners who have less experience with new technologies and data, this can create large-scale risk.

Intent, action, context and consequences

Good intentions with little attention to privacy vs. data collection with a hidden political agenda. Where are the lines when data that are collected for a ‘good cause’ (for example, to improve humanitarian response) might be used for a different purpose that puts vulnerable people at risk? What about data that are collected with less altruistic intentions? What about when the two scenarios overlap? Data might be freely given or collected in an emergency that would be considered a privacy violation in a ‘development’ setting, or the data collection may lead to a privacy violation post-emergency. Often, slapping the ‘obviously good and unarguably positive’ label of ‘Internet freedom’ on something implies that it’s unquestionably positive when it may in fact be part of a political agenda with a misleading label. There is a long history of those with power collecting data that helps them understand and/or control those with less power, as one Salon participant noted, and we need to be cognizant of that when we think about data and privacy.

US Government approaches to political development often take an input/output approach when, in fact, political development is not the same as health development. ‘In political work, there is no clear and clean epidemiological goal we are trying to reach,’ noted a Salon participant. Political development is often contentious, and its targets and approaches are very different from those of health. When health models and rhetoric are used to work on other development issues, they are misleading. The wholesale adoption of these kinds of disease-model approaches leaves people and communities out of the decision-making process about their own development. Similarly, the rhetoric of strategic communications and its inclusion in the development agenda came about after the War on Terror, and it is also a poor fit for political development. The rhetoric of ‘opening’ and ‘liberating’ data is similar. These arguments may work well for one kind of issue, but they are not transferable to a political agenda. One Salon participant also pointed to the rhetoric of the privatization model, explaining that a profound yet seldom considered implication of the privatization of services is that once a service passes over to the private sector, the Freedom of Information Act (FOIA) no longer applies, and citizens and human rights organizations lose FOIA as a tool. Examples included the US prison system and the Blackwater case of several years ago.

It can be confusing for implementers to know what to do, what tools to use, what funding to accept and when it is OK to bring in an outside agenda. Salon participants provided a number of examples where they had to make choices and felt ethics could have been compromised. Is it OK to sign people up on Facebook or Gmail during an ICT and education project, given these companies’ marketing and privacy policies? What about working on aid transparency initiatives in places where human rights work or crime reporting can get people killed or individual philanthropists/donors might be kidnapped or extorted? What about a hackathon where the data and solutions are later given to a government’s civilian-military affairs office? What about telling LGBT youth about a social media site that encourages LGBT youth to connect openly with one another (in light of recent harsh legal penalties against homosexuality)? What about employing a user-centered design approach for a project that will eventually be overlaid on top of a larger platform, system or service that does not pass the privacy litmus test? Is it better to contribute to improving healthcare while knowing that your software system might compromise privacy and autonomy because it sits on top of a biometric system, for example? Participants at the Salon face these ethical dilemmas every day, and as one person noted, ‘I wonder if I am just window dressing something that will look and feel holistic and human-centered, but that will be used to justify decisions down the road that are politically negative or go against my values.’ Participants said they normally rely on their own moral compass, but clearly many are wrestling with the potential ethical implications of their actions.

What can we do? Recommendations from Salon participants

Work closely with and listen to local partners, who should be driving the process and decisions. There may be a role for an outside perspective, but the outside perspective should not trump the local one. Encourage and support local communities to build their own tools, narratives, and projects. Let people set their own agendas. Find ways to facilitate long-term development processes around communities rather than being subject to agendas from the outside.

Consider this to be ICT for Discrimination and think in every instance and every design decision about how to dial down discrimination. Data lead to sorting, and data get lumped into clusters. Find ways during the design process to reduce the discrimination that will come from that sorting and clustering process. The ‘Do no harm’ approach is key. Practitioners and designers should also be wary of the automation of development and the potential for automated decisions to be discriminatory.

Call out hypocrisy. Those of us who sit at Salons or attend global meetings hold tremendous privilege and power as compared to most of the rest of the world. ‘It’s not landless farmers or disenfranchised young black youth in Brazil who get to attend global meetings,’ said one Salon attendee. ‘It’s people like us. We need to be cognizant of the advantage we have as holders of power.’ Here in the US, the participant added, we need to be more aware of what private sector US technology companies are doing to take advantage of and maintain their stronghold in the global market and how the US government is working to allow US corporations to benefit disproportionately from the current Internet governance structure.

Use a rights-based approach to data and privacy to help to frame these issues and situations. Disclosure and consent are sometimes considered extraneous, especially in emergency situations. People think ‘this might be the only time I can get into this disaster or conflict zone, so I’m going to hoover up as much data as possible without worrying about privacy.’ On the other hand, sometimes organizations are paternalistic and make choices for people about their own privacy. Consent and disclosure are not new issues; they are merely manifested in new ways as new technology changes the game and we cannot guarantee anonymity or privacy any more for research subjects. There is also a difference between information a person actively volunteers and information that is passively collected and used without a person’s knowledge. Framing privacy in a human rights context can help place importance on both processes and outcomes that support people’s rights to control their own data and that increase empowerment.

Create a minimum standard for privacy. Though we may not be able to determine a ceiling for privacy, one Salon participant said we should at least consider a floor or a minimum standard. Actors on the ground will always feel that privacy standards are a luxury because they have little know-how and little funding, so creating and working within an ethical standard should be a mandate from donors. The standard could be established as an M&E criterion.

Establish an ethics checklist to decide on funding sources and create policies and processes that help organizations to better understand how a donor or sub-donor would access and/or use data collected as part of a project or program they are funding. This is not always an easy solution, however, especially for cash-strapped local organizations. In India, for example, organizations are legally restricted from receiving certain types of funding based on government concerns that external agencies are trying to bring in Western democracy and Western values. Local organizations have a hard time getting funding for anti-censorship or free speech efforts. As one person at the Salon said, ‘agencies working on the ground are in a bind because they can’t take money from Google because it’s tainted, they can’t take money from the State Department because it’s imperialism and they can’t take money from local donors because there are none.’

Use encryption and other technology solutions. Given the low levels of understanding and awareness of these tools, more needs to be done so that more organizations learn how to use them, and they need to be made simpler, more accessible and user-friendly. ‘Crypto Parties’ can help get organizations familiar with encryption and privacy, but better outreach is needed so that organizations understand the relevance of encryption and feel welcome in tech-heavy environments.

Thanks to participants and lead discussants for the great discussions and to ThoughtWorks for hosting us at their offices!

If you’d like to attend future Salons, sign up here!


At the November 8th Technology Salon in New York City, we looked at the role of ICTs in communication for development (C4D) initiatives with marginalized adolescent girls. Lead discussants Kerida McDonald and Katarzyna Pawelczyk discussed recent UNICEF reports related to the topic, and John Zoltner spoke about FHI360’s C4D work in practice.

To begin, it was pointed out that C4D is not donor communications or marketing. It is the use of communication approaches and methodologies to achieve influence at various levels —  e.g., family, institutional and policy —  to change behavior and social norms. C4D is one approach that is being used to address the root causes of gender inequality and exclusion.

As the UNICEF report on ICTs and C4D* notes, girls may face a number of situations that contribute to and/or are caused by their marginalization: early pregnancy, female genital cutting, early marriage, high rates of HIV/AIDS, low levels of education, lack of control over resources. ICTs alone cannot resolve these, because there is a deep and broad set of root causes. However, ICTs can be integrated systematically into the set of C4D tools and approaches that contribute to positive change.

Issues like bandwidth, censorship and electricity need to be considered when integrating ICTs into C4D work, and approaches that fit the context need to be developed. Practitioners should use tools that are in the hands of girls and their communities now, yet be aware of advances in access and new technologies, as these change rapidly.

Key points:

Interactivity is more empowering than one-way messaging: Many of the ICT solutions being promoted today focus on sending messages out via mobile phones. However, C4D approaches aim for interactivity and multi-channel, multi-directional communication, which has proven more empowering.

Content: Traditional media normally goes through a rigorous editorial process and it is possible to infuse it with a gender balance. Social media does not have the same type of filters, and it can easily be used to reinforce stereotypes about girls. This is something to watch and be aware of.

Purpose: It’s common with ICT-related approaches to start with the technology rather than with the goals. As one Salon participant asked, “What are the results we want to see for ourselves? What are the results that girls want to see? What are the root causes of discrimination and how are we trying to address them? What does success look like for girls? For organizations? Is there a role for ICTs in helping achieve success? If so, what is it?” These questions need to be the starting point, rather than the technology.

Participation: One Salon participant mentioned a 2-year project that is working together with girls to define their needs and their vision of success. The process is one of co-design, and it is aimed at understanding what girls want. Many girls expressed a feeling of isolation and a desire for connection, and so the project is looking at how ICTs can help them connect. As the process developed, the diversity of needs became very clear, and plans have changed dramatically based on input from a range of girls from different contexts. Implementers need to be prepared to change, adapt and respond to what girls say they want and to local realities.

****

A second study commissioned by UNICEF explores how young people use social media. The researchers encountered some challenges in terms of a strong gender approach for the study. Though a gender lens was used for analysis, there is little available data disaggregated by sex. The study does not focus on the most marginalized, because it looks at the use of social media, which normally requires a data connection or Internet access, which the most marginalized youth usually do not have.

The authors of the report found that youth most commonly used the Internet and social media for socializing and communicating with friends. Youth connected less often for schoolwork. One reason for this may be that in the countries/contexts where the research took place, there is no real integration of ICTs into the school system. It was emphasized that the findings in the report are not comparable or nationally representative, and blanket statements such as “this means x for the whole developing world” should be avoided.

Key points:

Self-reporting biases. Boys tend to have higher levels of confidence and self-report greater ICT proficiency than girls do. This may skew results and make it seem that boys have higher skill levels.

Do girls really have less access? We often hear that girls have less access than boys. The evidence gathered for this particular report suggests the answer is “yes and no.” In some places, when researchers asked “Do you have access to a mobile?” there was not a huge difference between urban and rural or between boys and girls. When they dug deeper, however, it became more complex. In the case of Zambia, access and ownership were similar for boys and girls, but fewer girls were connecting to the Internet at all as compared to boys. Understanding connectivity and use was quite complicated.

What are girls vs. boys doing online? This is an important factor when thinking about what solutions are applicable to which situation(s). Differences came up here in the study. In Argentina, girls were doing certain activities more frequently, such as chatting and looking for information, but they were not gaming. In Zambia, girls were doing some things less often than boys; for example, fewer girls than boys were looking for health information, although the number was still significant. A notable finding was that both girls and boys were accessing general health information more often than they were accessing sensitive information, such as sexual health or mental health.

What are the risks in the online world? A qualitative portion of the study in Kenya used focus groups with girls and boys and asked about their uses of social media and experiences of risk. Many out-of-school girls aged 15-17 reported that they used social media as a way to meet a potential partner who could help them out of their financial situation. They reported riskier behavior, contact with older men, and relationships more often than girls who were in school. Girls in general were more likely than boys to report unpleasant online encounters, for example, requests for self-exposure photos.

Hiding social media use. Most of the young people that researchers spoke with in Kenya were hiding social media use from their parents, who disapproved of it. This is an important point to note in C4D efforts that plan on using social media, and program designers will want to take parental attitudes about different media and communication channels into consideration as they design C4D programs.

****

When implementing programs, it is noteworthy how boys and girls tend to use ICT and media tools. Gender issues often manifest themselves right away. “The boys grab the cameras, the boys sit down first at the computers.” If practitioners don’t create special rules and a safe space for girls to participate, girls may be marginalized. In practical ICT and media work, it’s common for boys and girls to take on certain roles. “Some girls like to go on camera, but more often they tend to facilitate what is being done rather than star in it.” The gender gap in ICT access and use, where it exists, is a reflection of the power gaps of society in general.

In the most rural areas, even when people have access, they usually don’t have the resources and skills to use ICTs. Very simple challenges can affect girls’ ability to participate in projects; for example, a project will often hold training at times when it’s difficult for girls to attend. Unless someone systematically applies a gender lens to a program, organizations often don’t notice the challenges girls may face in participating. It’s not enough to do gender training or measure gender once a year; gendered approaches need to be built into program design.

Long-term interventions are needed if the goal is to emancipate girls, help them learn better, graduate, postpone pregnancy, and get a job. This cannot be done in a year with a simple project that has only one focus, because girls are dealing with education, healthcare, and a whole series of very entrenched social issues. What’s needed is to follow a cohort of girls and to provide information and support across all these sectors over the long term.

Key points:

Engaging boys and men: Negative reactions from men are a concern if and when girls and women start to feel more empowered or to access resources. For example, some mobile money and cash transfer programs direct funds to girls and women, and some studies have found that violence against women increases when women start to have more money and more freedom. Another study, however, of a small-scale effort that provides unconditional cash transfers to girls ages 18-19 in rural Kenya, is demonstrating just the opposite: girls have been able to say where money is spent and the gender dynamics have improved. This raises the question of whether program methodologies need to be oriented towards engaging boys and men and involving them in changing gender dynamics, and whether engaging boys and men can help avoid an increase in violence. Working with boys to become “girl champions” was cited as a way to help to bring boys into the process as advocates and role models.

Girls as producers, not just consumers. ICTs are not only tools for sending content to girls. Some programs are working to help girls produce content and create digital stories in their own languages. Sometimes these stories are used to advocate to decision makers for change in favor of girls and their agendas. Digital stories are being used as part of research processes and to support monitoring, evaluation and accountability work through ‘real-time’ data.

ICTs and social accountability. Digital tools are helping young people address accountability issues and inform local and national development processes. In some cases, youth are able to use simple, narrow bandwidth tools to keep up to date on actions of government officials or to respond to surveys to voice their priorities. Online tools can also lead to offline, face-to-face engagement. One issue, however, is that in some countries, youth are able to establish communication with national government ministers (because there is national-level capacity and infrastructure) but at local level there is very little chance or capability for engagement with elected officials, who are unprepared to respond and engage with youth or via social media. Youth therefore tend to bypass local government and communicate with national government. There is a need for capacity building at local level and decentralized policies and practices so that response capacity is strengthened.

Do ICTs marginalize girls? Some Salon participants worried that as conversations and information increasingly move to a digital environment, ICTs are magnifying the information and communication divide and further marginalizing some girls. Others felt that the fact that we are able to reach the majority of the world’s population now is very significant, and the inability to reach absolutely everyone doesn’t mean we should stop using ICTs. For this very reason – because sharing of information is increasingly digital – we should continue working to get more girls online and strengthen their confidence and abilities to use ICTs.

Many thanks to UNICEF for hosting the Salon!

(Salons operate under Chatham House Rule, thus no attribution has been given in the above summary. Sign up here if you’d like to attend Salons in the future!)

*Disclosure: I co-authored this report with Keshet Bachan.


I recently had the honor of leading a group of tech, development and gender folks in a discussion around Girls and ICTs at the Technology Salon.  The conversation revolved around 5 aspects I wrote about in an earlier blog post On Girls and ICTs:

  • Tension between participation and protection
  • Online behavior is an extension of, and a potential amplifier of, offline behavior
  • Qualifying the digital divide
  • Girls’ involvement in developing and designing ICT solutions for their own needs
  • Research on Girls and ICTs

Check out the Technology Salon’s page for a round-up of our discussions!

Photo:  Informal evening one-on-one ICT time at a Youth Empowerment through Arts and Media (YETAM) project workshop in Cameroon.

——————-
Related post on Wait… What?
On Girls and ICTs

Putting Cumbana on the Map:  with Ethics
Being a Girl in Cumbana

Girl Power and the CGI


At the Technology Salon hosted by the UN Foundation’s Technology Partnership with Vodafone Foundation on Jan 28, 2010, some folks from the DC area (and beyond) will gather to share experiences around girls and ICTs.

This conversation is an important one, given that gaps exist around discussion, practice and research.  The information and ideas shared at the Technology Salon will feed into the contents of the Girls and ICTs chapter for Plan’s upcoming 2010 “Because I am a Girl” Report (currently in the works).

There are a few points that I hope will be considered in the Technology Salon discussion:

Tension between participation and protection.

There are many examples of ICTs being used for increased participation and connection:  mobile phones for citizen journalism;  Twitter revolution in Iran; girls using mobiles to ask questions about sexuality and to get information to help them improve their sexual and reproductive health; girls married off early or those living in protective societies using mobile phones to maintain contact with friends; new media tools opening up possibilities for youth engagement in important conversations that normally they would be shut out of.

However, due to the very real problems of online child pornography, child trafficking, child harassment, and cyber bullying, there is also a strong push for more control and more restrictions on online use in the name of protecting children. The tension between child participation and child protection is a very real one.

As we look at how the technology and international development communities can support girls’ development, I hope it’s kept in mind that the more knowledge girls have about the internet and ICTs in general, the more practical use they are allowed, and the more coaching they receive to help them understand the implications of their actions, the better prepared they will be to navigate these realms and to keep themselves safe. Increasing their knowledge, abilities and desire to protect themselves may be more effective than setting strict external limitations.

Actively engaging girls in this as part of an educational process can be better than restricting their use – and being open to young people’s own ideas and ways of using ICTs is critical. Adult involvement in this area is important, but it is probably more worthwhile to coach than to control. Good communication and trust between children, youth and their adult mentors and guides is critical in this process. (Excellent resources on Child Online Protection (COP) for children, youth, educators and parents came out in October 2009 and are well worth the read.)

Online behavior mirrors offline behavior.

I remember getting an obscene phone call when I was around 8 years old. My mother did not blame the telephone, however. She blamed the pervert who was calling. She made sure to teach me how to be prepared in case it happened again. She emphasized not giving out information on the phone and hanging up immediately if I felt uncomfortable or didn’t know who was calling. She did not prohibit me from ever touching the phone again or blame me, but she was vigilant for a while.

In the same way, new ICTs themselves cannot be blamed for negative and twisted behaviors.  ICTs are tools that exacerbate and extend already existing human behaviors, and the blame lies with those who are using ICTs for child trafficking, cyber bullying and the other evils associated with the internet. It’s important to address underlying behaviors. Research shows that kids who are bullied offline are often also bullied online.  Girls who are vulnerable offline are likely also vulnerable online.  Online is a manifestation of offline, and the root causes of girls’ vulnerabilities online cannot be blamed only on the ICT tools themselves.

I have my own daughter now, and have discussed with her many times how to keep herself safe online and on the phone. It’s important for her to know this before something happens, not during or after. She will probably be using the internet and the phone for the rest of her life, so prohibiting them is not an option, and the benefits of using these tools obviously outweigh the risks. My own involvement with social networking sites, texting, etc. is an excellent way for me to know how these sites are used and what security holes there are for my children on the sites. How can parents and teachers in areas with limited use of ICTs be involved and engaged to serve as coaches and leaders in online protection together with children? How can communities help identify existing vulnerabilities in girls (or young people in general) that might manifest themselves online and offer support to prevent exploitation?

Digital gender divide.

There are some amazing examples of ICTs helping women and girls to improve their livelihoods: for example, women selling mobile telephone services; birth attendance being improved by using mobile phones to connect women to midwives, ambulances and other medical services; educational content being expanded using the internet; youth media and youth radio programs bringing girls’ voices and gender topics into the mix for community discussion and dialogue.

However, in places where boys and men dominate women and girls, boys and men likely also dominate the use of available ICTs. Men may control the family’s mobile phone and take it with them, or monitor women’s calls. In places where boys are more favored, their confidence to try new things is higher, meaning they may rush in to use mobiles, cameras and radio equipment in projects while girls hold back. In some cases girls report that boys hog and monopolize the computers and equipment, and access is denied. I’ve seen boys criticize, scorn and ridicule girls who are using equipment for the first time, and girls become too timid to try again. In many developing countries, just getting girls to attend school is difficult. If girls are assumed to be less intelligent or less worthy than boys, and their secondary school attendance (where ICT training might be offered) is not a priority, girls will have a very difficult time ever accessing and using ICTs.

Underlying issues surrounding girls’ participation in general need to be addressed. We need to think about how ICTs can be used to help increase girls’ inclusion, participation and self-esteem in general.

Girls’ involvement in developing and designing ICT solutions for their own needs.

Studies in several countries have shown that girls and boys use technology in different ways and for different things.  What specific ICT needs do underprivileged girls in ‘developing’ countries have? Is anyone asking? What processes or solutions already exist that take girls’ ICT needs into account?  What environments are necessary for girls to engage in defining, deciding and creating ICT solutions? Where are they already engaging, and how can communities, schools, organizations and businesses support and recreate those environments?  How can processes and products be designed together with girls?

Tech is still a field heavily dominated by males. In the US, for example, some women in tech have pulled together to question this and to advocate for more opportunities for women to break into the male-dominated worlds of publication owners, conference speakers, businesses, well-known innovators, and “best of” lists. There have been protests against prominent companies for promoting “Booth Babes” and, in one case last year, strippers at tech conferences. This raises the question – in places without female tech role models and respectful environments, how can girls see themselves as leaders in this field?

Specific research on girls and ICTs.

There is not a lot of information on the impact of ICTs on the lives of girls in ‘developing’ countries, especially studies that go beyond the establishment of computer centers. There is anecdotal evidence of the positive impact of mobile phones on women. There are studies on the digital gender divide for women; on child trafficking and other negative aspects of the internet; and on the use of internet and technology among youth in the US, UK, Australia, etc. It’s been difficult to find much information on the use of ICTs by girls in the “South.” It would be interesting for more research to be done on girls and ICTs in the “South” and for some good practices to be shared. Hopefully someone at the Technology Salon will be able to share some insight on this.
