Some 30 people gathered at our November 21st Technology Salon NYC, hosted by Open Society Foundations, to discuss the effects of deepfakes on humanitarian work with Sam Gregory, Program Director, WITNESS; Andrew Gully, Technical Research Manager, Jigsaw at Google; and Tara Susman-Peña, Senior Technical Advisor, IREX.
What are deepfakes? Deepfakes result when someone uses algorithms and deep neural networks to create a video in which a person appears to do or say something they haven't actually done or said. (Wikipedia has a more in-depth explanation.) The original definition has expanded to include other types of misrepresentation, such as faked audio. Deepfakes are part of a wider global context of false information, disinformation and misinformation, often (but not always) propagated via social media.
Key takeaways from the Salon discussion:
Deepfakes are hard to do quickly or at scale, making them the "one-hit wonder" of the disinformation world. They generally require access to several hundred photos or a few minutes of video. The algorithms for deepfakes are also difficult to train, finicky, and require a good deal of finessing to pass visual scrutiny. "Deepfakes are scary," commented one discussant, "but disinformation and misinformation in other areas are already really effective." Memes, for example, are faster and more agile. So, while a deepfake could have huge impact at the right time and place, it is currently hard to sustain an ongoing deepfake campaign.
Deepfakes tend to target women and girls, not election processes. Though most of us are worried about deepfakes affecting elections or being used in political situations, a recent study reported that 97% of deepfakes are nonconsensual porn, with women as the target. The effects of this type of deepfake, even when poorly done, are particularly devastating to women, especially in contexts where their 'virtue' is closely guarded.
The deepfake arms race is on. Every advance in detection of deepfakes is met with a corresponding technique for avoiding detection. Much of the energy and expertise in the deepfake space could potentially be directed towards more productive ways of verifying the authenticity of video and images. A few firms are working on this and have created business models to make it work. Various platforms also have teams and systems internally working on this issue. What doesn't exist is a publicly accessible version of any of these tools.
Platform solutions won't save us. Platforms are struggling with how to address the issue of deepfakes. Should they be removed? Should users be informed that media has been manipulated or is fake? Should there be a warning box before a deepfake plays on a particular channel? How would that be done? What are the different ways that false information can be flagged? Is prevention better than response? Policy responses on platforms are currently twofold: content takedown or removal, and content notification, noted one Salon discussant. And platform solutions to deepfakes bring the same challenges as platform solutions to other kinds of mis- and disinformation: privacy, intrusion, ethics, censorship, and political manipulation. Platform solutions also run into problems in the face of "dark channels" such as WhatsApp, or in cases where information is shared off platform, for example, via Bluetooth as is common in many African countries.
Currently proposed solutions leave out important players. Small media outlets that often contribute to the spread of mis- and disinformation might be overlooked if we rely on platforms to address deepfakes. Platform responses also don't do enough to support community journalists and human rights workers who want to prove the veracity of their media but don't have the capacity to conduct forensics. If the response involves requiring additional embedded metadata to prove veracity, human rights workers and whistleblowers could be compromised. "We need to road test technological solutions in different sociopolitical contexts and make them available," advocated one discussant, "and to create tools that work for people in different contexts." Private, off-platform technological solutions for deepfake detection are also not resolving the issue because they are too costly for small organizations or individual journalists.
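To make the "embedded metadata to prove veracity" idea concrete: many provenance approaches boil down to recording a cryptographic fingerprint of footage at capture time, so that anyone holding the fingerprint can later check whether the file has been altered. The sketch below is a minimal, hypothetical Python illustration of that idea only (the filename and function names are invented for this example); it shows simple integrity checking, not deepfake detection, and it is not any platform's or vendor's actual tool.

```python
# Illustrative sketch only: capture-time fingerprinting of a media file.
import hashlib


def fingerprint(media_path: str) -> str:
    """Compute a SHA-256 fingerprint of a media file, e.g. at capture time."""
    digest = hashlib.sha256()
    with open(media_path, "rb") as f:
        # Read in chunks so large video files don't need to fit in memory.
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(media_path: str, recorded_fingerprint: str) -> bool:
    """Later, check that the file matches the fingerprint recorded earlier."""
    return fingerprint(media_path) == recorded_fingerprint


if __name__ == "__main__":
    # "footage.mp4" is a placeholder filename for this example.
    original = fingerprint("footage.mp4")
    print("Record this alongside the footage:", original)
    print("Still unchanged?", verify("footage.mp4", original))
```

Even this simple kind of record is the sort of metadata that, as the discussion pointed out, can cut both ways: it helps prove that footage hasn't been tampered with, but it can also expose the people who captured it.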
Deepfakes have hacked the notion of what is real, and this is the biggest problem. Now that the veracity of visual content has been called into question, it's becoming easier for people to claim things are fake. The existence of deepfakes also makes for an environment where people feel like they can't trust anything, and they don't know what is real. The generally hyped-up and reactive response to deepfakes is contributing to this feeling of blurry truth. "We should try to find solutions, not be alarmist," said one discussant, especially because certain political leaders are taking advantage of the fact that truth is being widely questioned.
Our reptile brains are getting in the way. The human brain is evolving more slowly than technology. “The way that our brains are wired complicates things. Our brains take shortcuts all the time. We don’t see our nose in front of our face because our brains fill it in. We have cognitive bias. Confirmation bias. Selective recall. All of this makes us vulnerable to misinformation and manipulation…. So, we are creating technology that our brains are not equipped to deal with,” said one discussant. Traditional media literacy approaches are not necessarily sufficient to address our susceptibility to deepfakes.
We're not helpless in the face of deepfakes, however. Some ideas that Salon participants suggested for dealing with deepfakes, other synthetic media, and dis- or misinformation include:
- Change the narrative. Deepfakes are relevant, but we should stop adding to the hype and creating alarm over them because we contribute to the sense that truth is eroding. The harm from the narrative around deepfakes could actually be greater than the harm from the deepfakes themselves.
- Focus on where the biggest harm is now. We should look at who is the most vulnerable and affected right now and deal with those cases. Right now the biggest threat is nonconsensual deepfake porn, and its targets are women and girls. Let's put more energy into addressing that.
- Educate our own staff. We should get ahead of this by improving media literacy and critical thinking skills among our staff so that we are not contributing to the spread of disinformation, misinformation or false information.
- Make truth as compelling as falsehood. People are drawn to highly emotional, highly visual, easy-to-share content. Because humanitarian agencies and their donors don't invest in high-quality communication strategies and content, we can't compete with bad actors. This will become a bigger issue in terms of rumor creation as deepfake technology becomes more accessible. We need to get better at communicating with the people we work with, in the ways they prefer, so that we can capture their attention.
- Address the root causes that give rise to misinformation and disinformation. The root of deepfake creation in the case of nonconsensual porn, for example, is misogyny. So, we should be starting there. What other root causes might we need to tackle to reduce the conditions that give rise to the questioning of shared truths?
- Help people understand the intention behind false information. If we can help people see how they are being ‘played’ and how they are falling into a trap of manipulation, we might be able to make them less willing to share false information. Libraries could play a role in engaging people in media literacy, especially when it comes to rural populations in small towns. Organizations should also be more creative in developing counter-narratives.
- Learn from decades of research and experience. False information, misinformation, and disinformation are not new phenomena. In many ways we are simply seeing the evolution of propaganda, which has been around forever. It’s coming through new channels, but we should be learning from the past on this.
Related reading and resources are available here.
Technology Salons run under Chatham House Rule, so no attribution has been made in this post. Salons happen in several cities around the world. If you’d like to join one, sign up here. If you’d like to suggest a topic or support us to keep doing Salons in NYC please get in touch! 🙂