Karen Palmer is a digital filmmaker and storyteller from London who’s doing a dual residency at ThoughtWorks in Manhattan and TED New York to further develop a project called RIOT, described as an ‘emotionally responsive, live-action film with 3D sound.’ The film uses artificial intelligence, machine learning, various biometric readings, and facial recognition to take a person through a personalized journey during a dangerous riot.
Karen Palmer, the future of immersive filmmaking, Future of Storytelling (FoST)
Karen describes RIOT as ‘a bespoke film that reflects your reality.’ As you watch the film, the film is also watching you and adapting to your experience of viewing it. Using a series of biometric readings (the team is experimenting with eye tracking, facial recognition, gait analysis, infrared to capture body temperature, and an emerging technology that tracks heart rate by monitoring the capillaries under a person’s eyes), the film shifts and changes. The biometrics and AI create a “choose your own adventure” type of immersive film experience, except that the choice is made by your body’s reactions to different scenarios. A unique aspect of Karen’s work is that the viewer doesn’t need to wear any type of gear for the experience. The idea is to make RIOT as seamless and immersive as possible. Read more about Karen’s ideas and how the film is shaping up in this Fast Company article and follow along with the project on the RIOT project blog.
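At its core, this “choose your own adventure driven by your body” idea is a branching narrative keyed to a detected emotional state. A minimal sketch of that logic might look like the following; the emotion labels, scene names, and branch table are all invented for illustration and are not from the RIOT project itself:

```python
# Hypothetical sketch: branch a film based on the viewer's dominant detected
# emotion. Scene names, emotion labels, and the branch table are illustrative.

def next_scene(current_scene: str, emotion: str) -> str:
    """Pick the next clip from the viewer's dominant detected emotion."""
    # Each scene maps detected emotions to follow-up clips.
    branches = {
        "police_line": {
            "fear": "retreat_down_alley",
            "anger": "confrontation",
            "calm": "dialogue_with_officer",
        },
    }
    # Fall back to a neutral continuation if the scene or emotion isn't mapped.
    return branches.get(current_scene, {}).get(emotion, "neutral_continuation")

print(next_scene("police_line", "fear"))  # retreat_down_alley
```

In a real pipeline, the `emotion` argument would come from a live biometric classifier rather than being passed in directly; the branching itself stays this simple.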
When we talked about her project, the first thing I thought of was “The Feelies” in Aldous Huxley’s 1932 classic ‘Brave New World.’ Yet the feelies were pure escapism, while Karen’s work aims to draw people into a challenging experience where they face their own emotions.
On Friday, December 15, I had the opportunity to facilitate a Salon discussion with a number of people from related disciplines who are intrigued by RIOT and the various boundaries it tests and explores. We had perspectives from people working in the areas of digital storytelling and narrative, surveillance and activism, media and entertainment, emotional intelligence, digital and immersive theater, brand experience, 3D sound and immersive audio, agency and representation, conflict mediation and non-state actors, film, artificial intelligence, and interactive design.
Karen has been busy over the past month as interest in the project begins to swell. In mid-November, at Montreal’s Phi Centre’s Lucid Realities exhibit, she spoke about how digital storytelling is involving more and more of our senses, bringing an extra layer of power to the experience. This means that artists and creatives have an added layer of responsibility. (Research suggests, for example, that the brain has trouble distinguishing between virtual reality [VR] and actual reality, and children under the age of 8 have had problems differentiating between a VR experience and actual memory.)
In a recent TED Talk, Karen described the essence of her work as creating experiences where participants become aware of how their emotions affect the narrative of the film while they are in it, which in turn helps them see how their emotions affect the narrative of their life. Can this help to create new neural pathways in the brain, she asks. Can it help a person to see not only how their own emotions affect them, but also how others read those emotions and react to them in real life?
Race and sexuality are at the forefront in the US – and the Trump elections further heightened the tensions. Karen believes it’s ever more important to explore different perspectives and fears in the current context where the potential for unrest is growing. Karen hopes that RIOT can be ‘your own personal riot training tool – a way to become aware of your own reactions and of moving through your fear.’
Core themes that we discussed on Friday include:
How can we harness the power of emotion? Despite our lives being emotionally hyper-charged, (especially right now in the US), we keep using facts and data to try to change hearts and minds. This approach is ineffective. In addition, people are less trusting of third-party sources because of the onslaught of misinformation, disinformation and false information. Can we use storytelling to help us get through this period? Can immersive storytelling and creative use of 3D sound help us to trust more, to engage and to witness? Can it help us to think about how we might react during certain events, like police violence? (See Tahera Aziz’ project [re]locate about the murder of Stephen Lawrence in South London in 1993). Can it help us to better understand various perspectives? The final version of RIOT aims to bring in footage from several angles, such as CCTV from a looted store, a police body cam, and someone’s mobile phone footage shot as they ran past, in an effort to show an array of perspectives that would help viewers see things in different lights.
How do we catch the questions that RIOT stirs up in people’s minds? As someone experiences RIOT, they will have all sorts of emotions and thoughts, and these will depend on their identity and lived experiences. At one showing of RIOT, a young white boy said he learned that if he’s feeling scared he should try to stay calm. He also said that when the cop yelled at him in the film, he assumed that he must have done something wrong. A black teenager might have had an entirely different reaction to the police. RIOT is bringing in scent, haze, 3D sound, and other elements which have started to affect people more profoundly. Some have been moved to tears or said that the film triggered anger and other strong emotions for them.
Does the artist have a responsibility to accompany people through the full emotional experience? In traditional VR experiences, a person waits in line, puts on a VR headset, experiences something profound (and potentially something triggering), then takes off the headset and is rushed out so that the next person can try it. Creators of these new and immersive media experiences are just now becoming fully aware of how to manage the emotional side of the experiences and they don’t yet have a good handle on what their responsibilities are toward those who are going through them. How do we debrief people afterwards? How do we give them space to process what has been triggered? How do we bring people into the co-creation process so that we better understand what it means to tell or experience these stories? The Columbia Digital Storytelling Lab is working on gaining a better understanding of all this and the impact it can have on people.
How do we create the grammar and frameworks for talking about this? The technologies and tactics for this type of digital immersive storytelling are entirely new and untested. Creators are only now becoming more aware of the consequences of the experiences that they are creating — ‘What am I making? Why? How will people go through it? How will they leave? What are the structures and how do I make it safe for them?’ The artist can open someone up to an intense experience, but then they are often just ushered out, reeling, and someone else is rushed in. It’s critical to build time for debriefing into the experience and to have some capacity for managing the emotions and reactions that could be triggered.
SAFE Lab, for example, works with students and the community in Chicago, Harlem, and Brooklyn on youth-driven solutions to de-escalation of violence. The project development starts with the human experience, and the tech comes in later. Youth are part of the solution space, and along the way they learn hard and soft skills related to emerging tech. The Lab is also testing a debriefing process. The challenge is that this is a new space for everyone; creation, testing, and documentation are happening simultaneously. Rather than just thinking about a ‘user journey,’ creators need to think about the emotionality of the full experience. This means that rather than just making an immersive film, creators bring neuroscience, sociology, behavioral psychology, and many other fields and bodies of research into the dialogue. It’s a convergence of industries and sectors.
What about algorithmic bias? It’s not possible to create an unbiased algorithm, because humans all have bias. Even if you could create an unbiased algorithm, as soon as you started inputting human information into it, it would become biased. Also, as algorithms become more complex, it becomes more and more difficult to understand how they arrive at decisions. The result is black boxes that put out decisions that even the humans who built them can’t understand. The RIOT team is working with Dr. Hongying Meng of Brunel University London, an expert in the creation of facial and emotion detection algorithms, to develop an open source algorithm for RIOT. Even if the algorithm itself isn’t neutral, the process by which it computes will be transparent.
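The distinction between “neutral” and “transparent” can be made concrete. A toy sketch of a transparent-but-not-neutral classifier follows: every intermediate score is logged so any decision can be audited after the fact. The feature names and weights here are invented for illustration and have nothing to do with the actual RIOT algorithm:

```python
# Illustrative sketch of "transparent even if not neutral": a toy emotion
# classifier that records every intermediate score so decisions are auditable.
# Feature names and weights are invented, not from the RIOT project.

def classify_emotion(features: dict, weights: dict, audit_log: list) -> str:
    """Score each emotion, record all intermediate values, return the winner."""
    scores = {}
    for emotion, w in weights.items():
        # Simple linear score over shared features; every term is recoverable.
        scores[emotion] = sum(w.get(f, 0.0) * v for f, v in features.items())
    decision = max(scores, key=scores.get)
    # The log makes the "why" of each decision inspectable, bias included.
    audit_log.append({"features": features, "scores": scores, "decision": decision})
    return decision

weights = {
    "calm": {"brow_raise": -0.5, "mouth_open": -0.2},
    "fear": {"brow_raise": 0.8, "mouth_open": 0.6},
}
log = []
print(classify_emotion({"brow_raise": 0.9, "mouth_open": 0.4}, weights, log))  # fear
```

The weights encode a bias (whoever chose them decided what “fear” looks like), but because the scoring is open and logged, that bias can be examined and challenged, which is the property an opaque commercial model lacks.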
Most algorithms are not open. Because the majority of private companies have financial goals rather than social goals in using or creating algorithms, they have little incentive to be transparent about how an algorithm works or what biases are inherent in it. Ad agencies want to track how a customer reacts to a product. Facebook wants to generate more ad revenue, so it adjusts what news you see in your feed. The justice system wants to save money and time by using sentencing algorithms. Yet the biases in their algorithms can cause serious harm in multiple ways. (See this 2016 report from ProPublica.) The problem with these commercial algorithms is that they are opaque and the biases in them are not shared. This lack of transparency is considered by some to be more problematic than the bias itself.
Should there be a greater push for regulation of algorithms? People who work in surveillance are often ignored because they are perceived as paranoid. Yet fears that AI will be totally controlled by the military, the private sector and tech companies in ways that are hidden and opaque are real and it’s imperative to find ways to bring the actual dangers home to people. This could be partly accomplished through narrative and stories. (See John Oliver’s interview with Edward Snowden) Could artists create projects that drive conversations around algorithmic bias, help the public see the risks, and push for greater regulation? (Also of note: the New York City government recently announced that it will start a task force to look more deeply into algorithmic bias).
How is the RIOT team developing its emotion recognition algorithm? The RIOT team is collecting data to feed into the algorithm by capturing facial emotions and labeling them. The challenge is that one person may think someone looks calm, scared, or angry and another person may read it a different way. They are also testing self-reported emotions to reduce bias. The purpose of the RIOT facial detection algorithm is to measure what the person is actually feeling and how others perceive that the person is feeling. For example, how would a police officer read your face? How would a fellow protester see you? The team is developing the algorithm with the specific bias that is needed for the narrative itself. The process will be documented in a peer-reviewed research paper that considers these issues from the angle of state control of citizens. Other angles to explore would be how algorithms and biometrics are used by societies of control and/or by non-state actors such as militia in the Middle East or by right wing and/or white supremacist groups in the US. (See this article on facial recognition tools being used to identify sexual orientation)
Stay tuned to hear more…. We’ll be meeting again in the new year to go more in-depth on topics such as responsibly guiding people through VR experiences; exploring potential unintended consequences of these technologies and experiences, especially for certain racial groups; commercial applications for sensory storytelling and elements of scale; global applications of these technologies; practical development and testing of algorithms; prototyping, ideation and foundational knowledge for algorithm development.
Garry Haywood of Kinicho also wrote up his thoughts from the day.