Which groups have a voice in the AI debate? Who is most adversely affected by the deployment and use of these technologies? And who may be left behind entirely in an AI future? On Thursday 17th September, the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge joined forces with the Present Futures Forum at the Technische Universität Berlin to host a workshop devoted to German scholarship on ‘AI Visions and Narratives’. Showcasing the work of 20 researchers, academics and practitioners working in the field of AI, and taking place against a backdrop of rising awareness of social inequalities, the workshop highlighted many of the hidden inequalities surrounding AI.
With the impact of AI expected to be felt in every corner of the world, it is vital to understand how different worldviews conceive of AI, as well as how they relate to, inform and interrogate the Western narratives that are given primacy in the global AI debate. The debate within the German transdisciplinary AI community - with its strong focus on shared responsibility, on shaping a positive AI future and on situating German AI research in a global context - offers a reflexive view of the AI discourse both within and outside Germany.
Utopias and Dystopias
Across the workshop’s four panels, many of the speakers highlighted the ways in which our thinking about AI tends to be either overtly optimistic or inherently pessimistic - and our imagined AI future distinctly utopian or dystopian.
Whilst most fictional AI narratives are focused on the US and UK anglosphere, Germany occasionally makes an appearance as a setting in fictional imaginings of the future. With a dramatic scene set in Berlin, William Gibson’s Neuromancer explored ideas around futuristic high-tech cities. As panellist Wenzel Mehnert cautioned, however, Gibson’s vision is one that relies on AI as a technology of control and coercion. However high-tech the future might be, if it is predicated on the AI-driven exploitation of vulnerable people, the veneer of technological advancement cannot hide the technodystopian reality.
The prospect of technological exploitation might seem a distant (and perhaps impossible) future, but AI driving human behaviour is already happening today. Aljoscha Burchardt suggested that chatbots, because of the limitations of their conversational abilities, are designed to lead or dominate a conversation so as to keep the interaction at a level they can handle - leaving the human interlocutor as a passive, rather than active, participant. This perceived dominance by AI feeds fears that future AI developments could overtake humanity - whether through a Terminator-esque robot uprising, or simply through human capabilities being unable to keep pace with the advanced capabilities of AI.
Many speakers highlighted the problematic nature of narratives about AI that are centred on fictional ideas of the technology rather than the reality we face, and interact with, today. Whilst science fiction imaginings of AI offer a form of escapism through which the public can explore and enjoy the virtually endless possibilities of intelligent machines, this escapism can come at the price of understanding the reality of our AI-enabled lives today. Instead of raising awareness of the complex social, political, and ethical issues associated with the use of these technologies, science fiction narratives can shift the focus from AI as a technology of reality to AI as a caricature of every Terminator, HAL 9000, or C-3PO we watch or read about. Isabella Hermann argued that many who rely on these representations as a way of thinking about AI technologies struggle to distinguish between fiction as a narrative directly applicable to the real world and fiction as a metaphor for technological possibility.
One relatively new topic, discussed by Thomas Küber, was the prevalence of algorithmic over-optimisation, a phenomenon which can lead to inhumane outcomes from the use of AI. As an example, he described delivery drivers being given no time for breaks unless customers ‘tip’ them break time, demonstrating the disconnect between our optimistic visions for AI technology and the reality of its application in the world.
AI might not work for everyone within a given society. The question of how we conceptualise fairness prompted a great deal of debate about what ‘fairness’ and ‘unfairness’ mean in relation to AI. Jessica De Jesus De Pinho Pinhal asked why we should adapt AI algorithms to compensate for unfairness rather than tackling structural unfairness in society, given that AI can help to identify structural unfairness, and that technological solutions for bias tend to obfuscate problems rather than solve the underlying issue. We also considered the broader question of what the societal goals of AI should be - do we want AI to be actively good, or simply not harmful?
Who Speaks?
One of the biggest topics of the workshop cut across all four panels, and concerned the public debate and discourse around AI. Rather than focusing on the content of this debate, we considered the way in which the debate is shaped, and who drives it. Christian Katzenbach identified the big tech companies as heavily influential in shaping the debate around AI - far more so, it appears, than the governments, civil society groups or research institutes that aim to drive policy in this area. Big tech also appears to have a hand in driving media narratives around AI, with prominent techno-utopians and transhumanists, many of them associated with Silicon Valley, framing much of the discussion. Christopher Coenen argued that the key message these figures promote in the media is that humans merging with machines is a prerequisite for ‘taming’ AI, preventing any future AGI from seeing humans as a threat, and ensuring a positive outcome for humanity.
Participants agreed that dominant power structures have framed the AI debate for too long; a select group of predominantly white, Western, English-speaking, educated men have influenced every aspect of the AI debate, across the techno-industrial, political, academic and artistic spheres. This lack of diversity in the AI discussion hinders the exploration of other perspectives and marginalises those likely to be most impacted by AI. Maya Indira Ganesh advocated for rethinking who shapes this discourse and how we can bring new voices into the debate.
The workshop emphasised the value of artistic endeavours in thinking about the role of AI in our lives, with presentations from a number of artists and performers. Artistic and performative interventions that interrogate how we interact with AI technology can ask us to rethink issues of autonomy, privacy and agency - as with the so-called ‘black box’ problem, reinterpreted by Diana Serbanescu as a literal black box filled with messages and secrets. Such engagement with AI visions and narratives can create a feedback loop of knowledge between the roles of artist and computer scientist. In a similar vein, Michelle Christensen and Florian Conradi explored performance as a powerful tool for critiquing and investigating our engagement with technology. By elevating a virtual assistant such as Siri or Alexa to a full member of the household, with whom people share conversations and household routines, their project questions the boundaries between technology and humanity, and what it means to be considered intelligent.
The workshop also addressed in depth how the German community views its own position in the global AI debate. There was a definite sense that both the German AI ecosystem and the German public in general are sceptical of the predominantly US-UK axis of technological positivism driven by big tech - a so-called ‘tech-lash’ against the perception that monopolistic big business is driving changes in their lives. While pragmatism remains a driving force, and people still use big tech services out of simple convenience or a lack of alternatives, there is a definite sense that people are beginning to move away from this, making choices based on ethical ideals and principles at both an individual and an organisational level.
Envisioning an AI Future
A third line of inquiry dealt with visions of desirable futures of, and with, AI. Linking to concerns around a German ‘tech-lash’, there was a feeling that this scepticism towards AI could lead to an abandonment of the technology in the long term. Rainer Stark suggested that engineers could help avoid this by ensuring AI remains understandable and controllable. To support this, digital literacy across the German population is vital, but Aljoscha Burchardt argued that current plans to educate young people specifically could leave behind older generations that will be equally impacted by AI. Furthermore, even within such a transdisciplinary community, it became clear that knowledge about AI remains contained in disciplinary silos which are difficult to break out of. However, knowledge sharing across disciplines and collaboration between research and industry are essential to ensuring positive developments in AI.
Many of the topics addressed are as relevant for the global AI debate as for the German context. In questioning the future possibilities of human-AI interaction, Maya Indira Ganesh explored our reliance on human capabilities as a benchmark for everything we ask AI to do. AI, as a non-biological technology, could have potential far beyond our human-centric perspective, and we should start to consider possibilities outside of this box. Offering a more positive vision of an AI future, Stephanie Hankey suggested that AI can be used as a tool to future-proof the world from humanity - to remedy some of the damage that humanity has done to the planet. This view of AI as a possible technological panacea contrasts with the view that humans must ensure they do not become redundant with the advent of artificial general intelligence. Diana Neranti used performance art to consider what a human-AI future might look like, exploring what we, as humans, might lose or gain from replacing human interaction with purely human-AI interaction, and raising questions about what our lives might look like if AI really does ‘take over’.
The Global AI Narratives (GAIN) project is an interdisciplinary, cross-cultural initiative seeking to understand local interpretations of artificial intelligence (AI) in different countries and communities around the world. The project is funded by the Templeton World Charity Foundation and DeepMind Ethics and Society. The Present Futures Forum is a new platform for multidirectional knowledge transfer and exchange, addressing the great challenges of our future by fostering relationships between traditional science and technology studies and practitioners in the science and technology community.
The Global AI Narratives and Present Futures Forum teams would like to thank all of the wonderful speakers and participants for their contributions to this workshop, and TU Berlin’s International Office for additional financial support.