GAIN China III
How the Chinese Public View AI
Beneficial and effective development of artificial intelligence will require trust and collaboration on a global scale. However, different cultures see AI through different lenses and hold different hopes and fears for the technology’s future. If ignored, this variation in cultural perspectives could interfere with international AI ethics and governance efforts. Conversely, seeking to understand other perspectives lays the groundwork for cooperation.
Toward this end, “How the Chinese Public View AI” was the third and final instalment of the Global AI Narratives: China workshop series, a collaboration between the Berggruen Institute China Center at Peking University and the Leverhulme Centre for the Future of Intelligence (LCFI) at the University of Cambridge. Following on from two previous workshops, “AI Narratives in Chinese Classics and their Influence on Society Today” and “AI Narratives in Contemporary Chinese Science Fiction”, this third workshop aimed to elicit a better understanding of the ways the Chinese public understand artificial intelligence; how the media, industry, and government influence the Chinese public’s perceptions of AI; and what the West can learn from approaches to understanding and discussing AI in China. The workshop featured three speakers researching public perceptions of AI in China: Fang Wu (吴舫), Jing Zeng (曾靖), and Yishu Mao (毛逸舒).
How does the Chinese public understand artificial intelligence?
Fang Wu (吴舫) opened the workshop with her presentation The Influence of Media Use on Public Perceptions of Artificial Intelligence in China: Evidence from an Online Survey. With support from government and business, AI is growing quickly in China, which aims to become the world’s primary AI innovation center by 2030. However, little is known about the Chinese public’s perception of AI, or about how media use influences those perceptions. Because user demand will influence AI development, it is important to understand how users perceive and feel about AI. Based on a national online survey, Wu and her colleagues explored the links between media use and people’s risk perception, benefit perception, and support for AI policy. The results show that respondents overwhelmingly perceive artificial intelligence as more beneficial than risky. Newspaper use was negatively associated with benefit perception and policy support, whereas television use and public discourse on the social media platform WeChat predicted a positive attitude toward both. People for whom AI development and applications are personally relevant are more critical of media representations of AI.
An audience member asked whether there were any patterns of positive/negative emotions held towards AI at different levels of society in China (i.e. negative emotions associated more with the individual level vs. positive ones with the national or cultural level). Wu commented that, based on her own observation, the general attitude towards AI is positive at both the individual and national level. Perceptions at both levels are significantly influenced by the Chinese media, and the media, in turn, is largely shaped by the Chinese government’s upbeat attitude towards AI.
How do Chinese perspectives on AI compare to views in the West?
Kanta Dihal provided a contrast to Fang Wu’s talk about the perception of AI in China by presenting the paper ‘Scary Robots’: Examining Public Responses to AI, on public perceptions of AI in the UK. This 2019 study, by Stephen Cave, Kate Coughlan, and Kanta Dihal, discusses the results of a nationally representative survey of the UK population on their perceptions of AI. Dihal explained that existing narratives of AI in the English-speaking world tend to veer towards extremes – either wildly utopian or horrendously dystopian. The survey solicited responses to eight of these common narratives about AI (four optimistic, four pessimistic), as well as views on what AI is, how likely it is to have an impact within respondents’ lifetimes, and whether respondents feel they can influence its development. Of the narratives presented, those associated with automation were best known, followed by the idea that AI would become more powerful than humans. Overall, the results showed that the most common visions of AI elicit significant anxiety. Respondents felt they had no control over AI’s development, citing the power of corporations or governments, or versions of technological determinism.
The audience raised some interesting questions contrasting Wu’s and Dihal’s presentations: is it true that Western publics have a more negative perception of AI than Chinese publics, and if so, why? Dihal and Wu responded that this is indeed the case, and that the main reason may be the effect of the media: in China, the media tends to portray more of AI’s benefits, whereas tabloids in the UK tend to use a language of extremes that fosters negative perceptions of AI.
How do the media, industry, and government influence the Chinese public’s perception of AI?
Jing Zeng (曾靖) opened the second half of the workshop with her presentation Contested Chinese Dreams of AI? Public Discourse about Artificial Intelligence on WeChat and People’s Daily Online. Zeng observed that media discourse around AI in China is often sensationalized, industry-driven, and politicized. Her study analyses public discourse about AI in China by comparing the presentation of AI in the Chinese newspaper People’s Daily Online with public discussion about AI on the social media platform WeChat. Zeng and her colleagues hypothesized that media coverage in People’s Daily would reflect the government’s overwhelmingly positive view of AI, emphasizing the technology’s economic potential, while WeChat might provide a space for critical debate in which official views could be challenged. However, their findings reveal that AI-related discourse tends to be positive on both WeChat and People’s Daily Online. Zeng noted that public discourse on WeChat is becoming increasingly homogenous as discussions are dominated by actors from policy and industry, such as government agencies and technology companies. As a result, social media plays only a limited role as a counter-public sphere in which the country’s official narrative of AI’s economic and political potential might be contested.
Are there uniquely Chinese perspectives on AI ethics?
Finally, Yishu Mao (毛逸舒) shared insights from her research in a presentation titled Online Public Discourse on Artificial Intelligence (AI) and Ethics in China: Context, Content, and Implications. While previous speakers focused on public perceptions, Mao took a slightly different approach, framing her presentation around discourse on AI ethics. Mao noted that the societal and ethical implications of AI have garnered increasing attention and sparked debates among academics, policymakers and the general public around the world. Largely unnoticed, however, are the similarly vibrant discussions around AI ethics taking place in China. Mao and her colleagues analyzed a large sample of these discussions on two popular Chinese social media platforms, WeChat and Zhihu. Mao’s findings suggest that the participants of the discussions are diverse, including scholars, IT industry practitioners, journalists, and the general public. They address a broad range of ethical issues, with the philosophy of AI ethics, the impact of AI on labour, and autonomous vehicles being the most popular ones.
Overall, Mao pointed out that AI ethics discourse in China broadly coheres with AI ethics discourse on the global stage; however, there are a couple of notable differences in emphasis and framing. First, AI ethics discussions in China seem to focus more on long-term risks for humanity than on the imminent risks of specific applications for individuals. Second, discussions in China often reflect the view, drawn from Daoist and Confucian philosophy, that change in the world is constant and that we must learn and discover how to respond to the changes we face. The development of artificial intelligence is one such change, and accordingly there is a greater emphasis in China on learning how to live with AI than on discussing how to restrict AI or whether to build AI systems in the first place. Mao and her colleagues argue that online discourse offers valuable ground for understanding the future trajectory of AI governance in China and for contextualizing Chinese societal perspectives on AI within the global discourse.
Audience questions probed further into how AI ethics is generally defined and understood as a topic of discussion in China. In an insightful response, Mao noted that in China AI ethics is most often discussed under the heading “AI morality”. “Ethics” (伦理), Mao explained, is a term imported from the West, understood to embody “reason, science and general will”, while “morality” (道德), stemming from Chinese language and philosophy, embodies “temperament, humanity and personal cultivation”. In this way, AI ethics/morality discourse in China is framed by Chinese philosophy and tradition.
Another audience question enquired about Chinese public opinion of AI on social media in the context of censorship. Zeng commented that consumers are very powerful in China, so demand from the consumer side has a direct impact on the development of AI. Mao added that the Chinese government has set goals for AI scientists to engage and educate the general public, in order to instill in them a basic scientific understanding of AI.
Workshop Conclusion: AI narratives and lessons for the West
The workshop concluded with a discussion in which the panellists addressed questions such as: What are the most popular AI narratives in China? Is there an equivalent to The Terminator in China? Zeng pointed out that the most prominent AI narrative in China is that AI makes money, and that there remains space for imagination about AI’s capabilities. When asked which single thing could be changed to improve AI’s image in Europe, Dihal responded that while she would not recommend that AI-related media be intentionally curated to promote a positive image, China shows it is possible to have a public debate about AI without evoking the image of the Terminator. Dihal suggested that the West would benefit from a more moderate discourse around AI, one less heavily influenced by the narrative extremes of AI utopias and dystopias.
As this workshop series concludes, we should reflect on the localized narratives of technology and artificial intelligence not just in China, but also in the wider world. Doing so is crucial to building a nuanced global perspective on how artificial intelligence will shape our shared future.