
UiB AI #9 Trustworthy AI

How do we work with trust in artificial intelligence? Meet our researchers and participate in our next UiB AI seminar!

Word cloud (Photo: UiB AI)


Artificial intelligence today is maybe not yet human-centric, but it is definitely human-collaborative. That means that people use AI to augment their abilities: the spam filter saves me time, the route finder helps me orient myself, speech-to-text helps me caption videos, the video editor helps me change from vertical to horizontal format without losing the important objects in the video. AI also requires constant human support: to clean, label and otherwise preprocess data for learning algorithms, to identify and correct mistakes, and to do the unusual, non-typical tasks that an AI cannot handle. Much has been said about the trustworthiness of AI in recent years. Trust is a relational property between people that can facilitate or hinder collaboration. Trustworthiness is a value that we would like AI to be aligned with.

The Department of Information Science and Media Studies and the Faculty of Social Sciences are concerned with the social and collaborative aspects and properties of AI. In this edition of the UiB AI seminar series, we showcase four research examples of how we work with trust and AI. The short presentations will be followed by a panel debate.

Program

Trustworthy journalism through AI by Andreas Lothe Opdahl

Quality journalism has become more important than ever: trustworthy media outlets are needed to provide accurate information to the public and to counterbalance the wide and rapid spread of disinformation. At the same time, quality journalism is under pressure due to loss of revenue and competition from alternative information providers. This talk discusses and gives examples of how recent advances in Artificial Intelligence (AI) provide opportunities for, but also pose threats to, the production of high-quality journalism.

Trust but verify by Rustam Galimullin

The notion of an intelligent agent is central in AI, and it encompasses such entities as autonomous vehicles, healthcare robots, and us, humans. For agents, and groups thereof, to be adopted effectively, we cannot rely purely on unconditional trust; we should require that their behaviour is reliable and safe. This can be done by employing the mathematical techniques of formal specification and verification that were initially developed for ensuring the correct execution of computer programs. In the talk, we will argue that such formal techniques lead to better, and ultimately safer, AI agents.

Creating Embodied Artificial Trustworthiness by Ragnhild Mølster (presenter) and Jens Elmelund Kjeldsen

The creation of embodied artificial trustworthiness: Embodied AI in social, political, and journalistic communication. Since humans first began contemplating whom one could believe and trust, the most important characteristics of trustworthiness have been the true, the real, and the authentic. The untrustworthy, on the other hand, both in the past and the present, is that which is untrue, fake, and inauthentic. This is claimed both by ancient philosophers and rhetoricians and by contemporary research in social psychology, persuasion, and rhetoric. In our contemporary world, sometimes referred to as post-human, the advent and increasing prevalence of artificial intelligence turns the traditional understanding of the trustworthy on its head. Artificiality is, by definition, untrue, fake, and inauthentic, and not least without a physical body. Still, in communication and social interaction, humans increasingly rely on, believe in, and trust embodied artificial intelligence. This is a great dilemma: history, culture, and research have taught us to believe and trust the real, the actual physical people in front of us; so why do people in our time believe and trust the artificial? This raises other important questions: What is it that makes us believe and trust AI? Is it because it is perceived as real? Because the voices and “bodies” of AI appear real to the user? Or is the kind of credibility and trust we attribute to AI different from the kind we historically have attributed to real living people with bodies? In brief: why and how do we trust and believe bodily artificial intelligence? This presentation addresses this question by examining selected examples of embodied artificial intelligence.

Power to the Platform? AI and the Image Economy by Richard Misek

Generative text-to-video models, such as OpenAI’s recently announced Sora, are on the verge of becoming widespread tools for video production. Once refined and made public, Sora and similar models are likely to turn the image economy on its head, creating a seismic shift in the balance of power between creators, consumers, and tech platforms. Though AI companies emphasise how their products will empower consumers, the risk is that they will disempower creators and turn AI platforms into economic ‘chokepoints’ through which Big Tech will extract high profit margins at the expense of nations’ creative economies. But there remains one significant check on the spread of text-to-video models: their hunger for high-quality training data. Most trawlable video is relatively low quality and copyrighted. Most high-quality video is owned by media owners (notably commercial archives) and held behind paywalls. Over the last year, this has resulted in a complex dynamic of interactions between ‘legacy’ media organisations and tech companies that encompasses both litigation and collaboration, with everyone jostling for dominance of the emergent AI image economy. In this context, it is not surprising that OpenAI refuses to say where Sora’s training data came from. But if OpenAI cannot even be trusted to identify its source media, can it be trusted to control the global generative AI economy? Rather than focusing on the trustworthiness of images and media, this talk explores how far we can trust the corporate players whose current actions will shape the future media industry, with particular attention to three key players in this field: OpenAI, NVIDIA, and Getty Images.

Summary by Isabell Stinessen Haugen

Moderator: Marija Slavkovik