Center for Digital Narrative
ALGOFOLK Seminar

ALGOtalk #1: Yucong Lao

Hosted by the ALGOFOLK project, Yucong Lao (University of Oulu) will give a talk titled "Young people’s practices of information credibility assessment of AI-generated video content", in which she presents the results of a study on how young people develop strategies to make sense of synthetic media such as deepfakes.

This image shows face images from the Olivetti faces dataset, created at AT&T Laboratories Cambridge in the early 1990s. The portraits are scattered across the image and grouped by similarity. The image is a ‘laptopogram’, created by exposing photographic paper using a computer screen and developed in the artist’s bathtub. The process preserves a digital artifact of AI research in silver crystals, returning a physical dimension to sterile data. Dust, scratches, and the marks left by the artist’s hands draw a connection to the role of the researchers' subjectivity in making AI.
Photo: Philipp Schmitt & AT&T Laboratories Cambridge / Better Images of AI / Data flock (faces) / CC-BY 4.0


Yucong Lao holds a master’s degree in Media and Communication Studies from the Faculty of Social Sciences at Lund University and is currently a PhD researcher at the University of Oulu, Finland, where she studies AI literacy among Gen Z youths. The TMS-funded ALGOFOLK project has invited her to give a talk titled "Young people’s practices of information credibility assessment of AI-generated video content".

Abstract:

Breakthroughs in generative artificial intelligence (AI) technologies have enabled the creation of hyper-realistic media content that alarmingly mimics human beings. As AI-generated media (AGM) become more accessible on social media platforms, concerns about young people’s capabilities to deal with mis- and disinformation are rising. This study investigates young people’s practices for assessing the credibility of AGM. Combining interviews with an experiment built around two selected videos, we explored young people’s strategies for verifying the authenticity of AI-generated deepfake videos and their reflections on the implications of such media. The results show how young people verified the authenticity of AGM based on image, sound, narrative and intuition, and how they sensed mis- and disinformation from three perspectives: techniques, consequences and associated people. This study contributes to the exploration of AGM from the perspective of young people’s media and information literacy practices.

The talk will be given in English. A Q&A session and informal discussion will follow.