Fake Content is Just a Click Away. How Well Can You Detect It?
With today's technology, producing fake content, including "deepfakes", has become easier, while detecting it has become more challenging. - Soon it will be nearly impossible to distinguish real from fake with the naked eye, warns Duc Tien Dang Nguyen.
A few weeks ago, Denmark's Prime Minister Mette Frederiksen and Finland's President Alexander Stubb received a lot of attention when "deepfake" videos of them both began to circulate on the Internet.
Deepfakes are manipulated images, sounds, or videos generated with the help of deep neural networks.
Technology has made it significantly easier and cheaper to produce these. At the same time, the quality of the manipulated content has steadily improved, making it more difficult to detect. Duc Tien Dang Nguyen, associate professor at Infomedia and researcher in multimedia forensics, says the following:
- The challenge of "deepfakes" is intensifying, and soon it will be almost impossible to distinguish real from fake with the naked eye.
The 2024 Election Year
This year, a record number of voters will head to the polls globally, with elections in 64 countries and the European Union, representing approximately 49% of the world's population.
The outcomes of these elections will significantly impact the political landscape and provide insight into the state of democracy. Consequently, there is growing concern about the spread of misinformation, including deepfakes, which can influence voter perceptions and trust in political candidates.
Although it is uncertain how many political fakes have been produced with artificial intelligence, experts report an increase in election-related "deepfakes" globally. They also note that it is particularly easy to create deepfakes of politicians or other famous people due to the large number of available images and videos.
Dang Nguyen explains:
- The more famous a person is, the easier it can be to create a convincing deepfake of that person. This is because there is often plenty of available material, such as videos and images, which can be used as a basis for creating a realistic forgery.
How can the average person detect manipulated content? The Infomedia researcher emphasizes several key indicators:
- Many AI-generated images still have problems with light, shadows, and reflections. By carefully examining these elements, we can detect signs of manipulation
- Pay attention to details such as ears and pupils in faces made by AI, as well as hands and feet
- Take your time and thoroughly research the image's source, metadata, location and time, as well as the motivation of the person who uploaded it
- It is important not to assess the image in isolation, but to place it in a larger context
You can also take advantage of various verification tools, for example the Invid plugin or OSINT tools. Authentic content can also be confirmed by newly developed tools such as Origin Verify, which was developed as part of the Origin project, in which Media City Bergen and researchers at Infomedia are also involved.
Combating Misinformation
Globally, legislators are working on laws to curb the spread of disinformation. Researchers, including those at the University of Bergen, are also contributing to efforts against fake content. At Infomedia, the MediaFutures Research Center focuses on developing responsible media technology in collaboration with industry partners to find solutions against fabricated news. The NORDIS consortium, another initiative involving Infomedia researchers, aims to develop theories, practices, and models to counter digital information disorder, with a particular focus on climate change and the 2024 European Elections.