Your digital life – smart, or monitored?
Do you have full control over your digital life? At the “Machine Vision” exhibition, you can experience and assess the ethical implications of AI technologies.
New technologies are evolving so rapidly that society is unable to develop ethical guidelines and regulations fast enough. Therefore, we all need some training in making ethical assessments about the use of new technologies. We are constantly leaving digital traces, and many want to use the information they contain. The president who wants to retain their power. The Minister of Health who wants to stop a pandemic. The entrepreneur, the villain, and the environmentalist who wants to protect the climate. And you, who want to own and have control over your own data.
When you visit the “Machine Vision” exhibition, you walk through an experiential labyrinth where you must take a stance on a series of ethical challenges raised by these new technologies.
"The goal is to create experiential interactions with machine vision technologies, so that visitors can assess situations and make ethical choices," says Professor of digital culture Jill Walker Rettberg. The exhibition is based on the Machine Vision research project, which investigates how machine vision technologies affect us culturally.
What is machine vision?
Machines that see are everywhere in our everyday lives. The term “machine vision” refers to the many ways in which machines – smartphones, computers, apps – use cameras and other sensors to see, understand and visualize the world around them.
Machines scan barcodes, tag our holiday photos, give us diagnoses, and help us find our way when we are in an unfamiliar place.
You can unlock your smartphone with your thumb because it uses a kind of machine vision that can read your fingerprint. When you select a filter on Snapchat, it is machine vision that makes the app understand where your eyes and mouth are. We can play and have fun with machine vision, but it can also be used for more troubling purposes, such as surveillance, social control and warfare.
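For readers curious about what this kind of face detection looks like in practice, the sketch below uses the open-source OpenCV library and its bundled Haar-cascade face detector to find faces in a photo. It is only an illustration of the general idea; the file name "selfie.jpg" is a placeholder, and the pipelines used by apps like Snapchat are proprietary and far more sophisticated.

```python
# Minimal face-detection sketch with OpenCV's bundled Haar cascade.
# An illustrative example only, not the pipeline any particular app uses.
import cv2

# Load the pre-trained frontal-face detector shipped with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

# "selfie.jpg" is a placeholder path for any photo you want to test.
image = cv2.imread("selfie.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect faces: each hit is a bounding box (x, y, width, height).
faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("selfie_with_boxes.jpg", image)
print(f"Found {len(faces)} face(s)")
```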
Examples of ethical problems in the exhibition:
- Schools already evaluate students on knowledge and effort. Face recognition technology now makes it possible to measure attitudes and feelings as well. Do we want this?
- Surveillance cameras can detect and warn about suspicious behavior. But who defines what is “suspicious”?
- Technology developed with white men as training subjects can discriminate on the basis of gender and race. What are the consequences?
Artworks on display in the exhibition:
- The Battle of Ilovaisk by Forensic Architecture
- The Normalizing Machine by Mushon Zer-Aviv, Dan Stavy, Eran Weissenstern
- Cloud Index by James Bridle
- YHB Pocket Protest Shield by Leo Selvaggio
- AI, Ain't I a Woman by Joy Buolamwini
- Suspicious Behavior by Kairus (Linda Kronman and Andreas Zingerle)
- The Ongoing Moment by Weiyi Li
You are welcome to visit this instructive, challenging and eye-opening exhibition. As a visitor, you are guaranteed to experience some aha! moments, for better or worse.
About the Machine Vision exhibition
In 2019, the ERC project Machine Vision received funding from the Research Council of Norway's FORSTERK program, which supports the societal impact of Horizon 2020-funded research projects in Norway. Thanks to this funding, and in collaboration with the University Museum of Bergen, the project developed an exhibition titled "Machine Vision", which will be open to the public from March 19th to August 28th, 2021.
The "Machine Vision" exhibition is designed as an experiential labyrinth challenging audiences with a series of ethical challenges. The main goal of this exhibition is to increase the public's knowledge of machine vision technologies (including, for example, face recognition, object detection and autonomous cars) and their societal implications. Visitors will be able to familiarize themselves with the basic concepts of machine vision, explore the research project's findings, and interact with thought-provoking artworks.
We are excited to share our work with the public, and hope that this exhibition will be a successful experiment in science communication, combining academic research, contemporary art and critical thinking about new technologies. We hope that you will join us at the University Museum of Bergen - but in the meantime, you can start your visit by reading the exhibition's introduction and taking a look at the featured artworks:
Machines that see are all around us, and machine vision is central to technologies we use every day. From microcameras to satellites, vision machines help us scan barcodes, unlock our smartphones, tag our holiday pictures, get accurate diagnoses, and find our way in an unknown place. At the same time, the automation of vision challenges established structures and customs, enabling unprecedented possibilities for surveillance, policing, and control. The Machine Vision exhibition showcases the ERC-funded research project Machine Vision in Everyday Life, charting the impact of machine vision on three scales: the individual, the social, and the world. How do machines see us as individuals? What changes do vision machines bring to society? And what kind of worlds are made possible by machine vision technologies?
Read more about the featured artworks
Scale: Augmenting Worlds
Battle of Ilovaisk (2019) by Forensic Architecture
Military forces around the world are not the only ones using machine vision for intelligence. New technologies and platforms are developed by non-governmental organizations, citizen journalists and activists who use machine vision to watch back on those in power. One example is Forensic Architecture, a research agency investigating violence committed by states, police forces, militaries, and corporations. One open-source intelligence method they developed uses machine learning to generate evidence from vast amounts of images; "open" here refers to all data being publicly available. In the Battle of Ilovaisk investigation, commissioned by the European Human Rights Advocacy Centre (EHRAC), Forensic Architecture builds upon the work of dozens of open-source investigators and reporters. Using machine learning and computer vision, they patch together evidence that Russian military personnel and hardware were indeed present in Ukraine, charges that Russia has denied.
Cloud Index (2016) by James Bridle
In a world of uncertainties, humans have long looked to the skies to predict the future. We developed technologies to predict the weather; unsurprisingly, this was one of the first tasks assigned to digital computers. Today’s predictions are performed by machine learning, powered by data from “the Cloud”. Neural networks, a type of artificial intelligence that mimics the structure of the brain, are used for predictions in autonomous vehicles and to diagnose diseases. In Cloud Index, James Bridle uses AI to predict weather scenarios that correlate with various outcomes of the Brexit vote. For this, a neural network was trained with 15,000 satellite images and six years of polling data. Reflecting on the results, Bridle describes how the machine produces certainties: “If we wish to change the future, we must change the weather.” Uncertainty is part of the complexity of the world. What are the consequences of producing predictions, and who has the power to control our futures?
Scale: Surveillance and the City
The Ongoing Moment (2020) by Li Weiyi 李维伊
"The Ongoing Moment", an online work by Chinese artist Li Weiyi 李维伊, generates custom augmented reality (AR) selfie filters from the answers to a whimsical survey. As a playful commentary on the popularity of online tests and camera filters in apps like Snapchat, Instagram and Facebook, this artwork lets you find out “who you really are at this moment”. Scan the QR code with your mobile device to experience this work.
Suspicious Behavior (2020) by Kairus (Linda Kronman & Andreas Zingerle)
Intelligent machine vision systems able to recognize objects, faces or behaviour are trained on large collections of images called training sets. These images play a fundamental role in how machines see. A considerable amount of human labour goes into collecting, categorizing and labelling them. The tedious work of annotating images is done by crowdworkers on platforms such as Amazon's Mechanical Turk. How do these workers interpret the images they are labelling? How is their work shaping machine vision?
In the artwork Suspicious Behavior you are given the role of an image annotator performing Human Intelligence Tasks (HITs) for a crowdworking platform. In this fictional annotation tutorial, your work to detect and label suspicious behavior is needed to train “smart” surveillance cameras. As a crowdworker you are not paid well, time is money, and you need to decide fast. Along the way, the artwork raises questions: how is suspicious behaviour defined? Who defines it? Are humans training the machine, or is the machine eventually training humans about behaviour?
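As a rough illustration of how such individual snap judgments become training data, the toy sketch below records one hypothetical worker's labels as rows of a training set. The file names, labels and worker ID are invented for illustration; real platforms such as Amazon's Mechanical Turk use their own task formats.

```python
# A toy sketch of how crowdsourced image annotation becomes training data.
# All file names, labels and IDs are hypothetical illustrations.
import json

# Images queued as Human Intelligence Tasks (HITs) for one worker.
hit_queue = ["frame_0413.jpg", "frame_0414.jpg", "frame_0415.jpg"]

# The label set is fixed in advance by whoever designs the task,
# so the worker's only choice is which predefined box an image fits.
allowed_labels = {"normal", "suspicious"}

def annotate(image_path: str, label: str, worker_id: str) -> dict:
    """Record a single worker judgment as one row of a training set."""
    if label not in allowed_labels:
        raise ValueError(f"label must be one of {allowed_labels}")
    return {"image": image_path, "label": label, "worker": worker_id}

# In reality thousands of workers make these judgments under time pressure;
# here one worker labels everything "suspicious" for illustration.
annotations = [annotate(img, "suspicious", "worker_007") for img in hit_queue]

with open("training_set.jsonl", "w") as f:
    for row in annotations:
        f.write(json.dumps(row) + "\n")
```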
The YHB Pocket Protest Shield (2020) by Leonardo Selvaggio
Law enforcement is known to use facial recognition to identify protestors. The YHB Pocket Protest Shield represents one of several do-it-yourself tech hacks, alongside wearing different makeup styles, scarves and other wearables, that can be used to trick basic facial recognition algorithms. In addition to distracting surveillance technology, the shield gives extra protection when citizens need to make their voices heard during COVID-19. The abbreviation YHB recognizes artists and designers whose work inspired the creation of this shield: Tokujin Yoshioka, Adam Harvey, and John Baldessari.
Scale: Classifying Humans
AI, Ain't I A Woman (2018) by Joy Buolamwini
Joy Buolamwini began to challenge bias in AI when she discovered that her face was not detected by custom facial recognition software until she wore a white mask. In AI, Ain't I a Woman, Buolamwini demonstrates how a subcategory of facial recognition technologies, gender classification tools, repeatedly misinterpret iconic Black women’s faces as male. It turns out that most facial recognition technologies are very accurate when measured on male faces with lighter skin tones. However, the error rate is considerably higher on darker-skinned women, due to a lack of diversity in training data. A growing body of research shows that machine vision built to classify humans discriminates against already marginalized populations, causing considerable harm including false arrests. This raises the question: what role do gender, race, disability or other attributes play in how machine vision is experienced?
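One way researchers make this kind of disparity visible is to report error rates separately for each demographic subgroup rather than a single aggregate accuracy. The sketch below shows the idea with a handful of invented records; the groups and numbers are illustrative only and not drawn from any real benchmark.

```python
# A minimal sketch of disaggregated evaluation: computing error rates per
# subgroup instead of one aggregate number. The records are invented.
from collections import defaultdict

# Each record: the subgroup, the true label, and the classifier's guess.
records = [
    {"group": "lighter-skinned male",  "true": "male",   "pred": "male"},
    {"group": "lighter-skinned male",  "true": "male",   "pred": "male"},
    {"group": "darker-skinned female", "true": "female", "pred": "male"},
    {"group": "darker-skinned female", "true": "female", "pred": "female"},
]

totals = defaultdict(int)
errors = defaultdict(int)

for r in records:
    totals[r["group"]] += 1
    if r["pred"] != r["true"]:
        errors[r["group"]] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")

# Overall accuracy can look high while one subgroup bears most of the errors.
overall = sum(errors.values()) / len(records)
print(f"overall error rate: {overall:.0%}")
```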
The Normalizing Machine (2018) by Mushon Zer-Aviv, Dan Stavy, Eran Weissenstern
Teaching vision machines how to recognize individuals is often presented as a futuristic innovation, but is in fact part of a long genealogy of techniques to identify, monitor and police other human beings. The design and functioning of today’s most advanced face recognition systems bear striking similarities to pseudosciences like physiognomy and phrenology, which sought to interpret individual character and criminal predispositions from a human’s facial features or skull shape. Given this history, it is not surprising that machine vision perpetuates racial and gendered biases.
The Normalizing Machine invites you to take part in a machine learning experiment to define what a normal person looks like. The artwork references “Le Portrait Parle”, a pioneering forensic system for categorizing faces, which the police officer and biometrics researcher Alphonse Bertillon developed in the late 1800s to identify criminals. The statistical system was later widely adopted by both the eugenics movement and the Nazis to criminalize the face. There is a danger embedded in classifying humans: if a system is trained on what is normal, it will also recognize what is abnormal.