Scale: Augmenting Worlds
Battle of Ilovaisk (2019) by Forensic Architecture
Military forces around the world are not the only ones using machine vision for intelligence. New technologies and platforms are developed by non-governmental organizations, citizen journalists and activists who use machine vision to watch back at those in power. One example is Forensic Architecture, a research agency investigating violence committed by states, police forces, militaries and corporations. One open-source intelligence method they have developed uses machine learning to generate evidence from vast amounts of images; “open” here refers to all data being publicly available. In the Battle of Ilovaisk investigation, commissioned by the European Human Rights Advocacy Centre (EHRAC), Forensic Architecture builds upon the work of dozens of open-source investigators and reporters. Using machine learning and computer vision, they patch together evidence that Russian military personnel and hardware were indeed present in Ukraine, charges that Russia has denied.
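The exhibition text does not detail how such a pipeline is built; the following is a minimal, hypothetical Python sketch of the general idea, using a generic pretrained object detector as a stand-in for the custom classifiers an investigation like this would actually train. The folder name, target class and threshold are invented for illustration: the point is that a model sifts a vast image collection so that only promising frames reach a human investigator.

    import glob
    import torch
    from PIL import Image
    from torchvision import transforms
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    # Stand-in model: a generic detector pretrained on COCO. A real investigation
    # would fine-tune a classifier for specific military vehicles.
    model = fasterrcnn_resnet50_fpn(pretrained=True).eval()
    to_tensor = transforms.ToTensor()

    TARGET_CLASS = 8   # COCO id for "truck"; placeholder for a custom vehicle class
    THRESHOLD = 0.8    # only surface confident detections for human review

    candidates = []
    for path in glob.glob("frames/*.jpg"):   # placeholder folder of collected images
        image = to_tensor(Image.open(path).convert("RGB"))
        with torch.no_grad():
            output = model([image])[0]
        # Keep the frame if any detection of the target class is confident enough.
        hits = [s for l, s in zip(output["labels"].tolist(), output["scores"].tolist())
                if l == TARGET_CLASS and s >= THRESHOLD]
        if hits:
            candidates.append((path, max(hits)))

    # The most confident frames are reviewed first by a human investigator.
    for path, score in sorted(candidates, key=lambda x: -x[1]):
        print(f"{score:.2f}  {path}")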
Cloud Index (2016) by James Bridle
In a world of uncertainties, humans have long looked up to the skies to predict the future. We developed technologies to forecast the weather; unsurprisingly, this was one of the first tasks assigned to digital computers. Today’s predictions are performed by machine learning, powered by data from “the Cloud”. Neural networks, a type of artificial intelligence loosely modelled on the structure of the brain, are used for predictions in autonomous vehicles and to diagnose diseases. In Cloud Index, James Bridle uses AI to predict weather scenarios that correlate with various outcomes of the Brexit vote. For this, a neural network was trained with 15,000 satellite images and six years of polling data. Reflecting on the results, Bridle describes how the machine produces certainties: “If we wish to change the future, we must change the weather.” Uncertainty is part of the complexity of the world. What are the consequences of producing predictions, and who has the power to control our futures?
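Bridle’s own model and dataset are not reproduced here, but the general shape of such a setup can be sketched: a network that maps an image of the sky to a single polling figure. The sketch below, in Python with PyTorch, uses placeholder tensors in place of the 15,000 satellite images and the polling data, and a deliberately small network; it is an illustrative assumption, not the artwork’s implementation.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Placeholder data standing in for 15,000 satellite images and the matching
    # polling figure (e.g. a vote share between 0 and 1) for each image's date.
    images = torch.randn(15000, 3, 64, 64)
    poll_values = torch.rand(15000, 1)
    loader = DataLoader(TensorDataset(images, poll_values), batch_size=64, shuffle=True)

    # A small convolutional network mapping a picture of the sky to a predicted poll value.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 1), nn.Sigmoid(),  # output constrained to 0..1, like a vote share
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(5):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()

However confidently such a model learns to map clouds to poll numbers, the correlation it produces says nothing about causation, which is precisely the certainty-making Bridle’s quote points at.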
Scale: Surveillance and the City
The Ongoing Moment (2020) by Li Weiyi 李维伊
"The Ongoing Moment", an online work by Chinese artist Li Weiyi 李维伊, generates custom augmented reality (AR) selfie filters from the answers to a whimsical survey. As a playful commentary on the popularity of online tests and camera filters in apps like Snapchat, Instagram and Facebook, this artwork lets you find out “who you really are at this moment”. Scan the QR code with your mobile device to experience this work.
Suspicious Behavior (2020) by Kairus (Linda Kronman & Andreas Zingerle)
Intelligent machine vision systems able to recognize objects, faces or behaviour are trained with large numbers of images called training sets. These images play a fundamental role in how machines see. A considerable amount of human labour goes into collecting, categorizing and labelling them. The tedious work of annotating images is done by crowdworkers on platforms such as Amazon’s Mechanical Turk. How do these workers interpret the images they are labelling? How is their work shaping machine vision?
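The mechanics of turning crowdworkers’ judgements into a training set are simple to sketch. The short, hypothetical Python example below shows one common approach: several workers label the same image and a majority vote decides which label the machine will learn from. The image names and label categories are invented for illustration.

    from collections import Counter

    # Hypothetical annotations: each image is labelled by three different crowdworkers.
    raw_annotations = {
        "frame_0001.jpg": ["loitering", "loitering", "walking"],
        "frame_0002.jpg": ["walking", "walking", "walking"],
        "frame_0003.jpg": ["running", "loitering", "running"],
    }

    def majority_vote(labels):
        """Collapse several workers' judgements into one training label."""
        label, count = Counter(labels).most_common(1)[0]
        return label, count / len(labels)

    training_set = {}
    for image, labels in raw_annotations.items():
        label, agreement = majority_vote(labels)
        training_set[image] = {"label": label, "agreement": agreement}
        print(image, label, f"agreement={agreement:.2f}")

    # Whatever the workers decide -- including their disagreements and assumptions --
    # is baked into the training set the machine will later learn from.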
In the artwork Suspicious Behavior you are given the role of an image annotator performing Human Intelligence Tasks (HITs) for a crowdworking platform. In this fictional annotation tutorial, your work to detect and label suspicious behaviour is needed to train “smart” surveillance cameras. As a crowdworker you are not paid well; time is money, and you need to decide fast. Nevertheless, the artwork raises questions: how is suspicious behaviour defined? Who defines it? Are humans training the machine, or is the machine eventually training humans about behaviour?
The YHB Pocket Protest Shield (2020) by Leonardo Selvaggio
Law enforcement is known to use facial recognition to identify protestors. The YHB Pocket Protest Shield represents one of several do-it-yourself tech hacks, alongside wearing different makeup styles, scarves and other wearables, that can be used to trick basic facial recognition algorithms. In addition to distracting surveillance technology, the shield gives extra protection when citizens need to make their voices heard during COVID-19. The abbreviation YHB recognizes the artists and designers whose work inspired the creation of this shield: Tokujin Yoshioka, Adam Harvey, and John Baldessari.
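The “basic facial recognition algorithms” such hacks target typically begin with a simple face detector. A minimal Python sketch using OpenCV’s stock Haar-cascade detector (a common, openly available example of such a detector; the image file name is a placeholder) shows how little the machine actually looks for: a coarse pattern of light and dark regions that makeup, scarves or a printed shield can disrupt.

    import cv2

    # Load OpenCV's bundled frontal-face Haar cascade -- a classic, basic face detector.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)

    # "protest_photo.jpg" is a placeholder file name.
    image = cv2.imread("protest_photo.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # The detector scans for light/dark patterns typical of an unobstructed face;
    # occluding or repatterning those regions is what counter-surveillance hacks exploit.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"Faces detected: {len(faces)}")
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detected.jpg", image)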
Scale: Classifying Humans
AI, Ain't I A Woman (2018) by Joy Buolamwini
Joy Buolamwini began to challenge bias in AI when she discovered that her face was not detected by custom facial recognition software until she wore a white mask. In AI, Ain't I a Woman, Buolamwini demonstrates how a subcategory of facial recognition technologies, gender classification tools, repeatedly misinterprets iconic Black women’s faces as male. It turns out that most facial recognition technologies are very accurate when measured on male faces with lighter skin tones. However, the error rate is considerably higher on darker-skinned women, due to a lack of diversity in training data. A growing body of research shows that machine vision built to classify humans discriminates against already marginalized populations, causing considerable harm, including false arrests. This raises the question: what role do gender, race, disability or other attributes play in how machine vision is experienced?
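Such disparities stay invisible as long as accuracy is reported as a single number; they only appear when results are broken down by group. The hypothetical Python sketch below illustrates that kind of disaggregated evaluation; the records are invented, and a real audit would use a large, balanced benchmark of labelled faces.

    from collections import defaultdict

    # Hypothetical evaluation records: true gender, skin-tone group, and the classifier's guess.
    results = [
        {"gender": "female", "skin": "darker",  "predicted": "male"},
        {"gender": "female", "skin": "darker",  "predicted": "female"},
        {"gender": "male",   "skin": "lighter", "predicted": "male"},
        {"gender": "female", "skin": "lighter", "predicted": "female"},
        {"gender": "male",   "skin": "darker",  "predicted": "male"},
    ]

    errors = defaultdict(lambda: [0, 0])   # (gender, skin) -> [wrong, total]
    for r in results:
        key = (r["gender"], r["skin"])
        errors[key][1] += 1
        if r["predicted"] != r["gender"]:
            errors[key][0] += 1

    # A single aggregate accuracy figure hides exactly the disparity this breakdown exposes.
    for (gender, skin), (wrong, total) in sorted(errors.items()):
        print(f"{gender:6s} / {skin:7s}: error rate {wrong / total:.0%} ({wrong}/{total})")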
The Normalizing Machine (2018) by Mushon Zer-Aviv, Stavy Dan, Eran Weissenstern
Teaching vision machines how to recognize individuals is often presented as a futuristic innovation, but is in fact part of a long genealogy of techniques to identify, monitor and police other human beings. The design and functioning of today’s most advanced face recognition systems bear striking similarities to pseudosciences like physiognomy and phrenology, which sought to interpret individual character and criminal predispositions from a human’s facial features or skull shape. Given this history, it is not surprising that machine vision perpetuates racial and gendered biases.
The Normalizing Machine invites visitors to take part in a machine learning experiment to define what a normal person looks like. The artwork references “Le Portrait Parlé”, a pioneering forensic system for categorizing faces, developed in the late 1800s by police officer and biometrics researcher Alphonse Bertillon with the intention of identifying criminals. The statistical system was later widely adopted by both the eugenics movement and the Nazis to criminalize the face. There is a danger embedded in classifying humans: if a system is trained on what is normal, it will also recognize what is abnormal.