
The complex system of facial expressions in sign language

In the near future, deaf people and hearing people may be able to communicate in real time using automatic translation systems based on computer vision technologies. Researchers are now studying closely the facial expressions that sign languages use to express both grammatical information and emotions.

NEW RESEARCH: Vadim Kimmelman from UiB has just published a study showing how facial expressions convey both grammatical information and emotions in sign language. Photo: Vadim Kimmelman


We have already seen examples of robotics projects that can convert text into sign language, but translating from sign language into spoken languages has proven more challenging.

Facial expressions and sign language

"In sign language, facial expressions are used to express both linguistic information and emotions. For example: eyebrow raise is necessary to mark general questions in most sign languages. At the same time, signers use the face to express emotions – either their own, or when quoting someone else. What we need to know more about, is what happens when grammar and emotions needs to be expressed at the same time. Will a computer software be able to capture the correct meaning?"  Vadim Kimmelman asks.

Kimmelman is an Associate Professor at the University of Bergen, where he works as a linguist, primarily on the grammar of sign languages, specifically Russian Sign Language (RSL) and Sign Language of the Netherlands (NGT). In his latest study, published in PLOS ONE, Kimmelman looks at what happens when different emotions are combined with different sentence types, using 2D video analysis software to track eyebrow position in video recordings. Kimmelman and his colleagues at Nazarbayev University in Kazakhstan investigated these questions for Kazakh-Russian Sign Language, the sign language used in Kazakhstan.

Tracking hand, body and facial features

"In our study we asked nine native signers to produce the same sentences as statements and two types of questions (e.g. The girl fell – Did the girl fall? – Where did the girl fall?). The signer posed the questions three times, adding three different emotions (neutral, angry, surprised). Based on research on other sign languages, we had expected that both emotions and grammar would affect eyebrow position, and that we might find some interactions", Kimmelman explains.

The main novelty in this study was the use of the OpenPose software, which allows automatic tracking of hands, body, and facial features in 2D videos.

OpenPose

"Using this software, we were able to precisely and objectively measure eyebrow positions across different conditions, and conduct quantitative analysis. The study showed as expected, that both emotions and grammar affected eyebrow position in Kazakh-Russian Sign Language. For instance, general questions are marked with raised eyebrows, surprise is marked with raised eyebrows, and anger is marked with lowered eyebrows".

Complex facial expressions

Kimmelman also found that emotional and grammatical marking can be combined:
 
"We found that with surprised general questions the eyebrows raise even higher than with neutral questions. In addition, we found some complex interactions between factors influencing eyebrow positions, indicating the need for future research. In future, we will also investigate how eyebrow movement aligns with specific signs in the sentence, in addition to how average eyebrow position is affected, and we will investigate other facial features and head and body position".

The results of this study have real practical implications. The evolution of new technologies has clearly contributed to improving and extending the communication opportunities of hearing-impaired people, Kimmelman says:

Showcasing new possibilities

"First, students learning Kazakh-Russian Sign Language (e.g. future interpreters) should be aware of how both emotions and grammar are affecting facial expressions. Second, our findings will have an impact on projects on automatic recognition of sign language, as it is clear that both grammatical information and emotions should be considered by recognition models when applied to facial expressions".

"Last but not least, our study is a showcase of the possibilities that new technologies, such as computer vision, offer to scientific research of sign languages" Kimmelman says.