
«A formal, methodical approach to artificial intelligence suits me well»

Rustam Galimullin is one of several experts in artificial intelligence at UiB. Learn more about him and his research here.

The mathematical models created by Galimullin are used to determine whether robots can address the communication challenges they will encounter.
Photo: Melanie Burford


«I have always been fascinated by the unknown aspects of artificial consciousness and where its boundaries lie. A formal, methodical approach to artificial intelligence (AI) suits me well,» says Rustam Galimullin.

He is a postdoctoral researcher at the Department of Information Science and Media Studies (UiB).

Though his research is abstract, it has practical implications for how artificial intelligence is built and used. Galimullin creates mathematical models to assess whether future robots can communicate more effectively than current ones.

Making robots socially intelligent 

Why invest time and research funds in something like that, some might ask. Galimullin believes there are at least two compelling reasons.

Reason number one:

Current robots excel at solving well-defined tasks with clear objectives in limited physical environments, often at precisely the things humans find challenging: as early as 1997, a chess computer defeated the reigning world champion. What robots currently struggle with, however, is handling situations that require interpreting social context and seeing the world from others' perspectives.

«This means that before we can safely employ robots in more open contexts, we must teach them at least a semblance of social intelligence. An essential aspect of social intelligence is the ability to communicate with others and act based on new information. Therefore, it is crucial to determine how we can program robots to become better communicators,» says Galimullin.

Mathematical models come in handy

Reason number two:

Enhancing the social intelligence of robots is just one aspect of the job. Researchers must also test the functionality and safety of the robots.

«Before we release them into the world, we must ensure that they operate as intended. We need to know they are capable of performing the tasks we assign them and, most importantly, that it is safe to have them interact with humans. This validation process can be accomplished using mathematical models,» Galimullin explains.
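As a rough illustration of the idea, here is a minimal sketch in Python. It is not Galimullin's actual formalism: the robot, its states, and the transitions are all hypothetical. The sketch describes a robot's behaviour as a small transition system and exhaustively explores every reachable state to confirm that no unsafe state can ever occur.

from collections import deque

# Hypothetical states and transitions for a delivery robot.
transitions = {
    "idle":       ["moving"],
    "moving":     ["idle", "near_human"],
    "near_human": ["stopped"],  # the program forces a stop near humans
    "stopped":    ["idle"],
}

unsafe = {"collision"}  # states that must never be reachable

def is_safe(start):
    """Breadth-first search over all states reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state in unsafe:
            return False
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

print(is_safe("idle"))  # True: "collision" is unreachable in this model

Real verification tools work on far richer models, but the principle is the same: prove properties about the program before the robot is deployed.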

The postdoctoral researcher highlights several instances where robots currently lack street-smarts:

For instance, there is a case from a Danish hospital where sophisticated and expensive robots, intended to navigate large sections of the hospital premises for various tasks, had to be parked. They struggled to cope with the unpredictability of human behavior, leading to potentially hazardous situations.

An autonomous role

Originally from Russia, Galimullin specializes in symbolic AI and blockchain technology.

So, what is it like to be a postdoctoral researcher at UiB?

«It's both exciting and challenging. I appreciate the freedom to cultivate my research interests without the pressure of a looming dissertation deadline,» says Galimullin. He continues: «As a postdoctoral researcher, I've expanded my professional network, explored new research directions, and gained valuable insights into academia.»

Communication skills are crucial

The mathematical models created by Galimullin are used to determine whether robots, or more precisely their computer programs, can address the communication challenges they will encounter.

He emphasizes that we are largely discussing programs designed for hypothetical robots that have not yet been constructed.

«We can already develop robust models to assess whether robots can effectively communicate with each other to solve a given task. However, we observe that communication becomes significantly more challenging for robots when we consider social settings that also involve humans,» says Galimullin.

As an example, he describes a scenario in which a robot detects an impending danger that the human it is interacting with is unaware of.

«Instilling in the robot an understanding that humans may not share the same level of information can be quite tricky. There are multiple layers of situational understanding involved, and it is challenging to convey to the robot that the human may not have observed the same information and, as a result, must be warned about the imminent danger,» Galimullin explains.
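The layers Galimullin describes can be made concrete with a toy epistemic model, sketched in Python below. It is in the spirit of the logics of knowledge and announcements he works with, but the two-world setup and every name in it are illustrative assumptions, not his actual models. In the real world (w1) there is a danger; the robot has observed it, while the human cannot rule out a danger-free world (w2).

# Two possible worlds: in w1 the danger is real, in w2 it is not.
facts = {"w1": {"danger"}, "w2": set()}

# From each world, the set of worlds an agent considers possible.
access = {
    "robot": {"w1": {"w1"}, "w2": {"w2"}},
    "human": {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}},
}

def knows(agent, fact, world):
    """An agent knows a fact iff it holds in every world they consider possible."""
    return all(fact in facts[w] for w in access[agent][world])

print(knows("robot", "danger", "w1"))  # True
print(knows("human", "danger", "w1"))  # False: w2 is still conceivable

# Second layer: the robot knows that the human does not know.
print(all(not knows("human", "danger", w)
          for w in access["robot"]["w1"]))  # True

# A public warning deletes the worlds where the announced fact is
# false; afterwards the human knows about the danger too.
surviving = {w for w in facts if "danger" in facts[w]}
facts = {w: f for w, f in facts.items() if w in surviving}
access = {a: {w: r & surviving for w, r in rel.items() if w in surviving}
          for a, rel in access.items()}

print(knows("human", "danger", "w1"))  # True: the warning worked

Teaching a program to maintain and update this kind of model of who knows what is one way to capture the situational understanding Galimullin describes.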