
"Even after years of research, you may have no idea where it will end up."

Professor of informatics Fedor Fomin explains what he does for a living and talks about the current challenges in algorithmic theory.

Fedor Fomin sitting in a chair in front of a blackboard. Photo: Randi H Eilertsen


How did you become an informatician?

Informatics was by no means an obvious path. I was interested in both literature and history, and as a music enthusiast, becoming a rock star seemed like a more appealing option. But I was good at math, and since the math faculty was only a twenty-minute walk from my house, I decided to enroll there.

My interest in math gradually grew into a passion for algorithms, which eventually became a career in computer science.

When you’re tackling a problem – whether you succeed or fail – it leads you to a new problem. It is both addictive and unpredictable; even after years of research, you may have no idea where it will end up.

What problems are you tackling?

Much of my research focuses on how to systematically classify computational problems based on their intrinsic complexity and then design algorithms that exploit specific structures to solve these problems more efficiently.

To put it extremely simply: In computer science, “intrinsic complexity” is how hard it is to do something. Tasks can be hard in different ways. By finding out which way a task is hard, you can change the method to do it more efficiently.
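
To make "exploiting structure" concrete, here is a minimal illustrative sketch (the example is added here, not one Fomin gives in the interview): finding a maximum independent set is computationally hard on general graphs, but if the input happens to be a tree, a simple dynamic program solves it in linear time.

```python
# Illustrative sketch: maximum independent set is NP-hard on general graphs,
# but on a tree the structure allows a linear-time dynamic program.
# The tree representation and example below are hypothetical.

def max_independent_set_size(tree, root):
    """tree: dict mapping each node to a list of its children; root: the root node."""
    include, exclude = {}, {}

    # Collect nodes top-down with an iterative DFS, then process them bottom-up.
    order, stack = [], [root]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(tree.get(v, []))

    for v in reversed(order):
        include[v] = 1   # best set in v's subtree that takes v
        exclude[v] = 0   # best set in v's subtree that skips v
        for child in tree.get(v, []):
            include[v] += exclude[child]                       # if v is taken, its children must be skipped
            exclude[v] += max(include[child], exclude[child])  # if v is skipped, its children are free
    return max(include[root], exclude[root])


# A small example tree: node 1 is the root with children 2 and 3; node 3 has children 4 and 5.
example_tree = {1: [2, 3], 3: [4, 5]}
print(max_independent_set_size(example_tree, 1))  # -> 3 (for instance, nodes 2, 4 and 5)
```

On an arbitrary graph no fast exact algorithm is known for this problem; it is the tree structure that makes the simple dynamic program possible.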


What are the current challenges in algorithm theory?

There’s a discrepancy in how researchers judge algorithms in theory versus how algorithms work in practice. A lot of algorithms that are theoretically hard actually work very well in practice.

This is because computer scientists are ruled by pessimism, or as they like to call it: worst-case time complexity.

Algorithms tend to be judged by how hard the computer needs to work, and worst-case time complexity is a way of measuring an algorithm by its “worst-case” scenario.
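
To make the measure concrete, here is a toy sketch (quicksort is an illustration added here, not an algorithm discussed in the interview): quicksort with a naive first-element pivot is judged by its quadratic worst case, which already-sorted input triggers, even though shuffled input is handled far faster.

```python
# Illustrative sketch: quicksort's worst-case bound is quadratic, yet on typical
# (shuffled) inputs it behaves like n log n. Timing both inputs shows the gap
# between the worst-case label and everyday performance.
import random
import sys
import time

sys.setrecursionlimit(10_000)  # the worst case recurses about n levels deep

def quicksort(items):
    """Quicksort with a deliberately naive first-element pivot."""
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

n = 3000
inputs = {
    "already sorted (worst case)": list(range(n)),
    "shuffled (typical case)": random.sample(range(n), n),
}
for label, data in inputs.items():
    start = time.perf_counter()
    quicksort(data)
    print(f"{label}: {time.perf_counter() - start:.3f} s")
```

Running the sketch shows the sorted input taking visibly longer than the shuffled one, even though both are the same algorithm applied to the same n numbers.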

With modern algorithmic developments, it’s becoming clear that worst-case complexity often fails to adequately predict real-world performance.

For example, in machine learning, nearly every model that is routinely used and works well in practice is computationally hard in the worst case.

In medicine, algorithms are used to analyze MRI scans. In theory, this should be a task that computers are very bad at. In reality, they can identify different groups of tissue in the body and spot irregularities such as scar tissue or cancer with high accuracy.
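
The interview does not say which methods such scanners use; as a simplified, hypothetical stand-in for the same theory-practice gap, consider k-means clustering: finding an optimal clustering is computationally hard in the worst case, yet Lloyd's classic heuristic groups intensity values almost instantly in practice.

```python
# Illustrative sketch: k-means clustering is NP-hard to solve optimally in the
# worst case, yet Lloyd's heuristic below typically converges in a few passes.
# Grouping 1-D "intensity" values is a toy stand-in, not a real MRI pipeline.
import random

def lloyd_kmeans(values, k, iterations=20, seed=0):
    """A bare-bones Lloyd's heuristic on 1-D values."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)  # start from k random data points
    for _ in range(iterations):
        # Assignment step: attach every value to its nearest center.
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Update step: move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

# Toy "tissue intensities": two clear groups, one around 20 and one around 80.
intensities = [18, 19, 21, 22, 78, 80, 81, 83]
centers, clusters = lloyd_kmeans(intensities, k=2)
print(centers)   # two centers near 20 and 80 (order depends on the random start)
print(clusters)
```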

This reveals a significant gap in our understanding of computation. The predictions we derive from worst-case analysis frequently don’t align with how algorithms typically perform. In my view, reducing this gap is the most pressing challenge in algorithm theory today.