
UiB AI #2: But, why? - make AI answer!

Artificial intelligence (AI) helps us make decisions every day, but can AI explain its own suggestions? In this seminar, Samia Touileb and Ghazaal Sheikhi will talk about how AI sees the world and makes the decisions that help shape it. The seminar will be held in English.

Samia Touileb and Ghazaal Sheikhi (Photo/ill.: UiB)


Welcome to the second seminar in the UiB AI seminar series. The seminar is open to all employees and students at UiB. Register now and join us for an interesting talk and a delicious lunch in the University Aula.

Register here.

Artificial intelligence (AI) is increasingly involved in decision-making. Decision-making can be a monotonous task that involves many routine operations and the processing of vast amounts of information. Involving artificial intelligence can free up human time and allow it to be used in a more meaningful way, both for society and for the individuals involved.

Decisions by AI can mean many different things. On the one hand, machine learning is used to identify who is most likely to pass the exam, how much your house will be appraised for, or how likely that spot on your mammography image is to be malignant. On the other hand, automated decision-makers are built to follow a specific set of rules. When a decision affects our life, we would like to know how and why that decision was made. Knowing why helps scientists and engineers improve automated decision-making tools. It also helps individuals retain their autonomy. If you do not know why, you cannot possibly do anything to change a decision. Not knowing why makes the personal experience the same as having a roll of the dice decide the value of your property and the quality of your life.

To explain means to provide information about a process that is meaningful, useful and understandable to the person for whom it is intended. Not all AI methods 'shed' enough information for a meaningful explanation to be feasible. It is not that the why exists somewhere and the AI method would not admit to it. Machine learning algorithms produce models of correlations in the data. The data is a numerical representation of the real world. There might be a reason in the real world why two phenomena are related. A machine learning model can correctly identify that relation without having access to, or making use of, the reasons for it. AI methods that rely on symbolic representations produce 'reason-based' decisions by design. However, those reasons are not explanations, just the material from which explanations are built.
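To make the distinction concrete, here is a minimal sketch of our own (not taken from the seminar material), assuming scikit-learn and a hypothetical toy dataset about passing an exam. A small decision tree learns correlational patterns in the data, and printing its split rules yields material from which an explanation could be built, not the explanation itself.

    # A minimal, hypothetical sketch: a decision tree learns correlations
    # in toy data. Requires scikit-learn.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical toy data: [hours_studied, lectures_attended] -> passed (1) or not (0)
    X = [[2, 3], [10, 8], [1, 1], [8, 9], [4, 2], [12, 10]]
    y = [0, 1, 0, 1, 0, 1]

    model = DecisionTreeClassifier(max_depth=2, random_state=0)
    model.fit(X, y)

    # The printed split rules reflect correlations the model found in the
    # data; the model has no access to the real-world reasons behind them.
    print(export_text(model, feature_names=["hours_studied", "lectures_attended"]))

The printed rules (for example, a threshold on hours_studied) are the kind of 'reason-based' material the paragraph above refers to; turning them into an explanation that is meaningful to the affected person is a further step.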

How do we build AI that explains its decisions? There are numerous challenges to be addressed, both in providing material for explanations and in constructing the explanations themselves. Ultimately, some AI approaches will always be more explanation-friendly than others. One can break a walnut with a sledgehammer, but we do not use sledgehammers for this purpose because they tend to destroy the walnut. Analogously, an AI approach can be used for a decision-making purpose, but its explainability should be matched against the impact the produced decisions have. Otherwise, we risk breaking something we cherish.

In this seminar we will describe how AI sees the world and makes decisions. We will elucidate what happens when we say that AI reasons and that AI learns. We will discuss how researchers are trying to change different AI methods to gain more explainability from them.

---

Samia Touileb is a researcher at MediaFutures: Research Centre for Responsible Media Technology & Innovation, working on Norwegian Language Technologies. Her main research interests are information extraction, sentiment analysis, bias and fairness in NLP, and applications of NLP and machine learning methods to tasks within social science research. She holds a PhD in Information Science with a focus on Natural Language Processing (NLP) from the University of Bergen, was a postdoc at the Language Technology Group at the University of Oslo, and has worked on research in, and applications of, AI and NLP for almost a decade.

Ghazaal Sheikhi is a Postdoctoral Fellow at MediaFutures: Research Centre for Responsible Media Technology & Innovation. Her research interests revolve around machine learning, natural language processing and textual content analysis. She holds a PhD in Computer Engineering (Machine Learning) from Eastern Mediterranean University, North Cyprus, and a master's degree in Biomedical Engineering from Amirkabir University of Technology, Tehran, Iran.