It's hard to make AI behave!
The 12th UiB AI seminar gathered researchers, students and others interested in hearing about the challenges of aligning artificial intelligence systems with human values and goals. There seems to be no quick fix.
The seminar was a collaboration between Universitetsfondet and UiB AI. Samia Touileb, Associate Professor at Infomedia, was scientifically responsible for the event.
You can see the recording of UiB AI #12: Aligning AI with Human Values, here.
Jan Broersen from Utrecht University in the Netherlands talked about the ‘disconnect’ between the knowledge that goes into large language models (like ChatGPT) on the one hand, and what comes out when you prompt them on the other. For example, ChatGPT knows the rules of chess down to the last detail, but if you try to play against it, it makes illegal moves - without ‘realising it’. This disconnect makes it difficult - or perhaps impossible - to produce language models that give you texts on an ethical basis, however we choose to define such a basis.
Jan Broersen from Utrecht University.
Emily C. Collins from the University of Manchester raised the question of who should be responsible for the consequences of using artificial intelligence technology. She also showed examples from the workplace of how perception of and trust in AI depends a great deal on who asks you to, or requires you to, use the technology. And on what the purpose is: to use technology to give you a better and less routine working day, or to generate ‘money money cash cash’?
Emily C. Collins from the University of Manchester.
Marija Slavkovik from the University of Bergen addressed why it is so hard to get artificial intelligence to ‘behave’. She showed an example of how ChatGPT refuses to answer how to slaughter a chicken, but is happy to provide recipes for cooking an (already dead) chicken for dinner. The example illustrates how attempts have been made, with the help of moral filters, to reduce the potentially unethical text that such language models can produce. But for us, as users, it is unclear where this ‘morality’ comes from. The point of machine ethics, according to Slavkovik, is that we need to understand how the technology works and that morality should be part of the design of the technology itself.
Marija Slavkovik from the University of Bergen.
The next seminar in the UiB AI series takes place on January 24, 2025, and is about quantum technology. You will find information about it and can register for UiB AI #13 here.