Tools based on artificial intelligence in education

This website gives a brief introduction to what generative artificial intelligence is, and to the principles for using these tools at UiB.

The illustration shows an example of ChatGPT.
Photo:
Colourbox


The use of the language model ChatGPT and similar services has become widespread since the launch of ChatGPT in late 2022. Generative artificial intelligence brings many opportunities, but also some challenges. 

What is generative artificial intelligence?  

  • Collective term: Generative artificial intelligence is a collective term for various models that can produce text, images, videos, and more, based on given input. ChatGPT is one of many such tools that use generative AI to produce text. As a user, you can ask tools like ChatGPT questions and get seemingly credible answers back. As ChatGPT is the most used service for producing text with generative AI, it is used as the example here. Other tools, like Bing Chat from Microsoft and Bard from Google, can function similarly to ChatGPT.
     

  • Needs to be trained: For generative AI to be able to produce content, it first needs to be trained. Training takes place by feeding the model data and teaching it from that data. During training, correlations in the data are identified, and it is these correlations that allow the model to produce content that is perceived as innovative or original.
     

  • Not necessarily a reliable source: It is important to emphasize that it is models that produce the content. Even if a chatbot is trained on extensive amounts of data from around the world, that does not necessarily mean it is a reliable source of information. Every time you ask the chatbot a question, it generates several candidate answers in the background, calculates which is most probable, and returns that one. The chatbot therefore does not function as a search engine with access to all the information it has been trained on, and should not be confused with searching for information in a database. A deliberately simplified sketch of this sampling principle follows after this list.
     

  • Great potential: In addition, there is great potential in these models, a topic addressed at UiB AI's seminar UiB AI #5 ChatGPT – trussel eller mulighet i forskning og utdanning? (youtube.com). The seminar dealt with what developments in AI tools mean for research and education.
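
To make the sampling point above concrete, here is a deliberately simplified sketch in Python. It is not how ChatGPT is implemented; the tiny hand-made probability table stands in for the distributions a real language model learns from its training data. The point it illustrates is that the model samples a statistically likely continuation rather than looking up a fact:

import random

# Toy next-token table: for each word, a probability distribution over
# possible next words. A real language model learns such distributions
# over a huge vocabulary; this hand-made table is purely illustrative.
NEXT_TOKEN_PROBS = {
    "the": {"rector": 0.5, "university": 0.3, "model": 0.2},
    "rector": {"was": 0.7, "is": 0.3},
    "was": {"Asbjørn": 0.6, "Erik": 0.4},  # likely is not the same as true
}

def generate(start: str, steps: int) -> str:
    """Generate text by repeatedly sampling a likely next token."""
    output = [start]
    for _ in range(steps):
        dist = NEXT_TOKEN_PROBS.get(output[-1])
        if dist is None:
            break
        words, weights = zip(*dist.items())
        # The answer is a probable continuation, not a database lookup.
        output.append(random.choices(words, weights=weights)[0])
    return " ".join(output)

print(generate("the", 3))  # e.g. "the rector was Asbjørn"

Because the model returns probable rather than verified continuations, a fluent answer can still be factually wrong, as the rector example below shows.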

Pitfalls when using generative models 

You never know what data the model is trained on 

As mentioned above, every generative model must be trained on a set of data. The data used during training will greatly influence the results you get, and this is something you must be aware of when receiving a response from the generative model. Just as our opinions and attitudes can be shaped by the information we have, generative models will produce content in line with the data they are trained on.

As a user of these services, you can never be certain what data has been used in training, unless the services have made this available in a transparent way. Even though generative AI tools can be useful, domain expertise in the relevant field is necessary to assess the reliability of the content they produce.

For more information on how data can influence, and has influenced, AI models into drawing false conclusions, see for example Rise of AI Puts Spotlight on Bias in Algorithms - WSJ.

The model is a result of the data used in training - Garbage in = garbage out  

Even if you know what data a model is trained on, it does not necessarily mean that it produces sensible results. Generalising a bit, one can say that a model that uses high-quality data will usually produce high-quality content. Similarly, a model that is trained on low-quality data will produce low-quality content. 

As a user of these services, you do not know the quality of the data or how it has been processed. If you blindly trust the result of a generative model, you therefore risk relying on unreliable answers. If you ask ChatGPT who the rector at UiB was in 1955, you get Asbjørn Øverås as the answer, and not Erik Waaler, which is correct. This is an example of a factual error, something you can read more about in Professor Jill Walker Rettberg's article in NRK.

The image shows an example of a factual error when using ChatGPT.
Photo:
UiB

You do not control the data you send in 

By using generative models on the internet, you send information to servers whose location you do not necessarily know. The EU has a very strict set of rules regulating what companies can and cannot do with your information, but for web-based services there is no guarantee that they operate in the EU/EEA in accordance with these regulations.

Unless the company that delivers the model you use has committed to handling your data in a lawful way, you risk that the data is used to train other models or ends up in the wrong hands. As with most other services on the internet, it is therefore very important to be aware of what data you send.

For example, students' work such as exam answers must not be sent to generative models, as exam answers are considered personal data. In all cases where UiB is to send personal data to a third party, a data processing agreement must be in place.

Language models have a limited scope 

At first glance, it may seem like ChatGPT and similar language models work very much like humans when it comes to thinking and reasoning. However, there are several things that ChatGPT simply cannot do because it is a language model. For example, it cannot remember facts and will often present factual errors (“hallucinations”) in a very convincing way. ChatGPT also cannot perform calculations, assess, reason, or think logically. 

What does this mean for you as an educator and employee? 

Digitalization, technology, and artificial intelligence are changing our subjects and working life across industries. In the future, the combination of specialized subject knowledge and digital understanding will be crucial in both work and civic life.

A recent survey examined Swedish students' use of, and views on, chatbots and other tools based on artificial intelligence. The majority of students are positive towards such tools and believe they make the learning process more efficient. At the same time, more than half express concern about the impact of chatbots on future education, but are less concerned about other AI-based language tools. More than sixty percent of students believe that using chatbots during exams is cheating, yet the majority are against a ban on AI-based tools in education. Read more in the summary of the survey.

The emergence of new and accessible tools has different implications for different subjects and disciplines. It is therefore important that the academic communities themselves assess the use of such tools in their subjects and study programs. Common to all is that legal frameworks, regulations on plagiarism and cheating, and rules for citation must be respected.

The impact of artificial intelligence tools for learning and teaching 

Generative models can be used in a wide range of ways in teaching and learning situations. The academic communities must themselves assess how and to what extent the tools can be used in their own subjects and study programs, and clarify the premises for use to the students in course descriptions and relevant communication channels. Regardless of whether generative models are allowed in assessment, there are different ways students can use such models to acquire knowledge, skills, and general competence. Students can, for example, use such tools to summarise their lecture notes into coherent text, make flashcards, or as a sparring partner that can explain different parts of the curriculum. Other examples are that ChatGPT or similar tools can help a student by asking questions from a known curriculum or by improving texts.

In the examples above, the students will have used generative models to acquire knowledge during self-study, which can then be used in an assessment. This can be compared to collaborating with other students, which is generally encouraged as part of self-study but usually not allowed in assessment. In addition, the students will acquire skills in using this type of tool.

The extent to which students choose to use such tools in self-study largely depends on their skills and interest in them. To help students acquire such skills, teaching can actively include tools based on artificial intelligence.

Through an exploratory approach to such tools, one can gain insight into positive effects on learning and constructive use of technology, as well as critical reflection and thinking around the use of such tools. 

In several subjects at UiB, ChatGPT has been used in teaching. In the spring of 2023, the lecturer in INF100 spent time presenting what ChatGPT is, namely a language model, and why one should be careful about calling it artificial intelligence. The lecturer showed the students good and bad answers to questions asked of ChatGPT, and the students were allowed to use ChatGPT in their answers as long as they cited the use and were certain they knew the subject well enough to assess the tool's accuracy.

In INF250, the students who took the course in the spring of 2023 were asked to generate code using ChatGPT to visualize a dataset. The students were asked to deliver the code that ChatGPT produced, as well as a reflection on the experience of using ChatGPT, such as how easy/difficult it was to get the desired result, whether the code was correct, what the visualization looked like, etc. 
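
For illustration, here is a minimal sketch of the kind of code such a prompt might yield. The file name and column names are hypothetical stand-ins, not the actual INF250 dataset:

# Hypothetical example of ChatGPT-style output for the prompt
# "write Python code to visualize this CSV dataset".
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("dataset.csv")  # hypothetical file name
df.plot(kind="scatter", x="year", y="value")  # hypothetical columns
plt.title("Scatter plot of value by year")
plt.xlabel("Year")
plt.ylabel("Value")
plt.show()

Exactly as in the assignment, code like this has to be checked by the student: does it run, are the columns right, and does the visualization actually show what was asked for?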

In the spring of 2023, the students in DIKULT304 worked with AI. Here, the students were made aware that ChatGPT and other AI models are tools with limitations. They discussed, for example, source criticism and referencing when using artificial intelligence, as well as AI models' tendency to hallucinate things that have not been said or done. In several subjects in Digital Culture, they use Spannagel's "Rules for Tools" as a guide for the use of ChatGPT and other AI models in their subjects.

In the subject MUTP104, the students in the spring of 2023 worked on what questions one can ask such tools, with a strong focus on source criticism: does the answer agree with the students' own understanding, and are there other sources that can substantiate it?

In several subjects at the Faculty of Medicine, AI chatbots have been used to create suggestions for colloquium tasks, exam tasks, and quizzes. A specific example is MED6, where the lecturer addressed how artificial intelligence can be used in medicine, both the advantages it offers and the challenges it can pose.

The impact of artificial intelligence tools in assessment 

The wide range of uses of generative models becomes most evident in assessment situations. At one extreme there is "no use" of generative models; at the other, "extensive use", where the generative model has produced the entire content. Depending on which stance one takes in one's own subject, grey areas may arise for what counts as acceptable use. The use must in any case respect the current regulations for academic integrity and cheating, as well as rules for citation.

If one takes a stance where generative models are used to some extent as part of the assessment, some grey areas may potentially arise. What happens if a student has written a text without using generative models, but uses a tool to cut down on the number of words to stay within the word limit? To what extent is the text still the student's own independent work? If the model only removes and does not add, the student will have used generative models, but can one say that the student has produced the content on their own? 

Another example can be to use generative models to correct linguistic and grammatical errors. In an assessment situation where language and grammar are being assessed, one will probably not allow this, but in cases where there are other aspects of the students' learning outcomes that are being assessed, there may be room to allow the use of generative models for this. 

A third way students can use generative models is to ask the model to critique or give feedback on submissions. If, as a teacher, you have published assessment criteria for a task, the students can take the assessment criteria along with what they themselves have written and feed this into a model. The students can then ask the model to use the assessment criteria, the course description, and other relevant information to give feedback on the form and content of an assignment, as in the sketch below.
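
A minimal sketch of how this could look in practice, assuming the OpenAI Python client; the file names and prompt wording are hypothetical, and, as noted above, no personal data or other people's work should be sent to such a service:

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical local files holding the published criteria and the draft.
criteria = open("assessment_criteria.txt", encoding="utf-8").read()
draft = open("my_draft.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Give feedback on the student draft using the "
                    "assessment criteria. Comment on form and content; "
                    "do not rewrite the text."},
        {"role": "user",
         "content": f"Assessment criteria:\n{criteria}\n\nDraft:\n{draft}"},
    ],
)
print(response.choices[0].message.content)

If conversations like this are to count in assessment, the log itself can simply be saved and attached, which ties into the questions below.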

Depending on how one sets up teaching and assessment in a subject, there will be very different issues to think through. If generative models are allowed in a subject, should the students have to attach the conversations they have had with the models? If so, when should this be done? And does the conversation log with the chatbot then also become part of the basis for assessment?

If one chooses to go for an assessment where students should not be able to use generative models, one must think through which form of assessment to use, and how to detect cheating. Since generative models require internet access, one can control the use of these by using a school exam without internet access. Although using a school exam can be a good measure to prevent the use of generative models, it is not necessarily desirable if one has ambitions to use, for example, formative forms of assessment. 

For forms of assessment other than school exams, one must therefore think through what tasks one gives the students, so that these cannot easily be answered by a generative model. The University of Oslo has highlighted important implications this technology has for higher education and made suggestions for good assessment practices in the short and long term, with tasks that cannot easily be answered by generative AI like ChatGPT. Read about the suggestions on this page: AI and implications for education

In the short term, it is suggested to emphasize critical reflection that brings out the students' views, reveals thought processes, and emphasizes creating something new. By linking tasks specifically to the curriculum or to current issues, it becomes more difficult to use ChatGPT to answer them. Assessment forms linked to other formats, such as video, presentations and oral assessments, project work, portfolio assessment, or forms where students receive feedback along the way and improve their work, can also limit the scope for using ChatGPT. In the long term, it is suggested to shift the focus from control of learning to the process of learning, and to explore how tools based on artificial intelligence can be included to support learning.

What tools based on AI can I use? 

As part of the Microsoft package at UiB, all employees have access to Microsoft Copilot (formerly Bing Chat Enterprise), Microsoft's chatbot based on OpenAI's GPT-4 model. This is the same model used in the paid version of ChatGPT, and it allows both text and images to be used as input, or "prompts", when using the service.

Because UiB has a data processing agreement with Microsoft, it is now possible to use tools based on artificial intelligence in a safe manner. However, the solution is not approved for confidential or strictly confidential information. 

Read more about Microsoft Copilot (formerly Bing Chat Enterprise) in the news story from På høyden.

Citation of generative models 

In general, software used in academic work should be referenced when the software impacts the results, analyses, or findings presented. AI tools should be considered as software and not as co-authors since any attribution of authorship implies responsibility for the work, and AI tools cannot assume such responsibility. 

An AI-generated text cannot be recreated by others. In academic works, it should therefore be made clear how the generator was used: for example, when and to what extent, and how the result was incorporated into the text. It is often relevant to reproduce the input (prompts) used in the chat. A long answer from the text generator can be added as an attachment. Be aware that there may be subject-specific guidelines for how the use should be documented, as AI tools can have different uses in different subjects.

The example below shows how to cite an AI-generated text using the APA 7 referencing style:

  • In the text: text (OpenAI, 2023)

  • In the bibliography: OpenAI. (2023). ChatGPT (April 20th version) [Large language model]. https://chat.openai.com/. 

DIGI courses at UiB 

Since the autumn of 2022, many students at UiB have taken advantage of the opportunity to build digital understanding, knowledge, and competence through the DIGI course package. From the autumn of 2023, all employees can also take the courses.  

DIGI101 - Digital Source Criticism has been updated with a new module on “Science in Artificial Intelligence”, which addresses the use of chatbots. The module provides an overview of what characterizes information generated by AI tools and how to relate to this information. The course is only available in Norwegian.