Tools based on artificial intelligence in education
This page provides an overview of the most common tools based on generative artificial intelligence and the framework for their use by employees at UiB.

The use of text and conversation tools such as ChatGPT and similar services has become widespread since ChatGPT launched in late 2022. Tools based on generative artificial intelligence (AI) bring many opportunities but also some challenges.
What do we mean by AI-based tools?
Generative Artificial Intelligence: This is a general term for tools that can generate text, images, sound, videos, etc. Generative AI can also create useful structures such as proteins, enzymes, and building designs, as well as solution proposals, thereby contributing to scientific advances. On this page, we only refer to text and conversation tools. ChatGPT is an example of a tool that generates text based on user input, and tools like Microsoft Copilot and Google Gemini work in a similar way.
Training the Model: To create content, the AI tool must be trained by learning from data. The model on which the tool is based finds patterns in the data that allow it to provide good overviews and summaries, and often create content that appears to be new.
Reliability: It is important to be aware that these tools are not reliable sources of information. They generate answers based on probabilities and do not function like a regular search engine. They can connect pieces of information that are not actually related, resulting in so-called "hallucinations"; the toy sketch after this list illustrates the probability-based generation behind this.
What potential do AI tools have? Methods and tools based on artificial intelligence bring opportunities for significant scientific advances, especially in the natural sciences, technology, and medicine. Examples of such advances are continuously showcased and discussed, for instance in UiB AI #5: ChatGPT – trussel eller mulighet i forskning og utdanning? ("ChatGPT – threat or opportunity in research and education?", in Norwegian).
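To make the point about probabilities concrete, here is a minimal toy sketch in Python. This is not how ChatGPT actually works internally, and the words and probabilities below are invented for illustration, but the mechanism of sampling a likely continuation rather than looking up a fact is the core idea:

```python
import random

# Invented, illustrative probabilities for the next words after a prompt
# such as "The rector at UiB in 1955 was ...". A real language model scores
# tens of thousands of tokens using learned weights; the principle is the same.
next_word_probs = {
    "Erik Waaler": 0.35,     # the historically correct answer
    "Asbjørn Øverås": 0.40,  # fluent and plausible, but wrong
    "someone else": 0.25,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# The answer is drawn by chance, weighted by probability -- it is not
# retrieved from a database of facts. This is why a fluent but false
# answer (a "hallucination") can come out.
print(random.choices(words, weights=weights, k=1)[0])
```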
Pitfalls of using text and conversation tools based on AI
You never know what data the model is trained on
Any generative AI model that a text and conversation tool is based on must be trained on a set of data. The data used during training can greatly influence the results we get, and this is something we must be aware of when receiving a response from the generative model.
Just as our opinions and attitudes can be shaped by the information we have, generative models will produce content in line with the data they are trained on. As a user of these services, you can never be sure what data has been used in training unless the services have made this available in a reviewable manner.
Although AI can provide us with useful tools, knowledge and expertise in the relevant field are essential to assess the reliability of the content these tools produce.
For more on how training data can lead AI systems to draw false conclusions, see, for example, Rise of AI Puts Spotlight on Bias in Algorithms (WSJ).
The model is a result of the data it is trained on
Even if we know what data a model is trained on, it does not necessarily mean it will produce sensible results. Simply put, a model that uses high-quality data will most often produce high-quality content. Similarly, a model trained on low-quality data will most often produce low-quality content.
As users of these services, we do not know the quality of the data or how it has been processed. If we blindly trust the results of a generative model, we risk relying on unreliable answers. For example, if you ask ChatGPT who was the rector at UiB in 1955, you will get the answer Asbjørn Øverås, when the correct answer is Erik Waaler. This is an example of a factual error, something you can read more about in Professor Jill Walker Rettberg's article in NRK (Norwegian).
You do not control the data you submit
By using generative models on the internet, you send information to servers you do not necessarily know the location of. The EU has very strict regulations that govern what companies can and cannot do with your information, but for online services, there is no guarantee that they operate in the EU/EEA in accordance with these regulations.
Unless the company providing the model you use has committed to handling your data legally and ethically, you risk that the data will be used to train other models or may be compromised in various ways. It is therefore very important to be aware of what data we send, just as with most other internet services.
For example, student work such as exam answers must not be sent to generative models, as exam answers are considered personal data. In all cases where UiB sends personal data to a third party, a data processing agreement must be in place.
What does this mean for you as an educator and employee?
Digitalization, technology, and artificial intelligence are changing our subjects and working life across industries. Artificial intelligence, also in forms other than generative models, is being used in many contexts. In the future, the combination of specialized subject knowledge and digital understanding, including knowledge of artificial intelligence, will be crucial in both work and civic life.
A survey of students in Sweden on the use and perception of chatbots and other AI-based tools found that the majority of students are positive about such tools and believe they make the learning process more efficient. At the same time, more than half expressed concern about the impact of chatbots on future education, but were less concerned about other AI-based language tools. More than 60 per cent of students consider the use of chatbots during exams to be cheating, yet the majority are against a ban on AI-based tools in education. Read more in the summary of the survey.
The emergence of new and accessible tools has different implications for different subjects and disciplines. It is, therefore, important that the academic communities themselves assess the use of such tools in their subjects and study programs. Common to all is that legal frameworks, regulations for plagiarism and cheating, and citations must be respected.
The impact of artificial intelligence tools for learning and teaching
Generative models can be used in a wide range of ways in teaching and learning. The academic communities must themselves assess how, and to what extent, such tools can be used in their own subjects and study programs, and clarify the premises for use to the students in course descriptions and relevant communication channels. Regardless of whether generative models are allowed in assessment, there are different ways students can use such models to acquire knowledge, skills, and general competence. Students can, for example, use such tools to summarise their lecture notes into coherent text, to make flashcards, or as a sparring partner that can explain different parts of the curriculum. ChatGPT or similar tools can also help a student by asking questions from a known curriculum or by improving texts.
In the examples above, the students will have used generative models to acquire knowledge during self-study, which can then be drawn on in an assessment. This can be compared to collaborating with other students, which is generally encouraged as part of self-study but usually not allowed in assessment. In addition, the students will acquire skills in the use of this type of tool.
The extent to which students choose to use such tools in self-study largely depends on the students' skills and interest in them. To help students acquire such skills, teaching can actively include tools based on artificial intelligence. An exploratory approach to such tools can give insight into their positive effects on learning and the constructive use of technology, and foster critical reflection and thinking around their use.
ChatGPT has been used in teaching in several subjects at UiB. In the spring of 2023, the lecturer in INF100 spent time presenting what ChatGPT is, namely a language model, and why one should be careful about calling it artificial intelligence. The lecturer showed the students good and bad answers to questions asked of ChatGPT, and the students were allowed to use ChatGPT in their answers as long as they cited the use and were certain they knew the subject well enough to assess the tool's accuracy.
In INF250, the students who took the course in the spring of 2023 were asked to generate code using ChatGPT to visualize a dataset. The students were asked to deliver the code that ChatGPT produced, as well as a reflection on the experience of using ChatGPT, such as how easy/difficult it was to get the desired result, whether the code was correct, what the visualization looked like, etc.
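As an illustration only (assumed here, not taken from the course material), code of the kind ChatGPT typically produces for such a task might look like the sketch below; the synthetic dataset is invented so the example runs on its own:

```python
# Sketch of the kind of visualization code a chatbot might generate.
# The dataset is synthetic so the example is self-contained; in INF250
# the students worked with a real dataset.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=0)
x = rng.normal(loc=0.0, scale=1.0, size=200)             # first variable
y = 2.0 * x + rng.normal(loc=0.0, scale=0.8, size=200)   # correlated second variable

fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter(x, y, alpha=0.6)
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("Scatter plot of the dataset")
fig.tight_layout()
plt.show()
```

Checking whether code like this is actually correct, and whether the plot shows what it claims to, is precisely the kind of reflection the assignment asked for.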
In the spring of 2023, the students in DIKULT304 worked with AI. They were made aware that ChatGPT and other AI models are tools with limitations. They discussed, for example, source criticism and referencing in relation to artificial intelligence, and the models' tendency to hallucinate things that have never been said or done. Several subjects in Digital Culture use Spannagel's "Rules for Tools" as a guide for using ChatGPT and other AI models.
In the subject MUTP104, the students worked in the spring of 2023 on what questions one can ask such tools, with a strong focus on source criticism: does the answer agree with the students' own understanding, and are there other sources that can substantiate it?
In several subjects at the Faculty of Medicine, AI chatbots have been used to create suggestions for colloquium tasks, exam tasks, and quizzes. A specific example from the Faculty of Medicine is MED6, where the lecturer has generally addressed how artificial intelligence can be used in medicine, both its advantages and its challenges.
The impact of artificial intelligence tools on assessment
The wide range of use of generative models becomes most evident in assessment situations. At the extremes, one finds, on the one hand, "no use" of generative models, and on the other hand, "extensive use", where the generative model has produced the entire content. Depending on which stance one takes in one's own subject, grey areas may arise regarding what is acceptable use. The use must, in any case, respect the current regulations for academic integrity and cheating, as well as citation.
If one takes a stance where generative models may be used to some extent as part of the assessment, grey areas can arise. What happens if a student has written a text without using generative models, but uses a tool to cut down the number of words to stay within the word limit? To what extent is the text still the student's own independent work? If the model only removes and does not add, the student will have used a generative model, but can one say that the student produced the content on their own?
Another example is using generative models to correct linguistic and grammatical errors. In an assessment situation where language and grammar are being assessed, this will probably not be allowed, but in cases where other aspects of the students' learning outcomes are being assessed, there may be room to allow the use of generative models.
A third way the student can use generative models is to ask the model to criticize or give feedback on submissions. If, as a teacher, you have published assessment criteria for a task, the students can use the assessment criteria along with what they themselves have written and feed this into a model. The students can then ask the model to use the assessment criteria, the course description, and other relevant information and give feedback on the form and content of a submission.
Depending on how one sets up teaching and assessment in a subject, there will be very different issues to think through. If generative models are allowed in a subject, should students have to attach the conversations they have had with the models? If so, when should this be done? And does the conversation log with the chatbot then become part of the basis for assessment?
If one chooses to go for an assessment where students should not be able to use generative models, one must think through which form of assessment to use and how to detect cheating. Since generative models require internet access, one can control the use of these by using a school exam without internet access. Although using a school exam can be a good measure to prevent the use of generative models, it is not necessarily desirable if one has ambitions to use, for example, formative forms of assessment.
For forms of assessment other than school exams, one must therefore think through what tasks to give the students, so that these cannot easily be answered by a generative model. The University of Oslo has highlighted important implications this technology has for higher education and suggested good assessment practices, in the short and long term, that are difficult for generative AI like ChatGPT to answer. You can read more about this on their website.
In the short term, it is suggested to emphasize critical reflection that brings out the students' views, reveals thought processes, and rewards creating something new. By linking tasks specifically to the curriculum or to current issues, it becomes more difficult to use ChatGPT to answer them. Assessment forms linked to formats such as video, presentations, oral assessments, project work, or portfolio assessments, or forms of assessment where students receive feedback along the way and improve their work, can also limit the possibilities of using ChatGPT. In the long term, it is suggested to shift the focus from the control of learning to the process of learning, and to explore how tools based on artificial intelligence can be included to support learning.
What tools based on AI can I use?
As part of the Microsoft package at UiB, all employees and students have access to Microsoft Copilot for web, Microsoft's chatbot that uses OpenAI's GPT-4 model. This is the same model used in the paid version of ChatGPT, and it allows both text and images to be used as input, or "prompts", when using the service.
Because UiB has a data processing agreement with Microsoft, it is now possible to use tools based on artificial intelligence in a safe manner. However, the solution is not approved for confidential or strictly confidential information.
Read more about Microsoft Copilot for web (formerly Bing Chat Enterprise) in the news story from På høyden.
Legal frameworks: AI tools and GDPR
One of the challenges with artificial intelligence tools like ChatGPT in relation to the General Data Protection Regulation (GDPR) is that the data used to train the tools contains personal information. Personal information is defined as information in any form that can be linked, directly or indirectly, to an individual. This personal information is collected into the tool from various sources, both from the internet and from users, with or without their knowledge.
It is very uncertain whether this personal information has been collected legally, and data protection authorities from several countries have been skeptical of such services for this reason. According to GDPR, it is very difficult to consider that such collection of personal information, known as data scraping, can be legal. This is especially true for sensitive personal information such as information about race, ethnicity, religion, health, political opinions, or sexual orientation.
Users must be aware that solutions based on artificial intelligence use what is written or uploaded to improve the service, and that inputs may be shared with third parties such as commercial actors and authorities. This means that personal information you enter is stored and used further; it is not possible to correct or delete it from the model. Be aware that requests submitted to the service may contain sensitive personal information, and that the model can infer sensitive information about you from what you provide.
Citation of generative models
When software influences results, analyses, or findings in academic work, you should reference it. AI tools are considered software and not co-authors, because the software cannot take responsibility for the work.
AI-generated text cannot be reproduced by others. In academic work, you should therefore state how the AI tool was used, including when and to what extent, and how the result was incorporated into the text. It is often useful to show what was entered into the chat. Long responses from the AI tool can be included as appendices. Be aware that there may be subject-specific rules for documenting AI use.
The example below shows how to cite AI-generated text using the APA 7 referencing style.
In the text: (OpenAI, 2023)
In the bibliography: OpenAI. (2023). ChatGPT (April 20th version) [Large language model]. https://chat.openai.com/
DIGI courses at UiB
Many employees and students at UiB have taken advantage of the opportunity to build digital understanding, knowledge, and competence through the DIGI course package. These are small 2.5-credit courses that are open for everyone to take.
Information from the faculties
Several faculties at UiB have created their own guidelines for the use of language models and other AI. At UiB, it is up to the academic communities to assess the use of such tools, and students must familiarize themselves with the guidelines for their own study programme.
The Faculty of Social Sciences: Guidance on the use of chat robots at the Faculty of Social Sciences
The Faculty of Medicine: Use of artificial intelligence (AI) at the Faculty of Medicine.
The Faculty of Law: Retningslinjer for bruk av KI i undervisning og prøving ("Guidelines for the use of AI in teaching and assessment"; pdf, in Norwegian).
Useful sources
Articles/video
Varför vi inte längre kan ha skrivna hemuppgifter i högre utbildning ("Why we can no longer have written home assignments in higher education", in Swedish)
Academia
Student Use Cases for AI (Harvard)
AI tools and useful links
Read about Microsoft Copilot for web (formerly Bing Chat Enterprise) in the news story from På høyden.
Visibility and coordination of AI activity: UiB AI
Open discussion forum about AI
Join the AI discourse and stay updated on AI-related UiB events and other news. You can also give feedback on specific services, e.g. the UiBchat pilot: Join on Teams