Tools based on artificial intelligence in education
This website gives a brief introduction to what generative artificial intelligence is and to the principles for using these tools at UiB.
The use of ChatGPT and similar services built on language models has become widespread since ChatGPT launched in late 2022. Generative artificial intelligence brings many opportunities, but also some challenges.
What is generative artificial intelligence?
Collective term: Generative artificial intelligence is a collective term for various models that can produce text, images, videos, and more, based on some given input data. ChatGPT is one of many such tools that use generative AI to produce text. As a user, you can ask tools like ChatGPT questions and get seemingly credible answers back. As ChatGPT is the most used service for producing text using generative AI, it will be used as an example here. Other tools like Copilot for web from Microsoft and Bard from Google can function similarly to ChatGPT.
Needs to be trained: For generative AI to produce content, it must first be trained. This training takes place by feeding the model large amounts of data. During training, correlations in the data are identified, and it is these correlations that allow the model to produce content that is perceived as innovative or original.
Not necessarily a reliable source: It is important to emphasize that the content is produced by a model. Even if a chatbot has been trained on extensive amounts of data from around the world, that does not make it a reliable source of information. Every time you ask the chatbot a question, it generates candidate answers in the background, estimates which answer is most probable, and returns that one. The chatbot therefore does not function as a search engine with access to all the information it has been trained on, and it should not be confused with searching for information in a database.
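To make this concrete, the toy sketch below mimics the core mechanism: the model picks each next word by sampling from a probability distribution over likely continuations, rather than looking anything up. The word table here is invented purely for illustration; a real language model computes such probabilities with a neural network trained on its data.

```python
import random

# Toy stand-in for a language model: a hand-made table of next-word
# probabilities. A real model computes these with a neural network.
next_word_probs = {
    ("the", "rector"): {"was": 0.6, "is": 0.3, "said": 0.1},
    ("rector", "was"): {"appointed": 0.5, "elected": 0.4, "born": 0.1},
}

def sample_next_word(context):
    """Pick the next word at random, weighted by the model's probabilities."""
    probs = next_word_probs[context]
    return random.choices(list(probs.keys()), weights=list(probs.values()))[0]

# Each call can return a different, plausible-sounding continuation:
# the model predicts likely text, it does not retrieve stored facts.
print(sample_next_word(("the", "rector")))
```

This is why two identical questions can get different answers, and why a fluent answer is no guarantee of a correct one.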
Great potential: At the same time, there is great potential in these models, which was the theme of UiB AI's seminar UiB AI #5 ChatGPT – trussel eller mulighet i forskning og utdanning? (Norwegian). The seminar dealt with what developments in AI tools mean for research and education.
Pitfalls when using generative models
You never know what data the model is trained on
As mentioned above, every generative model must be trained on a set of data. The data used during training will greatly influence the results you get, and you must be aware of this when receiving a response from the generative model. Just as our opinions and attitudes can be shaped by the information we have, generative models will produce content in line with the data they are trained on.
As a user of these services, you can never be certain what data was used in training unless the provider has made this transparently available. Even though generative AI tools can be useful, domain expertise in the relevant field is necessary to assess the reliability of the content they produce.
For more on how training data can lead, and has led, AI systems to draw false conclusions, see, for example, Rise of AI Puts Spotlight on Bias in Algorithms - WSJ.
The model is a result of the data used in training - Garbage in = garbage out
Even if you know what data a model is trained on, it does not necessarily mean that it produces sensible results. Generalising a bit, one can say that a model that uses high-quality data will usually produce high-quality content. Similarly, a model that is trained on low-quality data will produce low-quality content.
As a user of these services, you do not know the quality of the data or how it has been processed. If you blindly trust the result of a generative model, you risk relying on unreliable answers. If you ask ChatGPT who the rector at UiB was in 1955, you get Asbjørn Øverås as the answer, not Erik Waaler, which is the correct answer. This is an example of a factual error, something you can read more about in Professor Jill Walker Rettberg's article in NRK (Norwegian).
You do not control the data you send in
By using generative models on the internet, you send information to servers whose location you do not necessarily know. The EU has a strict set of rules regulating what companies can and cannot do with your information, but for web-based services there is no guarantee that they operate in the EU/EEA or comply with these regulations.
Unless the company that delivers the model you use has committed to handling your data lawfully, you risk that the data is used to train other models or ends up in the wrong hands. As with most other services on the internet, it is therefore very important to be aware of what data you send.
For example, students' work, such as exam answers, must not be sent to generative models, as exam answers are considered personal information. In all cases where UiB sends personal information to a third party, a data processing agreement must be in place.
Language models have a limited scope
At first glance, it may seem as if ChatGPT and similar language models think and reason much like humans. However, there are several things ChatGPT simply cannot do because it is a language model. For example, it cannot remember facts and will often present factual errors ("hallucinations") in a very convincing way. Nor can ChatGPT perform calculations, make assessments, reason, or think logically.
What does this mean for you as an educator and employee?
Digitalization, technology, and artificial intelligence are changing our subjects and working life across industries. In the future, the combination of specialized subject knowledge and digital understanding will be crucial in both work and civic life.
A survey of Swedish students' use of and attitudes towards chatbots and other AI-based tools found that the majority of students are positive about such tools and believe they make learning more efficient. At the same time, more than half expressed concern about the impact of chatbots on future education, but were less concerned about other AI-based language tools. More than 60 per cent of students consider the use of chatbots during exams to be cheating, yet the majority are against a ban on AI-based tools in education. Read more in the summary of the survey.
The emergence of new and accessible tools has different implications for different subjects and disciplines. It is, therefore, important that the academic communities themselves assess the use of such tools in their subjects and study programs. Common to all is that legal frameworks, regulations for plagiarism and cheating, and citations must be respected.
The impact of artificial intelligence tools on learning and teaching
Generative models can be used in a wide range of ways in teaching and learning. The academic communities must themselves assess how and to what extent such tools can be used in their own subjects and study programs, and communicate the premises for use to the students in course descriptions and relevant communication channels. Regardless of whether generative models are allowed in assessment, there are different ways students can use them to acquire knowledge, skills, and general competence. Students can, for example, use such tools to summarise their lecture notes into coherent text, to make flashcards, or as a sparring partner that can explain different parts of the curriculum. Other examples include having ChatGPT or similar tools quiz a student on a known curriculum or improve their texts.
In the examples above, the students will have used generative models to acquire knowledge during self-study, which can then be drawn on in an assessment. This can be compared to collaborating with other students, which is generally encouraged as part of self-study but usually not allowed in assessment. In addition, the students acquire skills in using this type of tool.
The extent to which students use such tools in self-study largely depends on their skills and interest in them. To help students acquire such skills, teaching can actively include tools based on artificial intelligence. An exploratory approach to such tools can give insight into their positive effects on learning and the constructive use of technology, as well as encourage critical reflection on their use.
ChatGPT has been used in teaching in several subjects at UiB. In the spring of 2023, the lecturer in INF100 spent time presenting what ChatGPT is, namely a language model, and why one should be careful about calling it artificial intelligence. The lecturer showed the students good and bad answers to questions put to ChatGPT, and the students were allowed to use ChatGPT in their answers as long as they cited the use and were certain they knew the subject well enough to assess the tool's accuracy.
In INF250, the students who took the course in the spring of 2023 were asked to generate code using ChatGPT to visualize a dataset. The students were asked to deliver the code that ChatGPT produced, as well as a reflection on the experience of using ChatGPT, such as how easy/difficult it was to get the desired result, whether the code was correct, what the visualization looked like, etc.
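For illustration, the kind of code ChatGPT typically produced for such a prompt might look like the sketch below. The dataset, column names, and chart type are invented for this example and are not taken from the course materials.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Invented example data standing in for the course dataset.
data = pd.DataFrame({
    "year": [2019, 2020, 2021, 2022, 2023],
    "students": [120, 135, 150, 160, 180],
})

# A simple bar chart of the number of students per year.
plt.bar(data["year"], data["students"])
plt.xlabel("Year")
plt.ylabel("Number of students")
plt.title("Enrolment per year (example data)")
plt.show()
```

Part of the exercise is then to check whether code like this actually runs, whether the visualization is correct, and how much prompting was needed to get the desired result.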
In the spring of 2023, the students in DIKULT304 worked with AI. They were made aware that ChatGPT and other AI models are tools with limitations. They discussed, for example, source criticism and referencing when using artificial intelligence, and the models' tendency to hallucinate things that have not been said or done. Several subjects in Digital Culture use Spannagel's "Rules for Tools" as a guide for the use of ChatGPT and other AI models.
In the subject MUTP104, the students worked in the spring of 2023 on what questions one can ask such tools, with a strong focus on source criticism: does the answer agree with the students' own understanding, and are there other sources that can substantiate it?
In several subjects at the Faculty of Medicine, AI chatbots have been used to create suggestions for colloquium tasks, exam tasks, and quizzes. A specific example from the Faculty of Medicine is MED6, where the lecturer has generally addressed how artificial intelligence can be used in medicine, both its advantages and its challenges.
The impact of artificial intelligence tools on assessment
The wide range of uses of generative models becomes most evident in assessment situations. At one extreme lies "no use" of generative models; at the other, "extensive use", where the generative model has produced the entire content. Depending on which stance one takes in one's own subject, grey areas may arise regarding what counts as acceptable use. In any case, the use must respect the current regulations for academic integrity and cheating, as well as for citation.
If one allows generative models to be used to some extent as part of the assessment, some grey areas may arise. What happens if a student has written a text without using generative models but uses a tool to cut down the number of words to stay within the word limit? To what extent is the text still the student's own independent work? If the model only removes and does not add, the student will have used a generative model, but can one say that the student has produced the content on their own?
Another example is using generative models to correct linguistic and grammatical errors. Where language and grammar are part of what is being assessed, this will probably not be allowed, but where other aspects of the students' learning outcomes are assessed, there may be room to allow the use of generative models.
A third way students can use generative models is to ask the model to critique or give feedback on submissions. If, as a teacher, you have published assessment criteria for a task, the students can feed the criteria, together with what they have written, into a model. They can then ask the model to use the assessment criteria, the course description, and other relevant information to give feedback on the form and content of the submission.
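As a sketch of what this could look like in practice, the example below combines published assessment criteria and a draft into a single request using OpenAI's Python client. The model name, criteria, and draft text are placeholders, and the caveats above about data processing agreements and personal information apply before any real work is sent to an external service.

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Placeholder criteria and draft; in practice these would come from the
# course description and the student's own work.
criteria = "The essay must state a clear research question and cite its sources."
draft = "My draft essay text ..."

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are a teaching assistant. Give feedback on the "
                       "draft using only the assessment criteria provided.",
        },
        {
            "role": "user",
            "content": f"Assessment criteria:\n{criteria}\n\nDraft:\n{draft}",
        },
    ],
)
print(response.choices[0].message.content)
```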
Depending on how teaching and assessment are set up in a subject, very different issues need to be thought through. If generative models are allowed in a subject, should students have to attach the conversations they have had with the models? If so, when should this be done? And does the conversation log with the chatbot then become part of the basis for assessment?
If one opts for an assessment where students should not be able to use generative models, one must think through which form of assessment to use and how to detect cheating. Since generative models require internet access, their use can be prevented with a school exam without internet access. Although a school exam can be an effective measure against the use of generative models, it is not necessarily desirable if one has ambitions to use, for example, formative forms of assessment.
For forms of assessment other than school exams, one must therefore think through what tasks to give the students, so that these cannot easily be answered by a generative model. The University of Oslo has highlighted important implications of this technology for higher education and suggested good assessment practices, in the short and long term, built around tasks that generative AI like ChatGPT cannot easily answer. You can read more about this on their website.
In the short term, it is suggested to emphasize critical reflection that brings out the students' own views, reveals thought processes, and emphasizes creating something new. By linking tasks specifically to the curriculum or to current issues, it becomes more difficult to use ChatGPT to answer them. Assessment forms linked to formats such as video, presentations, oral assessments, project work, or portfolios, or forms of assessment where students receive feedback along the way and improve their work, can also limit the possibilities of using ChatGPT. In the long term, it is suggested to shift the focus from the control of learning to the process of learning and to explore how tools based on artificial intelligence can be included to support learning.
What tools based on AI can I use?
As part of the Microsoft package at UiB, all employees and students have access to Microsoft Copilot for web, Microsoft's chatbot based on OpenAI's GPT-4 model. This is the same model used in the paid version of ChatGPT, and it allows both text and images to be used as input, or "prompts", when using the service.
Because UiB has a data processing agreement with Microsoft, it is now possible to use tools based on artificial intelligence in a safe manner. However, the solution is not approved for confidential or strictly confidential information.
Read more about Microsoft Copilot for web (formerly Bing Chat Enterprise) in the news story from På høyden.
Legal frameworks
One of the challenges with artificial intelligence tools like ChatGPT under the General Data Protection Regulation (GDPR) is that the data used to train them may contain personal information. This personal information is fed to the tool from various sources, both from the internet and from users. It is uncertain whether this personal information has been collected legally, and data protection authorities in several countries have been sceptical of such services for this reason.
Users must be aware that solutions based on artificial intelligence use prompts and other uploaded material to improve the service. This means that personal information entered into the service is stored and used for further training. Take care that requests sent to the service do not contain sensitive personal information.
According to the Norwegian Data Protection Authority (DPA), companies must be especially aware
that the company’s own setup and configuration of services require technical competence to avoid creating vulnerabilities (similar to cloud services more generally)
that the implementation and integration of artificial intelligence tools require measures for their own information security and their own personal data security (in addition to the service’s own measures)
that requests sent in, and the responses to them, can be stored in the service provider's own history or used by the provider for "further development purposes".
Read more on the DPA’s website: Datatilsynet følger med på utviklingen av ChatGPT (Norwegian)
As an instructor, one must be aware that students cannot be required to use services that UiB does not have a data processing agreement with. This currently applies to ChatGPT and other AI services. If you are unsure whether the service you want to use has a data processing agreement with UiB, you can submit a case to UiB hjelp.
Academic integrity is an overarching norm that governs what is expected of, among others, UiB’s students. We expect students to trust their own skills, make independent assessments, and form their own opinions. Generative language models like ChatGPT and other generative tools raise questions related to both source use and requirements for independence. As a general rule, it is a requirement that the answer in an exam is an independently produced text. This means that submitting a text that is wholly or partially generated will be considered cheating unless it is cited in a fair manner. For more information, see the website Academic integrity and Cheating.
Citation of generative models
In general, software used in academic work should be referenced when it impacts the results, analyses, or findings presented. AI tools should be considered software and not co-authors since any attribution of authorship implies responsibility for the work, and AI tools cannot assume such responsibility.
An AI-generated text cannot be recreated by others. In academic works, it should therefore be made clear how the generator was used: for example, when and to what extent it was used, and how the result was incorporated into the text. It is often relevant to include the prompts used in the chat; a long answer from the text generator can be added as an attachment. Be aware that there may be subject-specific guidelines for how the use should be documented, as AI tools can have different uses in different subjects.
The example below shows how to cite an AI-generated text using the APA 7 referencing style.
In-text: text (OpenAI, 2023)
In the bibliography: OpenAI. (2023). ChatGPT (April 20th version) [Large language model]. https://chat.openai.com/.
DIGI courses at UiB
Since the autumn of 2022, many students at UiB have taken advantage of the opportunity to build digital understanding, knowledge, and competence through the DIGI course package. From the autumn of 2023, all employees can also take the courses.
DIGI101 - Digital Source Criticism has been updated with a new module on “Science in Artificial Intelligence”, which addresses the use of chatbots. The module provides an overview of what characterizes information generated by AI tools and how to relate to this information. The course is only available in Norwegian.
Information from the faculties
Several faculties at UiB have created their own guidelines for the use of language models and other AI. At UiB, it is up to the academic communities to assess the use of such tools, and students must familiarize themselves with the guidelines for their own study programs.
The Faculty of Social Sciences: Guidance on the use of chat robots at the Faculty of Social Sciences
The Faculty of Psychology: Information about the use of Artificial Intelligence at the Faculty of Psychology
The Faculty of Medicine: Bruk av kunstig intelligens (KI) ved Det medisinske fakultet (Norwegian)