
Tools based on artificial intelligence in education

This page gives a brief introduction to what generative artificial intelligence is and to the principles for using these tools at UiB.

Illustration: an example of ChatGPT. Photo: Colourbox


The use of the language model ChatGPT and similar services has become widespread since its launch in late 2022. Generative artificial intelligence (AI) brings many opportunities, but also some challenges. 

As a student, you must ensure that you are aware of the rules that apply to your subject. Each faculty and department may have its own guidelines for the use of AI within the various subjects. Good citation practice generally applies, even when using tools based on artificial intelligence. 

What is generative artificial intelligence? 

  • Collective term: Generative artificial intelligence is a collective term for various models that can produce text, images, videos, and more, based on some given input data. ChatGPT is one of many such tools that use generative AI to produce text. As a user, you can ask tools like ChatGPT questions and get seemingly credible answers back. As ChatGPT is the most used service for producing text with generative AI, it will be used as the example here. Other tools, such as Bing Chat from Microsoft and Bard from Google, can function similarly to ChatGPT. 
     
  • Needs to be trained: For generative AI to be able to produce content, it first needs to be trained. This training takes place by feeding the model data and letting it learn from that data. During training, correlations in the data are identified, and it is these correlations that allow the model to produce content that is perceived as innovative or original. 
     
  • Not necessarily a reliable source: It is important to emphasize that the content is produced by a model. Even if a chatbot, for example, is trained on extensive amounts of data from around the world, that does not necessarily mean it is a reliable source of information. Every time you ask the chatbot a question, it generates several different answers in the background, calculates which answer is most likely to be the best one, and returns that answer. The chatbot therefore does not function as a search engine with access to all the information it has been trained on, and it should not be confused with searching for information in a database (see the sketch after this list). 
     
  • Great potential: In addition, there is great potential in these models, a topic addressed at UiB AI’s seminar UiB AI #5 ChatGPT – trussel eller mulighet i forskning og utdanning? (youtube.com). The seminar dealt with the significance of developments in AI tools for research and education.   
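
To make the probability point above more concrete, here is a minimal, purely illustrative Python sketch of how a language model picks its next word. The candidate words and the scores below are invented for the example and are not real model output; services like ChatGPT operate at a vastly larger scale, but the principle of sampling from a probability distribution, rather than looking facts up in a database, is the same.

    import math
    import random

    def softmax(scores):
        """Turn raw scores into a probability distribution that sums to 1."""
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical candidate next words and invented scores for the prompt
    # "The rector at UiB in 1955 was ..." (illustrative only, not real model output).
    candidates = ["Erik", "Asbjørn", "someone"]
    raw_scores = [2.1, 2.3, 0.4]

    probabilities = softmax(raw_scores)

    # The model samples from this distribution; a plausible-sounding but wrong
    # name can easily win, because no database of facts is consulted.
    next_word = random.choices(candidates, weights=probabilities, k=1)[0]

    for word, p in zip(candidates, probabilities):
        print(f"{word}: {p:.2f}")
    print("Chosen next word:", next_word)

Running the sketch several times can return different names, which is one reason the same question can get different, and sometimes wrong, answers from a chatbot.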

Pitfalls when using generative models 

You never know what data the model is trained on 

As mentioned above, every generative model must be trained on a set of data. The data used during training will greatly influence the results you get, and this is something you must be aware of when you receive a response from the generative model. Just as our opinions and attitudes can be shaped by the information we have, generative models will produce content in line with the data they are trained on.  

As a user of these services, you can never be certain what data has been used in training, unless the services have made this available in a transparent way. Even though generative AI tools can be useful, domain expertise in the relevant field is necessary to assess the reliability of the content these tools produce. 

For more information on how data can influence, and has influenced, AI to draw false conclusions, see for example Rise of AI Puts Spotlight on Bias in Algorithms - WSJ.

The model is a result of the data used in training - Garbage in = garbage out  

Even if you know what data a model is trained on, that does not necessarily mean it produces sensible results. Generalizing a bit, one can say that a model trained on high-quality data will usually produce high-quality content. Similarly, a model trained on low-quality data will produce low-quality content. 

As a user of these services, you do not know the quality of the data or how it has been processed. If you blindly trust the result of a generative model, you therefore risk relying on unreliable answers. If you ask ChatGPT who the rector at UiB was in 1955, you get Asbjørn Øverås as the answer, and not Erik Waaler, which is the correct answer. This is an example of a factual error, something you can read more about in Professor Jill Walker Rettberg’s article in NRK.

Illustration: an example of a factual error when using ChatGPT. Photo: UiB

You do not control the data you send in 

By using generative models on the internet, you send information to servers whose location you do not necessarily know. The EU has a very strict set of rules regulating what companies can and cannot do with your information, but for web-based services there is no guarantee that they operate in the EU/EEA or comply with these regulations. 

Unless the company that delivers the model you use has committed to handling your data in a lawful way, you risk that the data is used to train other models or goes astray. As with most other services on the internet, it is therefore very important that you are aware of what data you send. 

For example, students’ work such as exam answers must not be sent to generative models, as exam answers are considered personal data. In all cases where UiB is to send personal data to a third party, a data processing agreement must be in place. 

Language models have a limited scope 

At first glance, it may seem like ChatGPT and similar language models work very much like humans when it comes to thinking and reasoning. However, there are several things that ChatGPT simply cannot do because it is a language model. For example, it cannot remember facts and will often present factual errors (“hallucinations”) in a very convincing way. ChatGPT also cannot perform calculations, assess, reason, or think logically. 

What does this mean for you as a student?  

Digitalization, technology, and artificial intelligence are changing our subjects and working life across industries. Artificial intelligence - also in forms other than generative models - is being used in many contexts. In the future, the combination of specialized domain knowledge and digital understanding, including familiarity with artificial intelligence, will be crucial in both work and civic life. 

The emergence of new and accessible tools has different implications for different subjects and disciplines, and different faculties and departments may therefore have different approaches to their use. Regardless of this, there are legal frameworks, regulations for plagiarism and cheating, and rules for citation that must be respected. 

Which AI tools can I use?

As a student you now have access to UiBchat, a generative AI based on the same underlying language model as ChatGPT (GPT-4 from OpenAI).  

To get started, open your browser, go to chat.uib.no, and log in with your UiB account. 

What distinguishes UiBchat from ChatGPT?  

  • With UiBchat, data is stored locally on your computer  

  • Chatlogs are stored temporarily in your browser and are not accessible from other devices 

  • Data is not used to further train the language model  

  • Prompts and conversations are processed, but not stored, in a Microsoft data center in Stockholm, as regulated through a DPA (Data Processing Agreement).  

As always when using generative models, be aware of the challenges and pitfalls. You need domain expertise and a critical approach to be able to assess the reliability of the content these tools produce, even UiB’s own tools. Only use green data (data classified as open) with UiBchat.  

A pilot during the spring semester 2024

UiBchat will be piloted until summer 2024. Instability and errors may occur.  

Plagiarism and cheating 

Academic integrity is a general norm that governs what is expected of, among others, UiB's students. We expect students to trust their own skills, make independent assessments, and form their own opinions. Generative language models like ChatGPT and other generative tools raise questions related to both source use and requirements for independence. Generally, it is required that an exam answer is an independently produced text. This means that submitting a text that is wholly or partially generated will be considered cheating, unless it is cited in an honest manner. Read more on the website Academic integrity and Cheating.

About cheating and misconduct in the PhD education

Academic integrity is an overarching norm that governs what is expected of UiB's PhD candidates. The PhD program consists of two parts: the training component and the thesis work. Exams in the training component of the PhD education are subject to the same legal framework, regulations for plagiarism and cheating, and rules for citing sources as exams at bachelor's and master's level. The thesis work is research and is subject to the same framework for scientific integrity as all research at UiB. For more information about scientific integrity, see the website Research Ethics.

Citation of generative models   

In general, software used in academic work should be referenced when the software impacts the results, analyses, or findings presented. AI tools should be considered software, not co-authors, since any attribution of authorship implies responsibility for the work, and AI tools cannot assume such responsibility. 

An AI-generated text cannot be recreated by others. In academic works, it should therefore be made clear how the generator was used: for example, when and to what extent, and how the result was incorporated into the text. It is often relevant to include the prompts used in the chat. A long answer from the text generator can be added as an attachment. Be aware that there may be subject-specific guidelines for how the use should be documented, as AI tools can have different uses in different subjects. 

The example below shows how to cite an AI-generated text using the referencing style APA 7:

  • In the text: … (OpenAI, 2023)  

  • In the bibliography: OpenAI. (2023). ChatGPT (April 20th version) [Large language model]. https://chat.openai.com/. 

DIGI courses at UiB 

For students who wish to gain more digital understanding, knowledge, and competence, UiB offers small courses that are available to all students regardless of what they are studying. Read more about this offer on the DIGI website.

DIGI101 - Digital Source Criticism has been updated with a new module on “Science in Artificial Intelligence”, which addresses the use of chatbots. The module provides an overview of what characterizes information generated by AI tools and how to relate to this information. The course is only available in Norwegian. 

Information from the faculties 

Several faculties at UiB have created their own guidelines for the use of language models and other AI. At UiB, it is up to the academic communities to assess the use of such tools, and students must familiarize themselves with the guidelines that apply to their own studies. 

The Faculty of Social Sciences: Guidance on the use of chat robots at the Faculty of Social Sciences 

The Faculty of Psychology: Information about the use of Artificial Intelligence at the Faculty of Psychology