ChatGPT

There is no escaping it nowadays: over the past few months, generative AI tools in general, and ChatGPT in particular, have been dominating the public debate within and beyond our university walls. Needless to say, systems such as these have an unmistakable impact on education. But what is ChatGPT, really? What is it capable of, and what is it not? What are the dangers of using it? And what would your lecturers think if you were to use it? Find out the answers to these questions here.

Artificial intelligence (AI) is constantly evolving. We try to keep this page as up to date as possible.

What is ChatGPT?


ChatGPT (Generative Pre-trained Transformer) is an advanced chatbot you can ask any type of question. Within seconds, the system generates an answer to your question in the form of a brief or more extensive text. If you are unsatisfied with the result, you simply ask the bot to adapt the answer or to give you a new one. The bot can also write code in various programming languages, and it corrects linguistic errors.

How does it work?


Keep in mind that ChatGPT is not linked directly to the internet. The free version of the chatbot relies on GPT-3.5, a language model that has been trained on billions of words from various sources. This language model allows the chatbot to predict the most likely next word in a given context. The makers of ChatGPT also used examples of real-life human dialogues: they showed the system which answers were correct and which were incorrect, and in a final phase, they rated the bot's answers in order to keep improving them.
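
To get a feel for what "predicting the most likely next word" means, here is a deliberately naive sketch in Python. It is not how GPT-3.5 works internally (GPT-3.5 relies on a large neural network over subword tokens, trained on billions of words), but it illustrates the basic principle: learn word patterns from text, then predict the most likely continuation. The tiny example corpus and the function name predict_next are made up purely for illustration.

    from collections import Counter, defaultdict

    # Toy corpus standing in for the billions of words a real model is trained on.
    corpus = (
        "the cat sat on the mat . "
        "the dog sat on the rug . "
        "the cat chased the mouse ."
    ).split()

    # Count which word follows which (a simple "bigram" model).
    next_word_counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word_counts[current][following] += 1

    def predict_next(word: str) -> str:
        """Return the word most often seen after `word` in the toy corpus."""
        counts = next_word_counts.get(word)
        return counts.most_common(1)[0][0] if counts else "<unknown>"

    print(predict_next("the"))  # -> 'cat' (seen twice after 'the' in this corpus)
    print(predict_next("sat"))  # -> 'on'

GPT-3.5 performs the same kind of prediction, but over a vastly larger vocabulary and a much longer context, which is why its answers read fluently even when they are factually wrong.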

What are the risks?


The free version of ChatGPT has not been fully optimised yet.

  • The chatbot is not directly connected to the Internet, but was trained on a fixed dataset. That data was last updated in January 2022, so the system can currently provide little to no information about events after February 2022.
  • The information you receive will not always be correct. Moreover, the chatbot will not give you references with which to verify the information it supplies. If you specifically ask for references, the bot will make them up as it goes along (a phenomenon known in the AI world as hallucination). In other words, you need a sound knowledge of your discipline to be able to assess the reliability of the answers.

Some of the limitations described above are also found, to a greater or lesser extent, in other generative AI systems. In general, one can say that the use of all such tools carries the following risks:

  • The makers remain very vague about the training data they used: what kind of information did they feed into the system and where did they find it? Combined with the lack of references, this means the bot’s answers may involve intellectual theft. The makers also do not make explicit what they will do with the data you provide them. Therefore, do not enter privacy-sensitive information! The same applies to syllabi, articles ... without the author's permission: you give those texts away for free to the creators of the tools.
  • The answers may also contain biases, because the system has been trained on biased (“coloured”) sources.
  • The makers built in safety mechanisms to prevent the system from producing answers that are clearly ethically reprehensible. However, these safety mechanisms can easily be bypassed with certain prompts.
  • The ecological footprint of these tools should not be underestimated either. The additional IT infrastructure, tool development, data storage and data transmission cause a significant increase in energy consumption.
  • A final risk is the danger of anthropomorphism: the computer appears to talk and think like a human being, which can make people interact less with each other. It is important to understand that these programs cannot really reason: they have learned some reasoning patterns from texts, but do not actually contain explicit logic.

Does the use of ChatGPT impact assessment?


If you use ChatGPT to complete an assignment and then submit that assignment as your own work, you in fact commit an act of plagiarism. Your lecturers have the final say in this matter: they decide whether or not, and to what extent, you are allowed to use ChatGPT without it becoming an act of fraud. They will communicate clearly in advance on what is allowed and what is not.

If you have used the chatbot in a context in which it was not allowed, this may result in a disciplinary measure for exams, in accordance with Article 78§2 of the Education and Examination Code. The use of ChatGPT or other generative AI tools can be considered as an act of fraud or an irregularity. The (Disciplinary) Examination Board decides whether or not there is sufficient evidence and whether or not to impose disciplinary measures.

Please note that, even more so than before, lecturers will take process assessment into account: how did you come up with a specific result or product? After all, the (writing) process requires highly specific skills: looking for and finding the right information, and bringing it together to complete an assignment. Self-reflection is crucial: you will have to keep track of, and make explicit, how you achieved a specific result, by means of e.g. an oral explanation, interim (peer) feedback, surveys, etc.

What will change in academic year 2024-2025?

In academic year 2023-2024, the above information about what is and is not allowed remains in force. Check with your study programme to see what is expected. In academic year 2024-2025, the guidelines for (writing) tasks will become clearer:

  • For the master's thesis, the responsible use of generative AI tools is allowed.
  • For other (writing) tasks, responsible use is encouraged, in preparation for the master's thesis.

Note: an individual lecturer can still prohibit the use of these tools for a specific course, in order to check whether you have really mastered certain basic competencies. (See also: What is the use of certain competencies in the light of an AI tool?) So always check your course sheet or ask your lecturer.

The word "responsible" is very important here. The above risks show how you have to be careful with the tools, especially in terms of privacy, reliability and bias. You will learn in your study programme how to use those tools in a responsible way.

What are opportunities?


If you choose to use a generative AI tool, keep in mind the limitations described above, especially those concerning reliability and bias. In other words, use it as a supporting tool only. When faced with challenging learning content, for instance, you could ask for an additional explanation or examples, request a summary or feedback on a text you have written, or let the bot generate knowledge questions to help you study.

What is the use of certain competencies in the light of an AI tool?


In the light of AI tools, certain competencies (such as independently writing a text in correct language) may seem to have become less important. These competencies, however, are incorporated into the required learning outcomes of many a study programme. You cannot obtain a diploma in Law without being able to build a legal argument as the solution to a complex legal issue. Or a diploma in Languages without being able to produce well-written and correct texts. Or a diploma in Informatics without being able to program.
Strong writing skills still entail much more than what generative AI has to offer. Critical thought, knowledge of effective communication, and creativity remain indispensable competencies for assessing and adapting automatically generated texts.

How can you sharpen your AI literacy?

Not every student is automatically able to work smoothly with such tools. Do you feel unsure about using them? Would you like to learn more about generative AI in general? Register for the learning pathway Generative AI: from learning towards creating (in Dutch). You can also test and update your knowledge in the e-module Responsible Use of Generative AI in Education, made by our colleagues at the University of Amsterdam.

What about other generative AI tools?


Generative AI tools are being developed by the dozens, and they are being integrated into various applications, such as Microsoft's search engine Bing and Google's chatbot Bard. Each tool and application has its specific strengths and weaknesses.

So please, remain critical!