Generative AI in Education at Ghent University
There is no escaping it nowadays: generative AI tools such as ChatGPT, Copilot, Consensus, and Perplexity are ubiquitous, both within and outside the university context. But what are generative AI tools? What can they do, and what can't they do (yet)? And are you allowed to use them at Ghent University? This page answers those questions.
We do our best to keep this page up to date, since artificial intelligence (AI) is evolving rapidly.
What is generative AI?
Generative AI refers to AI systems that can create new, original content based on the patterns and structures they have learnt from existing data. The tools are trained on vast amounts of data, obtained mainly from the internet.
Using advanced algorithms and neural networks, they can generate text, images, audio, video, and computer code that can rival human-generated content. The generated output is statistically close to the data the system was trained on, yet offers a unique, customised response to the specific input or instructions given to the tool.
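To make the idea of "patterns learnt from existing data" concrete, here is a toy illustration in Python (a hypothetical mini-example, not how real tools are built): a bigram model that learns which word tends to follow which in a small training text, then generates new text by sampling from those patterns. Real generative AI uses neural networks trained on vastly more data, but the principle of producing output that is statistically close to the training data is the same.

```python
import random

# A tiny training corpus; real systems train on billions of words.
training_text = (
    "the cat sat on the mat the cat saw the dog "
    "the dog sat on the mat"
)

# Learn the pattern: which words follow which in the training data.
words = training_text.split()
followers = {}
for current, nxt in zip(words, words[1:]):
    followers.setdefault(current, []).append(nxt)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = followers.get(out[-1])
        if not options:  # no known continuation: stop
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the", 8))
```

Every word pair in the output was seen in the training text, so the result is "statistically close" to that data, yet the generated sentence itself may never have occurred in it, which is exactly the sense in which such systems produce "new" content.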
Do you want to know more about what generative AI is and how it works? Then take a look at module 1 of the Ufora learning path: Generative AI for Students – From Concepts to Creation.
Which generative AI tool should you use?
ChatGPT is still the best-known example of a generative AI tool. Similar chatbots are popping up everywhere and are being integrated into various applications at a rapid pace. Think of Copilot (by Microsoft), Gemini (by Google), and Claude.
Ghent University recommends Microsoft Copilot as the tool to use as an AI assistant. Thanks to an agreement with Microsoft, any data you enter in the Copilot version you access using your UGent account is secure. Go to copilot.microsoft.com and log in at the top right until a green shield appears. When you hover over that shield, you will see the message “Enterprise data protection applies to this chat.” If you don’t see an option to log in, check whether a green shield is already visible—you may already be logged in.
Another tool that is safe to use is the transcription tool Whisper, on the condition that you connect it to our HPC, the High Performance Computing infrastructure (supercomputer) of Ghent University. This connection ensures that your transcribed data remain stored locally. You can read how to use the tool in such a safe and responsible way on Onderzoektips. You may only use this tool in the context of (master's dissertation) research.
There are many other tools that can support you with scientific research, academic writing, and more. Every tool and app has its own strengths and limitations. Always remain critical.
Would you like to learn more about which generative AI tools you can use, and how to determine whether an AI tool is responsible? Then take a look at module 2 of the Ufora learning path Generative AI: about learning and creating!
What are the risks?
Using generative AI is not without risks. Always keep the limitations and potential ethical implications in mind while using the tools.
- The creators are often not very transparent about how they handle the data used to train the systems: what kind of information was fed into them, and where was it found? Combined with the absence of source references, blindly reproducing generated material may amount to intellectual theft. The creators also do not state explicitly where the information you enter may end up. We therefore advise you never to enter privacy-sensitive information; doing so can even be punishable under the General Data Protection Regulation (GDPR). The same goes for syllabi, articles, etc.: by entering them without the author's permission, you give those texts away for free to the creators of the tools.
- The information you get as output is not always correct. The answers can be unreliable, because the data set they are based on is limited and may contain inherent biases. If there is insufficient or no data to answer a specific question, you will still get a plausible-sounding answer that is not necessarily true. This is called a "hallucination" of the system, and it makes it harder to tell right from wrong in the output. Moreover, texts with errors, fake images, etc., can take on a life of their own, which sometimes contributes to fake news.
- The answers may contain bias, because the system is trained on potentially biased source material and because that source material is not always representative of information from around the world. For example, the answers will mainly be based on data from Western countries.
- Typically, you cannot ask ethically problematic questions because of built-in safety mechanisms. However, these can easily be circumvented by rewording your instructions.
- To weed out biases and potentially unethical responses from their systems, the companies behind the tools have enlisted people worldwide (as moderators) to provide feedback on the tools' responses. The working conditions of these workers have come under scrutiny, even as the companies continue to dispute the media reports about them.
- Another ethical implication of using a generative AI tool is its impact on the research integrity of your project. Many tools do not provide sources for the responses they generate, so you have to check for yourself who the original author is. It is your responsibility to ensure the quality of your research!
- It may seem as if generative AI reduces inequality. Consider the claim that all students now have access to the tools, so that hiring tutors to successfully complete certain writing tasks is no longer a privilege of wealthier students. However, the creators of the tools offer more in the paid versions of their products. These paid versions perform much better than the free versions, and that may in fact exacerbate existing inequalities.
- The ecological footprint of using these tools should not be underestimated. Developing and using the tools requires enormous computing power. The data centres where the tools are trained and the data are processed consume vast amounts of electricity and water (to cool the chips).
- An additional risk is the danger of anthropomorphism: it may seem as if the computer speaks and thinks like a human, which can cause us to place more trust in the systems than is good for our well-being. In addition, there is a danger that this tendency, and our excessive interaction with these systems, may result in a loss of human connection. It is important to understand that the programmes have learned certain patterns of reasoning from texts, but lack human emotional intelligence and have limited reasoning abilities.
Are you allowed to use generative AI?
From the 2024-2025 academic year, the following guidelines will apply to assignments you complete at home:
- responsible use of generative AI tools is permitted.
- for other (writing) assignments, responsible use is even encouraged, in preparation for the master's thesis.
Please note: an individual lecturer can still prohibit its use for a specific course, in order to check whether you have really mastered certain basic competencies (see also: Why master certain competencies when a generative AI tool can be used for it in the future?). For more information, consult your course sheet or ask your lecturer.
The word "responsible" is crucial here. The risks above show why you should be careful with the tools, especially in terms of privacy, reliability, and bias. During your studies, you will be given tools for such responsible use.
Among other things, you will have to demonstrate that you use the tools responsibly during the process. Your lecturers will, more than before, question that process, also with regard to the acquisition of specific competencies. Just think of finding sources, summarizing, etc.: how did you go about that? Why is this a good summary? And so on. This allows lecturers to assess whether you have acquired certain competencies yourself. Self-reflection plays a critical role here: you will have to keep track of the process and make your use of the tools visible, for example by means of an oral explanation, interim (peer) feedback, surveys, etc. Want to practise already? Ask within your programme which questions might be asked, or review some sample questions.
When does the use of GenAI count as fraud?
There are some clear cases where you have used GenAI irresponsibly and thus committed an irregularity:
- Plagiarism is plagiarism, whether or not you used GenAI tools. If you use the tools irresponsibly, you might provide information without sources, use fabricated sources (“hallucinations”), or include incorrect sources with the information. Note: a GenAI tool is not an author and therefore cannot be a source. Referring to a GenAI tool as your only source can also be considered plagiarism.
- You let GenAI generate fake data and use it as real data. This is obviously a clear case of fraud.
- You outsource the thinking process to GenAI: parts of the task were created using GenAI without your own ideas and/or intervention. You make it appear as if the product is entirely your own work. This is misleading because you claim to have acquired specific competencies required for the task, which is an example of falsely claiming authorship.
The above irregularities lead to an exam disciplinary procedure.
When does using GenAI cause you to fail?
When you were allowed to use the tools for certain purposes but used them poorly, that use is not necessarily fraud, but it can lead to a failing grade because you have not sufficiently mastered the required competencies.
What are the possibilities of using generative AI?
Of course, the tools can also assist you in several ways. For example, you can ask for extra clarification and examples for a tough topic, ask for feedback on drafts, search for scientific articles for your research, etc. Need inspiration? Check out the Ufora learning path: Generative AI – From Concepts to Creation.
Why master certain competencies when a generative AI tool can be used for it in the future?
Certain competencies, such as independently writing texts in correct language, may seem less important for now. However, these competencies are part of the learning outcomes of many programmes.
For example, you cannot obtain a law degree without being able to formulate your own legal argumentation as a solution to a complex legal issue, a language degree without being able to write lucid texts yourself, or a Computer Science degree without being able to program yourself.
Moreover, strong writing skills require more than what a generative AI tool can currently do. Competencies such as critical thinking, knowledge of effective communication, and creativity remain necessary to assess and adjust the quality of generated texts.
How can you sharpen your AI literacy?
Not everyone is proficient yet in working with these tools. Do you feel the need to learn more about their use? Would you like to know more about generative AI in general?
Enrol in the Ufora learning path: Generative AI – From Concepts to Creation