Risks and Ethical Implications of Using GenAI
Using GenAI involves certain risks. Always consider the limitations and potential ethical implications outlined below when using these tools.
Social Inequality and Unfairness
At first glance, generative AI may seem to have the potential to reduce inequality: everyone has access to the tools, so that, for example, tutoring-style support is no longer a privilege of those who can afford private tutoring. However, the paid versions of these tools are much more powerful than the free ones. If anything, this actually increases inequality.
This inequality is also evident in whether people are willing or able to use such tools at all, for whatever reason.
The way the systems are fine-tuned also raises ethical questions. The companies behind these tools have incorporated safety mechanisms to prevent ethically objectionable answers. To achieve this, they relied on cheap labour under appalling working conditions. Furthermore, ethical decisions are influenced by cultural and ideological factors. As American tech companies currently take the lead, this results in a predominantly American perspective.
Bias
The tools generate output solely based on the samples they were trained on. These samples are not necessarily representative of the task at hand.
Bias arises not only from the amount of data but also from its quality. After all, since the data are obtained from the internet, they are not free from inaccuracies, prejudices, or bias.
Uniformity
Frequently used words and sentences will appear more often in the tools' data and thus be more prominent in their output. The more we depend on AI to compose our texts, the more these texts will start to resemble each other, leading to sameness in content and form.
Language models trained on such AI-generated texts will in turn repeat them, resulting in even greater similarity and uniformity. This uniformity affects both the language employed and the content of the texts, stifling creativity.
Unreliability and Fake News
In terms of content, the tools may produce unreliable information. This results from the limited data upon which the answers are based.
If there is insufficient or no data to answer a particular question, the tool may generate convincing but not necessarily correct output. This phenomenon is known in AI circles as 'hallucination'.
Moreover, these incorrect texts or texts containing false images, etc., may begin to take on a life of their own, feeding into fake news.
Copyright Infringement / Plagiarism
The companies behind AI models are often less than transparent about the sources of their training data. Furthermore, the tools frequently omit references or include ones that do not exist. As a result, you may (unintentionally) commit copyright infringement whenever you use AI-generated output.
Please never knowingly contribute to such practices by feeding the system someone else’s work without their permission. Conversely, claiming copyright on AI-generated output is only possible if you can demonstrate that you have creative control. The latter remains a complex legal issue.
Violation of Privacy
Free-to-use tools often store your data to improve their systems. Therefore, never share confidential or sensitive information with such a system. Feeding these systems personal data subject to the GDPR is punishable by law unless the systems have clear rules and regulations on data processing.
Loss of Human Interaction
The computer appears to think and speak like a human, which might lead us to trust it excessively. This risk of anthropomorphism, treating the system as if it were a human conversation partner, could in turn reduce genuine human interaction.
It is essential to understand that although the tools may have picked up certain thought patterns from texts, they contain no explicit logic and are, in fact, very limited in their reasoning.
Impact on Competencies and Competency Acquisition
Frequent reliance on AI tools can cause an (unnoticed) decline in skill over time, for example when translating into a language you have not fully mastered. AI tools can easily take over such tasks, which may seem convenient, but in educational and research contexts these skills still need to be developed.
Larger Ecological Footprint
Developing and using AI tools demands substantial computing power. The data centres where the tools are trained and data are stored cause a notable rise in energy and water consumption, as water is required to cool the chips.
Want to Know More?
Looking for more information on these risks and their impact? See Module 1.3 of the Ufora info site Generative AI: Concepts, Creations, Research and Classroom Practice.