The ChatGPT effect in higher education

The integration of GenAI in education raises significant human rights concerns. Any information input into the platform, including personal information, is used as training data for the model. This practice risks infringing on the right to privacy and the right to data protection. Quality of education is also impacted by GenAI accuracy concerns and misinformation, undermining the right to education.

Generative AI (GenAI) refers to algorithms that can generate new content, from text to images, by learning patterns from existing data. Unlike traditional AI, which focuses on recognising patterns and making predictions, GenAI can produce novel outputs, hence the term ‘generative’. It can generate text, images, audio, video, and even computer code: it can write essays, compose music, design graphics, and produce videos.

This technology leverages deep learning models trained on vast datasets to capture the nuances of the content it aims to generate. The goal is AI that mimics human creativity, creating new examples that resemble what it has seen before and reflect the underlying patterns and relationships in the data.

In reality, GenAI tools respond to the prompts they are given by producing plausible-sounding content which, on close inspection, often turns out to be nonsense. GenAI is prone to generating false information at random and, when it lacks relevant information, it ‘hallucinates’, inventing content. The information it generates is therefore unreliable.

What is ChatGPT?
One GenAI tool that has gained massive popularity, including in education, is ChatGPT, a pre-trained text-generation language model developed by OpenAI. Like any other GenAI tool, it can generate human-like text based on the input it receives. It is publicly accessible and has a user-friendly interface. With the latest updates (as of July 2024), it can browse the web using Bing, allowing the model to integrate current information from the web into its output. It supports plugins and voice interaction, integrates DALL·E to generate images from text, and recognises text in languages other than English. Its latest version, GPT-4o, can accept any combination of text, audio, image, and video as input.
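For readers curious how this multimodal capability looks in practice, the sketch below shows one way a developer (rather than a regular ChatGPT user) might send text and an image together to GPT-4o through OpenAI’s Python SDK. It is a minimal illustration only: the model name, message format and placeholder image URL reflect the API as documented in 2024 and may differ from the current interface.

```python
# Minimal sketch (not an official example): sending a mixed text-and-image
# prompt to GPT-4o via OpenAI's Python SDK. The image URL is a placeholder.
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Summarise the key argument on this lecture slide."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.org/lecture-slide.png"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)  # the model's text reply
```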

How does ChatGPT impact the right to education?
In the rapidly evolving landscape of higher education, the integration of GenAI has the potential to disrupt, dissuade, and distort the fundamental building blocks of education. On the one hand, its ability to produce diverse and contextually relevant content in real-time offers significant advantages for students and educators alike.

For students, GenAI can serve as a personalised tutor or virtual assistant, providing tailored feedback and guidance and helping with assignments by generating drafts or suggesting improvements (e.g., in thesis writing, or code generation to accelerate the learning process of computer science students). It can even support critical thinking by offering different arguments and viewpoints on a particular issue, encouraging students to think beyond their usual boundaries. With its multimodal capability, ChatGPT can assist students by presenting information in an easily comprehensible visual format, such as an infographic.

For educators, it can assist in creating educational materials (e.g., presentations, videos, new quiz questions, debate topics, course designs, hypothetical scenarios and classroom activities for enhancing students’ problem-solving skills), developing a syllabus, grading, identifying areas where students may need additional support (e.g., adaptive learning materials for each student based on their learning progress), and assisting in research. For example, educators can make a video of their lecture using the voice feature and even select the image they want the ‘actor’ to use. They can also insert some (GenAI-generated) jokes into the videos if they like. All this and much more is now possible. It changes the way knowledge is acquired and presented, the way courses are conceived, designed, developed and delivered, and potentially also the student-professor relationship. Ultimately, it shakes up the role of professors. Whether this is a blessing or a curse, time will tell.

In research, it can turn text into visual graphs. If used appropriately, GenAI can free up educators’ time and resources, allowing them to focus on tasks that technology cannot replace. The adaptability of multimodal tools makes educational experiences diverse and inclusive, accommodating various learning styles and sensory preferences across the academic community. It might even be (mis)used as a replacement for human reviewers in the peer review of academic writing. This shift in educators’ engagement with technology, from passive adoption of pre-made software to active involvement in innovation and creativity, is both technical and philosophical. It urges a reimagining of educators’ roles as creators and collaborators rather than mere consumers of technology. However, to fully harness this potential, educators must lead the way in research and development, guiding AI integration with expertise.

For universities, ChatGPT can optimise certain academic and administrative functions. It can be trained on a university’s own data to handle queries about schedules, course content and grading, adapted to reflect the university’s policies and regulations, and even used to predict trends by analysing large datasets, which could in turn inform research and academic decision-making. Universities can also draw on ChatGPT’s multilingual support to provide a communication tool that makes information more accessible to international students, promoting an inclusive learning atmosphere. Consequently, together with other GenAI tools, ChatGPT promises to be a transformative force for higher education.

Risks and limitations in higher education: human rights and ethical perspectives
Despite all the benefits, the risks and limitations of GenAI cannot be ignored. There is a concern that students might become overly reliant on AI-generated content, potentially compromising the development of critical thinking, problem-solving and writing skills, and even risking the loss of skills already acquired. Overreliance also raises concerns about fact-checking: GenAI can generate plausible-sounding but inaccurate or false information, which should always be cross-verified against credible sources.

Furthermore, the integration of GenAI in education raises significant human rights concerns, particularly regarding data protection and the potential for surveillance. Any information input into the platform, including personal information, uploaded files, images, and screenshots, is used as training data for the model. The human rights at risk here are the right to privacy and the right to data protection, as protected by the General Data Protection Regulation (GDPR) and enshrined in Article 12 of the Universal Declaration of Human Rights (UDHR).

The use of GenAI also risks widening the educational divide because of the existing unequal access to technology and related resources. This affects the right to education as enshrined in Article 26 of the UDHR and Article 13 of the International Covenant on Economic, Social and Cultural Rights (ICESCR).

Additionally, the potential for AI to perpetuate biases present in training data poses ethical and legal challenges. GenAI may amplify societal biases in educational content, affecting the right to non-discrimination.

Notably, ChatGPT’s variability means that asking it the same question twice yields different responses. This seems to be an intentional feature aimed at making interactions more dynamic. In some instances, however, it gives the impression that the system assumes you are repeating the question because you did not like the first answer, so the second answer slightly contradicts the first. The more times you repeat the question, the more contradictory the answers become; it feels like talking to different people about the same topic. In education, this dynamic might be more a bug than a feature. It may be confusing, particularly for students relying on ChatGPT as a source of information, and may make the platform seem inconsistent and unreliable.

Accuracy problems and the potential to spread misinformation, which can mislead students, raise significant concerns about the quality of education. Article 13 of the ICESCR emphasises the need for education to foster the full development of the human personality, respect for human rights, and the ability to participate effectively in society. When GenAI systems produce inaccurate or biased information, they undermine these educational goals by perpetuating false knowledge, impeding critical thinking, and compromising students’ ability to make informed decisions.

Another concern is academic integrity. The use of ChatGPT for assignments and thesis writing might blur the boundary between independent learning and research on the one hand and overreliance on AI-generated content on the other.

While the benefits are substantial, a balanced approach that includes robust ethical guidelines and regulatory frameworks is essential to mitigate these risks and ensure that GenAI enhances rather than undermines the educational experience.

How do universities react?
Research shows that most European universities assembled task forces to explore how and where GenAI will impact higher education. As a result, a number of universities have embraced GenAI and released guidelines on its sound use by staff and students, including guidelines on academic integrity in the light of GenAI and training on the ethical and responsible use of AI. Some have even developed their own GenAI tools. Only a few European universities have preferred to keep their heads in the sand, adopting punitive policies on its use. For a detailed overview of the approaches taken by different European universities, see a recent policy brief on ChatGPT in the classroom.

What can be done to limit the harm of GenAI in education?
It is impossible to ignore the existence of GenAI and carry on in the old way. One possible solution is to embed watermarks within the content, which can identify it as AI-generated or reveal information about who prompted the output. Labelling, however, is anything but perfect. When technology labels what is ‘false’, it implies that whatever is left unlabelled must, by default, be true and trustworthy, which is not always the case.

Another existing solution is the use of AI content detection tools. Within the educational context, these tools may initially appear to offer a way of preventing plagiarism. However, current AI detection systems are unreliable or even arbitrary, particularly in non-English language contexts. The paradox of using AI to detect AI-generated content is itself problematic. Furthermore, content flagged as ‘false’ or AI-generated is frequently misidentified. Consequently, relying on these tools to evaluate students’ assignments is inadvisable, as it could lead to unjust accusations of using GenAI, thereby potentially infringing on students’ right to equal access to education and fair treatment, for example when such tools are used to assess university entrance examinations. Misidentifying genuine student work can lead to discrimination and unjust penalties, undermining educational fairness.

This issue also extends to professors' academic writing, affecting their academic freedom and professional integrity. It could potentially damage their reputations and discourage academic inquiry.

Therefore, a more prudent approach would involve a critical assessment of content, acknowledging that any content can easily be generated by AI.

Continuous education for students and training for educators in the cautious use of these tools are essential. Students need to be informed about the limitations and capabilities of GenAI and advised to use these tools as supplementary learning aids (for explanations, summaries, and preliminary research, for example) rather than as substitutes for critical thinking, so as to prevent misuse and the pitfalls of passive learning.

Educators play a pivotal role in ensuring that students develop critical thinking skills while using tools like ChatGPT. One effective strategy could be for educators to leverage ChatGPT's limitations as a teaching tool. For example, a teacher might use ChatGPT to generate an incorrect solution and then challenge students to identify and explain the errors, critically assessing its accuracy and relevance. This approach, although labour-intensive, preserves active engagement and collaborative learning, aligning traditional teaching methods with the new digital reality.

This evolving digital educational landscape demands that educators continuously update their digital skills. It challenges the educational system by necessitating a structural transformation within higher education institutions, redefining both teaching and academic writing, ensuring the preservation of human rights in the process. By recognising and addressing these issues, higher education institutions can more effectively leverage GenAI technologies to drive innovation and enhance learning outcomes.

Written by Desara Dushi

Dr. Desara Dushi is a senior postdoctoral researcher at the Law, Science, Technology & Society Research Group (LSTS), Vrije Universiteit Brussel. She holds a double PhD degree in Law, Science and Technology from the University of Bologna and the University of Luxembourg. She is one of the policy analysts of the 6th edition of the Global Campus Policy Observatory.

Cite as: Dushi, Desara. "The ChatGPT effect in higher education", GC Human Rights Preparedness, 28 November 2024, https://gchumanrights.org/gc-preparedness/preparedness-science-technology/article-detail/the-chatgpt-effect-in-higher-education.html

 

Disclaimer

This site is not intended to convey legal advice. Responsibility for opinions expressed in submissions published on this website rests solely with the author(s). Publication does not constitute endorsement by the Global Campus of Human Rights.

 CC-BY-NC-ND. All content of this initiative is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
