AI LANGUAGE MODELS, STANDARDIZED TESTS, AND ACADEMIC INTEGRITY: A CHAT (GPT)

Shalevska, Elena (2023) AI LANGUAGE MODELS, STANDARDIZED TESTS, AND ACADEMIC INTEGRITY: A CHAT (GPT). Teacher (26). pp. 17-25. ISSN 1857-8888

t+26-3.pdf - Published Version (476kB)

Abstract

The popularity of language models is on the rise, and with it, concerns about academic integrity in the era of such advanced Artificial Intelligence (AI) tools. In light of these concerns, this small-scale study, which employs both qualitative and quantitative methods, examines the role of language models, particularly ChatGPT, in the context of academic integrity. By assessing the accuracy of the model's answers to questions from the state-issued high-school graduation English exam in N. Macedonia, and by analyzing parts of essays generated using various prompts, the study explores the potential implications of such AI tools for academic integrity in this new tech era. The study shows that ChatGPT's accuracy in providing test answers is satisfactory, with minimal mistakes and over 80% accuracy on average on both tests. As for the essay passages generated by the model, the quality of the output differed depending on the prompts the user provided and their proficiency in articulating specific demands. The study also showed that current AI detection tools remain unreliable at best. These findings contribute to the ongoing discourse on AI's influence on education and academic integrity, particularly with regard to ChatGPT's capacity to generate content that can pass standardized tests and excel in open-ended writing tasks.

Item Type: Article
Subjects: Scientific Fields (Frascati) > Humanities > Languages and literature
Divisions: Faculty of Education
Depositing User: MA Elena Shalevska
Date Deposited: 28 Dec 2023 09:44
Last Modified: 28 Dec 2023 09:44
URI: https://eprints.uklo.edu.mk/id/eprint/9467
