A mass cheating scandal at Yonsei University has intensified pressure on South Korean institutions to rethink how they evaluate students, as academics warn that “complete policing of AI use is increasingly unrealistic”.
Hundreds of students who took an online mid-term exam in October for a third-year natural language processing and ChatGPT course at Yonsei have admitted cheating.
Although students were required to keep their webcams directed at their faces, hands and screens throughout the exam, footage reviewed afterwards reportedly revealed widespread attempts to circumvent the safeguards.
According to the university, some students altered camera angles, switched between open programmes or copied questions into generative-AI tools during the test.
A poll later appeared on a student message board asking classmates whether they had used unauthorised assistance during the test.
Of the 387 students who responded, 211 admitted that they had cheated and 176 said they had completed the exam independently, according to local media reports.
The incident has drawn attention to the absence of robust rules around AI in South Korean higher education.
Robert Fouser, former associate professor of Korean language education at Seoul National University, said the problems exposed at Yonsei reflect vulnerabilities in online assessment.
South Korean students, he said, “are among the most tech-savvy in the world, and AI is part of how they study and learn”.
Online factual recall tests “invite cheating”, he argued, and handwritten exams are “the only way to prevent the potential for AI cheating”.
Jisun Jung, associate professor in the Faculty of Education at the University of Hong Kong, stressed that the focus should not be on Yonsei alone.
“I do not believe the incident was solely due to a single university’s failure to prevent AI-enabled cheating, as similar cases are currently occurring across universities and countries,” she said.
The large class size and the difficulty of monitoring hundreds of students simultaneously, she added, intensified the challenge.
Jung noted that South Korea’s image as a memorisation-driven system can obscure the fact that many courses have already shifted towards analytical and project-based work.
But she warned that declarations of ethical AI use were insufficient without structural change. Programme-level flexibility, expanded formative assessment and greater in-class engagement were essential, she said, though “this remains challenging in large classroom settings”.
Institutions have had nearly three years to respond to the arrival of generative AI, yet most have not articulated clear expectations for its use.
A 2024 study by the Korea Research Institute for Vocational Education and Training found that 91.7 per cent of surveyed university students who had used generative AI turned to such tools for research or for completing assignments.
The situation at Yonsei is part of a broader pattern across South Korea’s most selective universities.
Last month Seoul National University acknowledged that students had relied on AI tools during a statistics exam held on campus, leading administrators to consider cancelling the results and requiring a retake.
Meanwhile Korea University reported a separate case in which about a third of the 1,400 students enrolled in an online course had shared answers in a group chat while completing coursework.
Despite the increasing use of AI, institutions have been slow to define acceptable guidance.
The Korean Council for University Education reports that 77 per cent of 131 universities analysed have no formal policies governing AI use. Where guidelines exist, they tend to be broad or advisory rather than enforceable.
Kyuseok Kim, centre director of IES Abroad in Seoul, said the events unfolding across South Korea’s elite institutions showed that the sector’s core assumptions no longer match the technological reality students inhabit.
“Traditional assumptions about academic integrity, online testing and student behaviour no longer hold when powerful cognitive tools are widely accessible,” he said.
Recall-focused tasks are now “highly vulnerable” and attempts at technological surveillance will “generate mistrust” while failing to detect misuse. He added: “Complete policing of AI use is increasingly unrealistic.”
In response to growing concerns, plans have been announced to convene a public hearing, led by the Institute for AI and Social Innovation, to examine how teaching and assessment should change as AI becomes embedded in learning.