Universities’ efforts to “AI-proof” assessments are built on “foundations of sand” because they rely on instructions that are “essentially unenforceable”, a new paper argues.
Australian researchers say the standard methods to ensure assessment integrity, such as “traffic light” systems and “declarative approaches”, give assessors a “false sense of security” while potentially disadvantaging students who follow the rules.
The researchers liken universities’ use of the traffic light system – a three-tiered classification where red means no AI use, amber indicates limited use and green allows unrestricted use – to “real traffic lights” without detection cameras or highway patrols.
“Traffic lights don’t work just because they are visible,” the researchers write in the journal Assessment & Evaluation in Higher Education. “Educators might assume these frameworks carry the same force as actual traffic lights, when in fact they lack any meaningful enforcement mechanism. This creates a dangerous illusion of control and safety.”
The authors argue that “discursive” approaches to controlling AI use – outlining the circumstances in which it can be used, or requiring students to declare its use – are problematic because they lack clarity and assume voluntary compliance.
But the biggest problem is that there are no “meaningful” mechanisms to “verify student compliance” because no reliable AI detection technology exists.
Lead author Thomas Corbin, a fellow at Deakin University’s Centre for Research Assessment and Digital Learning, said the first step for educators was to acknowledge that instructing students about AI use – and imagining they will comply – was “just not going to work”.
“Even if they wanted to, we’d first have to assume they’d actually read the…instructions,” he said. “That’s a serious leap in itself. No one reads boring admin.
“What we need at the moment, in this uncertainty of artificial intelligence, is a conceptual toolkit that is robust enough to allow us to do the work that we really need to do.”
Educators should use “structural” approaches that “build validity into the assessment design itself rather than trying to impose it through unenforceable rules”, the paper argues. Examples could include requiring students to produce essays under supervision, giving students random questions during “interactive” oral assessments, or requiring tutor sign-off on lab work.
The authors say the sheer variety of skills and knowledge that are assessed makes it impossible to be “prescriptive” about structural design approaches. But the paper proposes two “broad strategies”.
First, assessment should focus on the process rather than the “final product, which could potentially be AI-generated”. Second, educators should design “interconnected assessments” where later tasks explicitly build on students’ earlier work. “Talk is cheap,” the paper says. “Solutions [must be] based not on what we say, but on what we do.”
Corbin said it was unrealistic to expect “plug and play assessment designs that we can take from one unit and just plug straight into another unit”. But he advocated an “open sharing exchange” of ideas that could be adapted from one unit to the next.
“I know from my own experience [as] a [philosophy] teacher…that thinking critically about what I want students to get out of it is always much more productive than thinking about cheating or academic integrity. Often, those questions can be solved, or at least more meaningfully addressed, when I’m thinking more carefully about why I value an assessment, and why students would get something out of doing it.”