Universities' efforts to "AI-proof" assessments are built on "foundations of sand" because they rely on instructions that are "essentially unenforceable", a new paper argues.
Australian researchers say the standard methods to ensure assessment integrity, such as "traffic light" systems and "declarative approaches", give assessors a "false sense of security" while potentially disadvantaging students who follow the rules.
The researchers liken universities' use of the traffic light system (a three-tiered classification where red means no AI use, amber indicates limited use and green allows unrestricted use) to "real traffic lights" without detection cameras or highway patrols.
"Traffic lights don't work just because they are visible," the researchers write in the journal Assessment & Evaluation in Higher Education. "Educators might assume these frameworks carry the same force as actual traffic lights, when in fact they lack any meaningful enforcement mechanism. This creates a dangerous illusion of control and safety."
The authors argue that "discursive" approaches to controlling AI use, such as outlining the circumstances in which it can be used or requiring students to declare its use, are problematic because they lack clarity and assume voluntary compliance.
But the biggest problem is that there are no "meaningful" mechanisms to "verify student compliance" because no reliable AI detection technology exists.
Lead author Thomas Corbin, a fellow at Deakin University's Centre for Research Assessment and Digital Learning, said the first step for educators was to acknowledge that instructing students about AI use, and imagining they will comply, was "just not going to work".
"Even if they wanted to, we'd first have to assume they'd actually read the instructions," he said. "That's a serious leap in itself. No one reads boring admin.
"What we need at the moment, in this uncertainty of artificial intelligence, is a conceptual toolkit that is robust enough to allow us to do the work that we really need to do."
Educators should use "structural" approaches that "build validity into the assessment design itself rather than trying to impose it through unenforceable rules", the paper argues. Examples could include requiring students to produce essays under supervision, giving students random questions during "interactive" oral assessments, or requiring tutor sign-off on lab work.
The authors say the sheer variety of skills and knowledge that are assessed makes it impossible to be "prescriptive" about structural design approaches. But the paper proposes two "broad strategies".
First, assessment should focus on the process rather than the "final product, which could potentially be AI-generated". Second, educators should design "interconnected assessments" where later tasks explicitly build on students' earlier work. "Talk is cheap," the paper says. "Solutions [must be] based not on what we say, but on what we do."
Corbin said it was unrealistic to expect "plug and play assessment designs that we can take from one unit and just plug straight into another unit". But he advocated an "open sharing exchange" of ideas that could be adapted from one unit to the next.
"I know from my own experience [as] a [philosophy] teacher … that thinking critically about what I want students to get out of it is always much more productive than thinking about cheating or academic integrity. Often, those questions can be solved, or at least more meaningfully addressed, when I'm thinking more carefully about why I value an assessment, and why students would get something out of doing it."