Times Higher Education

Universities' AI safeguards promote 'enforcement illusion'

The only way to prevent AI from undermining assessments is to design them that way, Australian researchers argue

June 10, 2025

Universities' efforts to "AI-proof" assessments are built on "foundations of sand" because they rely on instructions that are "essentially unenforceable", a new paper argues.

Australian researchers say the standard methods to ensure assessment integrity, such as "traffic light" systems and "declarative approaches", give assessors a "false sense of security" while potentially disadvantaging students who follow the rules.

The researchers liken universities' use of the traffic light system – a three-tiered classification where red means no AI use, amber indicates limited use and green allows unrestricted use – to "real traffic lights" without detection cameras or highway patrols.

"Traffic lights don't work just because they are visible," the researchers write in the journal Assessment & Evaluation in Higher Education. "Educators might assume these frameworks carry the same force as actual traffic lights, when in fact they lack any meaningful enforcement mechanism. This creates a dangerous illusion of control and safety."


The authors argue that "discursive" approaches to control AI use – outlining the circumstances where it can be used, or requiring students to declare its use – are problematic because they lack clarity and assume voluntary compliance.

But the biggest problem is that there are no "meaningful" mechanisms to "verify student compliance" because no reliable AI detection technology exists.


Lead author Thomas Corbin, a fellow at Deakin University's Centre for Research Assessment and Digital Learning, said the first step for educators was to acknowledge that instructing students about AI use – and imagining they will comply – was "just not going to work".

"Even if they wanted to, we'd first have to assume they'd actually read the instructions," he said. "That's a serious leap in itself. No one reads boring admin.

"What we need at the moment, in this uncertainty of artificial intelligence, is a conceptual toolkit that is robust enough to allow us to do the work that we really need to do."

Educators should use "structural" approaches that "build validity into the assessment design itself rather than trying to impose it through unenforceable rules", the paper argues. Examples could include requiring students to produce essays under supervision, giving students random questions during "interactive" oral assessments, or requiring tutor sign-off on lab work.


The authors say the sheer variety of skills and knowledge that are assessed makes it impossible to be "prescriptive" about structural design approaches. But the paper proposes two "broad strategies".

First, assessment should focus on the process rather than the "final product, which could potentially be AI-generated". Second, educators should design "interconnected assessments" where later tasks explicitly build on students' earlier work. "Talk is cheap," the paper says. "Solutions [must be] based not on what we say, but on what we do."

Corbin said it was unrealistic to expect "plug and play assessment designs that we can take from one unit and just plug straight into another unit". But he advocated an "open sharing exchange" of ideas that could be adapted from one unit to the next.

"I know from my own experience [as] a [philosophy] teacher that thinking critically about what I want students to get out of it is always much more productive than thinking about cheating or academic integrity. Often, those questions can be solved, or at least more meaningfully addressed, when I'm thinking more carefully about why I value an assessment, and why students would get something out of doing it."


john.ross@timeshighereducation.com



Reader's comments (4)

The only qualification I would value these days would be one gained in a pen and paper exam with phones handed in beforehand.
Well exams are far too stressful for our students, especially those with mental health issues, so I don't think this one will fly tbh. But yes, I agree the formal examination is probably the best means of assessment but it can be a brutal affair for many.
Generative AI (gAI) is a tool. Like any other tool, students need to learn how to use it properly and appropriately. If, for example, they've been asked to create a web system to demonstrate their skills at full-stack development, using gAI to produce pages of content for web pages is fine; it's their ability at designing a good user experience and building interactivity into the system that we're testing. However, if they want to use gAI to assist them in writing a paper they need to be able to discuss the prompts they used and analyse critically the response, editing it to suit their purposes. The same goes for the use of gAI code assistants... they're useful tools but need to be used intelligently, with understanding.
Interesting comment. But, in my view, gAI is not just like any other tool. Far from it. It's not just a calculator or a set of log tables or Google searches. It's something more than this and rather transformative. None of the previous tech-advance tools were 'intelligent' in the same way. With academic essays, for example, we are looking at programmes that can provide better essays than some of the students. Of course, they need some form of editing and revision to efface the obvious signs of their origin. And with creative assignments the gAI can produce excellent results. Some creative artists now use gAI as part of their working methods. So it has the potential to actually transform the object of knowledge itself and the processes of research and interpretation. gAI won't just assist as such but it has the potential to do the work itself. I have been experimenting with some of these systems myself and, frankly, I am very impressed with what they can do and, more importantly, what in a few years they will be able to do at this rate. I have colleagues who witter on about the human factor and how they can tell when something is AI generated etc, but that's often because the student has been too lazy to tidy up the final copy. And there is always the problem of suspecting and proving, which we all know from our plagiarism inquiries.
