Students win plagiarism appeals over generative AI detection tool

Ombudsman tells universities to be mindful of "limitations" of detection tools and to consider if they are biased against international students

July 15, 2025
Forensic fingerprint analysis
Source: iStock/Vichien Petchmai

Universities that relied too heavily on detection software to find students guilty of using generative artificial intelligence tools to draft assignments have been rapped over the knuckles by the sector ombudsman for England and Wales.

The Office of the Independent Adjudicator (OIA) has issued new guidance urging institutions to be mindful of the "limitations" of detection tools and to consider whether such tools might be biased against international students whose first language is not English, or disabled learners.

On 15 July the OIA published case summaries of six complaints involving alleged AI use, many of which focused on the role of detection software such as that developed by technology firm Turnitin.

In one case, a student with autism who was given a mark of zero after a disciplinary panel ruled that they had used AI appealed to the ombudsman, claiming that the university's detection software was "biased against their writing style". They said that the software had previously flagged another essay for suspected AI use but it was later agreed that the student had not used AI.

Upholding the student's complaint, the OIA said that the university "didn't properly consider all the points the student raised or the evidence they presented, including their essay planning and preparation".

"When the provider reconsidered the case, it decided that there had not been any academic misconduct," the OIA said.

In another case, the OIA upheld the appeal of an international student who was given a zero mark after Turnitin identified "substantial amounts" of AI-generated content in their work. They admitted using Grammarly, an AI-powered writing assistant, but said they thought this was permitted because English was not their first language.

In its ruling, the OIA said that the student had not been given a fair opportunity to respond to the allegations, and that the university had relied too heavily on the outcome of a viva, which the student complained had taken place some time after the original submission.

Other cases detailed by the OIA include:

  • An international student found guilty of academic misconduct after Turnitin identified AI-generated content but claimed they had only used Google to find synonyms. The OIA said that the university "did not consider whether Turnitin's AI detection might be less reliable for non-native English speakers, which was relevant given the student's international status", but found that the complaint was only partly justified because a separate conclusion that the student had colluded with another person was reasonable.
  • A postgraduate who failed a module after Turnitin indicated that an essay contained AI-generated content, including "hallucinated" references. The student claimed they had simply recorded references incorrectly and the university agreed to consider the matter afresh after the OIA expressed concern that it had not set out detailed reasoning for its decision.

In a casework note accompanying the summaries, the OIA says that the responsibility "is on the provider to prove that the student has done what they are accused of doing, not on the student to disprove it".

"Providers should consider a range of evidence. It is important that decision-makers understand the strengths and limitations of detection software, and weigh this evidence carefully against other available information," the guidance says.

It adds: "Providers should consider whether assumptions about AI use could be biased against a student's writing style, for example if the student is disabled or has a communication difference, or if English is not the student's first language.

"It may also be relevant to consider whether these factors could have affected the student's understanding of what AI use was permitted."

The intervention comes amid mounting concern about the impact of generative AI tools such as ChatGPT on the integrity of university assessments.

Commenting on the cases, academic integrity expert Thomas Lancaster, a principal teaching fellow in computing at Imperial College London, said that universities "need to clearly articulate to both students and staff what their vision of acceptable AI use looks like".

"There's still a huge amount of confusion across the higher education sector about how far GenAI use should be allowed and encouraged," Lancaster said.

"The new guidance from the OIA is useful. Any kind of academic misconduct allegation, whether warranted or not, puts students under a huge amount of pressure."

A final case summarised by the OIA concerned an MBA student who complained that their dissertation feedback was "too generic to be useful" and said that a detection tool "showed that the feedback had been partly AI-generated".

The university said that AI had not been used, warning that AI detection tools "did not provide conclusive and reliable results", and said that the feedback was instead based on "a set template to ensure consistency of approach".

The OIA agreed, stating: "We decided that it was reasonable for the provider to place more weight on information from the staff member about how they had marked the work, and to place less weight on the results of the AI detection tool. AI detection tools can be unreliable."

chris.havergal@timeshighereducation.com


Reader's comments (2)

Well, I think a few of us have warned about this and the complexities of the issue. To simply treat the use of AI as equivalent to a form of "cheating", in effect plagiarism, will not wash, especially when it comes to international students whose first language is not English. To run these essays through Turnitin or another detection system and rely on that to back up such allegations won't be sufficient. These detectors are themselves fallible and inaccurate when it comes to this practice. Dealing with these cases, if we suspect them, is going to be rather laborious.
I have always argued that any technology whose endgame is revenue cannot be relied upon to make such judgments. Most of these AI detection tools have AI "humanising" offers attached, which makes them very biased. They almost seem to flag every piece of work so as to lure users into those AI humanising offers.
