
Academics ‘marking students down’ when they suspect AI use

Divergent attitudes towards the use of AI in assessments risk weakening the credibility of academic certifications and driving mistrust between students and teachers

Published on
September 22, 2025
Last updated
September 22, 2025
Marking exams
Source: iStock/Fabrique Imagique

Some academics are marking down students who they believe have used artificial intelligence (AI), even in instances where it has been permitted in assessments, a study has found.

A study published in Studies in Higher Education argues that the use of AI in assessments has created a “messy grading space full of tension and inconsistency”.

Researchers interviewed 33 academics from China’s Greater Bay Area. Some of the academics’ universities did not have specific AI policies, while others did.

They were asked how they handled marking when they suspected AI use, or where students had declared their use of the technology.

Overall, the study found that academics were influenced in their marking by their perceptions of AI use, and that such technologies have “complicated values that have long been celebrated in student work, such as originality and independence”.

One academic told the study: “I think this is dishonesty and tells a lot about the student’s integrity… If the student cheats by using AI and thinks they can get away with it, as the teacher, I need to do my job.”

Another said: “If two assignments demonstrate the same quality, but B can independently complete it without AI, doesn’t this show B is more capable and deserves a higher grade?”

When it was pointed out that the assessment guidelines allowed student AI use as long as it was declared, academics changed their minds, highlighting the “tension” between the “legitimacy” of using AI and the “traditional emphasis on independence as a marker of intellectual capability”.

Lecturers in the humanities were more likely to be critical of AI use and consequently penalise students by docking marks, reflecting wider concerns in these fields that AI is a “shortcut that undermines essential processes towards learning”.

Report authors Jiahui Luo, assistant professor at the Education University of Hong Kong, and Phillip Dawson, co-director of the Centre for Research in Assessment and Digital Learning at Deakin University, noted that academics’ expectations of AI “are often implicit and not openly communicated to students”.

They told Times Higher Education: “Most assignments currently fall into a ‘middle ground’, where AI use is neither explicitly prohibited nor required, but students are expected to declare their AI use.

“This creates variability in how students approach their assignments – some reported using AI heavily, others minimally or not at all – but how these various uses of AI are interpreted by teachers and subsequently factored into their grading remains unclear.”

If left unaddressed, “this will likely result in weakened credibility of academic certifications, distrust from students, and unfairness”, said Luo.

The paper argues that “validity” in marking could “offer a pathway forward”, whereby there is a clear understanding and expectation, from both staff and students, of “what a particular task is meant to assess”.

“Through a validity view, the use of GenAI could be justification to mark down student work if (and only if) it meant that students were not able to demonstrate they had met the outcomes being assessed,” it says.

For example, under this model, it would be fair to mark down a student studying languages who had used AI in their work, because “the use of GenAI had interfered with students’ ability to showcase their writing skills”.

It is “crucial” that academics provide students with “explicit” declarations on how GenAI use in assignments could impact grading, the paper says, recommending that universities organise workshops to ensure that lecturers align grading practices with their educational goals.

juliette.rowsell@timeshighereducation.com


Reader's comments (3)

Is no one aware that different disciplines have different approaches to different forms and uses of AI? This "report" is worse than useless. "Divergent" rather than "different"? Come on!
"Another said: “If two assignments demonstrate the same quality, but B can independently complete it without AI, doesn’t this show B is more capable and deserves a higher grade?” Wwll you can only mark the assignment as you have it. So you have to give the same grade unless there is a demonstrable infringement that can be penalised. Otherwise you might just as well mark up the students you like as you have no criteria. I am not sure what it is like elsewhere but in the UK system a student could appeal and would win if marking is so impressionistic and based on deserving
Not sure if this is much help for us really? We need proper data, not just this flim-flam.