How universities spot AI cheats – and the one word that gives it away

Sitting in his office, working his way through a handful of undergraduate essays, Dr. Edward Skidelsky became suspicious of one telltale word – “delve”.

“The sentence was something like ‘this essay delves into the rich tapestry of experiences…’ and I thought, there’s that word ‘delve’ again,” says the University of Exeter philosophy lecturer. “The sentence is typical of the purple but empty prose produced by ChatGPT.”

OpenAI’s ChatGPT, the AI software that generates text on any given topic in seconds, launched for free at the end of 2022. Other models soon followed – and its arrival inspired equal measures of horror and excitement across the education world.

At its best, it is a tool that streamlines research. A recent survey by the Higher Education Policy Institute (HEPI) found that more than half of students admitted to using generative AI to “help prepare assessments”.

It is safe to say that the use of generative AI software by university students is now endemic. But its prevalence means the mundane task of marking essays is becoming increasingly difficult for thousands of academics on campuses across the UK. What started out as a trickle of AI text popping up in student work has become a steady stream, resulting in “many essays” written, at least in part, by generative AI.

Along with “delve”, academic writing is now littered more than ever with AI’s favorite words, such as “show”, “emphasis”, “potential”, “crucial”, “enhance” and “demonstration”.

There are other giveaways. Incorrect or incomplete references to academic journal papers in student work may be down to ChatGPT “having difficulty with page numbers”. Sudden changes in writing style within a single essay are another red flag, along with the lack of a sustained argument.

In this new world, poor grammar and spelling are reassuring rather than irritating.

“Ironically, you know students aren’t using AI just because they make mistakes in their grammar,” says Dr. Skidelsky. “ChatGPT’s content, while vague, is error-free and immediately distinguishable from the student’s own work.”

Dr Skidelsky: ‘You know students aren’t using AI just because they make mistakes’ – Jay Williams

Professor Steve Fuller, professor of sociology at the University of Warwick, feels his antennae twitch when he comes across “certain words or phrases that are repeated in a mechanical way” – a sign that ChatGPT is parroting expressions that crop up frequently in the internet content it samples.

Fuller believes that most students do not cheat. That said, he says he does come across AI-generated text regularly. A telltale sign is little or no reference to the course material in students’ answers.

“The required reading is supposed to appear in their answers,” says Professor Fuller. “But with ChatGPT there is no particular reason why it should. You end up with answers that may be correct but are very generic and not really appropriate for the course.”

Some professors have been blunt in their assessment of the impact. Des Fitzgerald, a professor at University College Cork, has said that students’ use of AI has “completely gone mainstream” and described it as a “crap producing machine”.

Meanwhile, as academics despair and struggle to uphold academic integrity, university policies on the use of AI can be vague and contradictory in practice.

Where the “fair use” line is drawn is not defined. AI-detection software is of little help: its creators themselves admit it is unreliable, and as a result many universities do not use it.

The phenomenon of generative AI has left academics at sea, looking back with nostalgia at the days of simple, old-fashioned plagiarism, which software can spot by checking against the original material.

It is harder to prove that students are cheating with ChatGPT: there is no source document to check against. As one academic puts it, the instructor cannot prove anything, and the student cannot defend themselves.

The HEPI study suggests that cases of academic misconduct have increased since the launch of generative AI – doubling or even tripling in some institutions. But academics say they are reluctant to report allegations without strong evidence.

“It’s impossible to prove, and you’d be wasting a lot of time,” says Professor Fuller. “But if I have doubts, it shows in my marks and my comments.”

The professor recently gave an essay 62 percent and wrote that it “appears to have been generated by ChatGPT”.

“I also gave feedback explaining that it was a very superficial handling of the subject,” he says. “The student did not challenge me about it. I’m pretty sure I haven’t caught all of it [ChatGPT-generated text], but I’m also pretty sure I’ve never given a first to anyone who used ChatGPT a lot.”

Dr Skidelsky at Exeter has a similar approach: “You can mark it down because it’s bad, but you can’t make an accusation without proof.”

Academics are caught between a rock and a hard place. Many believe generative AI should be used by students and integrated into courses, because it is a fact of life and employers will expect graduates to use it effectively. But overuse of ChatGPT risks students failing to consolidate knowledge and develop critical skills and competencies.

And it is naive for universities to treat generative AI as the equivalent of a calculator or an online thesaurus. As Dr. Skidelsky says: “It’s much more than just a tool; it replaces certain sophisticated cognitive processes, and if students are encouraged to use it, they may not do their thinking for themselves.”

One obvious way to ensure that students do their own work is through in-person exams – a method that many universities have replaced with coursework and unsupervised online exams. So should they be brought back?

Institutions argue that a diet of exams fails to assess what matters most or to reflect the world outside the classroom. And, as one academic points out: “Students don’t like exams, and they are consumers paying £9,250 a year who need to be kept happy.”

But as more sophisticated iterations of generative AI come to market, the telltale signs that academics currently rely on to spot it are likely to disappear.

“Technology is moving at breakneck speed,” says Kieran Oberman, associate professor at the London School of Economics. “Down the line, ChatGPT-generated essays, or similar, won’t be bad and they won’t be obvious.”

He predicts a future with more “AI-proof” in-person assessment – exams, oral tests and class presentations. Beyond that, policing might include asking students to save multiple versions of their essays to track changes, making “massive copy and paste jobs” obvious.

“It’s always on your mind,” says Oberman. “You’re looking into the future and you know that technology is getting better, and maybe it’s going to get harder to detect and harder for students to avoid using if everyone else is doing it. It’s like doping in sport, and academia, like sport, is very competitive.”
