He Was Falsely Accused of Using AI. Here’s What He Wishes His Professor Had Done Instead

William Quarterman was a UC Davis senior when he learned the hard way what happens when AI detectors get it wrong. (Image credit: Future)

William Quarterman remembers the exact moment he learned he had been accused of using AI to cheat: between classes, when he logged onto the student web portal at the University of California, Davis, to check his midterm grade in a history class. 

The exam was a take-home, and Quarterman, a senior, was shocked to learn not only that he had been given a zero but that his professor was accusing him of cheating by using ChatGPT to write his essay. The professor also informed him that she was referring his case to the university’s Office of Student Support and Judicial Affairs for academic dishonesty. 


William Quarterman was falsely accused of using AI to cheat by his professor.  (Image credit: William Quarterman)

“I almost immediately started breaking down crying,” Quarterman says. “I managed to get back to my apartment before I had what I'm now recognizing was a full-blown panic attack.” 

Quarterman had never used ChatGPT and felt like his college career was getting derailed right before the finish line. Faced with the prospect of proving a negative, he felt like a Kafka character.

“It was a very traumatic experience because it's one of those few situations where you're sitting there and you have no control of the situation whatsoever,” he says. “It's extremely stressful because you're a college student, you’re supposed to have a lot of autonomy over your ability to get through college, and then suddenly, you're in a situation where none of that matters anymore.” 

How He Proved His Innocence Against an AI Cheating Accusation 


The essay response William Quarterman submitted that his professor incorrectly believed was AI-generated. (Image credit: William Quarterman)

Before his tears had dried, Quarterman called his parents – his father is a lawyer and his mother also has a background in law. His parents and other members of the family, including a sister who works in tech, immediately began advising him on ways he could prove his innocence. 

Quarterman says his professor ran his essay response through an AI detection tool because she felt his answer was too general. When the tool flagged the essay as AI-generated, she accused him of cheating, never considering that the detector could have returned a false positive, which is fairly common. 

To counter these claims, Quarterman and his family did several things. First, because Quarterman had written the essay in Google Docs, he was able to use the document’s version history, which saves time-stamped edits, to show how he had written the paper over the course of two or three hours. 

In addition, the family demonstrated that the AI detection tool his professor used frequently flagged famous works as AI-generated, including Martin Luther King Jr.’s “I Have a Dream” speech and excerpts from the Book of Genesis. 

Quarterman was eventually cleared of the accusation and graduated on time. However, he has since been contacted by several students who have had similar experiences with false AI use accusations, and he wonders what might have happened to him if he hadn’t had the benefit of free legal advice from his family. 

“If I were an international student without this support structure or a first-generation college student, I’d be in a lot of trouble and I’m certain there are international students and first-generation students who are getting unfairly accused of AI usage who are getting punished for something they didn't do," he says. 

He adds, this is all because universities can be so “interested in punishing students and making an example out of quote-unquote ‘bad apples,’ that they're not paying attention to the health of their own student bodies.” 

What Instructors Who Suspect Students of AI Use Can Do Instead  

Quarterman believes that in five or six years, using AI to help write papers will be an accepted part of education, so, he says, instructors should worry less about punishing students during this transitional period. However, he understands many instructors won’t agree with that. In those cases, Quarterman endorses talking with the student, asking them to provide evidence that they are the author of the essay, or potentially asking them to rewrite it. 

While this approach might still result in false accusations and stress for a student, it is much better than what he experienced. “I wish my teacher had done that. It would have been stressful but not a tenth as stressful as what I went through,” he says. “I wish my teacher asked me, 'Did you write this? Can you show me that you wrote this? Or can you just try and rewrite this one more time?'” 

The precise approach to this conversation doesn’t matter, Quarterman says; what’s important is that teachers give students a chance to explain themselves. “Ten minutes with the student can save you a lot of time further down the road,” he says. 

Quarterman’s story demonstrates that AI detectors can get things wrong, that denying a student the presumption of innocence can cause tremendous mental anguish, and that false accusations carry real consequences for students. 

Although Quarterman is still bitter about what happened, he does hope to use it in a positive way. The experience inspired him to apply to the San Francisco Police Department. 

“I've had the experience of a judicial system unfairly accusing me,” he says. “Granted, a college judicial system is much less stressful than the U.S. judicial system and much less impactful. But I want to take that experience, apply it to police work, and try and be the most accurate policeman possible.” 

Erik Ofgang

Erik Ofgang is a Tech & Learning contributor. A journalist, author and educator, his work has appeared in The New York Times, the Washington Post, the Smithsonian, The Atlantic, and Associated Press. He currently teaches at Western Connecticut State University’s MFA program. While a staff writer at Connecticut Magazine he won a Society of Professional Journalists award for his education reporting. He is interested in how humans learn and how technology can make that more effective. 
