Stanford professors Rob Reich, Mehran Sahami, and Jeremy M. Weinstein are sounding the alarm on a new method for student cheating: AI-generated papers.
AI writing tools, such as the recently released ChatGPT, have improved rapidly in recent years, and even in recent months, and free programs are regularly advertised to students through targeted ads. The technology may already have reached many educators’ classrooms, whether they know it or not.
“The astounding pace of progress has far outpaced any capacity for regulation to meaningfully keep pace,” Reich says. “[This] has put educators in an unbelievably difficult position.”
What Should Teachers Know About AI Writing Tools?
AI (artificial intelligence) writing programs have become good enough that detecting whether a student has used one is often impossible for a teacher. However, teachers can still discuss the technology with their students and explain that using AI writing tools is a serious form of cheating.
For the first time, Reich, Sahami, and Weinstein are addressing the potential misuse of the technology with students for the course they teach together, Ethics of Technological Disruption. “We feel that we need to add language in the syllabus that signals for our students our expectations about them not using these tools, to either write code for the technical assignment, or to draft their philosophy papers or policy memos,” Weinstein says. “[Using these] will be met by the same kinds of standards with respect to the honor code at Stanford as any other form of plagiarism. But of course, we don't ourselves have the ability to detect the use of these tools.”
“It’s an honor system,” Reich adds.
Teachers can also structure assignments in ways that current AI writing models can’t mimic. For instance, a student could be required to show an outline and other work documenting how an assignment progressed. Or teachers might structure assignments in the form of a dialogue, requiring each student to respond to inputs from another student.
However, Reich, Sahami, and Weinstein believe this places the burden on teachers when the tech companies that create and release this technology should do more to safeguard against its misuse.
In a recent op-ed in The Guardian, Reich, Sahami, and Weinstein put forth several potential ways that tech companies could safeguard against the misuse of AI writing models. For example, one method would be to have all text generated by commercial AI language models placed in an independent repository so papers could be checked for plagiarism. Age restrictions and age-verification systems could also limit the potential for student misuse.
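To illustrate how such a repository check might work in principle (the op-ed does not specify a mechanism, so every name here is hypothetical), one simple approach is fingerprinting: the AI provider stores hashes of overlapping word windows from each generated passage, and a checker reports what fraction of a submission's windows match stored AI output. A minimal sketch:

```python
import hashlib

def _fingerprints(text, window=8):
    """Hash overlapping word windows so partial copies still match."""
    words = text.lower().split()
    return {
        hashlib.sha256(" ".join(words[i:i + window]).encode()).hexdigest()
        for i in range(max(1, len(words) - window + 1))
    }

class GeneratedTextRepository:
    """Hypothetical store of fingerprints for AI-generated text."""

    def __init__(self):
        self._store = set()

    def record(self, generated_text):
        """Called by the AI provider for each passage it generates."""
        self._store |= _fingerprints(generated_text)

    def overlap(self, submission):
        """Fraction of a submission's windows matching stored AI output."""
        prints = _fingerprints(submission)
        return len(prints & self._store) / len(prints)

repo = GeneratedTextRepository()
repo.record("It is generally not considered ethical for students "
            "to use AI language generators for assignments")
print(repo.overlap("It is generally not considered ethical for students "
                   "to use AI language generators for assignments"))  # 1.0
```

A real system would need far more (robustness to paraphrase, privacy protections, cooperation across vendors), but the sketch shows why the proposal requires the companies' participation: only the generator knows what text it produced.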
Why This Technology Is Different
An AI writing program called GPT-3 has made headlines by producing writing that is often almost indistinguishable from human writing. Reich, Sahami, and Weinstein used it to write a small part of their book System Error: Where Big Tech Went Wrong and How We Can Reboot, published in September 2021. In little more than a year since, even more powerful AI language models have emerged, including ChatGPT.
Now that these tools are more widely available, teachers need to educate students and the public about what makes these different from other student-friendly technology, such as a grammar check or even a calculator. “It challenges us as educators to say, ‘Well, what is the line between this and the word suggestions that Google offers students when they're writing to complete a sentence?'” Weinstein says.
The answer for Weinstein and his colleagues is that these tools create a shortcut that doesn’t just help a human complete a task; it essentially eliminates the human role in the task. It’s one thing for professional writers or adults to use an AI writing tool to save time, but quite another for students to skip learning a fundamental mode of human communication.
“I don't think that the parallel to the calculator makes a lot of sense here,” Weinstein says. “These are fundamental tasks about being human, connecting with others, and engaging in deliberation. The ability to develop writing skills is central to that.”
What Does AI-Generated Text Look Like?
To better understand how quick and effective AI writing tools are, I asked ChatGPT the following question: Why is it wrong for students to use AI language generators for assignments? Here is the response:
It is generally not considered ethical for students to use AI language generators for assignments because doing so would involve submitting work that is not their own and that has not been properly cited. This is a form of academic dishonesty, which is taken very seriously by schools and can result in penalties such as failing grades or even expulsion. Additionally, using AI language generators for assignments deprives students of the opportunity to learn and develop their own writing and critical thinking skills.
As you can see, it may not win a Pulitzer, but it’s probably good enough to earn a good grade.
Update 12/14/22: This story was updated to include direct mention of ChatGPT and to include a sample of its writing abilities.