Syracuse University Gave AI Access To 30,000+ Students and Faculty. Here’s What They Learned
When used in the right way, AI seems to improve test scores and save teacher and staff time, say Syracuse University's Jeff Rubin and Andrew Joncas
Syracuse University has gone all in on the AI revolution, deploying Claude AI to 30,000+ students, faculty members, and staff. Along the way, school leaders say they've developed effective AI use cases both in the classroom and beyond, ranging from test practice to course schedule management.
Syracuse University Chief Digital Officer Jeff Rubin and Andrew Joncas, assistant vice president for enterprise data and artificial intelligence, recently shared how they developed effective uses for AI, earned buy-in from students, staff, and educators, and worked through some of the concerns around AI use at their institution.
Using AI To Increase Test Scores By Adjusting Questions
Rubin knows firsthand that just adding AI to the classroom doesn't always enhance learning. When he first started using Claude AI himself, he had the idea of uploading his recorded lectures and having Claude generate practice questions for students. The thinking behind this strategy was sound: he teaches an introduction to information technology course with several hundred students enrolled. “Personalized learning is always hard, and really trying to meet students where they're at is challenging in a large class,” Rubin says.
Students have long wanted detailed and varied practice test questions, but creating enough questions to meet the demand was a challenge. “Without AI, it is actually really hard for my brain to come up with thousands of different ways of asking questions,” Rubin says. But once he shared his previously recorded lectures with Claude AI, the tool was able to generate a virtually unlimited supply of multiple-choice questions and work through them with students.
In theory, this seemed perfect. However, in practice, when Rubin surveyed his students, he found that those who were extensively using Claude AI to study weren’t doing any better than the others. After meeting with members of Syracuse University's education department, Rubin realized that this lack of results was likely because Claude AI was asking students multiple-choice questions that didn't require them to think. He reworked his prompt to Claude AI so it would ask short answer questions and then engage with students about what they got wrong, all drawing from his lectures.
“It'll throw out a term that we've talked about in class in the form of a question and ask, basically, what do you know about this?" Rubin says. "Instead of just picking an A, B, C, or D, the student would type out their response. And then what Claude does is it grades the response and basically says, ‘Hey, this is a B type of response. Here’s the things you got right,’ but Professor Rubin also said, ‘These are the things you should know about.’”
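Rubin's exact prompt isn't public, but the pattern he describes (pull questions from instructor-supplied lecture material, ask one short-answer question at a time, grade the typed response, and explain what the student missed) is straightforward to sketch. The example below uses the Anthropic Python SDK; the system prompt, model name, and lecture_transcripts.txt file are illustrative assumptions, not Syracuse's actual implementation.

```python
# Hypothetical sketch of a short-answer study tutor in the style Rubin describes.
# The system prompt, model name, and lecture_transcripts.txt placeholder are
# illustrative assumptions, not Syracuse University's actual implementation.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

lecture_notes = open("lecture_transcripts.txt").read()  # instructor-provided material

SYSTEM_PROMPT = f"""You are a study assistant for an intro to information technology course.
Using ONLY the lecture material below, ask the student one short-answer question at a time.
After the student answers, grade the response (A-F), list what they got right,
and point out what the lectures say they should also know. Then ask the next question.

Lecture material:
{lecture_notes}"""

def ask_and_grade(conversation):
    """Send the running conversation to Claude and return its reply."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # model name is illustrative
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=conversation,
    )
    return response.content[0].text

# Minimal interactive loop: Claude poses a question, the student types an answer,
# Claude grades it and moves on to the next question.
conversation = [{"role": "user", "content": "I'm ready to practice."}]
while True:
    reply = ask_and_grade(conversation)
    print(reply)
    answer = input("Your answer (or 'quit'): ")
    if answer.lower() == "quit":
        break
    conversation.append({"role": "assistant", "content": reply})
    conversation.append({"role": "user", "content": answer})
```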
This time, Rubin saw an immediate impact, with initial exam scores jumping by 12 points on average. While he only started using the system with students this semester, he believes the positive trends he has seen so far will continue.
Other AI Uses and Buy-In
Other uses for AI at Syracuse include Clementine, a recently launched course search tool powered by Claude Opus. This system queries millions of rows of institutional data in real time and provides results tailored to each student’s schedule, grades, and goals. Another tool the university deploys allows staff members to efficiently search data about potential university donors.
The university has also launched a pilot program using Claude Code, Anthropic’s agentic coding tool. Joncas says they’ve been surprised and pleased by the interest in this tool from various members of the university community.
He adds that while there is sometimes pushback against using AI among coders who take artistic pride in their work, meeting with stakeholders and showing them what the tool can do often alleviates these fears. He’ll often demo it by having Claude Code quickly create a tic-tac-toe game.
“It sounds ridiculous, because nobody needs to make tic-tac-toe. It's already been made a thousand times,” he says, but the demo shows people the tool’s potential. “They see the light and they go, ‘Oh, I could do that now.’”
Handling Academic and Environmental Concerns
AI ethics is a big topic these days, from concerns about the technology’s environmental impact to potential misuse and AI plagiarism by students and faculty members. Rubin and his team have invited conversations around these issues at various AI training events, including some focused on AI at work more generally and others built specifically around AI and teaching pedagogy.
“We also started a whole AI governance, which is led by somebody in IT but is inclusive of folks from around the university,” Rubin says. “We talked about things like AI and mental health. We talk about AI and data privacy. AI and job loss, AI and sustainability, all things that are big, deep topics.”
As for concerns about inappropriate AI use by students, Syracuse University syllabi currently include a statement on AI that essentially says one of three things: you can use AI in this class, you can't use AI, or you can use it under these circumstances.
“That, to me, is a band-aid. It's not an answer,” Rubin says. “The answer is, 'How do we think about AI from a pedagogy standpoint?'” He adds, “If you integrate AI from a pedagogical standpoint, we're going to have fewer of these concerns."
In that vein, instead of having students write essays on some aspect of World War II, he'd like to see more active assignments. For instance, "'You are a commander of this unit who went into this situation in World War II. What would you do?’” he says.
In the meantime, Rubin also advises faculty members to be forthright about how they are using AI.
“We have a responsibility to let students know if we're using AI. Are we using AI to generate our lectures? Are we using AI to generate our exams?” he says. “I don't think there's something wrong with it. I just think there's a level of transparency that is needed to show, 'Hey, we're still owning this, but I'm using AI in certain aspects of it.’”
Erik Ofgang is a Tech & Learning contributor. A journalist, author, and educator, his work has appeared in The New York Times, the Washington Post, Smithsonian, The Atlantic, and the Associated Press. He currently teaches in Western Connecticut State University’s MFA program. While a staff writer at Connecticut Magazine, he won a Society of Professional Journalists award for his education reporting. He is interested in how humans learn and how technology can make that more effective.

