Responsible Use Is The New Acceptable Use: One District's Pragmatic Playbook for the AI Era


Watch above or download/listen below.

When the school year began last fall, Kenzo Vanhaesebroeck knew something had shifted. Students at Pineville ISD returned from summer break already fluent in AI tools—using them faster and, in some cases, more effectively than the adults in the building. Instead of merely reacting, the administration took a proactive stance: work with the technology rather than obstruct it. I had a chance to talk to Kenzo about the details.

For a small district that had barely finished digesting the implications of the technology, it was a wake-up call. "With every other platform, we had a little bit of a heads up," Vanhaesebroeck says. "With AI, we didn't even have a chance to really put policy in place before Google, Meta, all these big companies just said, 'Hey, it's here, it's integrated.'"


That speed is what makes this moment categorically different, he argues. Social media crept in gradually. AI arrived overnight—embedded in platforms students were already using, accessible on devices already in their pockets. And new platforms keep appearing daily, often discovered by students before their teachers have even heard of them.

Vanhaesebroeck's response has been a philosophical shift: out with "acceptable use," in with "responsible use." The distinction matters because a named-platform policy is obsolete before the ink dries. A responsible use framework, intentionally written to be somewhat vague, can flex as the landscape changes.

The practical centerpiece of that framework is what he calls a "grading scale" for AI on assignments—a spectrum that runs from no AI permitted to AI-focused projects in which students actually produce content using generative tools. Teachers set the dial for each assignment. The policy backstops them either way.

On the faculty side, Vanhaesebroeck relies less on mandates and more on peer modeling. When a teacher is doing something innovative with AI, he puts that teacher in front of the skeptics. "It's not just cheating," he tells reluctant colleagues. "There are also good sides to this."

His advice to administrators watching from the sidelines: stay open, experiment carefully, and don't wait for a perfect policy before engaging. "Almost being a little too restrictive by being vague," he says, "is better than falling behind and not having policy at all."

Kevin Hogan is a forward-thinking media executive with more than 25 years of experience building brands and audiences online, in print, and face-to-face. Kevin has been reporting on education technology for more than 20 years. Previously, he was Editor-at-Large at eSchool News and Managing Director of Content for Tech & Learning.