What Schools Should Ask Before Buying An AI Tool

(Image credit: Getty Images)

AI hype keeps getting bigger and the offerings more complex. It’s being pitched to schools from every direction, but what really matters when it comes to choosing the right tool for your school?

Some AI tools promise to reduce teacher workload while others claim to revolutionize tutoring, feedback, or family communication. You’ll also find platforms that use AI focused on district operations, reporting, and data analysis. The common thread is that these all look great when demoed–believe the hype, and you’d think they could transform a school overnight into some sort of tech-enabled teaching utopia.

And this is why it’s important to slow down, ask better questions, and focus on the practicalities.

Sure, classic questions such as "Which platform is best?" or "What are other schools using?" are fair, but they're not the right place to start. Schools need the right fit for their students, staff, systems, and priorities. That means being more pragmatic.

So before investing in AI, school leaders should ask different questions. What problem are we actually trying to solve? Who will use it? What data will it store? Do we have the capabilities to use it effectively? How much control will we have? What happens if we get this wrong?

Those questions matter more than the flashiest demo.

Start with the problem, not the product

The first question is the simplest: What do you actually need an AI for?

Identify your goals. Do you want to reduce teacher workload, support tutoring, help teachers adapt materials for different students, speed up admin tasks, improve family communication, or analyze attendance or grades more effectively?

If leaders can’t explain the problem they’re trying to solve, they’re not ready to buy a solution.

That seems obvious, but it’s an easy mistake to make. A platform may be enticing because it generates lesson ideas in seconds or gives polished responses on demand. But if it doesn’t meet staff and student needs, it will become underused and hard to justify.

Build practical use cases first and ignore the peer pressure to bring in AI without good reason.

Ask whether your current ecosystem already covers most of your needs

If your district is already invested in Google, Microsoft, ChatGPT, or another approved environment with AI capabilities, then take stock before adding more tools.

This matters because AI doesn’t exist in a vacuum. It can sit within all sorts of systems, from identity, email, and documents to permissions, policies, and support structures. If you already have an approved AI in those systems, then it’s an opportunity to discover what features and functions you might not be using to full potential yet. Identify the gaps that will add real value before considering new solutions.

Adding more tools without a clear rationale can lead to overlap, confusion, and needless costs.

Be clear whether this is for the classroom or the district office

That said, a single AI tool may not be sufficient to meet all your needs. Not every school AI tool does the same job, and schools should be wary of products that claim to do everything brilliantly.

Some tools are geared toward teaching and learning–think MagicSchool, Khanmigo, or teacher-facing uses of platforms such as Google’s Gemini, Microsoft’s Copilot, or OpenAI’s ChatGPT.

Others sit much closer to school and district operations, such as PowerSchool, SchoolStatus, attendance tools, reporting systems, family communication platforms, and products built around data rather than classroom practice.

This distinction matters. A tool that works well at a district level may offer little in the classroom. An AI that teachers love to use in class may be a headache for IT and governance.

Before buying, ask the simple question: Is this mainly for teaching and learning, or for operations? That will help you refine your list of candidates.

Ask whether staff have the knowledge to use it well

School leaders need to think beyond a few professional development sessions when it comes to embedding a new AI tool into existing systems and workflows.

A school can buy the perfect AI tool and still struggle if staff lack confidence or digital skills. Be honest about whether teams and individuals have the basic knowledge to use the tool well, question its weaknesses, and stay within policy.

That includes knowing:

  • What the tool is good at
  • When not to trust it
  • What data should never be entered
  • How to spot weak, biased, or invented outputs
  • When human review is essential

This isn’t just about knowing where to click, but about digital and AI literacy, critical thinking, and understanding AI’s limitations. This will be an ongoing task as new releases and updates will regularly offer new functionality but also potentially new issues.

It’s important to question whether you have the in-house knowledge and discipline to use AI responsibly. A school with clear guidance and confident staff will get more value from AI than one with patchy, uncertain usage.

That question applies differently across the organization. Senior leaders, classroom teachers, support staff, IT teams, and students don’t all need the same depth of knowledge, but they do all need enough understanding to reduce risk and use the tool effectively.

Ask data questions early

Privacy and data use need to be considered from the start.

Schools should ask:

  • What data goes into the tool?
  • Where is that data stored?
  • Is it used to train the model?
  • How long is it retained?
  • Who can access it?
  • Can it be reviewed and deleted?
  • What visibility do administrators have?

These aren’t just compliance questions, but trust questions as well.

If staff aren’t sure what they can safely enter into a system, adoption becomes inconsistent very quickly. If leaders can’t explain how the tool handles student data, it becomes much harder to build confidence among teachers, families, and other stakeholders.

AI tools can look simple and user-friendly, but schools still need to understand what’s happening behind the scenes before they commit, especially since student data in these tools is extremely sought after by hackers.

Take extra care if the tool analyzes student data

The appeal of AI for some schools goes beyond lesson planning. It’s the potential to spot patterns in attendance, grades, behavior, or assessment data more quickly than staff could on their own. In theory, that means spotting issues earlier and acting faster.

That may be useful, but it’s also one of the riskiest areas.

Schools should ask:

  • What data is being analyzed?
  • How reliable are the patterns or predictions?
  • Is the tool flagging signals for staff to review, or is it making clear recommendations that teams might feel pushed to follow unquestioningly?
  • To what extent does it reinforce biases or overstate weak correlations?
  • How transparent is its reasoning and confidence, particularly when it comes to suggestions?
  • Who is ultimately accountable if the AI flags something incorrectly or misses a student who needs support?

While an AI can help surface useful signals about attendance, attainment, intervention, or student support, it should not sit in a technological black box. Human professional judgment still matters. In fact, it matters even more once AI is involved: school leaders need to be clear about where that line sits and ensure that overreliance on tech doesn’t lower standards.

And this is where schools need to be especially careful not to confuse pattern detection with judgment. For example, some AI detection tools used to check whether work was generated with AI can be extremely aggressive, producing a much higher rate of false positives than vendors claim.

Look at real-world accuracy, not just the demo

Which leads us to claimed accuracy. AI tools can seem utterly authoritative until you question their outputs, which can be unreliable in the situations schools care about most.

The real question is whether the AI meets the school’s requirements. Can it help teachers draft useful classroom materials? Support student revision? Summarize factual information without adding hallucinations or confusion? Can it save time without creating extra checking work?

Schools should also think about the cost of mistakes.

An AI-generated worksheet with a few weak questions may be inconvenient. A tool that mishandles attendance information, behavior summaries, grading, special education support, or safeguarding-related communication creates a much more serious problem.

That’s why buying decisions should be informed by realistic testing. Schools should test tools against real-world scenarios and decide when human oversight has to remain absolute.

Ask whether it improves the work or just speeds it up

Don’t judge AI by how quickly it completes tasks. Instead, ask whether it improves the quality of the work, supports better decisions, and reduces pain points, especially in areas of low-value admin and bureaucracy.

A tool that saves time on paper may still be a poor fit if it encourages generic planning, weak feedback, or an overreliance on samey AI-generated drafts. The risk isn’t just introducing minor mistakes into lesson plans; it’s a gradual reduction of originality and human judgment.

AI should be there to either improve current work or reduce repetitive or time-consuming admin tasks. What it shouldn’t do is replace real value, expertise, subject knowledge, or meaningful interaction with students. Faster isn’t better if the work becomes thinner, flatter, and less human.

Check the admin controls before the marketing claims

One of the least glamorous parts of AI procurement is also one of the most important: control.

Before signing for anything, anyone buying an AI should ask:

  • Who can use the tool?
  • Can access be limited by role or age group?
  • Are staff and student experiences separated?
  • What is logged and flagged?
  • What can administrators turn on or off?
  • How does identity management work?
  • Does it fit with existing sign-on and rostering systems?

A strong school AI product isn’t just one that produces impressive outputs, but one that leaders and IT teams can actually manage.

Think beyond the pilot

Many schools are willing to trial new tools. The harder question is what happens afterward.

How much training will staff need? What guidance will be required? Will policies need updating? How much support does the vendor provide? What will this look like after one term, not one week?

A successful rollout depends not just on choosing the right platform but on whether the school has the capacity to use it well over time.

If a tool requires constant explanation, provides unreliable results, or adds another layer of work for already stretched staff, its value may look very different once the initial excitement fades.

Look at the full cost

So you’ve asked all the right questions and made a choice. Now you need to understand the full cost of the solution, and for that you need to look past the headline license fee.

Some costs will be monetary, others related to time and effort. Training, integration work, admin time, support, security review, policy development, and the time staff spend learning a new system should all be considered. A low-cost tool can end up feeling much more expensive if it adds unnecessary complexity or doesn't end up delivering value.

One of the most useful procurement questions is: “What will this cost in practice over a year, not just on paper at the point of sale?”

Ask what happens if you do nothing

This may be the most overlooked question of all. Before buying an AI, schools should ask: “What would happen if we didn’t buy this?”

Would an existing tool already cover much of the need? Could a simpler process change solve the same problem? Is the issue actually about training, workflow, or staffing rather than missing technology?

This question helps schools test whether the purchase is genuinely necessary or simply feels timely because AI is dominating the conversation.

What happens when schools don’t ask these questions?

When schools rush into AI purchases without asking the harder questions first, the same problems tend to appear: You end up with a solution looking for a problem; staff use it inconsistently; the benefit is overstated; data risks are misunderstood; confidence drops.

The most awkward outcome is that a school becomes dependent on a tool it doesn’t fully trust or understand. By the time the limits become clear, the system may already be embedded.

Leaders may also want to think ahead to related issues, such as parent trust, equitable use, academic integrity, and how they will evaluate impact over time. Those questions may sit beyond the initial purchase decision, but shouldn’t be an afterthought once a tool is already in place.

That’s why the early questions matter so much. These are not there to stunt innovation or slow change but to stop schools from making avoidable mistakes.

Before Buying AI, Ask:

  • What problem are we solving?
    A clear, specific use case, not a vague desire to “use AI” somewhere in the school.

  • Who is it really for?
    A defined group of users, such as teachers, students, support staff, or district leaders, with a clear purpose for each.

  • Do we already have something that does most of this?
    Evidence that existing tools have been reviewed properly and that a new product fills a genuine gap.

  • Is this mainly for teaching and learning, or for operations?
    A clear understanding of whether the tool is classroom-facing, district-facing, or both, and why that matters.

  • Do staff have the knowledge to use it well?
    Confidence that users understand what the tool does well, where it can go wrong, and when human oversight is needed.

  • What data does it use and store?
    Clear answers on what data goes in, where it is stored, whether it trains the model, who can access it, and how it can be deleted.

  • If it analyzes student data, how reliable is it?
    Transparent, evidence-based explanations of how predictions or flags work, with strong human review included.

  • How accurate is it in real-school scenarios?
    Results from realistic testing in the kinds of tasks and situations the school actually cares about.

  • Does it improve the work or just speed it up?
    Evidence that it improves quality, reduces low-value admin, or supports better decisions rather than just producing more output faster.

  • What controls do leaders and IT teams have?
    Strong admin tools, role-based access, logging, visibility, and the ability to manage or restrict use.

  • What happens after the pilot?
    A realistic plan for rollout, training, support, policy updates, and ongoing evaluation.

  • What will it really cost?
    A full picture of licensing, setup, training, support, admin time, and longer-term value.

  • What happens if we do nothing?
    An honest answer about whether the problem truly needs a new tool or could be solved another way.

Evan Kypreos was Brand Director of Tech & Learning and Editor of Trusted Reviews and Top Ten Reviews before moving into consultancy. He now advises organisations on content strategy, digital growth, and the practical use of AI. As a techie and father of three, he has a particular interest in how AI is changing education, work, and everyday life.