Amplifying Human Skills in the Age of AI: Using The C.A.R.E.S. Framework
Why the work teachers already do becomes even more vital with new technologies
In my work with aspiring educational leaders, I have increasingly encouraged them to approach artificial intelligence not with fear, but with purpose and responsibility. At the end of a recent course, I received a card from a student that read, “In a world of artificial intelligence, thank you for instilling human intelligence in the course.”
The message touched me deeply because it captured something essential that many leaders and educators are grappling with right now. As schools consider how to integrate AI into instruction, policy, and professional learning, we need a compass that reminds us what must remain fundamentally human in teaching and learning.
I offer one such compass, a framework that I call C.A.R.E.S.
Teachers are being asked to navigate a new instructional landscape. AI now drafts lessons, analyzes student work, creates rubrics, and generates feedback. These capabilities will only expand, yet the growing presence of AI makes something else increasingly clear: the core of teaching remains human. Students do not grow simply because information is delivered to them, but because someone understands them, believes in them, and creates a learning environment where they feel safe to take intellectual risks.
This is why the conversation about AI in schools must center on human capacity. Jason Wingard argues that the future belongs to individuals who demonstrate discernment, empathy, adaptability, creativity, and ethical judgment. AI may support instruction, but it cannot replace the intellectual and relational labor that helps young people become thoughtful, capable humans.
Recognizing this, the challenge is not to compete with AI, but to amplify the strengths that teachers already bring to the classroom. To thrive in the AI era, teachers must exercise the C.A.R.E.S. model: the core human strengths that remain irreplaceable in classrooms.
The C.A.R.E.S. Framework
Human skills that remain essential in the age of AI.
| Letter | Human skill | What it makes possible |
| --- | --- | --- |
| C | Cultural competence and curiosity | Builds belonging and honors identity, community, and lived experience. |
| A | Adaptability and instructional agility | Responds to student needs in real time and follows learning, not scripts. |
| R | Relationships and empathy | Creates trust, emotional safety, and motivation for learning. |
| E | Ethical judgment | Uses AI responsibly and protects dignity, fairness, and student consent. |
| S | Scholarly discernment and critical thinking | Evaluates content, identifies bias, and teaches students to question AI outputs. |
C - Cultural Competence and Curiosity
AI can reference cultural information, but it cannot build genuine trust or ask, “Tell me more about your story.” Curiosity and humility are human skills that build belonging and deepen engagement. When teachers learn from families and community strengths rather than assuming they already know them, students experience dignity in learning.
Cultural competence also helps teachers interrogate bias in AI systems and avoid digital tools that do not reflect student realities. Tools may feel neutral but can reinforce stereotypes or leave out voices that matter. Educators grounded in curiosity and identity work are more equipped to question and adapt digital content, ensuring that technology supports equity rather than undermining it.
A - Adaptability and Instructional Agility
As Dan Meyer has noted, technology frequently struggles with the “last mile” of learning: it can generate a lesson or practice task, but it cannot guarantee understanding or belonging.
AI is improving in this area, yet it remains especially limited in meeting the needs of marginalized groups, including multilingual learners and students with disabilities, whose learning often depends on nuanced scaffolds, language supports, relational cues, and culturally responsive adjustments that no algorithm can fully anticipate. The last mile of learning requires human interpretation and care: only teachers can notice which supports to adjust, which barriers to remove, and which moments call for affirmation, translation, or re-teaching so that every student is truly seen and reached.
Used without care, AI can overwhelm teachers with more content without actually supporting students who need the most help. Skilled educators know when technology supports deeper learning and when it distracts from it. The goal is not more lessons. The goal is lessons that reach every learner.
R - Relationships and Empathy
Even as AI becomes more capable, it cannot build the relationships that inspire students to try again after failure or open up about challenges. Students thrive when they feel seen by an adult who understands their emotions, notices small changes, and communicates belief in their potential. Algorithms can detect errors in writing, but cannot detect discouragement in a child’s eyes.
Social emotional learning is not a plug-in. It is a human commitment. AI can reinforce routines and feedback, but it is the teacher who co-regulates stress, celebrates effort, and strengthens confidence. Empathy is not sentimental; it is instructional strategy. Students will not risk making mistakes or pursuing ambitious questions for someone who does not know them.
Recent reporting has raised questions about whether highly automated, AI-driven learning environments can meaningfully support all learners, especially those who need consistent human connection and responsive care. The implication is clear: technology should expand equity, not replace empathy and teacher-student relationships.
E - Ethical Judgment
AI brings real questions: Whose data trains it? Who benefits? What assumptions shape its outputs? When do technology tools support learning, and when do they risk harm? Ethical teachers weigh context, equity, privacy, and fairness. They model responsible use and help students make informed choices about their own digital actions. AI can process data; humans must decide what is right.
Ethical judgment also demands noticing when technology amplifies bias or inequity. Teachers must think critically about when to use AI and when to prioritize human processes, voice, and expression.
Doing the ethical thing is often slower, messier, and relational — but it protects student dignity and trust. Technology can produce efficiency. Ethics protects humanity.
S - Scholarly Discernment and Critical Thinking
AI can produce polished language and convincing answers, but polished does not always mean accurate, fair, or wise. Teachers must evaluate information, interrogate bias, and help students understand that technology outputs are not neutral truth. This now includes AI literacy: identifying synthetic media, understanding misinformation dynamics, and recognizing algorithmic bias.
Critical thinking is not obsolete in the AI era; it is the point. A teacher must help students distinguish between a coherent answer and a meaningful one, and empower them to keep their voice and curiosity intact even when technology can speak for them. AI can produce essays; humans produce originality, agency, and reflection.
Conclusion
In response to AI, some schools have turned to handwriting requirements to preserve academic integrity and student voice. The instinct to protect authentic thought is understandable, but simply returning to pre-AI routines cannot be the full answer. Handwriting has a place in learning, especially when it supports memory and reflection, but handwritten essays alone will not prepare students for a world shaped by AI.
The goal is not to retreat from innovation or to treat technology as a threat; it is to strengthen the human thinking, expression, and connection that AI cannot replicate. We do not need students to write like machines to prove they are not using one. We need them to think like people.
AI may shape tasks, but teachers shape lives. Students do not need educators who compete with machines; they need educators who lead with humanity.
Dr. Andy Szeto is an education leadership professor and district administrator with extensive experience in instructional leadership, AI in education, and professional development. His book Leading Before the Title is forthcoming in December 2025. He has authored numerous articles on leadership, social studies, and AI integration, teaches graduate courses in leadership and instructional improvement, and writes Lead Forward, a blog focused on practical, human-centered leadership.