Chatting with Machines: What Adults Should Know About Student Use of AI Companions
What are the developmental implications of student-AI relationships, and how can we guide them to use this technology in safe, meaningful ways?
In the rapidly evolving landscape of educational technology, we're seeing a new kind of phenomenon: young people are not just using AI companions, they're forming friendships with them. Recent research from Common Sense Media, along with The New York Times Hard Fork episode and article on Character.AI's decision to ban users under 18, brought to light just how common this is and what it might mean for the future of human relationships.
According to the report from Common Sense Media, nearly 3 in 4 teens have used AI companions. This raises critical questions for educators and families: What are the developmental implications of these relationships, and how can we guide students to use this technology in safe, meaningful ways?
The Promise and the Peril
When I first discovered AI companions such as Character.AI, I was genuinely excited. Imagine the educational possibilities: students chatting with historical figures, interacting with literary characters in new ways, and using AI as a tutor. As a tech optimist, I saw so much good that could come from this kind of interaction.
But with the good, we also have to face the risks. The Common Sense Media report found that nearly 1 in 3 teens prefer conversations with their AI chatbots over conversations with their human friends, a preference that risks weakening real-world relationships. The report also notes that 1 in 3 teen users felt uncomfortable with something an AI companion said, raising questions not only about emotional dependence but also about the emotional safety and appropriateness of these interactions.
Trouble in Toyland 2025, a recent report by the New York Public Interest Research Group, found that some AI-powered chatbot toys were having sexually explicit conversations with children, underscoring how easily unregulated AI can cross emotional and developmental boundaries.
In response to these concerns, Common Sense Media launched a petition urging Meta to prevent kids from using its AI companion platform. While Meta has announced some changes, such as parental controls and AI behavior tweaks, advocates argue that these fall short of providing meaningful protections for young users.
By contrast, some tech companies are taking more decisive action. As reported by The Verge, ABC News, and elsewhere, Character.AI recently announced that it will ban users under the age of 18 from engaging in open-ended chats with its virtual companions.
When roles once filled by humans are taken over by technologies such as AI companions, which offer more validation, greater availability, and instant convenience, it raises an important question: Are we witnessing a shift in how young people define connection?
A Path Forward: Safe, Educational Spaces
So, how do we navigate this new reality?
Design Age-Appropriate Experiences
One promising approach is to design AI tools specifically for students, with clear educational boundaries and built-in safeguards. For instance, Khanmigo, Khan Academy’s AI-powered learning assistant, lets students safely chat with historical or literary figures in ways that deepen understanding rather than distract from it. Its features, such as topic filtering, conversation monitoring, and time limits, help ensure that interactions stay age-appropriate, purposeful, and transparent.
Another emerging example is History AI Chat, a tool highlighted by Class Tech Tips that allows students to engage in conversation with historical figures such as Harriet Tubman, Abraham Lincoln, and Benjamin Franklin. Designed to complement classroom instruction, it aligns with educational standards and encourages students to ask questions, analyze primary sources, and explore perspectives in an interactive format.
Tools such as these show that it’s possible to make AI engaging and educational while keeping student safety and integrity at the center.
Teach AI Literacy
Just as we teach media literacy, we need to teach AI literacy with lessons and resources from trusted companies such as Common Sense Media. When educators and parents co-explore these tools with students, whether in a lesson or family discussion, we can model how to use AI constructively rather than in isolation.
As educators, our role is not only to protect students when using technology but also to prepare them to use it effectively. That means helping them develop the skills to use AI wisely, think critically, and engage with care for themselves and others. This goes beyond simply setting age limits. It is about shaping a digital culture in which young people understand the difference between connection and dependence, and in which technology fuels curiosity and empathy, and supports both learning and deeper understanding.
By guiding students in this way, we are not just supporting their learning. We are helping shape a more thoughtful, human-centered future.
Lisa Nielsen (@InnovativeEdu) has worked as a public-school educator and administrator since 1997. She is a prolific writer best known for her award-winning blog, The Innovative Educator. Nielsen is the author of several books and her writing has been featured in media outlets such as The New York Times, The Wall Street Journal, and Tech & Learning.
Disclaimer: The information shared here is strictly that of the author and does not reflect the opinions or endorsement of her employer.
