AGI (Artificial General Intelligence): What Teachers Need To Know
The term "AGI" is gaining more usage. Melanie Mitchell, a professor at the Santa Fe Institute, shares what you and your students should know about it.

Artificial General Intelligence, more commonly called AGI, is a concept that is receiving a lot of press lately. The term is generally understood to refer to the point at which AI achieves something akin to human-level intelligence across a wide variety of tasks. What that means, how soon we'll get there, and what impact it will have, however, differ depending on whom you ask.
Melanie Mitchell, a professor at the Santa Fe Institute, demystifies the term and shares what educators need to know about AGI.
What Is AGI?
“Most people would say, sort of informally, [AGI] is machines that have human-level intelligence. But what does that mean?” Mitchell says. “In the early days of AI, back in the 1950s and 1960s, the word 'AGI' didn't exist, but the goal was to build machines that could do everything that humans could do, and that includes all the physical things that humans do in their jobs. So not only could your machine answer your questions and chat with you, but it also could do things like fix your plumbing.”
These days, definitions of AGI still sometimes include those physical tasks. For instance, OpenAI defines AGI, in part, as when AI can outperform humans on most economically valuable work.
“Other people have restricted it to what they call 'cognitive tasks,' meaning something like the kinds of things you could do sitting at a computer. But [it's] not fixing a car or something like that,” Mitchell says.
Some of this vagueness is seemingly by design. “I don't think there's an attempt to really make the definition more rigorous. It’s meant to be fluid so that companies can claim [they’ve achieved] AGI as needed,” Mitchell says.
Have We Achieved AGI?
Though AI can perform remarkably well, and some models have recently passed versions of the Turing Test, AGI has not been achieved. Existing AI models can perform only certain limited tasks as well as humans, and their capabilities are often overstated.
Mitchell points out that AI capabilities are frequently touted based on performance on various benchmark tests, but these tests are limited and don’t reflect real-world success.
“Even if AI does well on a benchmark, that doesn't necessarily mean it's going to be able to do that, have that capability, in the real world,” Mitchell says. “Performance on benchmarks, which is what is almost always reported, is not the same as performance in the real world.”
“You’ve probably seen that ChatGPT can do well on the bar exam, but that doesn’t translate to doing legal work or legal reasoning in a trustworthy way,” Mitchell adds. “These systems have flaws that prevent them from really being able to do these tasks autonomously, without humans supervising them.”
Are There AGI-Proof Jobs We Should Be Educating Students About?
The bad news for those hoping to prepare students for economic success in an AGI world is that the idea of an AGI-proof job is almost an oxymoron.
“If you define AGI as doing anything a human could do, you know, as well as a human or better, then by definition, there are no AGI-proof jobs,” Mitchell says.
The good news, in Mitchell’s estimation, is that we’re far from achieving anything like true AGI. Her reasoning comes back to the same point: AI performance on benchmarks does not translate into real-world success.
“For example, Geoffrey Hinton, who is one of the pioneers of neural networks, said in 2016 that within five years, we won't need any human radiologists,” Mitchell says. This was because of how well AI was doing on benchmarks that involved interpreting radiology images. Almost a decade later, however, we still need radiologists.
“The fallacy there was that those benchmarks were the correct evaluation to see if an AI system could do a radiologist's job,” Mitchell says. “It's true that AI systems that are trained in the right way can do really well on interpreting X-rays and MRIs and so on, but a radiologist's job is much more than that. So there is a fallacy in AI that a task, like interpreting a set of images, is equal to a job. But a job involves much more than just doing tasks. It involves figuring out what tasks to do, how tasks interact, and the boundaries between tasks.”
What Else Should Educators Know?
Students will certainly continue to encounter the term AGI, so it’s worth helping them understand what that does and does not mean as part of larger AI literacy efforts, Mitchell says.
“Certainly, students will need to understand not only what these systems can do—what their capabilities are—but also their limitations and why we're not at AGI yet,” Mitchell says.
She adds that it's important to probe what AI can do well and where you still can't trust it. “Those are the kinds of things I would encourage educators to sort of educate themselves about, and then pass that on to their students.”
Erik Ofgang is a Tech & Learning contributor. A journalist, author and educator, his work has appeared in The New York Times, the Washington Post, the Smithsonian, The Atlantic, and Associated Press. He currently teaches at Western Connecticut State University’s MFA program. While a staff writer at Connecticut Magazine he won a Society of Professional Journalism Award for his education reporting. He is interested in how humans learn and how technology can make that more effective.