How AI Can Help Society


When I speak with Dr. Atri Rudra and refer to the University at Buffalo’s new “AI department,” he quickly and gently corrects me: “It's not the new AI department, but the AI and Society Department,” he says.

Rudra is the chair of the brand-new department, which was formed to research how AI can be developed responsibly in ways that serve society. At the same time, the department will help students gain an understanding of AI while also thinking about the technology’s broader implications.

As such, the Department of AI and Society holds lessons for anyone interested in AI and education.

AI Can Be Developed With More Than Tech Company Profits in Mind

Currently, the best-known AI tools are developed by large companies that have profit as their primary goal.

“The incentives in place for companies are not necessarily to build AI systems that are hyper-focused on society,” Rudra says. Instead, these AI tools are built with a focus on benchmarks such as how fast specific tasks can be performed or how well people are engaged.

Rudra wants to see the Department of AI and Society become a kind of incubator for AI that is built from the ground up with society in mind, so it’s “not something that you fine-tune for later on,” he says. “[We can] build prototypes and show that these kinds of systems can be built. I'm not claiming that we're going to build something like ChatGPT for society. We don't have the resources, but the idea is to show that building these new kinds of AI systems that are based primarily for society can be done.”

AI Doesn't Have To Be A Black Box

Rudra would like AI developers to get more community input on AI decision-making processes, but one challenge is that it's often unclear exactly why an AI model makes the decisions it does, even to the machine learning (ML) engineers who designed it.

“Many of these ML engineers also don't know what's going on,” he says. “They try stuff and it works, but we have a very poor understanding of why these models work. So most of ML is, 'I ran it and it worked well.' But if you ask them, ‘Can you explain why?’ it's very hard.”

There have “been people who are working on doing some post hoc things,” he adds, “like when you already have built a model, can you make it more explainable?” This is one of the issues Rudra hopes students and faculty at his new department will be able to explore, and one of the larger open questions in AI development more broadly.

Those Working In AI Don’t Need To Be Afraid of Math

When it comes to students joining AI programs, Rudra thinks there is an opportunity to attract a wide swath of students if more could get over their fear of math.

“There's a lot of math phobia out there, which I think is misplaced in the sense that I think people fear math more because of a lack of exposure, not because the math is inherently hard,” he says. Some of this may have to do with the abstract way math is taught, he adds, noting that in his experience, students do better with math when it is attached to something real and practical, such as programming a system.

Beyond that, in some ways, understanding AI can involve less math than people realize. “To get an understanding of how these things work, you just need some concepts from algebra and some concepts from probability,” he says. “Sure, if you know calculus, some of these things become easier, but it's not inherent to these concepts.”

That’s also part of the Department of AI and Society’s overall philosophy, which brings students from different backgrounds together into groups that explore ways AI can be improved to help solve real-life problems.

“We will be very deliberate about making sure that it's not a group with all students from one discipline, because that sort of defeats the purpose,” he says. “Solving these problems—you can't do it in one discipline, probably not even two. So you need people to talk to each other.”

Erik Ofgang

Erik Ofgang is a Tech & Learning contributor. A journalist, author and educator, his work has appeared in The New York Times, the Washington Post, the Smithsonian, The Atlantic, and the Associated Press. He currently teaches in Western Connecticut State University’s MFA program. While a staff writer at Connecticut Magazine, he won a Society of Professional Journalists award for his education reporting. He is interested in how humans learn and how technology can make that more effective.