
DANIELLE HOFFMAN

Morality and Artificial Intelligence

It might seem as if Artificial Intelligence is everywhere these days, and that its emergence is a recent phenomenon. But since the early days of computer science, when punch cards were being fed into room-sized mainframes, the concept and importance of AI have shown up consistently in academia, industry and across our broader culture. Depending on the media you consume, you might be thrilled by the possibilities or terrified of the implications.


Vincent Conitzer, Professor of Computer Science and Director of the Foundations of Cooperative AI Lab (FOCAL).

The implications go far beyond a monotone robot voice droning, “I’m sorry, Dave, I’m afraid I can’t do that,” as HAL, the famous rogue AI, says in “2001: A Space Odyssey.” Modern AI encompasses everything from students using ChatGPT to write papers, to the Turkish media circulating convincing deepfake videos of political opponents, to a health care system using an AI trained on biased data, resulting in white patients being dramatically prioritized over black patients with the exact same level of illness.

The need for ethical boundaries around AI has never been more pressing.

Carnegie Mellon University, often credited as a “birthplace of Artificial Intelligence,” is well equipped to help guide this conversation. As Anand Rao, a professor of applied data science and AI in the Heinz College of Information Systems and Public Policy, said: “One of the reasons I came to CMU was the way the university is very deeply rooted in the technology, while at the same time looking clearly at the implications on society.”

But how can CMU help make sure that the tech leaders of tomorrow will be aware of those implications? How can we ensure that the AI we build makes moral choices? And beyond that lies a greater question: What is morality when it comes to AI?

To help answer some of these questions, we turn to Vincent Conitzer, a renowned artificial intelligence researcher and ethicist at CMU and co-author of “Moral AI and How We Get There,” written with neuroscientist Jana Schaich Borg and philosopher Walter Sinnott-Armstrong.

Defining AI

American computer and cognitive scientist John McCarthy coined the term “artificial intelligence” in 1955, calling it “the science and engineering of making intelligent machines.” McCarthy went on to say that “intelligence is the computational part of the ability to achieve goals in the world.” Meanwhile, AI pioneer Alan Turing’s famous test holds that if a machine can carry on a conversation with a human, and the human cannot tell whether they are conversing with another human or with a machine, that machine has demonstrated human-level intelligence.

Today Turing and McCarthy would likely call our powerful computing machines and modern algorithms very intelligent indeed — yet we keep moving the goalposts of what constitutes true intelligence. Conitzer and his co-authors call this the AI effect. “Once we know how to write a computer program that can perform a task we once thought required intelligence, such as navigating mazes, we no longer think the computer program’s solution to the task is really intelligence. Instead, it becomes just another algorithm.”

Therefore, the authors of “Moral AI” use the definition provided by the US National Artificial Intelligence Initiative Act of 2020: “A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments with sufficient reliability.”

A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments with sufficient reliability.
— US National Artificial Intelligence Initiative Act of 2020

Anand Rao, Distinguished Services Professor of Applied Data Science and AI in CMU’s Heinz College of Information Systems and Public Policy

Toward Making AI Fair

Anand Rao, Distinguished Services Professor of Applied Data Science and AI in CMU’s Heinz College of Information Systems and Public Policy, worked in AI and business for 30 years before landing in academia, giving him a front-row seat to moral AI discussions over the years, many of which center on how AI can act ethically in difficult situations.

One common thought experiment involving such ethical dilemmas is the trolley problem. There are innumerable iterations, but in the classic version, five people are tied to a trolley track and a trolley is headed straight for them. You can flip a switch to divert the trolley to another track, but one person is walking along that track. Do you knowingly kill one person to save five?

“Early on in the late ‘80s and ‘90s, no one really worried too much about ethics,” Rao said. “If you brought up the trolley problem, they’d say ‘that’s a philosopher’s problem, and not relevant to what we are doing.’ The focus was much more around how do we plan efficiently to get the robot from point A to point B? I’m really happy it moves from this room to the other room, rather than worrying about whether it will harm someone.”

Rao said it wasn’t until 2014 that people started to get serious about ethical AI. That was the year Oxford philosopher Nick Bostrom published his sobering polemic “Superintelligence: Paths, Dangers, Strategies,” which landed on the New York Times best-seller list for science books and prompted Tesla and SpaceX founder Elon Musk to assert that artificial intelligence was potentially more dangerous than nuclear weapons.

Applying ethics to AI is especially important for engineers and other technical professionals who may not have received any training in ethics or fairness, studied those fields, or been given specific guidelines.

A few years later, the Future of Life Institute, a nonprofit made up of academics and tech industry leaders, recognized both the tremendous potential and the alarming risks of AI and put together the Asilomar Conference on Beneficial AI. At this 2017 conference, participants developed the 23 Asilomar AI Principles, one of the earliest and most influential sets of AI governance principles.

“Academically, we’re going very deep in many of these areas,” Rao said. “There’s a lot of detail in a narrow sense. Now, let’s say I’m just a company person building a model in a bank. For them, it is really daunting — they know how to build a model, now you come and tell them it needs to be ‘fair.’”

In “Moral AI,” Conitzer and his co-authors address the transition from establishing principles and philosophies to putting them into practice. While these principles and commitments might sound good, they write, ethical issues persist, and until we address the nuances of how AI is actually created, from inadequate communication to a lack of clear metrics, those issues aren’t going away.

Teaching Moral AI at CMU

Rising amid the cluster of Beaux-Arts buildings that make up CMU’s campus are the glass and steel Gates-Hillman Centers, where some of the world’s top minds in computer science focus on ethics, fairness and AI.

During the academic year, Vincent Conitzer, one of those top minds, runs the Foundations of Cooperative AI Lab (FOCAL). He spends his summers at the Institute for Ethics in AI at the University of Oxford. Like Rao, Conitzer uses the trolley problem as an example of how thorny ethical quandaries can be.

“The point of the trolley problem is that it’s not that easy. And if you think that something simple will do — I’m just going to save the maximum number of people — well, it’s much more complicated than that. We’re running into problems that have been studied in philosophy, except now with AI systems they’re real and concrete. And it’s not just ethics, but also, for example, philosophy of mind.”

If it’s a matter of sacrificing one person to save five, the problem seems relatively easy to solve. But the variations that arise when applying AI make things trickier. What if you’re driving a car, and you know that if you swerve to avoid the child in the road, you’ll kill two adults? What’s the moral choice there? And what if it’s a computer driving that car? Can we give AI the tools to make these moral decisions, given that every scenario will have thousands of variations?

And of course, students need to want to tackle those issues. Conitzer tells a story about an Intro to AI course he taught for undergraduates, in which he sprinkled ethical instruction and assignments among the technical work.

“Some students liked it,” Conitzer said, “but in the middle of the semester we asked for some feedback, and a student said: ‘Can we just cut out that ethics stuff?’ It illustrates a little of the difficulty. In some ways technical work feels more satisfying to the student, but it doesn’t make the ethics issues any less important.”


Jay Aronson, Professor of Science, Technology and Society in CMU’s History Department

Moral AI Across CMU

Faculty members tackling the ethical questions of AI can be found in almost every department across campus: from Alex John London and Hoda Heidari’s pioneering work in philosophy and computer science at the K&L Gates Initiative in Ethics and Computational Technologies; to the Block Center for Technology and Society, where Ramayya Krishnan, dean of the Heinz College of Information Systems and Public Policy, leads faculty in research around the future of work and responsible AI.

These questions are even addressed in CMU’s History Department, where Jay Aronson, professor of science, technology and society, thinks a lot about ethics and AI. He runs the Center for Human Rights Science and teaches an undergraduate course called “Killer Robots? The Ethics, Law and Politics of Drones and AI in War.”

“I decided to teach it because it’s interesting, but also, it was a little bit of a bait and switch,” said Aronson. “I think a lot of CMU students are interested in autonomy and technology, and they hear the word ethics and think, ‘Oh, I know what ethics is.’ But when I get them in class, I spend the majority of time teaching them about international law and the potential moral and strategic hazards of using technology in war.” The course is not just about the technology, he adds, but about actively questioning how that technology is used. The mere fact that we can develop these advanced technologies isn’t reason enough to dive in; we need to consider the ramifications and decide whether we should.

“I want them to think about their place in the broader world, and what the implications of their intellectual efforts might mean,” Aronson said. “Get them to think from a life-saving — rather than a life-taking — perspective. My students tell me that they crave opportunities to think through the implications of their work.”

I want them to think about their place in the broader world, and what the implications of their intellectual efforts might mean.
— Jay Aronson

Of course, AI encompasses so much more than autonomous vehicles on the ground and drones in the air. In “Moral AI,” Conitzer and his co-authors relay that there is “both bad news that gives us reason to be worried about some uses of AI, and good news that gives us reason to advocate for other uses.”

That good news includes The Los Angeles Times’ Quakebot algorithm, which warns readers about California earthquakes more quickly and accurately than traditional media. It also includes the lives saved by a complex kidney exchange network, the work of Tuomas Sandholm, Angel Jordan University Professor in the Computer Science Department, made possible by AI. AI might even save the planet, as researchers use AI systems to visualize future floods and wildfires, monitor forests and improve decision-making about the climate.

Even with all this good news, as AI becomes more sophisticated, Conitzer raises some sobering possibilities: What if AI is used to control our physical environment? We’re already using AI to plan tactical strikes; how much future control over cybersecurity or autonomous weapons systems will we hand over to AI? Yet, Conitzer remains cautiously optimistic, especially when it comes to not only reflecting on, but improving upon, human morality.

“Humans make important moral judgments,” Conitzer points out. “Problems arise in decisions about which targets to attack in war, which criminal defendants to grant bail, whether to brake (and which way to turn) a car in an apparent emergency, and many other cases. If AI could help humans make better moral judgments in just a few of these cases, it could benefit us tremendously.” ■

Moral AI And How We Get There

Can we build and use AI ethically?

That’s the timely topic of “Moral AI and How We Get There” by SCS Professor Vincent Conitzer, co-authored by Duke University professor of neuroscience Jana Schaich Borg and Duke University philosophy professor Walter Sinnott-Armstrong.

“Moral AI” resists painting apocalyptic scenarios or waxing poetic about utopias; instead, the authors take a more sensible approach, clearly laying out what we know about AI right now, the ways in which it is already being used for good and ill, and the decisions that lie ahead. The book is not just a primer on the challenges and opportunities of ethical AI; it also lays out concrete calls to action that today’s technology, policy and business leaders could put into practice tomorrow. The authors encourage careful reflection on the values we want AI to embody and suggest ways in which we can program AI to be more moral than we are.

Though the book was written by academics, the authors wanted “Moral AI” to be accessible and wrote it in a conversational style. Conitzer said the goal was a basic introduction to the moral issues around AI that anyone could pick up and learn from, regardless of their background in the field. ■

Book cover of Moral AI and How We Get There