RESPONSIBLE AI

Hoda Heidari Seeks to Apply Social Awareness to Developing Technologies

SUSIE CRIBBS

Growing up in an eventful corner of the world like Iran, Hoda Heidari developed a keen interest in history, politics and economics as a teenager. But she also excelled at math and computing, even earning a medal in the Iranian National Mathematics Olympiad as a senior in high school — an uncommon feat for a woman. It wasn't until she earned a bachelor's degree in computer engineering and began exploring graduate programs that she realized she could combine her passions for computing and the social sciences into one career. Thus, her interest in responsible AI was born.

"I believe that our collective values are the bond that holds us together. We must be careful that the technologies we build reflect these values, or we risk the technology tearing us apart," Heidari said. "For better or worse, technologists are in a position to impact people's lives and social dynamics. We need to recognize our power and wield it responsibly."

Heidari completed her Ph.D. at the University of Pennsylvania, working with Michael Kearns, a professor in the Department of Computer and Information Science, on her dissertation, "Essays in Algorithmic Market Design Under Social Constraints." She held postdoctoral appointments at ETH Zurich and Cornell University before joining the School of Computer Science's Machine Learning Department in fall 2020. She knew CMU offered the environment where her interdisciplinary research could thrive.

"Carnegie Mellon appealed to me as my academic home because of its collaborative and multidisciplinary atmosphere," said Heidari, who has a joint appointment in the Societal Computing program in the Institute for Software Research. "I got the impression that CMU wouldn't hold me to the confines of the traditional definition of computer science. I would be not just permitted but encouraged to explore various methods and perspectives, form and grow interdisciplinary collaborations, and take risks."

Now in her second year as an assistant professor, Heidari has begun a robust research program that explores the social and ethical aspects of artificial intelligence. Specifically, she is interested in evaluating bias and unfairness at different stages in the process data scientists and engineers use to create automated decision-making systems — what Heidari refers to as the machine learning pipeline. In creating decision-making tools, researchers first decide whether it is appropriate to apply machine learning technologies to the problem at hand, given the available data. If so, the next step involves inspecting the data and cleaning up any glaring biases. Then, researchers create a statistical model of that data that can be used to make predictions about never-before-seen instances. Next, the model is tested and validated. Finally, if all goes well, the model is deployed in the real world, where it should be monitored for unintended consequences.
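To make those stages concrete, here is a minimal sketch of such a pipeline in Python, using scikit-learn on synthetic data. The toy dataset, the "group" column standing in for a sensitive attribute, and the base-rate check are illustrative assumptions for this article, not Heidari's own code.

    # A minimal sketch of the pipeline stages described above, on synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Stages 1-2: assemble and inspect the data. Here we fabricate a toy
    # dataset with two features, a binary group label and a binary outcome.
    n = 1000
    X = rng.normal(size=(n, 2))
    group = rng.integers(0, 2, size=n)  # stand-in for a sensitive attribute
    y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

    # A "glaring bias" check: compare base rates across groups before modeling.
    for g in (0, 1):
        print(f"group {g} positive rate: {y[group == g].mean():.2f}")

    # Stage 3: fit a statistical model of the data.
    X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
        X, y, group, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)

    # Stage 4: test and validate on held-out, never-before-seen instances.
    preds = model.predict(X_test)
    print("held-out accuracy:", accuracy_score(y_test, preds))

    # Stage 5 (deployment) happens outside this script: log predictions,
    # track per-group outcomes over time and watch for unintended consequences.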

We [Technologists] need to recognize our power and wield it responsibly.
— Hoda Heidari, Assistant Professor in ML and ISR

Each of these stages involves normative choices: Should the technology be used at all? If so, what statistical model is most appropriate for the data? What methods should be used to test and validate the model? Under what conditions will the model be deployed? Will the model's predictions help humans make decisions, or will the tool decide autonomously? And finally, to what extent can we foresee the long-term consequences of deploying the model in real decision-making environments?

These decision points are where Heidari's research comes in. Her work measures unfairness and determines places in the pipeline where it might sneak in. At a time when automation is applied to nearly every problem, her work is more relevant than ever. "My research attempts to understand the origin of algorithmic unfairness in predictions produced by machine learning models. For example, is the quality of the input data the main culprit, or does the type of statistical model fail to capture the distinct statistical patterns in various data segments?" she said.
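As one hedged illustration of what measuring unfairness can look like in practice, the short Python sketch below compares positive-prediction rates and error rates across two groups. The specific metrics (a demographic parity gap and per-group error) are common choices picked here for exposition; Heidari's research studies many such measures and traces where in the pipeline disparities originate.

    # Illustrative fairness metrics over a model's predictions; the metric
    # choices and toy arrays are this article's assumptions, not Heidari's code.
    import numpy as np

    def demographic_parity_gap(preds, group):
        """Absolute gap in positive-prediction rates between two groups."""
        return abs(preds[group == 0].mean() - preds[group == 1].mean())

    def per_group_error(preds, labels, group):
        """Error rate within each group. A large gap can suggest a model
        that fits one data segment's statistical patterns better than another's."""
        return {g: (preds[group == g] != labels[group == g]).mean() for g in (0, 1)}

    # Toy stand-ins for a validated model's test-set output.
    preds  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    labels = np.array([1, 0, 1, 0, 0, 1, 1, 0])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
    print("demographic parity gap:", demographic_parity_gap(preds, group))
    print("per-group error rates:", per_group_error(preds, labels, group))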

Heidari's work in this area got a boost last year from an NSF-Amazon Fairness in AI Award for her project "Achieving Fair Societal Outcomes in ML Applications to Education, Criminal Justice, and Health and Human Services." She was one of three CMU faculty members to earn the award. The trio's conversations sparked the Responsible AI initiative, a university-wide effort dedicated to designing, developing and deploying AI responsibly to provide effective mechanisms for accountability and transparency and create a more just and equitable world.

"CMU has always been at the forefront of artificial intelligence, and we firmly believe that it should and will play a similar trailblazing role in Responsible AI. What distinguishes us from other research institutes of the same caliber is our willingness to engage with a variety of expertise and experiences. I believe this openness is vital to progress in the Responsible AI domain," Heidari said. "Our faculty have already made tangible positive impacts beyond academic circles. The goal of the Responsible AI initiative is to amplify that impact and give visibility to the great work happening here at CMU."

Clearly, Heidari has already begun making her mark, even though she started her CMU career during the COVID-19 pandemic and wasn't even on campus until last year. "My favorite memory was getting the keys to my office in the Gates-Hillman Complex after a year here," she said. "That was an exciting moment. It made it real for me that I am a faculty member now, and I have the chance to build my research agenda at my dream academic home." ■