SHAPING AI POLICY
TRICIA MILLER KLAPHEKE
SCS Faculty Advise Government Officials on Benefits, Threats and Regulations Required for Ethical AI Implementation
The executive order on artificial intelligence that President Joseph Biden announced in October 2023 followed years of conversation between policymakers and academics on how AI can be used responsibly. Though AI is not an entirely new phenomenon, faculty from across CMU have long provided expertise, along with a sense of the specific circumstances in which AI can help policymakers.
Martial Hebert, Dean of SCS
Tom Mitchell, Founders University Professor in Computer Science
SCS Dean Martial Hebert said the revolution around AI reminds him of the digital revolution, except that advances are coming much more rapidly. As with many political issues, the reaction to that is often binary: some leaders fear that AI is evil and could lead to increasingly dangerous outcomes, while others feel AI is a force for good that can solve many problems. The reality, SCS faculty agree, lies somewhere in the middle.
While it is important for experts advising government officials to have a deep understanding of both the technical details and the public policy surrounding AI, sending a consistent message to government entities helps coordinate efforts across individual circumstances. Hebert identified three priorities for the government in crafting policy: training people to use AI, researching potential new uses for AI, and establishing frameworks to make AI trustworthy. Explaining the nuances of each of those priorities can be difficult.
Carnegie Mellon and SCS are uniquely positioned to advise U.S. government officials at the federal, state and local levels on such policies. In addition to having expertise in AI, SCS faculty collaborate broadly across the university, working closely with faculty from the Heinz College of Information Systems and Public Policy, the Dietrich College of Humanities and Social Sciences, and the Tepper School of Business, as well as from other universities across the country, to give recommendations that consider each question from every angle.
Every time AI has had a potentially big impact, Tom Mitchell, Founders University Professor in Computer Science, has been there to advise government officials. He first briefed federal officials about AI in the early 1990s when he spoke to the Information Sciences and Technology (ISAT) committee within the Defense Advanced Research Projects Agency (DARPA), the Department of Defense’s funding agency for research. As a member of ISAT, Mitchell spoke about where technology was heading and where ISAT might want to invest its resources. Since then, he has traveled to Washington frequently, briefing officials from different government branches and agencies about how they should be taking advantage of AI as well as how it should be regulated.
Tom Mitchell (left) meets with Rep. Susie Lee (NV-03) to discuss the latest advancements, research, opportunities and challenges surrounding AI. Also meeting: members of CMU’s Block Center Responsible AI leadership team: Hoda Heidari (center front), an expert in AI ethics, fairness and accountability, and Ramayya Krishnan (right front), faculty director of the Block Center. Not pictured but attending: Rayid Ghani, an expert in AI, policy and social impact.
In the late ’90s Mitchell joined the National Academy of Sciences (NAS) Computer Science and Telecommunications Board. Following the attacks of September 11, 2001, he participated in and chaired NAS’ Workshop on Information Fusion and Counter-Terrorism, briefing participants on the use of AI in counterterrorism. Soon after, he testified before a congressional committee on how AI could help the Veterans Administration process medical claims.
“All of the interactions I’ve had with government people have made me more optimistic about our government function than what I read in the newspapers,” he said.
In 2023 Mitchell attended multiple private meetings with members of Congress and spoke publicly to Senate Republican staffers about large language models. With a nonpartisan think tank called the Special Competitive Studies Project, Mitchell chairs a task force that will give government officials recommendations on generative AI, advising how the U.S. can remain competitive with other countries as officials learn to use the technology ethically. Mitchell also works with the U.S. National Academies on a congressionally mandated study on AI and the future of work. Both will be published in 2024.
Mitchell said three broad principles should guide policymakers as they design the government response: most regulations should target specific applications of AI, not AI at large; a small percentage of regulations should target general-purpose AI tools such as ChatGPT; and ultimately, no matter how diligent policymakers are in anticipating potential issues, some unforeseeable problems will arise, so an organization will need to be in place to address them.
AI Helping Humans Make Good Decisions
Aarti Singh, professor in the Machine Learning Department, briefed members of Congress and their staff for the first time in September 2023, when the National Science Foundation (NSF) brought the leaders of all 25 AI research institutes it funds to Capitol Hill to raise awareness of how AI can positively impact society. Singh is the co-director of the AI Institute for Societal Decision Making (AI-SDM), which opened at Carnegie Mellon in June 2023 thanks to a $20 million, five-year NSF grant.
Researchers from an array of disciplines and institutions collaborate at AI-SDM to advance AI and use it to better inform the decisions people make. The institute focuses on two areas. In public health, researchers are exploring ways AI can identify patients at high risk for certain pregnancy complications early on and connect them with health services.
In the area of emergency management, AI-SDM explores ways to use autonomous robots and drones to reach places too dangerous for humans. Getting into these areas supplies emergency managers with better data for making critical decisions. These challenges are well suited to academic research, since they are not the kind of projects that profit-driven companies have shown interest in.
Singh has continued to answer questions from Congressional staff following the September event.
“My takeaway was that people talk a lot about what AI can do, both positively and negatively, so that leads to both the hype and the fear of it,” said Singh, “but what people talk less about is what AI cannot do and that’s so important to convey.”
Singh will continue to engage with policymakers and the other AI research institutes. NSF hopes to host another showcase for the AI institutes on Capitol Hill, and Carnegie Mellon will host the annual summit for the AI institutes in October 2024.
Aarti Singh, professor in the Machine Learning Department and co-director of AI-SDM
Encouraging Smart Government Acquisition
One of the primary ways the federal government can influence how artificial intelligence evolves, even while it moves slowly to establish regulations and laws, is through its acquisition of technology. As one of the largest buyers in the market, government agencies can encourage the development of ethical, effective technology by buying from companies that build it well. Once such technology is built for the government, it can more easily be adapted for private buyers.
In September Rayid Ghani, Distinguished Career Professor in the Machine Learning Department and the Heinz College of Information Systems and Public Policy, testified before the Senate Homeland Security and Governmental Affairs Committee on Acquisition and Procurement.
“Too often, organizations go on the market to buy AI without completely understanding, defining and scoping the concrete problem they want to tackle, without assessing whether AI should even be part of the solution, and without including individuals and communities that will be affected,” Ghani wrote in his testimony. “AI systems are neither applicable for all problems facing government agencies, nor are they one-size-fits-all. By starting with the concrete problem at hand, and understanding how it’s being tackled today, an effective, collaborative and inclusive scoping process can help determine the requirements that the AI system needs to fulfill.”
Ghani has testified before Congressional committees before and often works with government staff at the local, state and federal levels as they look for technology solutions. He said that each time he testifies before a committee, it starts a conversation that continues as staffers work to understand the nuances of potential policies.
Going forward, Ghani said he sees three avenues where CMU faculty can help government in interesting and impactful ways. The first: helping local governments figure out how to use AI to allocate resources, making sure that the government’s services and supplies reach the people who need them. The second: helping regulatory agencies determine how to use AI responsibly as they audit companies in their jurisdiction and enforce the law. The third: continuing to support the National Institute of Standards and Technology, an agency within the U.S. Department of Commerce, as it develops the broader guidelines that make up the AI Risk Management Framework.
Rayid Ghani (left) testifying before the Senate Homeland Security and Governmental Affairs Committee on Acquisition and Procurement.
Jodi Forlizzi, Herbert A. Simon Professor in Computer Science and the Human-Computer Interaction Institute
Supporting Workers
Jodi Forlizzi, the Herbert A. Simon Professor in Computer Science and the Human-Computer Interaction Institute, is deeply involved with the AFL-CIO’s Technology Institute. As AI continues to reshape the responsibilities of frontline workers, Forlizzi and her HCII collaborators, faculty member Sarah Fox and Ph.D. student Franchesca Spektor, work with the union to think about how these workers can be part of developing technology that makes their jobs easier, not harder.
In October 2023, Forlizzi spoke to the AI Insight Forum on AI Innovation, hosted by two senators from each party. Senate Majority Leader Chuck Schumer (N.Y.) and Senator Martin Heinrich (N.M.) represented the Democrats, and Senators Mike Rounds (S.D.) and Todd Young (Ind.) represented the Republicans.
Forlizzi built on the previous closed-door briefings she had delivered to members of Congress, emphasizing the involvement of workers in the design, development and deployment processes of AI to ensure workers’ expertise is reflected in the AI systems created. At the forum, Forlizzi cited housekeepers at hotels as an example.
“For housekeepers, algorithmic managers (AMs) increase work, increase job requirements and decrease worker autonomy,” she wrote in her testimony. “Instead of letting housekeepers clean rooms in the order that makes the most sense to them based on their ability to complete their room quotas with a minimum of wear and tear on their bodies, AMs send them back and forth and up and down in hallways and elevators, while they push 200- to 300-pound carts. We have heard again and again from housekeepers that the AM ‘wastes my time.’ AMs increase wear on the worker by assigning several check-out rooms, which require heavy cleaning, back-to-back, as opposed to alternating them with the lighter physical requirements of rooms in which only sheets and towels need changing. Housekeeping is also an entry-level job that traditionally did not require technology skills or even fluent English. This, combined with typical connectivity issues, has altered the job of the housekeeper greatly, with little to no increased training or increased compensation.”
In 2024, along with UNITE HERE, an AFL-CIO member that is the largest hospitality union in the U.S., Forlizzi will lead a collaboration that includes a research team, a hospitality training center and a software company that provides algorithmic management solutions for the hospitality industry. Funded by the NSF, the collaborators will develop recommendations to prepare workers for the future.
Forlizzi, who earned her master’s degree in interaction design and her Ph.D. in design in human-computer interaction, both from Carnegie Mellon, said those groundbreaking experiences inform the way she works and advises officials on issues emerging today.
“We were some of the first people doing the kinds of design work that we’re doing today. Our program was really new. In some ways we were making it up as we went and creating new knowledge, so it taught me to be comfortable with uncertainty,” Forlizzi recalled. “You’re making a lot of judgments as a designer and a researcher to try to improve the state of the world, and that’s something I’m still doing.”
Of course, many more CMU AI experts have testified before government officials at all levels, including at the United Nations, and will continue to do so as AI policy develops. At a time when these technologies advance at a dizzying pace, the U.S. government continues to look to CMU for expertise, not only to understand advances in AI and the guardrails needed to keep people safe, but also to implement AI ethically and with fairness for all. ■