digital hands and voting ballots

SHERI HALL

AI and the Election

Across the political spectrum, candidates running for election this year are using artificial intelligence to connect with voters and criticize their opponents.

Within hours of President Joe Biden announcing his reelection bid in April 2023, the Republican National Committee released a video ad imagining what would happen if Biden won the 2024 presidential campaign. In the top left corner of the ad, there was a note in a small font that announced the ad was “Built entirely with AI imagery.” At least one Democratic Congressional candidate is using a generative AI robot to call potential voters.

Without a doubt, generative AI has tremendous potential to influence voters this election year and going forward. Researchers at Carnegie Mellon University’s School of Computer Science are using this opportunity to educate voters on how to identify and evaluate AI-created content, and to better understand how AI influences elections.

Compared with the last election, the ability to use artificial intelligence to create fake images, phone calls and stories online is much higher and much more sophisticated,” said Kathleen M. Carley, director of the Center for Computational Analysis of Social and Organizational Systems in the School of Computer Science. “There is also more potential to use generative AI for good.”

This means informed voters must be able to identify materials created with AI and interpret the objectives of the creators. Carnegie Mellon researchers working to guide the public in understanding how AI is being used this election cycle are also striving to help policymakers build laws and rules around its use.

“Generative AI is a powerful technology,” said Hoda Heidari, the K&L Gates Career Development Assistant Professor in Ethics and Computational Technologies and co-leader of CMU’s Responsible AI initiative. “But we don’t fully understand yet under what conditions it works as intended, and under what conditions it doesn’t. We’re looking into reliable, valid ways of evaluating this technology in a range of settings — and that includes how it is used in political campaigns.”

Kathleen M. Carley

Kathleen M. Carley, Director of the Center for Computational Analysis of Social and Organizational Systems

Hoda Heidari

Hoda Heidari, the K&L Gates Career Development Assistant Professor in Ethics and Computational Technologies and co-leader of CMU’s Responsible AI initiative.

Yonatan Bisk

Yonatan Bisk, Assistant Professor of Computer Science at the Language Technologies Institute

Types of Misuse

A major aim of research for SCS identifies the different ways generative AI can be misused, Heidari said. “This technology has been made publicly available without adequate guardrails,” she said. “We have already seen numerous examples of how it can be used in malicious
or otherwise questionable ways.”

It’s important to recognize that AI itself is not harmful, Carley explained; but people can use AI in harmful ways. Researchers categorize the misuse of generative AI into two main areas. The first is disinformation, which refers to inaccurate information purposely spread by people or robots with the intent to mislead. Secondly, there is misinformation, which is also inaccurate, but the sender doesn’t realize it and they spread these falsehoods without malicious intent.

A special class of disinformation known as deepfakes involves content that looks and sounds real, but has been completely generated by a computer. Deepfakes come in the form of audio, video, images, memes and even news stories that contain disinformation but appear to be completely true. Researchers have found evidence that online actors are using AI, sometimes in the form of bots, to spread disinformation, including deepfakes, on a daily basis. For example, clear data shows that foreign entities are using generative AI on quasi-news and social media platforms to influence U.S. voters.

Carley, whose research combines cognitive science, social networks and computer science, is currently conducting a global investigation to analyze a type of deepfake called “pink slime,” or hyper-localized fake news websites that contain completely made-up stories.

“With the demise of many local newspapers, we’ve seen a sharp increase in these fake local news sites,” said Carley.

“You will see news websites in two completely different towns that use the exact same story and words, but just change the photos and quotes as if they came from local officials. Our project is trying to identify these sites and then quantify how much misinformation they are spreading.

“Unfortunately, we have seen them have an impact in getting people to donate money to fake causes, and also in getting people to believe a candidate has more support than they actually do,” she said.

This project is just one example of how CMU researchers are working to identify and quantify the effects of AI misuse — an important first step to ensuring that AI is used fairly and productively.

Will AI Affect Election Results?

Despite specific examples of misuse, it is extremely difficult to quantify AI’s overall effect on elections and the democratic process because currently there is no concrete way to measure its cumulative impact over time.

“If you pick out any one instance of GenAI misuse, one could argue it is unlikely to have changed the outcome of the election,” Heidari explained. “But over time and as instances accumulate, the technology can contribute to the erosion of trust in the democratic process. We may get to a point where many voters won’t believe what they see or hear, simply relying on their gut feelings and emotions, instead of facts and reason, to decide how to vote.”

“Ideally, we want the democratic process to be a rigorous debate and exchange of information about candidates and their policies, rather than an uninformed majority vote.”

Heidari compares the impact of generative AI on elections to how social media has led to declining mental health of adolescents over the past decade. “In both cases, we are dealing with diffuse harm,” she said.“ With this type of harm, it’s difficult to pinpoint whether any one interaction has tipped the scale. There aren’t many identifiable victims, yet over time, we observe a huge impact.”

Broadly, Heidari’s research addresses issues of accountability and governance of AI, and specifically how to evaluate the negative impacts of AI tools on individuals and society. For instance, she studies a technique called red-teaming for GenAI, which involves stress-testing the model to assess the risk of it producing problematic content, such as misleading information about the electoral process.

“The results of the risk assessment can then be utilized to decide, ‘How can I prevent harms by adding appropriate guardrails?’” Heidari said. These guardrails can be technological, such as prompt-based filters, or policy-based, such as guidelines and regulations surrounding the use of GenAI in elections.

Both are important in ensuring the responsible use of AI, said Heidari. “Both are important, and they work in complementary ways,” she said.

Generative AI for Good

Although there is ample potential for the misuse of generative AI, researchers agree it’s critically important to highlight the positive uses for the technology as well.

“The goal is to create robots that are actually helpful to society,” said Yonatan Bisk, assistant professor of computer science at the Language Technologies Institute. Bisk’s research focuses on how robots use language to communicate with humans.

There are many examples of generative AI content used for good, such as alerting communities when a wildfire is approaching. In elections, generative AI can help candidates reach voters who might otherwise be disenfranchised by translating campaign messages into different languages or condensing policy proposals into summaries that are easier to understand.

“The problem is, we’re not seeing it used for good as much,” Carley said. “I know a lot of political consultants are afraid to use generative AI because they view it as complicated, and they don’t want to get it wrong.”

Carly explained that this hesitancy tips the scales in the wrong direction because bad actors are using AI freely.

“For researchers, the question becomes, what brakes can we create that help people to use this technology in good ways?” — Yonatin Bisk

To get there, technology creators must first understand all of the applications — both good and bad — for the technologies they create. According to Bisk, this often happens through a process called threat-modeling, where researchers try to imagine unintended and malicious uses for the systems they build.

Understanding the potential threats of AI systems is an important first step in regulating them, said Bisk. “The interesting questions are who is inventing, who is legislating, who is regulating that space? My biggest goal is to ensure that our elected officials are well-educated on the implications of the projects they are funding, and are constructing the appropriate policies to regulate these systems.”

Recognizing and Evaluating AI content

As generative AI becomes more widespread, it’s critically important for voters to learn how to recognize and evaluate computer-generated content.

CMU’s Block Center for Technology and Society houses a research area on how to responsibly harness AI and analytics for social good. And CMU’s Center for Informed Democracy and Social-cybersecurity (IDeaS) houses a research area on how to detect, measure the impact of, and mitigate online harms that threaten democracy such as disinformation, hate and terrorism. These two centers have jointly created The Responsible Voter’s Guide to GenAI in Political Campaigning to help protect the integrity of the democratic process.

The biggest part we can play at the moment is increasing awareness,” Heidari said. “As AI researchers, we should make sure we don’t contribute to the hype, but instead, present a voice of reason.”

There are currently no federal rules about using AI in political campaigns. The Federal Communications Commission has proposed requiring politicians to disclose the use of AI in TV and radio ads. At the same time, the Federal Election Commission proposed banning candidates from using AI to deliberately misrepresent opponents in political ads. So far, neither proposal has been enacted.

This leaves U.S. voters on their own in identifying and deciphering AI content this election cycle. CMU researchers have some tips for how to navigate this difficult landscape. In addition to the Voter’s Guide, the IDeaS center has also published guides on detecting disinformation and identifying content generated by AI language models.

While it can be difficult to identify malicious computer-created content, one strategy is to pay attention to your emotional reaction to the material, Carley explained, because people working to spread disinformation often attempt to play on people's emotions.

“If you’re reading a story, and you find you’re starting to get really excited or sad or angry, that emotional shift is probably because you’re being played by the story,” Carley said. “That’s a sign you should take a break from the media.”

When consuming media, it’s also important to consider your own biases, noted Bisk. He points to the fact that people are more inclined to believe viewpoints that they agree with, and not believe viewpoints they disagree with. The same is true when evaluating whether content was generated by AI. “If you see a video of someone talking and you agree with what they are saying, you are more likely to think the video is real,” he said. “If you disagree with what they are saying, you are more likely to think the video is a bot or a fake.”

Thinking about the motive of the person or organization delivering the content is also useful, Bisk said.

“If I’m presented with a clip — whether it’s audio or video — of a candidate making a nonsensical statement, I should ask, ‘What is the incentive for that person to say those things? Does what you’re seeing fall within the character of the candidate? Is it consistent with your model of who they are?” Candidates are only going to put out material they believe will get them elected, Bisk said. “If a particular clip seems out of character or designed to simply provoke outrage, it may not be real.”

As always, it’s important to consider the source of the information you are consuming, said Carley. “People tend to look at sites that are very similar, and get the same story from all of them, even though it may be inaccurate,” she said. “That’s why it’s important to look at a variety of sources, even if you think you’re going to hate them.”

“Most importantly, people should be aware that misuse of generative AI is out there. “To voters, I would just say, ‘Be careful,’” Carley said. “And to policymakers, try to use these new technologies for good.” — Kathleen M. Carley  ■