
Deepfakes and Democracy in the Digital Age

Combating misinformation in the 2024 U.S. Presidential Election is crucial to safeguarding democracy. It falls to science to address this challenge.

Published October 8, 2024

By Nick Fetty
Digital Content Manager

From left: Nicholas Dirks; Joshua Tucker, PhD; Maya Kornberg, PhD; and Luciano Floridi, PhD. Photo by Nick Fetty/The New York Academy of Sciences.

The complexities of artificial intelligence were discussed during the Deepfakes and Democracy in the Age of AI event, presented by The New York Academy of Sciences and Cure on September 17, 2024.

Seema Kumar, Chief Executive Officer of Cure, a healthcare innovation campus in New York City, set the stage for the discussion by emphasizing the impact of AI on healthcare. She cited a survey of nearly 2,000 physicians who expressed concern about changes they've observed in patient behavior as medicine moves into a more digital age.

Nicholas Dirks. Photo by Nick Fetty/The New York Academy of Sciences.

“Patients are coming to them with misinformation and they’re not trusting physicians when physicians correct them,” said Kumar, who also serves on the Academy’s Board of Governors. “In healthcare, too, this is becoming an issue we have to tackle and address.”

Nicholas Dirks, President and CEO of the Academy, introduced the panel of experts:

  • Luciano Floridi, PhD: Founding Director of the Digital Ethics Center and Professor in the Practice in the Cognitive Science Program at Yale University. His expertise covers the ethics and philosophy of AI.
  • Maya Kornberg, PhD: Senior Research Fellow and Manager, Elections & Government, at NYU Law’s Brennan Center for Justice. She leads work on information and misinformation in politics, Congress, and political violence.
  • Joshua Tucker, PhD: Professor of Politics, Director of the Jordan Center for the Advanced Study of Russia, and Co-Director of the NYU Center for Social Media and Politics. His recent work has focused on social media and politics.

The Role of Deepfakes

Professor Tucker suggested that research can be an effective way to better protect information integrity.

“The question is, and I don’t know the answer to this yet, but this is something we want to get at with research: Is there a meaningful difference across modes of communication?” he said, adding that these modes include text, images, and video.

Professor Tucker argued that the most impactful video so far in this U.S. election cycle wasn’t a deepfake at all. Instead, it was the unedited footage of President Joe Biden’s performance in the debate on June 27, 2024.

Not A New Phenomenon

Luciano Floridi, PhD. Photo by Nick Fetty/The New York Academy of Sciences.

Dr. Kornberg agreed that misinformation is not a new phenomenon. However, she noted that the often-realistic nature of deepfakes can make it more difficult for people today to differentiate fact from fiction, and the lack of regulation in the tech sector further complicates the issue. She offered the example of an AI-generated phone call that impersonates an election official to misinform potential voters.

“It can be difficult to determine if this is a real call or a fake call,” said Dr. Kornberg. “It’s extremely important, I think, as a society for us to be doubling down on civic listening and civic training programs.”

The ease of producing realistic AI-generated content is also contributing to the issue, according to Professor Floridi. He cautioned that media can become so oversaturated with this content that consumers begin questioning the legitimacy of everything.

Professor Floridi cited a research project that he and his team are currently working on with the Wikimedia Foundation. The team hopes to release their findings prior to the U.S. election, but at this point, they have not observed anything particularly worrisome in terms of deepfakes.

Maya Kornberg, PhD. Photo by Nick Fetty/The New York Academy of Sciences.

“What we do see is, call it, ‘shallowfakes’: the tiny little change [to otherwise authentic content],” Professor Floridi said. He added that these “shallowfakes” can be almost more dangerous than deepfakes because the slight manipulations are generally less obvious.

The Issue of Credibility

Dirks then shifted the focus of the conversation to credibility. With first-order effects, a person sees something untrue and forms an opinion based on that misinformation. Dirks invited Professor Tucker to discuss his research on second-order effects, in which the political consequences can be more salient and destabilizing.

Professor Tucker and his lab studied Russian misinformation on Twitter during the 2016 U.S. Presidential Election. Counter to popular belief, however, the researchers found no significant relationship indicating that exposure to such misinformation influenced American voters’ opinions.

“Yet, we spent years talking about how the Russians were able to change the outcomes of the election. It was a convenient narrative,” said Professor Tucker. “But it worried me. And I wondered for a long time after this, did that sow the seeds of doubt in people’s minds?”

With the current hype surrounding generative AI as the 2024 election approaches, Professor Tucker expressed concern that the technology could become a new tool for spreading misinformation.

Combating Voter Suppression

Dr. Kornberg and her colleagues at the Brennan Center study the impact of voter suppression efforts. The researchers are exploring ways to debunk, or “pre-bunk,” certain misconceptions that may be on the minds of voters. She said that purveyors of misinformation deliberately focus on simple themes like malfunctioning voting machines, distrust of election officials, and dead people voting.

Joshua Tucker, PhD. Photo by Nick Fetty/The New York Academy of Sciences.

“We saw that in 2020. We saw that in 2022. There’s a lot of reason to believe we’re going to see that in 2024,” said Dr. Kornberg. “So, we’re working to proactively get resources out to election administrators [so they can better counteract these threats].”

She cited the role of AI in further amplifying misinformation, which will make deciphering fact from fiction even more difficult for the average voter. Dr. Kornberg and her colleagues aim to get ahead of these issues by offering training and other resources for election administrators. She also advocates for experts with technical expertise in AI to advise local election offices, municipalities, state legislatures, and even Congress.

“There’s a lot of demystifying for the workers themselves that we’re trying to do with our trainings about how to deal with AI,” said Dr. Kornberg. “This will help us to come up with some intelligent and timely solutions about how to combat this.”

Academy members can access an on-demand video recording of the full conversation.

Not a member? Sign up today.


Author

Nick Fetty
Digital Content Manager
Nick is the digital content manager for The New York Academy of Sciences. He has a BA and MA in journalism from the University of Iowa as well as more than a decade of experience in STEM communications. Nick is also an adjunct instructor in mass media at Kirkwood Community College.