AI Expert Warns of a Surge in Misinformation for the 2024 Election

AI expert Oren Etzioni warns that advanced AI tools and weakened social media safeguards could drive a significant increase in election misinformation ahead of the 2024 U.S. presidential election.

As we approach the next presidential election, experts are sounding the alarm about the potential for an unprecedented wave of misinformation. The systems that were once in place to combat false claims are becoming less effective, while the tools used to create and disseminate these claims are becoming more sophisticated.

Mistrust in the electoral process, fueled in part by former President Donald Trump’s unfounded claims, remains a concern: a significant portion of Republicans still believe that Joe Biden’s 2020 victory was not legitimate.

Advancements in artificial intelligence have made it easier and cheaper to produce and spread misinformation that can mislead voters and sway election outcomes. Social media companies, which previously played an active role in addressing misinformation, have shifted their focus elsewhere.

Oren Etzioni, a leading artificial intelligence expert from the University of Washington, expressed his fears: “I expect a tsunami of misinformation,” he said. “The ingredients are there, and I am completely terrified.”

Deepfakes Go Mainstream

The upcoming U.S. presidential election is set to be the first where advanced AI tools, capable of creating highly convincing fake images and videos in mere seconds, will be readily accessible. These deepfakes could be used in deceptive ways, such as portraying political figures in false scenarios or attributing false statements to them, and could spread rapidly on social media platforms and influence voters right before they head to the polls.

Instances of high-tech fakes have already impacted elections globally. For example, in Slovakia, AI-generated audio recordings that falsely portrayed a candidate’s intentions were circulated on social media just days before the election, despite fact-checkers’ efforts to debunk them.

Experts are concerned about the potential for these tools to target specific communities with misleading messages about voting, such as through fake websites, deceptive text messages, or misinformation in various languages on apps like WhatsApp.

With content that appears authentic, it becomes increasingly difficult for individuals to distinguish between what’s real and what’s fabricated. Kathleen Hall Jamieson, a misinformation scholar and director at the University of Pennsylvania’s Annenberg Public Policy Center, warns that our natural instincts may lead us to believe in the fakes rather than reality.

Efforts to Regulate Technology

While Republicans and Democrats in Congress and the Federal Election Commission are exploring regulatory options, no concrete rules or legislation have been established. Only a few states have enacted laws to address political AI deepfakes, either by requiring labeling or banning misleading representations of candidates.

Some social media platforms, like YouTube and Meta (Facebook and Instagram’s parent company), have implemented policies to label AI-generated content, but it’s uncertain how effective these measures will be in catching violators.

Social Media Safeguards Diminish

The landscape of social media moderation has changed significantly, particularly since Elon Musk’s acquisition of Twitter. The platform, now known as X, has dismantled its verification system, cut its misinformation-fighting teams, and reinstated previously banned accounts known for spreading conspiracy theories and extremism.

The changes at Twitter have been praised by some conservatives who viewed the platform’s earlier moderation efforts as censorship. However, democracy advocates are concerned that Twitter has become a less regulated space that amplifies hate speech and misinformation.

Other platforms have also scaled back their policies against hate and misinformation, and with layoffs affecting content moderation teams, there are fears that the spread of misinformation in 2024 could surpass that of 2020.

Meta insists it has a large team dedicated to safety and security and actively removes networks of fake accounts. YouTube also emphasizes its commitment to providing reliable election news and removing content that misleads voters about the voting process.

The Trump Factor

Donald Trump’s continued influence and false claims about election fraud are a major concern for misinformation researchers. His rhetoric has the potential to incite election vigilantism or violence, as seen in his calls for supporters to “guard the vote” against purported fraud in the upcoming election.

Election Officials Take Action

In response to the anticipated surge in election denial narratives, election officials have been proactive. They are engaging in public education efforts, monitoring misinformation, and enhancing security at vote-counting centers. In some regions, officials are making a concerted effort to build trust by offering public demonstrations of voting equipment and providing direct access to election workers.

Minnesota has launched #TrustedInfo2024, an initiative to promote election officials as reliable sources of information, and has updated its “Fact and Fiction” web page to address emerging false claims. Additionally, a new law in the state aims to protect election workers and criminalize the non-consensual distribution of deepfakes intended to harm candidates or influence elections.

Officials are preparing for the worst while hoping for the best, recognizing that misinformation poses one of the greatest threats to democracy today.
