How to protect the 2024 elections from Artificial Intelligence threats

By Mireya Navarro

08 Nov 2023, 11:37 AM EST

Next year will mark the first presidential campaign season where there will be widespread access to artificial intelligence (AI) tools that could threaten the security of our elections if proper controls are not in place.

President Biden took an important first step on October 30 when he signed an executive order that sets initial standards for the development and use of AI. Now Congress must also take the lead in regulating AI, and tech firms must act to protect elections from AI-generated threats, even without Congress requiring them to do so.

As we already know, AI makes it possible to produce audio with anyone’s voice, generate realistic images of anyone doing practically anything, and empower social media bot accounts with excellent conversational skills.

AI can also supercharge phishing, the practice of sending malicious emails to obtain sensitive information or trick recipients into downloading harmful software, on an immense scale and with extraordinary speed.

Due to the popularization of conversational bots, or chatbots, next year will also be the first election season in which a large number of voters will see information produced by AI. This significantly increases the opportunities for lies and disinformation to spread among Latino communities and other segments of the voting population, which could affect the electorate’s decision to vote or diminish their confidence in the voting results.

The group of security specialists we interviewed at the Brennan Center agrees that election offices across the country have the ability to combat cybersecurity risks in the 2024 elections. In fact, AI also offers powerful tools to defend election systems, but only if the government ensures that electoral authorities and workers have access to those tools.

States have already begun to regulate AI and focus their efforts on content manipulation. For example, California prohibits the distribution of deceptive content, such as images, videos or audio of election candidates that “falsely appear… to be authentic” and that convey a “fundamentally different interpretation or impression” from reality, with the aim of harming the reputation of candidates or misleading the electorate.

Congress should follow the lead of these states and also provide election authorities with more technical support to safeguard election infrastructure. Additionally, social media platforms should take their own steps to protect elections from deceptive AI-generated content, even without being required to do so.

The potential for AI to misinform, suppress votes, and incite violence is far more damaging to populations with limited English proficiency, limited knowledge of how elections work in the United States, and little digital experience.

In a letter sent to Congress last month, the Brennan Center and more than 85 other public interest organizations noted that many entities and companies are already employing AI systems that produce error-ridden results, threatening the civil liberties and economic opportunities of the American population. People have been arrested and jailed because facial recognition systems used by police returned incorrect results.

The letter calls for regulations that seek to address the harms AI is already causing to communities right now, well before the next presidential election.
