
AI Gone Wild: Election Influence Edition


Advances in generative AI are shaking up a number of industries. Many content creators worry about job displacement given how capable AI has already become. Its capacity to create not only convincing text but also images and videos is impressive. It’s not surprising, then, that one of the biggest concerns is the spread of AI-generated disinformation by bad actors. Such campaigns can create turmoil and confusion under the best of circumstances. But in this election year, the use of deceptive AI in elections poses serious threats to democracy and beyond. And the vast majority of those running these elections are expressing grave concerns about its impacts.

Don’t fall for deceptive AI in elections!

(Generative AI has been fed a steady diet of Internet nonsense–read more in this Bold story.)

In the last several months, there has been growing evidence that deceptive AI in elections is on the rise. Several campaigns in Europe have been sabotaged by fake videos and images designed to sway public opinion. Which direction people are swayed depends on the intentions of the group or individual creating the AI-generated disinformation. But one thing is clear: using AI for this purpose is both easy and convincing. This is why those supervising U.S. elections this year are already preparing workers for potential AI threats. Even AI platforms are revising their policies and procedures in response to these concerns. Everyone seems to agree that this will be an election year that witnesses AI’s effects to a profound degree.

“Our staff is in conversation with a lot of folks around the country. [Deceptive AI in elections] has a lot of potential to do a lot of damage.” – Arizona Secretary of State Adrian Fontes

Examples of Deceptive AI in Elections

When it comes to the use of AI-generated disinformation in politics, one might assume the easy targets would be the actual candidates. Using generative AI to create deepfake videos or images of a gubernatorial or presidential candidate could be powerful. But there’s a problem: these political figures are too public and too well known. Convincing the general public of a false act or statement is therefore more difficult. In contrast, the use of deceptive AI in elections tends to target lesser-known players. This might involve secretaries of state overseeing elections or even election workers themselves. AI-generated disinformation about these individuals can be much more influential.

Several examples come to mind of how this might be carried out. For instance, suppose a fake video was created of a secretary of state giving unusual directions on election day. Or perhaps a fake phone call using audio cloning told election workers they were no longer needed at the polls. Naturally, if these pieces of AI-generated disinformation were believed, voting disruptions could well result. At the same time, deceptive AI in elections might target secretaries of state themselves. Consider a video showing such an official secretly tearing up ballots when no one was around. These types of disinformation undermine public trust in elections and might convince many not to vote. Numerous other examples like this exist, and all have the potential to undermine the entire election process.

Don’t fall prey to disinformation and AI nonsense this election.

Election Center Strategies to Combat AI

Among election supervisors and state officials, there is substantial awareness of the threat posed by AI-generated disinformation. As a result, many have introduced training programs designed to educate election workers about the use of deceptive AI in elections. By recognizing its potential, the belief is, workers will think twice before accepting a deepfake audio clip, image, or video as legitimate. This is especially true if the directions, instructions, or behaviors seem unusual or out of character. Even the federal Cybersecurity and Infrastructure Security Agency (CISA) is conducting similar sessions. These efforts, combined with public education campaigns, reflect two of the primary strategies to reduce these fake AI threats. When prevention is limited, education and awareness seem to be the best tools.

Some strategies are preventative in nature, and some are rather easy to employ. For example, having all local election centers sign up for .gov websites and email addresses is one approach. If a message or video doesn’t come from a .gov source, it may be AI-generated disinformation; deceptive AI in elections would not be posted on such sites. Another tactic election centers are pursuing is rapid response to AI-generated disinformation and videos. Once a fake is detected, publicizing its falsity and referring voters to accurate sources of information can help as well. None of these strategies actually targets the source of the problem, because identifying bad actors is difficult. And detecting AI content is certainly getting more and more challenging for the average American.
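To make the .gov check concrete, here is a minimal sketch of how a voter or election worker might screen a sender. The function name and example addresses are hypothetical illustrations, not part of any official tool, and a real check would also need to verify that the message wasn’t spoofed.

```python
def is_official_source(address: str) -> bool:
    """Heuristic check: does an email address or URL come from a .gov domain?

    This only inspects the domain string; it does not detect spoofed
    headers or forged links, so it is a first filter, not proof.
    """
    addr = address.lower().strip()
    # Strip a URL scheme if present (e.g., "https://")
    if "://" in addr:
        addr = addr.split("://", 1)[1]
    # For email addresses, keep only the domain after "@"
    if "@" in addr:
        addr = addr.rsplit("@", 1)[1]
    # Drop any path or port to isolate the hostname
    host = addr.split("/", 1)[0].split(":", 1)[0]
    return host.endswith(".gov")

# Hypothetical examples:
print(is_official_source("clerk@example.gov"))                 # True
print(is_official_source("https://elections.example.gov/"))    # True
print(is_official_source("votes-update@example.com"))          # False
```

Anything that fails this kind of check, such as a video shared from a non-.gov site or an instruction emailed from a commercial domain, deserves extra scrutiny before being trusted.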

“I 100 percent expect [deep fake AI disinformation] to happen this [election] cycle. It is going to be prevalent in election communications this year.” – New Mexico Secretary of State Maggie Toulouse Oliver

Looking for Help from Businesses

AI-generated disinformation is going to be even more prevalent come November!

In addition to efforts from election centers and officials, businesses operating AI platforms are also involved. Google, OpenAI, Microsoft, and others are shifting their AI policies to curb AI-generated disinformation. For example, Google’s Gemini will now refuse queries or requests related to elections in the U.S. Google is also using DeepMind’s SynthID to apply a digital watermark to AI-generated images. OpenAI is doing the same with the CR symbol from the Coalition for Content Provenance and Authenticity (C2PA) on its AI images. These policy shifts, along with redirecting visitors to sites with legitimate election information, reflect their best efforts. Certainly, this may help. But it won’t go far enough to prevent deceptive AI in elections from interfering this year. The best we can hope for is greater scrutiny and discernment from voters throughout the process. This, combined with heightened awareness, represents the only real means to offset AI’s potential negative effects.

 

Programming an AI to Have Biases Creates a Whole Slew of Problems–Read Why in this Bold Story!
