The Federal Election Commission, petitioned by Public Citizen to require FCC-like disclosures for all political ads, has yet to act despite plans to decide by early summer. Meanwhile, the Senate Rules Committee has passed three AI election regulation bills with no guarantee they will take effect in time. (Source: Image by RR)

Indian Election Deepfake Crisis Serves as a Warning for U.S. Policymakers as November Looms

This week highlighted the growing concern over AI's impact on U.S. elections, spurred by a report that voters in India received more than 50 million deepfaked voice calls from candidates, causing widespread confusion. At the same time, the Federal Communications Commission (FCC) announced it is considering new rules for AI in political ads, following its ban on synthetic robocalls, which raises the question of why the FCC is the only government body addressing these issues. Despite the urgency, legislative progress in the U.S. has been slow: Congress has passed no new laws governing AI-generated political ads, even after earlier incidents involving AI in ads from the Republican National Committee and Florida Governor Ron DeSantis.

Senate Majority Leader Chuck Schumer has been meeting with stakeholders to develop AI regulations that protect elections, but progress has been limited. The FCC has taken some steps, banning AI in robocalls and proposing disclosure requirements for synthetic content in political ads on broadcast media. These measures, however, do not extend to digital ads, where most voters are likely to encounter deepfakes. The Federal Election Commission (FEC) has yet to act on petitions for similar rules covering digital political ads, and while the Senate Rules Committee passed bills addressing AI in elections, their future is uncertain.

With only 166 days until the presidential election, the lack of comprehensive regulation is alarming. The burden of managing AI-generated disinformation falls largely on tech companies, much as it did in the 2020 election. Meta has implemented some AI content disclaimers, and TikTok requires labels on realistic AI-generated content, but without binding regulation, the efficacy and enforcement of these voluntary measures remain in doubt.

Regulatory action is urgently needed, yet the window for implementing meaningful changes before the election is closing rapidly. If Congress or regulatory agencies do not act soon, the upcoming election could see unprecedented levels of AI-generated disinformation, undermining the integrity of the democratic process and leaving voters vulnerable to manipulation.