Google to Simplify Disclosures for AI-Altered Election Ads
Google announced that it will simplify the process for advertisers to disclose election ads featuring digitally altered content depicting real or realistic events or people. This move aims to combat election misinformation.[1]
The updated political content policy now requires advertisers to select a checkbox in the "altered or synthetic content" section of their campaign settings. Google will automatically generate in-ad disclosures for feeds and Shorts on mobile devices, and for in-stream ads on computers and televisions. For other ad formats, advertisers must create their own prominent disclosures. The required disclosure language will vary depending on the ad's context.
This update follows incidents like the viral spread of fake videos in April during India's general election, where AI-generated content falsely portrayed Bollywood actors criticizing Prime Minister Narendra Modi and endorsing the opposition Congress party.
Other tech companies, such as OpenAI and Meta Platforms, have also taken steps to address the use of AI and digital tools in altering or creating political ads, emphasizing the importance of transparency in political advertising.
Last month, a group of current and former employees at AI companies, including Microsoft-backed OpenAI and Alphabet's Google DeepMind, raised concerns about risks posed by the emerging technology.
They warned of risks from unregulated AI, ranging from the spread of misinformation to the loss of control over autonomous AI systems and the deepening of existing inequalities, potentially resulting in "human extinction."