Google tightens rules on political ads to combat misinformation
(US) – Google is set to implement new rules requiring political advertisements on its platforms to disclose when they contain AI-generated images and audio.
The change comes in response to the growing availability of tools that produce synthetic content, which has raised concerns about the potential for disinformation during political campaigns.
The updated policy, scheduled to take effect in November, aims to maintain transparency in political advertising and to prevent the spread of misleading or deceptive content.
Under the new rules, election-related ads must prominently disclose if they include synthetic content that portrays real or realistic-looking people or events.
Google suggests labels such as “this image does not depict real events” or “this video content was synthetically generated” to alert viewers that AI-generated elements are present.
Google’s existing policies already prohibit the manipulation of digital media for deceptive political purposes, including the spread of false claims that could undermine trust in the electoral process.
Political ads on Google must already disclose their source of funding, and information about these ads is accessible through an online ads library.
Disclosures of digitally altered content in election-related advertisements must be clear and conspicuous, so that viewers can readily identify synthetic imagery or audio depicting events or statements that did not occur.