To prevent voters from being misled, Meta (Facebook, Instagram) will require political campaigns to disclose their use of artificial intelligence (AI) in ads, a topic of growing concern as the 2024 US presidential election approaches.
“Advertisers will be required to disclose whenever an election, political or social ad contains a photorealistic image or video, or realistic audio, that has been digitally created or altered to represent a real person saying or doing something they did not say or do,” the social media giant announced in a statement on Wednesday.
The new rule will apply worldwide starting next year.
It also covers ads that depict “a realistic-looking person who doesn’t exist or an apparently realistic event that didn’t happen,” or “a realistic event that would have happened but is not a faithful image, video or audio recording of the event.”
In all three cases, Meta will “add information about the ad.” Advertisers will not have to disclose digital modifications that do not affect the message, such as cropping or color corrections to a photo.
The rise of generative AI, which can produce text, images and sounds from a simple plain-language prompt, makes it easier to create all kinds of content, including “deepfakes”: photos or videos manipulated for deceptive purposes.
From Washington to Brussels, authorities are trying to regulate this new technology, concerned in particular about the risks it poses to democracy.
Since the Cambridge Analytica scandal, in which Facebook user data was exploited to benefit Donald Trump’s campaign in the US and Brexit supporters in the UK in 2016, Meta has taken numerous steps to combat misinformation on its platforms.
“As always, we remove content that violates our policies, whether it was created by AI or by a person,” the California-based group said.
“Our independent fact-checking partners review and rate viral misinformation, and we don’t allow an ad to run if it’s rated as false, altered, partially false, or missing context.”
AFP is one of dozens of media outlets paid by Meta worldwide under its content verification program.