As election dates approach around the world, one trend is taking hold in political campaigns: the massive use of content generated by artificial intelligence. Synthetic videos, fabricated articles, doctored images and automated comments are redefining the codes of digital influence. In the absence of suitable regulation, AI tools are becoming formidable instruments of electoral manipulation.
Viral disinformation, unchallenged
In several democracies, partisan groups exploit major weaknesses in social networks. As traditional media lose visibility on certain platforms, political pages with large audiences have established themselves as information relays, often without verification, without challenge, and without editorial accountability.
These pages publish excerpts taken out of context, memes with loosely sourced content, or even false messages built on AI-generated visuals. In some cases, fake sites imitating the look of major news outlets are used to lend an appearance of legitimacy to false stories.
Propaganda automation
This change of scale is made possible by generative AI models now accessible to the public. Producing a campaign video, a fake interview, a sensationalist image or a stream of inflammatory comments takes only a few seconds.
The industrialization of propaganda is no longer a hypothesis. It is underway. These are no longer artisanal disinformation campaigns, but automated production lines for political content, calibrated to capture attention and polarize opinion.
A global regulatory vacuum
No current electoral legislation makes it possible to trace AI-generated content, to require transparency, or to control its dissemination. The platforms, for their part, adopt ambivalent positions, wavering between symbolic commitments to algorithmic verification and tolerance of viral content.
In this context, influence groups invest massively in political advertising on social networks. They spread targeted messages, sometimes misleading and often emotional, bypassing media filters and traditional checks and balances.
A new democratic asymmetry
The heart of the problem is cognitive. Most users still treat the posts they come across on social networks as equivalent to conventional news. Very few know how to distinguish journalistic content from synthetic content. Even fewer check sources or identify biases.
This asymmetry feeds a dangerous imbalance: campaigns that master AI dominate public debate, while those that stick to ethical communication lose visibility. Electoral debate then tips into a competition of algorithms rather than of ideas.
A global alert
This phenomenon is not isolated. It is spreading internationally as political campaigns move online, as media budgets are redirected toward the platforms, and as voters increasingly get their information from their news feeds.
Generative AI is not a threat in itself. But its unchecked use, in an ecosystem already weakened by media disintermediation, creates an unprecedented situation: an electorate overexposed to partisan narratives, poorly equipped to decode them, and increasingly difficult to reconnect to reality.
Democracy, already shaken by the attention economy, must now contend with a new challenge: synthetic truth.