While artificial intelligence stands out as a strategic lever, it is also a major risk factor in terms of cybersecurity. The rise of generative models and decision-making algorithms is transforming attack methods, making threat detection more complex and heightening exposure to information manipulation. At the AI Action Summit, states, researchers, and companies worked to structure a collective response and to regulate the uses of these technologies.
The 2025 edition of the summit continued the work undertaken at the Bletchley Park and Seoul meetings and laid the foundations for more structured governance through various regulatory initiatives, technical experiments, and industrial commitments.
The assembled experts underlined the extent of the vulnerabilities. The Viginum report highlights the growing impact of information manipulation, where advanced models make it possible to generate misleading content at an unprecedented scale. The experiment led by France on the cyber robustness of AI models revealed worrying flaws: some algorithms behave differently depending on the language used, exhibiting biases exploitable by malicious actors. ANSSI and its international counterparts warn of adversarial attacks, which divert the behavior of AI systems by injecting specially crafted inputs, as well as of the rise of sophisticated deepfakes that compromise biometric authentication systems.
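The adversarial attacks mentioned above typically work by nudging an input in the direction that most increases a model's error. A minimal sketch of this idea, using a toy linear classifier that is purely illustrative (the model, weights, and step size `eps` are all assumptions, not from any system discussed at the summit):

```python
import numpy as np

# Sketch of an FGSM-style evasion attack: perturb an input along the
# sign of the loss gradient so the model's decision margin shrinks.
rng = np.random.default_rng(0)
w = rng.normal(size=8)          # weights of a toy linear classifier
x = rng.normal(size=8)          # a clean input
y = 1.0 if w @ x > 0 else -1.0  # label the model currently gets right

def margin(v):
    # Positive margin = correct classification; larger = more confident.
    return y * (w @ v)

# For the loss -y*(w@x), the input gradient is -y*w; stepping along its
# sign maximally reduces the margin for a given perturbation budget eps.
eps = 0.5
x_adv = x - eps * np.sign(y * w)

print(margin(x), margin(x_adv))  # adversarial margin is strictly lower
```

The same principle, applied iteratively to deep networks, is what makes the "specific signals" described by ANSSI so effective: the perturbation can be small enough to be imperceptible while still flipping the model's decision.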
Several initiatives have been announced. France has formalized the creation of INESIA, a national institute dedicated to the assessment and security of AI, integrated into the network of AI Safety Institutes launched at the Bletchley Park summit. In parallel, new technical tools are emerging, such as D3lta, an open-source detector designed to identify textual manipulation tactics, or the PEREN meta-detector, which makes it possible to assess the performance of artificial-content detection algorithms. A crisis exercise held on February 11 brought together 200 AI and cybersecurity experts to test attack scenarios exploiting generative models.
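One common signal behind textual-manipulation detection is near-duplicate ("copy-pasta") spotting: the same message reposted with cosmetic edits across many accounts. The sketch below illustrates that generic idea with character n-gram Jaccard similarity; it is an assumption-laden illustration, not D3lta's actual implementation, and the sample strings are invented:

```python
# Generic near-duplicate detection via character trigram overlap.
def ngrams(text, n=3):
    # Normalize case and whitespace, then collect overlapping n-grams.
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard(a, b, n=3):
    # Jaccard similarity of the two n-gram sets, in [0, 1].
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

original  = "Breaking: the summit produced a landmark agreement on AI safety."
copied    = "BREAKING - the summit produced a landmark agreement on AI safety!"
unrelated = "Local bakery wins regional award for sourdough."

print(jaccard(original, copied))     # high score: likely duplication
print(jaccard(original, unrelated))  # low score: unrelated content
```

Production systems layer many such signals (posting cadence, account graphs, stylometry); the value of a meta-detector like PEREN's is precisely to benchmark how well these heuristics hold up against evolving generative content.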
With the entry into force of the AI Regulation (the EU AI Act) in 2025, Europe aims to establish itself as a central player in AI governance. The Hiroshima process, carried by the G7, seeks to establish security principles adopted by the sector's large companies, but the fragmentation of approaches between Europe, the United States, and Asia slows the implementation of a coherent global framework.
Startups and technology companies will have to adapt to a reinforced compliance framework, involving robustness and safety certifications. For cybersecurity actors, AI can no longer be deployed without strict control of its resilience to threats. It remains to be seen how these commitments can be turned into operational standards without slowing innovation.