Can the company be held responsible for a leak from an AI tool used without validation?

Yes. Under the GDPR, the company remains responsible as data controller for the processing, even if the leak originates from an AI tool that was never validated by the IT department (DSI). An employee's unsanctioned use of a notetaker or a free text generator does not exempt the organization from its obligations. A personal data breach flagged by the CNIL or by a foreign supervisory authority can therefore lead to financial penalties, an obligation to notify the data subjects, and reputational damage.

To do: Establish an allowlist of approved tools, technically block non-validated applications via a CASB or a firewall, and train teams so they understand that shadow AI directly exposes the company.
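To make the blocking step concrete, here is a minimal sketch of allowlist enforcement as it might run on an egress proxy or CASB policy engine. The domain names and tool names are hypothetical, not references to real vendors.

```python
# Sketch: allow outbound requests only to approved AI tools.
# All domains below are hypothetical examples.

APPROVED_AI_TOOLS = {
    "assistant.approved-example.com",   # hypothetical validated assistant
    "notetaker.approved-example.com",   # hypothetical validated notetaker
}

def is_request_allowed(host: str) -> bool:
    """Return True only if the destination host, or one of its parent
    domains, appears on the approved-tools allowlist."""
    host = host.lower().rstrip(".")
    parts = host.split(".")
    # Check the full host and every parent domain against the allowlist.
    return any(".".join(parts[i:]) in APPROVED_AI_TOOLS
               for i in range(len(parts)))

print(is_request_allowed("api.notetaker.approved-example.com"))  # True
print(is_request_allowed("free-text-generator.example.com"))     # False
```

In practice this logic lives in the proxy or firewall configuration rather than in application code, but the default-deny principle is the same: anything not on the allowlist is shadow AI and is blocked.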

Legal references: GDPR Article 24 (responsibility of the controller), Article 32 (security of processing), Article 33 (notification of personal data breaches).

Practical solutions: Impose a supplier review process (Vendor Risk Assessment), include contractual clauses with AI processors (subcontractors), and document every security measure in the record of processing activities.
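The documentation step above can be sketched as a simple structured record. This is an illustrative data model, not a legal template: the field names are assumptions about what a record-of-processing entry (GDPR Article 30) tracking an AI vendor might contain.

```python
from dataclasses import dataclass, field

# Sketch: one entry in a record of processing activities covering an AI
# vendor. Field names and the vendor are hypothetical examples.

@dataclass
class ProcessingRecord:
    activity: str                   # what the processing does
    processor: str                  # AI vendor acting as processor
    dpa_signed: bool                # Art. 28 data processing agreement in place
    vendor_risk_assessed: bool      # Vendor Risk Assessment completed
    security_measures: list = field(default_factory=list)  # Art. 32 measures

record = ProcessingRecord(
    activity="Meeting transcription",
    processor="ExampleNotes Inc.",          # hypothetical vendor
    dpa_signed=True,
    vendor_risk_assessed=True,
    security_measures=["encryption in transit", "EU data residency"],
)
print(record.dpa_signed and record.vendor_risk_assessed)  # True
```

Keeping these entries machine-readable makes it straightforward to report which tools have a signed agreement and a completed risk assessment when an authority asks.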