Who is responsible if an employee puts GDPR-protected customer data into ChatGPT?

When an employee enters personal data into a third-party AI tool like ChatGPT, responsibility falls on the company as data controller. The company must ensure that any use of the data complies with the GDPR and that no sensitive information is communicated outside the contractual framework. The employee may be at fault and sanctioned internally, but it is the company that is exposed to regulatory sanctions and a loss of customer trust. The AI supplier, as an external processor or provider, can only be held liable if a GDPR-compliant data processing agreement has been signed.

To do:

  • Explicitly prohibit, in the IT charter and the GDPR policy, the use of public AI tools to process personal data.
  • Train employees on the risks of disclosure.
  • Set up technical monitoring (SSO, proxy, DLP) to detect sensitive data.
  • Offer compliant internal alternatives (sovereign AI, models deployed on-premise).
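The DLP step above can be sketched as a simple pre-submission filter. This minimal Python example is an illustration only, not any particular DLP product: the regex patterns and function names are assumptions, and a real deployment would use far more robust detection.

```python
import re

# Illustrative patterns for common personal-data formats
# (email address, French phone number, simplified French NIR).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone_fr": re.compile(r"\b0[1-9](?:[ .-]?\d{2}){4}\b"),
    "nir_fr": re.compile(r"\b[12]\d{12}\b"),  # 13-digit NIR, simplified
}

def find_personal_data(text: str) -> list[str]:
    """Return the names of the pattern categories detected in text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

def allow_submission(text: str) -> bool:
    """Block a prompt if it appears to contain personal data."""
    return not find_personal_data(text)
```

In practice such a check would sit in the proxy or a browser plug-in, so the prompt is inspected before it ever reaches the external AI service.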

Legal references:

  • GDPR: Article 5 (data minimization), Article 32 (security of processing), Article 44 (transfers outside the EU).
  • French Labor Code: Article L.1222-4 (employee monitoring subject to prior information).
  • CNIL: recommendations on the use of generative AI (2023).

Practical solutions:

  • Write a generative AI usage policy.
  • Technically restrict access to external platforms from the company’s network.
  • Evaluate providers and contract only with those who guarantee that input data will not be reused.
  • Provide an alert procedure in the event of a disclosure incident.
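Restricting access from the company network usually amounts to a proxy denylist. As a hedged sketch, assuming a hypothetical in-house helper rather than any specific proxy product (the hostnames listed are examples to be replaced by the company's own policy), the core check could look like:

```python
from urllib.parse import urlparse

# Hypothetical denylist of public generative AI endpoints; the actual
# hostnames to block must come from the company's own policy.
BLOCKED_HOSTS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

def is_request_allowed(url: str) -> bool:
    """Allow a request unless its host or a parent domain is denylisted."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the host itself and every parent domain against the denylist.
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return not (candidates & BLOCKED_HOSTS)
```

Checking parent domains as well as the exact host avoids trivial bypasses via subdomains of a blocked service.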