Philippe Dufresne, Canada’s Privacy Commissioner, has determined that OpenAI failed to comply with Canadian federal and provincial privacy regulations while training its artificial intelligence models. After a thorough probe, Dufresne and officials from Alberta, Quebec, and British Columbia concluded that OpenAI’s methods for data acquisition and user consent breached several statutes, notably the Personal Information Protection and Electronic Documents Act (PIPEDA), which regulates corporate handling of personal data during standard operations.
The Privacy Commissioner notes that OpenAI cooperated fully with the inquiry and has pledged to implement several adjustments to ChatGPT to align with Canadian privacy standards. The firm has already discontinued earlier models that breached Canadian privacy rules and now employs “a filtering tool to detect and mask personal information (such as names or phone numbers) in publicly accessible internet data and licensed datasets used to train its models,” according to the Commissioner. The company has also agreed to introduce a new notice in the signed-out version of ChatGPT within three months, clarifying that conversations may be used for training and that users should avoid sharing sensitive data, and within six months:
- Improve the clarity and usability of its data export tools, while providing clearer guidance on how users can dispute the accuracy of information generated by ChatGPT.
- Assure Privacy Commissioners that robust safeguards have been established for retired datasets to prevent their use in ongoing development.
- Evaluate protective measures for relatives of public figures who are not public figures themselves, ensuring the models refuse requests to disclose their names or dates of birth.
Although Canada’s probe into OpenAI’s privacy practices began in 2023, the company has faced increased regulatory attention recently due to its link to the mass shooting in Tumbler Ridge in February 2026. OpenAI had reportedly flagged the suspected shooter’s account in 2025 for containing threats of real-world violence but did not escalate these concerns to Canadian authorities. After the incident, regulators required the company to change its approach to safety, and OpenAI ultimately agreed to collaborate more closely with Canadian law enforcement and health agencies going forward.