OpenAI CEO Expresses Regret Over Failure to Notify Authorities About User Activity
OpenAI chief Sam Altman issued a public apology after the company acknowledged it did not alert law enforcement to concerning conversations between its chatbot and an individual later linked to a violent incident at a Canadian school. Altman said he was “deeply sorry” for the oversight, noting that the firm’s internal review had found a lapse in its existing protocols for flagging potentially harmful user interactions.
The apology followed reports from multiple outlets, including CBS News, The Guardian, Reuters, Al Jazeera and CNN, which detailed how the suspect had used ChatGPT in the days leading up to the event. According to the investigations, the AI system generated responses that raised red flags, yet no report was made to police or school authorities before the incident occurred.
Altman stated that OpenAI is tightening its safety measures, adding new layers of monitoring and expanding its team dedicated to reviewing high‑risk usage patterns. He pledged to cooperate fully with ongoing inquiries and to support the affected community through counseling services and financial assistance for victims’ families.
The incident has sparked a broader conversation about the responsibilities of AI providers when their tools are used in ways that may precede harm. Lawmakers in Canada and the United States have called for clearer guidelines and possible legislation that would require tech companies to report credible threats to authorities in a timely manner.
In his statement, Altman underscored that while AI can be a powerful force for good, safeguards must evolve alongside the technology to prevent misuse. He concluded by expressing hope that the lessons learned will lead to stronger protections for users and the public at large.