OpenAI Chief Executive Sam Altman apologized Friday for his company's failure to alert police after it banned the ChatGPT account of the teenager who killed eight people in Tumbler Ridge, British Columbia, in February. In a letter to the remote community, he conceded that the company should have flagged the user to law enforcement eight months before the attack.
The apology, released on social media by British Columbia Premier David Eby and the Tumbler RidgeLines news site, is the first time Altman has personally acknowledged that OpenAI's internal review of 18-year-old Jesse Van Rootselaar fell short of what the case required. It lands days after Florida Attorney General James Uthmeier opened a criminal investigation into the company's handling of a separate campus-shooting suspect's ChatGPT use, escalating regulatory pressure on the most prominent U.S. artificial-intelligence firm.
What Altman said
"I am deeply sorry that we did not alert law enforcement to the account that was banned in June," Altman wrote in the letter, dated Thursday. He told residents that "The pain your community has endured is unimaginable" and pledged to "find ways to prevent tragedies like this in the future".
Van Rootselaar opened fire on Feb. 10 at Tumbler Ridge Secondary School and a nearby home, killing six people at the school and his mother and 11-year-old brother at the residence, according to authorities cited by CBS News and Al Jazeera. He died of a self-inflicted gunshot wound.
The June flag
OpenAI banned Van Rootselaar's ChatGPT account in June 2025 after automated tools and human reviewers identified misuse "in furtherance of violent activities," the company has said. At the time, OpenAI determined the activity did not meet its threshold of an imminent and credible risk of serious physical harm to others, and did not refer the case to police.
The company says ChatGPT is trained to refuse requests it judges illicit, and that human reviewers escalate users who indicate plans to harm others. Altman's letter is the clearest signal yet that OpenAI views the June decision as a misapplication of that policy.
Florida probe
Uthmeier this week subpoenaed OpenAI's records on its protocols for reporting possible crimes, citing what he said was "significant advice" ChatGPT provided to a Florida State University student charged in an April 2025 campus shooting that killed two people. An OpenAI spokesperson told CBS News the company "identified a ChatGPT account believed to be associated with the suspect and proactively shared this information with law enforcement" after learning of that incident.
The counterpoint
Friday's coverage came from Al Jazeera and CBS News, both lean-left outlets; right-leaning critics of OpenAI's content policies and conservative commentators skeptical of corporate self-regulation were not represented in the day's reporting. OpenAI itself supplied the chief countervailing argument: in February, the company told CBS News that "our thoughts are with everyone affected by the Tumbler Ridge tragedy," said it had "proactively reached out to the Royal Canadian Mounted Police" after the shooting, and maintained that the June account did not, on the information then available, meet its referral threshold.
A Royal Canadian Mounted Police investigation remains open, and OpenAI has not said whether it will publish revised criteria for when a banned account is referred to police.