OpenAI Under Fire for Not Reporting Mass Shooting Suspect's Chatbot Interactions
OpenAI is facing criticism over its decision not to report a mass shooting suspect's interactions with its chatbot to Canadian authorities in the months before a deadly rampage, according to a report in the Wall Street Journal.
The company's statement followed the identification of Jessie Van Roesselar's body at the scene of the February 10 attack, in which nine people were killed and 25 others injured. Once the suspect was identified, the company promptly contacted Canadian authorities to disclose Van Roesselar's history of interactions with ChatGPT.
According to OpenAI, Van Roesselar had engaged extensively with ChatGPT eight months before the shooting, discussing various scenarios for indoor shootings. The conversations were flagged by the tool's automated review system, prompting company employees to consider notifying Canadian authorities at the time.
Ultimately, company management decided that the interactions did not warrant notifying authorities and instead opted simply to ban the account.
The suspect's digital footprint reveals long-term planning and preparation for the attack. The suspect, who was born male and identifies as a woman, had previously designed a Roblox game simulating mass shooting events.
Taylor Owen, an associate professor at McGill University and a member of the federal task force advising the Canadian government on its upcoming artificial intelligence strategy, told The Globe and Mail that federal digital legislation must directly address AI platforms and establish laws specific to them.
Owen added that AI systems pose significant risks, noting that AI chatbots often fail to respond appropriately to users experiencing mental health crises and can cultivate a false sense of validation and emotional safety.
ChatGPT has been linked to several high-profile incidents in recent months, including the suicide of teenager Adam Rain in October 2025, according to a report in The Guardian at the time.
The tool also figured centrally in the case of Stein-Erik Solberg, who died by suicide after killing his mother in August 2025. In both cases, the company faced legal scrutiny.
ChatGPT's role in the Solberg case bears out Owen's concerns: the tool reportedly exacerbated Solberg's psychosis and encouraged him to kill his mother.