OpenAI is scrambling to overhaul its safety protocols after a serious oversight allowed a Canadian ChatGPT user, now the suspected perpetrator of a horrific mass killing, to operate a second account without being reported to authorities. The tech giant is facing intense scrutiny for failing to alert police to the user's troubling online behavior.

The company revealed on Thursday that the individual, Jesse Van Rootselaar, managed to create a second account after his first was shut down over concerning entries. Van Rootselaar is the suspected shooter in a mass killing in Tumbler Ridge, British Columbia, on February 10 that claimed the lives of eight people, including six at a secondary school.

Per Politico, OpenAI confirmed it had banned Van Rootselaar's initial ChatGPT account in June. The ban came after employees internally flagged some of his posts as "an indication of potential real-world violence," roughly eight months before the shooting. Despite those internal flags, however, the company did not report the account to law enforcement at the time.

Hindsight is 20-20

Ann O'Leary, OpenAI's vice president of global policy, later stated that the company discovered the perpetrator had used a second ChatGPT account only after his name was publicly released. The second account's details have since been shared with police.

Following urgent meetings with the Canadian and British Columbian governments this week, OpenAI has committed to significant changes. "These immediate commitments are only the first step in the work we must do in partnership with the Canadian government to improve AI safety," O'Leary wrote in a letter to members of Mark Carney's Cabinet. The goal is clear: to help prevent such tragedies from happening again.

"School shooter used ChatGPT before killing 8. OpenAI saw the chats, shut down the account — but didn't call police. New safety protocols fix that. Why did it take a tragedy to add 'call 911' to the handbook?" — Mati (@MatiBuildsWith), February 27, 2026

One of the key protocol changes is establishing a direct point of contact for police, allowing instant information exchange about dangerous users, a crucial request from the federal government. OpenAI is also working to make it significantly harder for banned users to sneak back onto its platform. Additionally, the company plans to refer users to local resources when they appear to be in distress or exhibiting "prohibited behavior."

The criteria for alerting police about potentially dangerous users have also been made more flexible. Previously, OpenAI would only contact law enforcement if it identified an "imminent and credible" threat, often requiring a user to reveal a specific target, means, and timing of planned violence. Now, the company says police will be alerted if a ChatGPT conversation simply appears dangerous.

The Canadian government is upset that it took a "tragedy" to force this conversation and has threatened to regulate chatbots if tech companies cannot demonstrate robust safeguards.
It is just one of many recent conversations around AI safety protocols, coming after Amazon admitted that its AI systems caused outages and a man accidentally used AI to hack into thousands of homes.