
To my fellow chief executives, board members, and industry associations who shape the direction of AI development in Canada and globally: the moment for voluntary commitments and safety frameworks has passed.
The families of Tumbler Ridge deserve more than a meeting in Ottawa and press statements expressing concern. They deserve to know that the companies operating AI platforms and technologies have committed — publicly and with enforceable consequences — to standards that will prevent the same failure from occurring again.
Let me be clear: many concerns about the negative impacts of AI are overblown and unsupported by the facts. On balance, the value that AI technologies bring to our society overwhelmingly outweighs the downsides. Thanks to its power and reach, this technology will deliver benefits that many cannot yet imagine at this point in human history.
That is precisely why serious efforts must be made to control and regulate it, and regulating AI means, by definition, regulating the companies that build it. Minister Solomon and the federal government are right to demand changes that make Canadians safer.
However, these rules and regulations should not come from the government alone. The companies designing these systems understand them better than any regulator. They are best positioned to define what constitutes a credible threat of violence, to establish escalation protocols that are both operationally sound and respectful of privacy, and to determine what level of human review is appropriate when automated systems flag dangerous content.
And it makes business sense. Companies that operate in regulatory vacuums invite the kind of blunt, reactive legislation that tends to follow tragedies. They also invite liability exposure, reputational damage, and erosion of the public trust that is, ultimately, the foundation on which their products depend.
OpenAI's handling of the Tumbler Ridge shooter's account — and its silence toward B.C. officials in the meeting held the day after the shooting — has generated exactly the kind of scrutiny that no company seeking to expand its presence in Canada can afford.
Moreover, it creates negative consequences for the entire industry and, counterintuitively, for society as a whole. Reasoned debate is difficult to sustain in moments of crisis. It is human nature to demand strong responses to obvious failures. But if we, as a society, now make rash decisions that disregard nuance merely to signal action, all Canadians will lose, because we may place crude restrictions on the most transformative sector in human history.
A serious, industry-designed code of conduct for AI safety — one that carries genuine force rather than serving as a public relations document — would need to address several core questions.
It must set industry-wide standards that remove ambiguity and define clear, consistent thresholds for escalation. One of the most troubling revelations from Tumbler Ridge was that the decision not to contact police was made over the objections of employees within the company who believed the content warranted it.
It must also establish clear reporting structures and real accountability. In practice, this means that when an automated system flags content, humans must review it against consistent criteria. Failures to do so should trigger serious investigation, whether through a dedicated regulatory body, a standards organization, or a third-party audit regime, and that investigation must produce transparent and meaningful consequences.
Lastly, any such framework must be established through genuine cross-border coordination. The internet does not recognize national boundaries, and a Canadian-only framework will remain incomplete so long as major AI platforms are headquartered and governed elsewhere. This may be the most difficult step, but Canada has signaled repeatedly – look no further than Mr. Carney’s celebrated speech in Davos – its ambition to lead.
Achieving such meaningful change and such safety requirements demands a level of cooperation that, historically, only dark and tragic events like Tumbler Ridge have been able to inspire. Let us respond to this tragedy with appropriate speed, but also with seriousness and intellectual honesty.
The question is not whether AI companies bear sole responsibility for what happened in Tumbler Ridge. They do not. The question is whether the industry has adequate, binding, and consistently applied standards for what to do when credible evidence of planned violence surfaces. It does not.
The private sector has an opportunity here that it would be unwise to squander: to demonstrate that technological innovation and public safety are not competing values, and that industry is capable of governing itself with the seriousness this moment demands. If the industry does not seize that opportunity, governments will act — and they will do so on a timeline and in a manner over which the technology sector will have far less influence.
Lead now, or be led. The choice belongs to us.