The Tumbler Ridge Shooting Was Tragic. Making it an Excuse for Government Surveillance Would Be Too

OpenAI has been in the news here in Canada for a while now thanks to the Tumbler Ridge shooting and questions over how much the company should disclose to the government.

Earlier this month, a gunman opened fire at a school in Tumbler Ridge, killing 8 people, including 6 school children as well as family members, before killing himself. On top of that, 27 others were injured. It resulted in a day of mourning, visits from the leaders of political parties as well as multiple ministers, and everything else a tragedy of this scale entails.

I know my reaction to this is pretty common: shock and horror that this happened, along with the thought that this is the kind of thing that only happens in the US, not Canada. Unsurprisingly, this is considered the worst mass shooting in Canadian history. Simply put, this sort of thing just doesn’t happen in Canada, yet it did.

What is known is that the shooter had a long and well-known history of mental health problems. Police had been called to the shooter’s home multiple times.

Another thing that came out of this was a discussion about how the local hospital only operates during certain hours of the day. Ambulances simply aren’t in operation during the night, and some residents made a point of highlighting this because it has long been a limitation of Northern Health.

So, my hope was that this would kick-start a healthy conversation about mental health and what should be considered proper medical services in smaller communities. Yet, based on the headlines I’ve seen, the conversation in the media quickly turned to trying to blame technology – or at the very least, technology companies. After all, trying to address things like mental health and proper levels of medical services is hard; saying it’s all the internet’s fault is easy and financially convenient.

This is by no means the first time Canada has taken advantage of tragedy to try and rush through legislation that is both unconstitutional and wildly unpopular. Over a decade ago, a big story was the Amanda Todd case. Faced with harassment and bullying by someone who would later be identified as a criminal operating out of another country, Todd ended up taking her own life. Todd’s mother, Carol Todd, did so much to honour her memory while trying to bring the criminal to justice.

In 2014, however, things took a dark turn in that story. At the time, the Harper Conservative government was in power and pushing hard to implement warrantless wiretapping legislation. The Conservatives took advantage of the story and used it as a selling point for this wildly unpopular legislation. They claimed, among other things, that warrantless wiretapping would allow law enforcement to go after the bad guys, so it was the perfect reason to do away with any semblance of privacy rights in Canada. Carol Todd was, understandably, very upset, and argued that implementing mass government surveillance would not honour Amanda’s memory. Realizing that Carol wouldn’t cooperate as a pawn in their legislative efforts, the Conservatives threw her under the bus by shutting her out of the ensuing debates, which made the party look even worse. Thankfully, controversies like that one ultimately stopped the legislation from passing and even cost the Harper Conservatives their government in the following election.

The Liberal party under Trudeau wasn’t really any better on this front. While Trudeau was Prime Minister, numerous wildfires spread across Canada over the years. Whole towns were either threatened by wildfire or, in the case of Lytton, BC, burned down almost completely. It was a story of just how bad climate change had gotten and why urgent action was needed for the country to do better on this front. Yet, that’s not necessarily what happened.

Instead, Trudeau took the wildfire story and used it as a political opportunity. At the time, Trudeau was pushing the failed link tax policy. As wildfires were threatening Yellowknife, Northwest Territories in 2023, Trudeau and his government claimed that the whole situation was basically Meta’s fault because the company chose to block news links. It was a slimy move by Trudeau, and nothing about the accusations was true. Still, it showed that Trudeau was no better than Harper when it came to exploiting tragedy to push nonsensical legislation.

Fast forward to today and we are witnessing what appears to be the same old story starting to unfold. In the wake of the Tumbler Ridge shooting, the government – and the mainstream media for that matter – turned their attention to OpenAI. As it turns out, the Tumbler Ridge shooter had an account with OpenAI. The account was flagged by the company for problematic activity. As a result, the decision was made to ban the account, but not alert police. After the mass shooting took place, OpenAI stepped forward and said that they knew about the shooter having an account. Let’s be real here: OpenAI could’ve said nothing and we would have never known that the shooter had the account in the first place, but OpenAI seemingly decided to do the honest thing here (which is pretty unusual for an AI company).

Politicians, in response, freaked out and demanded, among other things, meetings with executives, asking why more wasn’t done. As things played out, the story was basically framed as a near-scandal for OpenAI. During the investigation into what OpenAI knew, it turned out that the shooter had a second account that was able to evade detection. That happens. You’re probably not going to have perfect detection of ban evasion.

The important thing that I’ve seen missing in all of this is what the actual activity on the shooter’s first account entailed. Were the rule violations in question even related to anything violent? We don’t know. If they weren’t, then there is a plausible reason why the company decided to ban the account but not alert police: they would only know what is on record on their end. That context, for the longest time, was missing. More recently, OpenAI said that the second account they found would’ve triggered a police report. From the CBC:

OpenAI says the company found a second ChatGPT account belonging to the Tumbler Ridge, B.C., shooter after her name was made public — even though another account was banned in June for posts about gun violence.

The revelation comes in a letter written by Ann O’Leary, OpenAI’s vice-president of global policy, addressed to Artificial Intelligence Minister Evan Solomon.

The second account was flagged to police, the letter said.

OpenAI said that it has since changed its protocols and that the first account, under the new rules, would’ve been flagged and reported to police:

O’Leary also wrote that the California-based company would’ve flagged the shooter’s initial account to police under new safety policies the company started to develop “several months ago.”

“Mental health and behavioural experts now help us assess difficult cases, and we have made our referral criteria more flexible to account for the fact that a user may not discuss the target, means and timing of planned violence in a ChatGPT conversation but that there may be potential risk of imminent violence,” O’Leary wrote in the letter that has been shared with media.

There’s a lot of complexity involved here that the media isn’t really grasping. There is a very tricky balance between what should be forwarded to police and respecting a user’s privacy. To put it another way, it’s one thing to say that potential criminal activity or an imminent risk of harm should be forwarded to police, but it’s another thing entirely to suggest that everything should be forwarded to authorities. That point was made especially clear when the US government started flooding companies with subpoenas, demanding to know everything the companies knew about people who were posting criticisms of the government. How does one strike that balance and find the line between what companies should disclose to police and when to protect the privacy rights of users? I think even some of my critics would agree that this is the million dollar question.

It’s not just me raising concerns about potentially overreacting and requiring AI companies to report everything to police. University law professor Michael Geist has raised similar points. From the Globe and Mail:

The desire to hold someone responsible for the potential prevention of the Tumbler Ridge tragedy is understandable. Add in the mounting pressure for AI regulation, and OpenAI makes for a perfect target for blame and threats of government action.

Yet holding AI chatbots liable for reporting to police what users privately post in their conversations creates its own risks, undermining privacy and effectively encouraging heightened corporate surveillance.

Most global AI regulation has to date adopted a risk-based analysis that seeks to mitigate potential harms. The European Union’s AI Act classifies AI systems on a spectrum of risk with steadily increasing regulations for those that pose the highest risks. General purpose AI systems such as ChatGPT face a range of regulatory requirements given their potential impact, but are not treated as high-risk AI systems.

Debates over regulating AI content typically focus on the potential harms that may arise from AI chatbot outputs. Last year the U.S. Congress held hearings on regulating outputs involving suicide or self-harm after a 16-year-old teen committed suicide that his parents blamed on ChatGPT, characterizing its role as that of a “suicide coach.” There have been similar fears about inaccurate health information that could lead users to follow dangerous medical advice or delay seeking medical attention. These issues have led AI companies to more proactively address the information generated by their services.

But using regulation to require more accurate information – or even to block certain topics from discussion altogether – is far different than mandating that companies monitor what their users say and establish lower thresholds for reporting suspicious behaviour.

If the standards for reporting are too low, there is the real risk that users could face police investigations or worse.

Moreover, given that internet intermediaries such as tech companies find themselves at the centre of virtually everything people write – whether text messages, e-mails, articles stored on cloud-based services, or exchanges with chatbots – these standards of disclosure would presumably apply to virtually all written expression.

So, it really comes down to where you draw the line between ‘must report this to police’ and ‘must protect the rights of the user’. It’s an extremely complex issue with no simple answers. The disappointing thing here is that, given the Canadian government’s track record of responding to complex issues with a giant legislative sledgehammer and an attitude of ‘shoot first, ask questions later’, it’s hard to be optimistic that the Canadian government would even come close to striking a balance in all of this. The Online Streaming Act, the Online News Act, and the Digital Services Tax are all examples of the government saying ‘damn the consequences’ and swinging the legislative hammer without thinking anything through. Given the government’s insane appetite for passing warrantless wiretapping legislation at the behest of the spy community, it’s not out of the question that the government would use the Tumbler Ridge shooting as a fresh excuse to push warrantless wiretapping. I can only hope it doesn’t come to that, but I can’t say it’s an impossible scenario either, given that this very scenario already played out more than a decade ago.

In these debates, there are a lot of very nuanced and important questions to raise – especially if the government is looking at using the legislative sledgehammer again. The government’s track record on the internet side of things is absolutely terrible, but one can hope that, for once, it will think carefully on these issues.




1 thought on “The Tumbler Ridge Shooting Was Tragic. Making it an Excuse for Government Surveillance Would Be Too”

  1. Insert Name Here

    The Liberals seem rather primed to turn the internet into a snitcher’s paradise: ever since the election they’ve pivoted towards the ‘protect the kids’ messaging, and now they have a massive source of fuel to push their agenda in the form of Tumbler Ridge.

    First it’ll be Bill C-16’s expansion to the CSAM MRA (which amends the act to include all types of internet services, whether home or abroad, as mandated reporters), next will be this threat to legislate AI chatbots as obviously reported by the media (and this very Freezenet article) and from whatever statements the ministers blabbed about, then, as the CBC reported earlier in the week, will be the next attempt to zerg rush yet another ‘lawful access’ bill through the Commons…and of course the yet-to-be-seen Online Harms v2.0 bill…coming soon to an order paper near Parliament.
