Canadian Government Considers Expanding Online Harms Bill to Include AI

It seems like the Online Harms bill might be getting worse as the government seeks to rope in AI chatbots.

One bill being pushed by the government that has been a heck of a roller coaster is the Online Harms bill. This is the bill that, at one point, had me thinking my career was over and that pretty much every single Canadian website would be forced to shut down. That’s because early prototypes of the bill said that all websites must take down anything deemed “harmful” (meaning whatever some random anonymous user might classify as “harmful”) within 24 hours or face a fine of 3% of annual turnover or $10 million, whichever is greater. You can see why my career was flashing before my eyes at that point.

Of course, as you know now, the government did something quite uncharacteristic at the time. They appeared to listen to the experts and the feedback they were getting and dropped those provisions, allowing the Canadian internet to live. While not all concerns were addressed – some expression-related concerns still lingered in the bill – it was a marked improvement over the prototype floated earlier. As time went on, the bill was not making much progress, so it got split into two separate bills in an effort to speed up passage. After that, there were not many developments, and I was busy covering the insanity going on in the US as well as several other stories, which meant I was probably missing a few things along the way on this story (I’m only one person). Either way, with the last federal election, the bill died on the order paper.

While the Online Harms bill was still problematic, it seemed to be steadily improving over time. If that trend of improvement hadn’t already ended, it certainly has now. Apparently, the government is looking to expand the forthcoming Online Harms bill to include AI chatbots. From the Globe and Mail (probably paywalled):

Two experts in artificial-intelligence policy say the forthcoming federal online harms bill must address AI chatbots and create a framework for reporting credible threats, after it emerged that concerning content from the Tumbler Ridge shooter was flagged but not reported to police.

Earlier this month, the 18-year-old shooter killed five children and a teacher’s aide at her former B.C. school after killing her mother and half-brother at her home.

Her posts were flagged by OpenAI’s automatic screening systems, the company confirmed Friday, and her ChatGPT account had been suspended because of concerning content. But the company did not notify law enforcement in June because it did not identify “credible or imminent planning.”

The experts say the online harms bill should take action to address a lack of guidelines on when AI platforms should report violent content to police. The platforms should have to be transparent about their policies for mitigating risk, particularly to children, they say.

“Internal flagging without a clear, legally defined escalation path is insufficient,” said Helen Hayes, associate director of policy at McGill University’s Centre for Media, Technology and Democracy (MTD).

“If staff identify credible indicators of imminent harm, there should be a defined regulatory framework telling them what to do next, not just a discretionary corporate judgment call,” she said in an e-mail.

Contrary to what the article implies, experts are actually opposed to the idea of expanding the forthcoming Online Harms bill to include AI chatbots. From Michael Geist:

Given that the Act is tailor made to address online harms, it isn’t surprising that some would suggest that it could be expanded to cover AI chatbots.

Yet the law was deliberately designed to avoid doing what politicians want the AI companies to do as it expressly exempted private communications and proactive monitoring from its scope. Indeed, applying the Online Harms Act to AI chatbots would not simply extend existing online safety rules to a new technology. It would require dismantling core privacy safeguards which were added after the government’s earlier online harms proposal faced widespread criticism for encouraging platform monitoring and rapid reporting to law enforcement. In effect, proposals to use online harms to regulate AI chatbots risks reviving many of the same surveillance concerns that forced the government back to the drawing board just a few years ago.

The Online Harms Act was crafted to regulate social media platforms, not all digital services. Section 2 defines a social media service as a “website or application that is accessible in Canada, the primary purpose of which is to facilitate interprovincial or international online communication among users of the website or application by enabling them to access and share content.” Regulated services under the bill were defined as social media services that reached a certain threshold of users. The legislative focus was therefore on large-scale dissemination and amplification, namely platforms where harmful content can rapidly reach broad audiences through sharing and recommendation systems.

None of this fits with an AI chatbot. Interactions with chatbots such as ChatGPT do not involve user-to-user communication or public dissemination. A prompt entered into a chatbot is typically visible only to the individual user and the provider. There is no audience exposure risk, the central concern animating the Online Harms Act framework.

In fact, the bill reinforced this limitation through an explicit privacy safeguard. Section 6(1) provides that the Act’s duties do not apply in respect of any private messaging feature of a regulated service. Section 6(2) defines private messaging as communications sent to a limited number of users selected by the sender rather than to a potentially unlimited audience. This exclusion reflects a clear policy boundary as the government chose to regulate publicly amplified harms while leaving interpersonal digital communications outside the regime. Chatbot interactions align far more closely with private messaging than social media publishing since they involve one-to-one exchanges rather than public distribution. Bringing chatbot prompts within the Online Harms Act would therefore require narrowing or effectively bypassing the statute’s privacy protections.

Moreover, Section 7(1) states that nothing in the legislation requires an operator to proactively search content communicated on the service in order to identify harmful content (subject to a narrow exception involving child sexual victimization materials). The current push to apply the Online Harms Act to AI chatbots moves in precisely the opposite direction. Identifying potentially dangerous behaviour from AI chatbot interactions would almost inevitably require analysis of prompts and conversational patterns within private exchanges. In practical terms, it would introduce monitoring into the very environments the Act was structured to avoid regulating.

Indeed, one of the major fears is that the Online Harms bill would bring in surveillance provisions for user-to-user interactions. The idea that the government, or organizations acting on its behalf, would go snooping around people’s private communications looking for anything deemed “harmful” is extremely unpopular, for reasons that should be obvious. This move, however, is seen as a step back toward mass government (or government-mandated) surveillance of user interactions.

The thing to remember is that when the original Online Harms consultations took place, multiple anti-racism groups argued, among other things, that increasing surveillance and encouraging over-policing would actually further undermine the rights of racialized communities. Here’s part of their submission at the time:

Particular aspects of concern regarding the proposed legislative framework from an anti-racism perspective include:

1. Incentivization of over-removal produced by: the short timeline for required response after content being flagged (24 hours); the obligation for online communication service providers (OCSPs) to take proactive measures to identify harmful content, including through use of automated systems (repeatedly shown susceptible to amplifying existing biases); vague definitions that will lead platforms to be over-inclusive in order to be “safe;” and significant financial penalties for non-compliance.

2. Conflation of very different types of online harms – for example, “hateful” or “terrorist” content with “child sexual exploitation” or “non-consensual sharing of intimate images” – under a single regulatory regime. This is particularly problematic given the existing deployment of categories of “hate speech” and “terrorist speech” to censor Black and Palestinian content online; abetted, in the Palestinian case, by efforts to institutionalize the International Holocaust Remembrance Alliance definition of antisemitism, widely critiqued for conflating criticisms of Israeli policy with antisemitism.

3. Increased information-sharing with law enforcement and security agencies regarding possibly harmful content. As law and technology scholar Michael Geist observes, this may “lead to the prospect of [artificial intelligence] identifying what it thinks is content caught by the law and generating a report to the RCMP” – likely intensifying the current state of over-policing and -surveillance of colonized and racialized communities.

4. Sweeping search powers for “inspectors” to verify compliance with the legislation, secret hearings, and new information-gathering powers for CSIS – allocating further police-like capacities to CSIS.

(emphasis mine)

So, such a move would directly go against what has been called for in the past. Increasing surveillance powers would undermine the rights of people – including racialized communities. As Geist notes, interactions with chatbots are user-to-service communications. This means that expanding the Online Harms bill to include AI chatbots risks increasing government surveillance of people interacting with such services.

Late last month, I warned against using the Tumbler Ridge mass shooting as an excuse to push through laws that would worsen the state of Canadian digital rights. At the time, I cited other examples of the government (both Conservative and Liberal) taking advantage of tragedy to ram through otherwise deeply unpopular technology legislation. I worried that the same thing was going to happen with this more recent tragedy and, judging by some of the commentary from politicians now, it looks like those fears are coming to fruition.

Drew Wilson on Mastodon, Twitter and Facebook.

