We have finished analyzing the Online Harms Bill (Bill C-63). Here are some of the takeaways I got out of it.
When the Online Harms bill appeared on the notice paper, I had every reason to be apprehensive. The Canadian government, through Bill C-11 and Bill C-18, has developed a multi-year track record of doing what it wants, cutting off debate, ignoring warnings, intimidating experts and critics who dared to offer even the slightest hint of criticism, and ignoring all evidence. What’s more, the government was classifying anyone who criticized the 2021 version of the Online Harms bill as a “non-ally“, suggesting that if you have “lived experiences”, then you agree with everything the government has to say, and that anyone else should be filtered out of the consultation process.
So, every sign leading up to the tabling of the legislation suggested that you had every reason to be worried about this bill. Even the mainstream media outlets weren’t all that credible when they got their insider copies of the legislation. After all, they had basically flushed their credibility down the toilet when they continually pumped out lies about Bill C-11 and Bill C-18 – the latter being the bill that would ultimately see them lose approximately $130 million per year in value after Meta dropped news links. What’s more, major media outlets have, off and on, acted more like cheerleaders for the 2021 version of the Online Harms bill. Simply put, the mainstream media outlets had zero credibility. Even if the large media companies’ conclusions were accurate, people had no reason to believe their comments. This left smaller, more credible outlets like us to try and sort out fact from fiction.
We, of course, had a number of things we were looking for as the bill was rumoured to be tabled. This includes whether or not smaller websites would get inundated with obviously fraudulent “complaints” and whether there were going to be mass censorship provisions. This was all inspired by what we saw in the 2021 version. We were always open to the idea that things had changed since 2021, but had little reason to believe there was anything other than minor change forthcoming in the legislation.
Then, the legislation got tabled. It was huge and ran north of 100 pages. So, in response, we had to split our main analysis into multiple parts as we sifted through the legislation line by line. You can read both parts here:
Some of the initial reaction we saw while doing our analysis is that the bill had massively improved. A lot of the problems we were concerned about had been addressed. So, just for fun, I thought it would be a good idea to go back to my initial questions in my article on what I was looking out for and see how well I could answer them.
Obviously, none of this is considered legal advice. I’m just a private citizen making my own interpretations of this legislation.
Addressing My Own Concerns With the Bill
Is There an Actual Definition of “Harmful” Content?
Yes. In the definitions, you’ll see this in the bill:
harmful content means
(a) intimate content communicated without consent;
(b) content that sexually victimizes a child or revictimizes a survivor;
(c) content that induces a child to harm themselves;
(d) content used to bully a child;
(e) content that foments hatred;
(f) content that incites violence; and
(g) content that incites violent extremism or terrorism. (contenu préjudiciable)
These are known as the seven categories of harmful content. One of the categories that gets a lot of attention is “content that foments hatred”. The definitions are further expanded with this:
content that foments hatred means content that expresses detestation or vilification of an individual or group of individuals on the basis of a prohibited ground of discrimination, within the meaning of the Canadian Human Rights Act, and that, given the context in which it is communicated, is likely to foment detestation or vilification of an individual or group of individuals on the basis of such a prohibited ground. (contenu fomentant la haine)
This definition, in and of itself, is vague and could be open to interpretation. However, a separate “greater certainty” provision (I don’t know why it is formatted this way, as it’s confusing) further tightens it:
For greater certainty — content that foments hatred
(3) For greater certainty and for the purposes of the definition content that foments hatred, content does not express detestation or vilification solely because it expresses disdain or dislike or it discredits, humiliates, hurts or offends.
So, essentially, if you are posting stupid comments online expressing your distaste for someone, then your comments are not actually scoped into the bill. In other words, you have to actually be openly plotting with others to cause bodily harm or encouraging others to cause physical property damage before that content triggers provisions in this legislation. Honestly, that is very fair because it specifically targets actual criminal activity.
Is There Proper Scoping of Websites in the Legislation?
Yes. This did initially trip me up when reading this legislation, but there is a series of definitions involved here. Often, the term “operator” is used throughout the legislation. This is covered in the definitions:
operator means a person that, through any means, operates a regulated service. (exploitant)
OK, so what is a regulated service?
regulated service means a service referred to in subsection 3(1). (service réglementé)
Before we go to Section 3(1), let’s also grab the definition of a social media service because that is also important:
social media service means a website or application that is accessible in Canada, the primary purpose of which is to facilitate interprovincial or international online communication among users of the website or application by enabling them to access and share content. (service de média social)
For greater certainty — social media service
(2) For greater certainty, a social media service includes
(a) an adult content service, namely a social media service that is focused on enabling its users to access and share pornographic content; and
(b) a live streaming service, namely a social media service that is focused on enabling its users to access and share content by live stream.
Alright, now let’s go to Section 3(1) and see what it has to say:
Regulated service
3 (1) For the purposes of this Act, a regulated service is a social media service that
(a) has a number of users that is equal to or greater than the significant number of users provided for by regulations made under subsection (2); or
(b) has a number of users that is less than the number of users provided for by regulations made under subsection (2) and is designated by regulations made under subsection (3).
Regulations — number of users
(2) For the purposes of subsection (1), the Governor in Council may make regulations
(a) establishing types of social media services;
(b) respecting the number of users referred to in that subsection, for each type of social media service; and
(c) respecting the manner of determining the number of users of a social media service.
Regulations — paragraph (1)(b)
(3) For the purposes of paragraph (1)(b), the Governor in Council may make regulations designating a particular social media service if the Governor in Council is satisfied that there is a significant risk that harmful content is accessible on the service.
Information provided by Commission
(4) At the Minister’s request and within the time and in the manner specified by the Minister, the Commission must provide to the Minister any information that is relevant for the purposes of subsection (3).
So, when you string all of these bits and pieces together, everything points to the fact that the only thing getting regulated is social media platforms. All other smaller websites are exempt. I know, it is a lot of steps to figure this out and it is annoyingly scattered throughout the legislation, but rearranging these provisions and lining them all up gives you a clear picture that this really does only target social media platforms and no one else. I, for one, am breathing a huge sigh of relief over that.
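To make that scoping test a little more concrete, here is a minimal sketch of how I read Section 3(1). The actual user threshold would only be set later by regulation under subsection 3(2), so the number below is a purely hypothetical placeholder, and the function name is my own.

```python
# A minimal sketch of the Section 3(1) scoping test as I read it.
# The user threshold is an assumption; the bill leaves it to regulation.

HYPOTHETICAL_USER_THRESHOLD = 1_000_000  # assumption: to be set by regulation under 3(2)

def is_regulated_service(is_social_media: bool, user_count: int,
                         designated_by_regulation: bool) -> bool:
    """Return True if a service would fall under the Act per Section 3(1)."""
    if not is_social_media:
        # Non-social-media websites are outside the Act entirely.
        return False
    # 3(1)(a): at or above the significant number of users set by regulation.
    if user_count >= HYPOTHETICAL_USER_THRESHOLD:
        return True
    # 3(1)(b): below the threshold, but designated under 3(3) because of a
    # significant risk that harmful content is accessible on the service.
    return designated_by_regulation

# Example: a small personal blog or forum is simply out of scope.
print(is_regulated_service(False, 5_000, False))  # False
```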
Is the 24 Hour Requirement Still Intact?
No. The only remnant I was able to find that revolves around 24 hour removals is if the operator of a social media website identifies child sexual exploitation content on their own platform. Then the platform has to take action within a 24 hour period. At that point, yeah, that is more than reasonable.
Are Fines Scaled Properly?
Because this legislation only focuses on social media, most websites won’t even have to worry about this. Even then, there is, in fact, a scale thanks to the key phrase of “not more than” in the following section:
Penalty
(2) Every operator that commits an offence under subsection (1) is liable
(a) on conviction on indictment, to a fine of not more than 8% of the operator’s gross global revenue or $25 million, whichever is greater; or
(b) on summary conviction, to a fine of not more than 7% of the operator’s gross global revenue or $20 million, whichever is greater.
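Just to show how that ceiling scales, here is a rough sketch under the wording above. The function names and the example revenue figures are my own inventions, and what gets returned is the maximum possible fine, not what would actually be imposed.

```python
# A rough sketch of how the penalty ceiling scales with gross global revenue.

def max_fine_indictment(gross_global_revenue: float) -> float:
    # Greater of 8% of gross global revenue or $25 million.
    return max(0.08 * gross_global_revenue, 25_000_000)

def max_fine_summary(gross_global_revenue: float) -> float:
    # Greater of 7% of gross global revenue or $20 million.
    return max(0.07 * gross_global_revenue, 20_000_000)

# Example: a hypothetical platform with $10 billion in gross global revenue.
print(max_fine_indictment(10_000_000_000))  # 800000000.0 (the 8% figure wins)

# Example: a hypothetical platform with $100 million in gross global revenue.
print(max_fine_indictment(100_000_000))     # 25000000 (the $25 million floor wins)
```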
Is ISP Level Website Blocking Removed?
Yes. There is no reference I could find that requires ISPs to block websites. There is a reference to requiring platforms to provide users with the ability to block other users here:
Tools to block users
58 The operator of a regulated service must make available to users who have an account or are otherwise registered with the service tools that enable those users to block other users who have an account or are otherwise registered with the service from finding or communicating with them on the service.
This is, as far as I can tell, a standard feature. Only Elon Musk is dumb enough to openly contemplate removing this feature. Even then, X/Twitter is little more than a giant spam botnet these days, so nothing of value would be lost if Canadians no longer had access to that platform anyway.
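As a side note, here is a trivial sketch of the kind of per-user block list Section 58 seems to describe; the class and method names are entirely my own.

```python
# A trivial sketch of a per-user block list along the lines of Section 58.

class BlockList:
    def __init__(self) -> None:
        self._blocked: dict[str, set[str]] = {}

    def block(self, user: str, other_user: str) -> None:
        """Record that `user` no longer wants `other_user` to find or contact them."""
        self._blocked.setdefault(user, set()).add(other_user)

    def can_contact(self, sender: str, recipient: str) -> bool:
        """Contact (and discovery) is refused once a block is in place."""
        return sender not in self._blocked.get(recipient, set())

blocks = BlockList()
blocks.block("alice", "bob")
print(blocks.can_contact("bob", "alice"))   # False: bob can no longer reach alice
print(blocks.can_contact("alice", "bob"))   # True: the block only works one way here
```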
Can Websites Appeal a Complaint?
Yes. Social media websites can challenge decisions:
Representations
(2) The Commission must give the operator a reasonable opportunity to make representations with respect to the complaint.
The legislation further states:
Representations
104 (1) A person that is served with a notice of violation may make representations to the Commission within the time and in the manner set out in the notice, in which case the Commission must decide, on a balance of probabilities, after considering any other representations that it considers appropriate, whether the person committed the violation.
Decision — violation committed
(2) If the Commission decides that the person committed the violation, it may
(a) impose the penalty set out in the notice of violation, a lesser penalty or no penalty;
(b) suspend payment of the penalty subject to any conditions that the Commission considers necessary; and
(c) make an order requiring the person to take, or refrain from taking, any measure to ensure compliance with this Act.
So, yes, there is a process here that can be followed.
Who Enforces This Legislation and How Does Enforcement Work?
This is a bit of a complicated one. The legislation establishes three new “offices” as it were. Those “offices” are:
- The Digital Safety Commission of Canada
- The Digital Safety Ombudsperson of Canada
- The Digital Safety Office of Canada
What’s more, there can be active consultations with existing organizations such as the CRTC as seen here:
Consultation
137 The Commission and the Canadian Radio-television and Telecommunications Commission must consult with each other to the extent that they consider appropriate in carrying out their respective mandates.
Without getting into huge amounts of copy-pasta, the short of it is that the Digital Safety Commission of Canada (generally referred to as “the Commission” or “Commission” in the context of this legislation) handles the bulk of administering the legislation (at least, I think that’s a fair assessment). Complaints are generally sent to the Commission. The Commission then assesses whether the complaints are valid or not. If a complaint is determined to be valid, the Commission can basically contact a social media platform and issue a notice asking the platform to disable the offending content. The platform can remove that content or, alternatively, make a representation before the Commission explaining why they feel that such content shouldn’t be removed.
Probably the sketchy part is the fact that the Commission can also act as a judicial body and generate a legal ruling against a social media platform. Presumably, that ruling could be appealed in an actual court setting afterwards if the platform feels it has a case. It’s a very bureaucratic process, sure, but there is a process here.
Is It Possible to Challenge a Decision in Court?
Yes. I didn’t honestly see anything in the bill that said otherwise. Even if there was a provision saying otherwise, it likely wouldn’t be constitutional.
Are There Going to Be Carve Outs?
No. I didn’t see any special carve outs. In fact, the legislation is very specifically focused on only scoping in social media and no one else. We’re fine there.
Other Concerns
There are a few concerns I’ve seen around the web that others have expressed about the bill after it was tabled. Some of these are concerns I believe I can also quickly address:
Does This Legislation Affect Erotic Literature or Fictional Imagery?
Short answer: not really. There are two sections in the definitions that I’m aware of that even get close to that. The first is a very long-winded definition about child sexual exploitation:
content that sexually victimizes a child or revictimizes a survivor means
(a) a visual representation that shows a child, or a person depicted as being a child, who is engaged in or depicted as being engaged in explicit sexual activity;
(b) a visual representation that depicts the sexual organs or anal region of a child, if it is reasonable to suspect that the representation is created or communicated for a sexual purpose;
(c) written material or an audio recording whose dominant characteristic is the description, presentation or representation of explicit sexual activity with a child, if it is reasonable to suspect that the material or recording is created or communicated for a sexual purpose;
(d) a visual representation, written material or an audio recording that shows, describes, presents or represents any of the following, if it is reasonable to suspect that the representation, material or recording is created or communicated for a sexual purpose:
(i) a person touching, in a sexual manner, directly or indirectly, with a part of their body or with an object, any part of the body of a child or a person depicted as being a child,
(ii) a person who is engaged in or depicted as being engaged in explicit sexual activity in the presence of a child or a person depicted as being a child, or
(iii) a person exposing their sexual organs or anal region in the presence of a child or a person depicted as being a child;
(e) a visual representation, written material or an audio recording in which or by means of which sexual activity between a person who is 18 years of age or more and a child is advocated, counselled or planned, other than one in which or by means of which sexual activity between a person who is 16 years of age or more but under 18 years of age and another person who is less than two years older than that person is advocated, counselled or planned;
(f) a visual representation that shows a child who is being subjected to cruel, inhuman or degrading acts of physical violence;
(g) any excerpt of a visual representation referred to in paragraph (a), if it is reasonable to suspect that the communication of the excerpt perpetuates harm against a person who as a child appeared in the visual representation; and
(h) a visual representation, written material or an audio recording that, given the context in which it is communicated, is likely to bring to light a connection between a person and a visual representation, written material or audio recording referred to in any of paragraphs (a) to (d) in which the person appeared as a child, if it is reasonable to suspect that the communication of the representation, material or recording that is likely to bring to light that connection perpetuates harm against the person. (contenu représentant de la victimisation sexuelle d’enfants ou perpétuant la victimisation de survivants)
In short, you’d have to be counselling or advocating for child sexual abuse for this legislation to be triggered. What’s more, it revolves around a real human being.
The other related section that I’m aware of is this:
intimate content communicated without consent means
(a) a visual recording, such as a photographic, film or video recording, in which a person is nude or is exposing their sexual organs or anal region or is engaged in explicit sexual activity, if it is reasonable to suspect that
(i) the person had a reasonable expectation of privacy at the time of the recording, and
(ii) the person does not consent to the recording being communicated; and
(b) a visual recording, such as a photographic, film or video recording, that falsely presents in a reasonably convincing manner a person as being nude or exposing their sexual organs or anal region or engaged in explicit sexual activity, including a deepfake that presents a person in that manner, if it is reasonable to suspect that the person does not consent to the recording being communicated. (contenu intime communiqué de façon non consensuelle)
So, in other words, the content would generally have to be a recording of a real person made without their knowledge or consent, or, alternatively, a deepfake of a real person, before provisions of this legislation are triggered. Other than that, as far as I’m aware, erotic literature or animations are unaffected by this legislation.
Am I Going to Get Life in Prison?
There are only two provisions in this legislation that I’m aware of that talk about putting someone in jail for life. The first is this:
Advocating genocide
318 (1) Every person who advocates or promotes genocide is guilty of an indictable offence and liable to imprisonment for life.
Yeah, in that case, you would have to be posting some seriously sick stuff to trigger that.
The only other provision found in this legislation is this (and I strongly encourage you to at least read the last paragraph in this excerpt because it provides critical guard rails):
Offence motivated by hatred
320.1001 (1) Everyone who commits an offence under this Act or any other Act of Parliament, if the commission of the offence is motivated by hatred based on race, national or ethnic origin, language, colour, religion, sex, age, mental or physical disability, sexual orientation, or gender identity or expression, is guilty of an indictable offence and liable to imprisonment for life.
Definition of hatred
(2) For the purposes of subsection (1), hatred has the same meaning as in subsection 319(7).
Exclusion
(3) For greater certainty, the commission of an offence under this Act or any other Act of Parliament is not, for the purposes of this section, motivated by hatred based on any of the factors mentioned in subsection (1) solely because it discredits, humiliates, hurts or offends the victim.
So, essentially, you have to be committing actual hate crimes (which typically involve harm to an individual or property damage) in order to trigger something like this. Making a stupid flippant comment online does not trigger the provisions in this bill in this context, as the exclusion section makes perfectly clear.
For those wondering about 319(7), I was able to find Section 319 in the Criminal Code here. I’m not sure what it’s referring to when it mentions subsection (7), but my takeaway is that this revolves around an actual crime being committed.
I did see some concerns about getting handed 2 year prison sentences. There was also the odd comment about a 5 year prison sentence. I was unable to find any reference in the legislation specifically talking about prison sentences of either length.
Are There Still Concerns With This Legislation?
Yes. I was able to find some things in this legislation that I personally find concerning. I’m sure there are other experts out there that can find other elements.
The biggest concern I personally had was the fact that the Commission would act as a judicial body as well. I don’t know why such a body would be given that kind of power in the first place. You’d think that if there is a major dispute between the Commission and a social media platform, the traditional court system would actually be an ideal third party to handle this. Not so according to this legislation:
Enforcement of Orders
Enforcement of orders
95 (1) An order of the Commission may be made an order of the Federal Court and is enforceable in the same manner as an order of that court.
Procedure
(2) An order may be made an order of the Federal Court by following the usual practice and procedure of that court or by filing a certified copy of the order with the registrar of that court.
I think others have rightfully aired similar concerns about this sweeping new power being obtained by such a governmental body.
The other concern I noted has to do with prohibiting platforms from telling users whose content was flagged that their activity has been reported to authorities. This has to do with Section 59(2) (I’m posting the full section for greater context):
Tools and processes to flag harmful content
59 (1) The operator of a regulated service must implement tools and processes to
(a) enable a user to easily flag to the operator content that is accessible on the service as being a particular type of harmful content;
(b) notify a user who flagged content as being a particular type of harmful content of the operator’s receipt of the flag as well as of any measures taken by the operator with respect to the content or of the fact that no measures were taken; and
(c) notify a user who communicated content that was flagged as being a particular type of harmful content of the fact that the content was flagged as well as of any measures taken by the operator with respect to the content or of the fact that no measures were taken.
Prohibition – notification of measures
(2) In notifying a user of any measures taken with respect to content in accordance with paragraphs (1)(b) and (c), the operator must not notify a user of any report that the operator has made to a law enforcement agency in relation to the content.
Personally, I think this sort of decision should be left to the discretion of the social media platform in question. If they choose not to tell the affected user that the RCMP have been notified, fine, there’s not really much that can be done about that sort of thing. To outright bar platforms from disclosing that, however, opens the door for secret police investigations without the knowledge of the affected user. I find that a pretty uncomfortable aspect in all of this.
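For what it’s worth, here is a rough sketch of what I picture a Section 59-compliant notification to the posting user looking like. The function, the content ID and the message wording are all my own inventions.

```python
# A rough sketch of a notice to the user whose content was flagged (59(1)(c)),
# written so that it cannot mention a law enforcement report (59(2)).

def notify_flagged_user(flagged_content_id: str, measures_taken: str,
                        reported_to_law_enforcement: bool) -> str:
    """Build the notice sent to the user whose content was flagged."""
    notice = (f"Your content ({flagged_content_id}) was flagged as a type of "
              f"harmful content. Measures taken: {measures_taken or 'none'}.")
    # Section 59(2): the notice must NOT mention any report the operator made
    # to a law enforcement agency, so that detail is deliberately left out.
    _ = reported_to_law_enforcement
    return notice

print(notify_flagged_user("post-1234", "content made inaccessible in Canada",
                          reported_to_law_enforcement=True))
```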
There is a third section that wasn’t so much a concern as it was a legal question mark for me. This has to do with the seemingly innocuous Section 63, which is this:
Duty to preserve certain harmful content
63 (1) If the operator of a regulated service makes inaccessible to all persons in Canada content that incites violence or content that incites violent extremism or terrorism, the operator must preserve that content, and all other computer data related to it that is in the operator’s possession or control, for a period of one year beginning on the day on which the content is made inaccessible.
Duty to destroy
(2) After the end of the one-year period, the operator must, as soon as feasible, destroy the content and any other computer data related to it that would not be retained in the ordinary course of business, as well as any document that was prepared for the purpose of preserving that content and data, unless the operator is required to preserve the content, data or document under a judicial order made under any other Act of Parliament or an Act of the legislature of a province or under a preservation demand made under section 487.012 of the Criminal Code.
Specifically, my legal question mark has to do with the “Duty to destroy” section. The section says that the social media platform must destroy the retained evidence as soon as feasible after the one-year period unless Canadian authorities require the platform to hold on to that data for longer. OK, all fine and dandy for the Canadian side of the equation.
Where things get unclear is what happens if that same evidence is being requested to be retained by another country. What if that second country requires the platform to hold on to that evidence for a period of two years? At that point, the platform has to choose between abiding by that other country’s law to retain that data or abiding by the Canadian law to destroy it after a period of one year. As far as I can tell, the provision really isn’t that clear on what to do in that scenario.
Personally, I would think an amendment could include language along the lines of “Exclusion: If the operator is required by law in another jurisdiction to preserve the retained harmful content for a period of time greater than 1 year, then the operator is not required to destroy that material.” You know, something along those lines at least. In that scenario, there wouldn’t be the potential for an international legal conflict.
I mean, I can speculate that the original language suggests the length of time such material is to be retained is dictated by the jurisdiction that has the longest retention requirements, but I’m only speculating here.
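To make that speculation concrete, here is a tiny sketch of what a “longest requirement wins” reading would look like in practice. The second country’s two-year requirement is hypothetical, and the bill itself doesn’t actually spell this rule out.

```python
# A sketch of the "longest retention requirement wins" reading (speculative).

from datetime import date, timedelta

CANADIAN_RETENTION_DAYS = 365  # Section 63(1): one year from inaccessibility

def destruction_date(made_inaccessible_on: date,
                     other_requirements_days: list[int]) -> date:
    """Earliest day the content could be destroyed if the longest retention
    requirement across all jurisdictions governed."""
    longest = max([CANADIAN_RETENTION_DAYS] + other_requirements_days)
    return made_inaccessible_on + timedelta(days=longest)

# Example: a hypothetical second country requires two years of retention.
print(destruction_date(date(2024, 3, 1), [730]))  # 2026-03-01 under this reading
```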
Now, I’m not saying that there aren’t any other concerns with regard to this bill out there. These are just the points of interest I was able to find on my own.
So, those are my concluding thoughts on the legislation. It’s not problem free, but it is certainly a marked improvement over the 2021 legislation. I, for one, feel relieved that the bill, as currently drafted, won’t be a threat to my website in any way. It’s a huge load off of my shoulders at least and I’m grateful I have one less thing to worry about. Obviously, in theory, this can change if amendments are introduced that change things up, yet again, but in its current form, the Online Harms bill isn’t something that will be keeping me awake at night after all.
Drew Wilson on Twitter: @icecube85 and Facebook.