Why the 24 Hour Takedown Requirement in Canada’s Online Harms Proposal Was So Heavily Rejected

One of the themes emerging from the online harms consultation is that the 24 hour takedown requirement is a terrible idea.

Canada’s online harms proposal has seen pretty much universal and international condemnation. The reasons for this near universal condemnation are wide-ranging. It’s not that observers are divided over what is so terribly wrong with the proposal, but rather, there are so many things wrong with it that it’s difficult to encapsulate them all in a brief summary.

One of the big themes that came out of the consultation surrounds the 24 hour takedown system. Essentially, anyone can flag content as “harmful” and the operator of that site has 24 hours to take it down. To an outside observer who knows nothing about how the Internet works, it sounds like a great way to motivate sites not to drag their feet in taking down content that might actually be illegal.

Of course, for those who know how the Internet works, this is an absolute nightmare. Operationally, trying to take down “harmful” content within 24 hours is an extremely difficult prospect. For a smaller operation, hiring enough staff to police such content is a tall order. For larger operations, having to deal with thousands of complaints immediately is a logistical nightmare. Because of this, there are very real fears that automation would be employed and that flagged content would simply be taken down without oversight of any kind.
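To illustrate the operational bind, here is a minimal sketch of how a deadline-driven moderation queue could end up behaving. None of this is drawn from the technical paper; the class names, the staffing assumption, and the auto-removal default are hypothetical, and the point is simply to show why a hard 24 hour cutoff tends to make removal-by-default the path of least resistance.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# The 24 hour window proposed in the technical paper.
TAKEDOWN_WINDOW = timedelta(hours=24)

@dataclass
class Flag:
    content_id: str
    flagged_at: datetime
    reviewed: bool = False
    removed: bool = False

def enforce_deadline(queue: list[Flag], now: datetime, reviewers_available: int) -> None:
    """Process a flag queue the way a deadline-driven operator plausibly would:
    review what limited staff can get to, then auto-remove anything nearing
    the cutoff that nobody has looked at yet (the over-removal risk)."""
    # Human review for as many items as staffing allows (purely illustrative).
    for flag in queue[:reviewers_available]:
        flag.reviewed = True  # a person actually weighed context and nuance here

    # Everything else about to blow past the deadline gets taken down unseen.
    for flag in queue:
        if not flag.reviewed and now - flag.flagged_at >= TAKEDOWN_WINDOW - timedelta(hours=1):
            flag.removed = True
```

The last loop is the concern in a nutshell: once the clock, rather than a human judgment, decides the outcome, lawful content gets swept up along with everything else.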

The idea that content might just automatically get taken down raises significant concerns about free speech. If anyone can flag content at any time, and it gets removed with no questions asked, then this opens the door to massive amounts of abuse. If someone maliciously starts flagging content they don’t like, then they have a powerful censorship tool. It’s basically the equivalent of looking at the problem of judicial backlogs and automatically handing out guilty verdicts to everyone just to clear that backlog. It’s a terrible idea that throws any semblance of justice straight out the window.

These views are far from novel these days. Almost everyone recognizes the inherent problems this poses for free speech. Even as the consultation got started, the concept was met with near universal condemnation. At the time, the international community was just catching wind of the proposal, with some calling it a war on the open Internet. The EFF added to that condemnation, describing the 24 hour window as “a hair-trigger 24-hour takedown requirement (far too short for reasonable consideration of context and nuance)”.

Cory Doctorow echoed that sentiment, saying, “It’s a worst-in-class mutation of a dangerous idea that’s swept the globe, in which governments demand that hamfisted tech giants remove broad categories of speech — too swiftly for meaningful analysis.”

I, of course, submitted a response to the “consultation” and addressed the 24 hour takedown window specifically with this: “I wouldn’t even know where to begin with trying to be in compliance with this. As a result, I find myself wondering if my site has a future under these heavy regulations. This further raises the question, “If someone who actually follows these issues can’t even begin to figure out how to be in compliance, what about the millions of others who don’t even have my level of experience with these issues?””

After a bit, other responses started pouring in for the consultation. The Internet Society Canada Chapter responded with this: “If objectionable content evades the platform-internal censorship regime, that speech may be subject of a complaint from a member of the public (not restricted to complaints from Canadians). Once the content has been flagged by a complainant, the platform must, within 24 hours, decide whether to render the content inaccessible to persons in Canada.

The decision whether to censor the content must be conveyed to the complainant and (though this is not clearly spelled out) to the person who posted the content. They are to have an easy-to-use opportunity to have the decision promptly reviewed and reconsidered. The Proposal does not mention providing an opportunity for an exchange of competing views by the complainant and the content poster. It is hard to contemplate a serious back-and-forth within a 24 hour timeframe.”

The Independent Press Gallery commented, “As has been noted by many commentators, the Proposal is likely to result in responses by OCSP that favour risk aversion over freedom of expression. These risks to Charter-protected rights will be amplified if the Governor in Council is granted regulation making authority to broaden the scope “harmful content”. The risk aversion outcome is amplified by the obligation imposed on an OCSP to remove harmful content within 24 hours of being flagged and the excessive administrative monetary penalties which are not proportional to the supposed harms.”

University Law Professor, Michael Geist, commented, “The proposed approach includes a requirement for OCSs to implement measures to identify harmful content and to respond to any content flagged by any user within 24 hours. The OCSs would be required to either identify the content as harmful and remove it or respond by concluding that it is not harmful. The OCSs can seek assistance from the new Digital Safety Commissioner on content moderation issues. The proposed legislation would then incorporate a wide range of reporting requirements, some of which would be subject to confidentiality restrictions, so the companies would be precluded from notifying affected individuals.”

Geist further comments, “By mandating such rapid takedowns of content, there is a clear risk of over-removal of content since it is difficult to give the content a proper assessment to understand its context. Furthermore, since many companies will use automatic systems to meet their legal obligations, experience elsewhere suggests that there will be significant over-removal of otherwise lawful content.”

Open Media commented on this aspect of the proposal: “As we explain in our first blog analyzing the consultation’s proposals, if the government’s current proposal becomes law, we’ll see Facebook and Twitter deputized into being surveillance and censorship agents of the state, overzealously removing many of our lawful posts and reporting them directly to CSIS and the RCMP.”

Open Media adds, “Why? Because our government’s proposed system harshly penalizes any online platform that fails to proactively detect and remove actually-illegal content within 24 hours — and all our legitimate posts will be so much collateral damage. It triples down on this harm by suggesting obligating platforms to report all removed posts directly and automatically to law enforcement.”

CIPPIC also weighed in on the 24 hour takedown requirement, “The proposal’s requirement that OCSPs block unlawful content within 24 hours of being notified that such content is available on their services should be scrapped, in view of the serious free expression concerns it raises. The proposed requirement is more heavy-handed even than Germany’s controversial NetzDG law, given that the latter’s 24-hour blocking requirement applies only to “manifestly” unlawful content. NetzDG has served as a prototype for online censorship by authoritarian regimes around the globe, and Canada places its long history of leadership in advocating for human rights at risk by following such an approach.”

CIPPIC adds, “The proposal’s draconian requirements are in sharp contrast to the immunity provided to service providers in the US for user-generated content, and the requirement for “expeditious” removal of unlawful content in the UK and the EU. They may also be inconsistent with Canada’s international obligations under Article 19.17 of the Canada-US-Mexico Agreement (CUSMA).”

Citizen Lab commented on the 24 hour removal as well, “Moreover, 24 hours may be both too long and too short a window in which to require a platform company to act, depending on the type of content in question. The proposed legislation thus combines the worst of all worlds—potentially providing too little, too late in the case of NCDII, while courting unconstitutional overreach in the case of potential “hate speech” and “terrorist content”. TFGBV experts consistently emphasize that speed is of the essence in the case of NCDII, given the danger in, devastating consequences of, and ease of downloading, reproducing, and further distributing the image or video, leading to further and repeated revictimization of the person depicted. As mentioned above, identifying when something is NCDII (or child sexual exploitation) also poses fewer challenges compared to, in contrast, the likely more careful and nuanced analysis required for some situations of potential hate speech or “terrorist” content, for example.”

Citizen Lab further commented, “Indeed, even Germany’s NetzDG system, which has been deemed one of the more demanding platform regulation regimes, allows for up to seven days to assess and remove content that is not “manifestly unlawful”. Even to the extent that “hate speech” and “terrorist content” are unlawful, which has been the government’s justification for their selection, it is far from the case that any given piece of content will manifestly fall within or outside of the relevant legal definitions. This is yet another instance demonstrating the incoherence, impracticality, constitutional fragility, and danger of addressing five legally, substantively, and sociopolitically different categories of content within the single blunt legal regime proposed. Addressing any of these issues in good faith requires separate, targeted legal regimes tailored to each category of content.”

Internet Archive Canada also commented, “As we understand it, the government’s proposal would impose substantial costs, financial and otherwise, on any entity which is deemed to fall within the definition of an Online Communication Service. The definition could change by regulation at any time. This would make it a risky proposition to participate in online life in any way close to the definition of an OCS; with a change in definition, or even in interpretation, substantial investments of time, energy, and other resources could evaporate. And for those clearly within the concept of an OCS—whatever that is deemed to be—the costs of automated systems, the technical and human resources required to implement twenty-four hour takedowns, and all the actual and possible associated requirements, would be extraordinarily high. How could these be met by small libraries, not-for-profits, or startups? How could any but the largest multinational corporations play a part in shaping the online world? Would that situation truly address the problems at hand?”

Canadian Association of Research Libraries (CARL) notes the following: “Canadian libraries are also concerned that the proposal requires the use of algorithmic filters and AI driven tools to facilitate the removal of content. These problems are exacerbated by the 24-hour removal timelines and massive penalties for companies that fail to remove banned content. This will all-but guarantee that the system will lead to the mass removal of content. In addition, with no penalties in place for companies that over-remove content, there will be no incentive to restore content that was removed erroneously.”

The Canadian Civil Liberties Association (CCLA) commented on this as well, “The proposal includes 24-hour takedown requirements for platforms for a wide variety of content and fails to consider the significant risk to lawful expression posed by this requirement. There are few meaningful due process protections built into this scheme.”

Meanwhile, the International Civil Liberties Monitoring Group issued these comments, “Concerns also exist around the short time period in which platforms must render a decision. It is clear that some forms of harmful content are readily identifiable as illegal, and would not be impacted by a mandatory 24-hour response time. However, large amounts of content including in regard to “terrorist content” will likely fall in a grey area of lawfulness or harmfulness, requiring examination of context or seeking out further information. To expect a decision within 24 hours, under penalty of non-compliance, would likely force a “render inaccessible first, ask questions later” approach. While the proposal makes explicit mention of setting different time periods by regulation (including shorter time periods), it positions 24 hours as the standard by which to decide all moderation decisions; it will be necessary to justify going forward why there should be longer time frames, rather than needing to justify a short time frame such as 24 hours. For example, in Germany, for grey area content, platforms have up to a week to make a moderation decision, and are able to request a further extension if necessary. If this is the ultimate goal for the Canadian system, presenting these options clearly would have ensured a more comprehensive consultation and understanding of the moderation process.”

Access Now remarks with the following: “The timeframes for removing flagged content are onerous and will lead to significant impacts on freedom of expression and speech. The technical paper proposes that OCSPs “address all content that is flagged by any person in Canada as harmful content expeditiously.” The term expeditiously is defined as “twenty-four (24) hours from the content being flagged.” The Governor in Council has the authority to adjust this timeframe for different types of harmful content, including the power to shorten the timeline. Within that timeline, OCSPs have two options: if flagged content qualifies as “harmful,” the OCSP must remove the content from its platform; if flagged content does not qualify as “harmful” the OCSP must provide an explanation to the person who reported the content as to why it does not fall under the definition of harmful content. OCSPs that violate the framework are subject to financial penalties of up to three percent of global revenue or $10 million.”

Access Now adds, “A twenty-four-hour deadline to determine whether online speech meets the definition of harmful content and should be removed from a platform is an unreasonable and onerous obligation. Without adequate time to make a content moderation decision, OCSPs will by default remove flagged content regardless of its illegality or harmfulness. Content moderators are often overworked, have many cases to review, and are not truly qualified to make legal determinations. This makes over-reliance on legal criteria and inadequate, biased, or subjective censorship of content inevitable under harsh restrictive time frames for content removals. With such a short timeframe for review, it would be almost impossible for a content moderator to understand the full context of certain content. And for OCSPs that operate in multiple time zones, short time frames allocated for response would likely impose onerous burdens on smaller OCSPs with limited staff. Even worse, the harsh twenty-four hour deadline for content removals could compel OCSPs to deploy automated filtering technologies at a scale that could further result in the general monitoring of online content, ultimately violating the rights to freedom of expression and privacy. Any revisions to the proposal should consider these nuances and the capabilities of smaller OCSPs on the market.”
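To put the penalty figures Access Now cites into perspective, here is a rough back-of-the-envelope sketch. It assumes the “three percent of global revenue or $10 million” cap means whichever of the two is greater; the technical paper’s wording leaves room for interpretation, so treat the numbers as illustrative only.

```python
def penalty_ceiling(global_revenue_cad: float) -> float:
    """Rough sketch of the maximum penalty described above: up to 3% of global
    revenue or $10 million (assumed here to mean whichever is greater)."""
    return max(0.03 * global_revenue_cad, 10_000_000)

# A platform with $2 billion in global revenue could face up to $60 million,
# while a small operator with $1 million in revenue still faces the $10 million ceiling.
print(penalty_ceiling(2_000_000_000))  # 60000000.0
print(penalty_ceiling(1_000_000))      # 10000000.0
```

Either way, the ceiling dwarfs what a small site could ever absorb, which helps explain why risk-averse over-removal is the predictable response.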

We really could go on and on with these comments, but one thing is clear: the 24 hour removal requirement is a terrible idea. The submissions speak for themselves as to why this requirement is a big part of what makes the Online Harms proposal a threat to freedom of expression, among other things. Whether the government listens to sound reasoning, of course, remains to be seen. Still, this is one area on which almost everyone agrees: the 24 hour removal time window, as envisioned in the technical paper, is simply not workable.

Drew Wilson on Twitter: @icecube85 and Facebook.
