International Civil Liberties Monitoring Group, Access Now, and CPE Responses Posted

The International Civil Liberties Monitoring Group, Access Now, and the Cybersecure Policy Exchange have posted their responses to the online harms “consultation”.

How large can this list get? Well, the list of those who have responded and condemned the online harms proposal as-is is still growing. At first, it was just me posting a response to the online harms consultation. What followed was a major movement to put a stop to this. The list includes the Internet Society Canada Chapter, the Independent Press Gallery of Canada, Michael Geist, Open Media, CIPPIC, Citizen Lab, the Internet Archive, CARL, the Canadian Civil Liberties Association, and the CCRC. With the seeming exception of the CCRC, everyone we have documented so far has opposed the online harms proposal in its current form.

Now, we are adding to that list. First, we see the response from the International Civil Liberties Monitoring Group. Many of their concerns echo those that others have raised. From their submission (PDF):

Our interest in the consultation regarding the government’s proposal to address online harms lies in several areas. As a coalition whose mandate is to protect civil liberties, we also recognize the need to address real threats of violence and believe it is important and urgent that action is taken to address hate-based violence, and support government efforts to do so. It is clear that in supporting various freedoms, including freedom of expression, it is not enough to simply protect against censorship, but to also address actions and environments that make it impossible for individuals and communities to fully exercise their rights. We hope that our submission helps to strengthen and support that crucial policy goal.

However, we see several worrisome and even troubling aspects to this current proposal that may in fact undermine the stated goals. These include:

  • the further expansion of problematic definitions of terrorism and enforcement online, which have been shown to more often target many of the very communities which the government proposes to support with this new regime.
  • a questionable conflation of efforts to address wildly different harms which need very specific solutions
  • a monitoring and removal process that threatens freedom of expression and privacy rights, and which will likely have severe and significant impacts on already marginalized communities
  • new obligatory reporting rules to law enforcement and intelligence agencies
  • new warrant powers for CSIS
  • transparency and accountability requirements that require the addition of more robust policies

So, this generally follows the overarching themes we’ve seen up to this point. There is a bit more focus on the law enforcement side of the proposal, but it nevertheless falls nicely into the categories of concern. This revolves around website owners being required to silently send information to law enforcement without the need for a warrant. In addition to that, there is the concern surrounding the idea of putting a bunch of content considered “harmful” into the same basket with the same consequences for all of it. It’s actually an interesting contrast to my submission, where I worried about how I could possibly have a chance at complying with these requirements. This submission looks at the consequences if someone like me were somehow successful at that (fat chance of complying, mind you).

Next up is Access Now. In their submission, they say that the proposal would not solve the problems it seeks to address and, instead, threatens fundamental freedoms. From their submission (PDF):

Access Now writes to express its concerns regarding the Government of Canada’s (the “Government”) proposed approach to address harmful content online released on July 29, 2021. The Government’s goals are laudable as everyone, including the Government, should seek to reduce harmful speech, including hate speech, online. However, the Government’s proposal will not achieve these goals. Instead, the proposed framework threatens fundamental freedoms and human rights.

The Government should ensure any legislative framework enacted into law protects human rights, including the rights to freedom of expression and speech, while also making it easier to address illegal content, hate speech, and other harmful online content. With this in mind, Access Now offers human rights-centered guidelines for content governance and urges the Government to substantially revise its approach to comply with international standards. Specifically, Access Now argues that the Government should reconsider the scope of vague definitions and overly broad categories of “harmful content,” provide adequate time frames for content removal, avoid imposing proactive monitoring or filtering obligations, make fines and other sanctions proportionate, and refrain from mandating overly broad website blocking at the internet service provider level.

The timeframes for removing flagged content are onerous and will lead to significant impacts on freedom of expression and speech. The technical paper proposes that OCSPs “address all content that is flagged by any person in Canada as harmful content expeditiously.” The term expeditiously is defined as “twenty-four (24) hours from the content being flagged.” The Governor in Council has the authority to adjust this timeframe for different types of harmful content, including the power to shorten the timeline. Within that timeline, OCSPs have two options: if flagged content qualifies as “harmful,” the OCSP must remove the content from its platform; if flagged content does not qualify as “harmful” the OCSP must provide an explanation to the person who reported the content as to why it does not fall under the definition of harmful content. OCSPs that violate the framework are subject to financial penalties of up to three percent of global revenue or $10 million.

A twenty-four-hour deadline to determine whether online speech meets the definition of harmful content and should be removed from a platform is an unreasonable and onerous obligation. Without adequate time to make a content moderation decision, OCSPs will by default remove flagged content regardless of its illegality or harmfulness. Content moderators are often overworked, have many cases to review, and are not truly qualified to make legal determinations. This makes over-reliance on legal criteria and inadequate, biased, or subjective censorship of content inevitable under harsh restrictive time frames for content removals. With such a short timeframe for review, it would be almost impossible for a content moderator to understand the full context of certain content. And for OCSPs that operate in multiple time zones, short time frames allocated for response would likely impose onerous burdens on smaller OCSPs with limited staff. Even worse, the harsh twenty-four hour deadline for content removals could compel OCSPs to deploy automated filtering technologies at a scale that could further result in the general monitoring of online content, ultimately violating the rights to freedom of expression and privacy. Any revisions to the proposal should consider these nuances and the capabilities of smaller OCSPs on the market.

The response hits on a number of other topics, but again, we are seeing the 24-hour deadline being flagged as excessive.

Finally, there is the Cybersecure Policy Exchange. Their submission offers a more analytical approach to the “consultation”. From their submission (PDF):

Key Recommendations:
1. Clarify the online platforms in scope to exclude journalism platforms and platforms where user communication is a minor ancillary feature of a platform (e.g., fitness, shopping, travel).
2. Establish platform size thresholds to place fewer obligations on smaller and non-profit platforms, to avoid entrenching incumbents.
3. Require minimum standards of user reporting features and transparency for private platforms with very large user reach.
4. Clarify the definitions of harmful content as they relate to online content moderation, and consider adding identity fraud to the list of harmful content in scope.
5. Narrow the requirement for platforms to take “all reasonable measures” to identify harmful content, to avoid over-censorship and ensure wrongful takedown is appealable.
6. Ensure the length of time provided for content moderation decisions can evolve through regulatory changes.
7. Limit any requirements for mandatory platform reporting to law enforcement to cases where imminent risk of serious harm is reasonably suspected, and consider narrowing to only child sexual exploitation and terrorist content.
8. Ensure platform transparency requirements are publicly accessible in a manner that respects individual privacy and work with international allies to ensure data comparability.
9. Require larger platforms to cooperate with independent researchers, and annually review and mitigate their systemic risks.
10. Remove or significantly narrow the ability to block access to platforms for non-compliance.

So, after quite a bit of analysis, they essentially came to the same conclusions as so many others: that site blocking is unworkable, that the one-size-fits-all approach should be dropped, and that the requirement for websites to silently report to law enforcement should be removed. In other words, another voice calling for, at the very least, a substantial rework of the proposal.

Opposition to this proposal continues to grow louder. We’ll continue to cover those voices as we move forward.

(Hat Tip: Michael Geist)

Drew Wilson on Twitter: @icecube85 and Facebook.
