Several Experts Also Weigh in on the Online Harms “Consultation”

It’s not just organizations weighing in on the “consultation”. Several experts have weighed in as well.

The barrage of responses is continuing in the online harms “consultation”. It started with my submission, which denounced the online harms proposal. What followed was an absolute flood of responses. We saw the Internet Society Canada Chapter, the Independent Press Gallery of Canada, Michael Geist, Open Media, CIPPIC, Citizen Lab, the Internet Archive, CARL, Canadian Civil Liberties Association, the CCRC, the International Civil Liberties Monitoring Group, Access Now, the CPE, multiple anti-racism groups, Global Network Initiative, News Media Canada, LEAF, PIAC, and Ranking Digital Rights. With the exception of the CCRC and News Media Canada, the responses were unanimously against the online harms proposal as it stands now.

Now, we are checking out the numerous experts who have also weighed in on this consultation. Natasha Tusikov and Blayne Haggart have published their response to the consultation. They referred to the proposal and process as not much of a consultation and not much of a plan. From their posting:

We are strongly in favour of government regulation of internet intermediaries and the goal of creating an online environment that is more conducive to socially healthy exchanges. The primary issue when it comes to internet intermediaries is not, should they be regulated by government, but how should government regulate them. However, we have significant substantive and procedural concerns with the government’s proposed approach to address harmful content online. In this note we highlight four in particular:

1. The presentation of a detailed fait accompli before engaging in meaningful, substantive public consultations;
2. The lack of evidence presented explaining and justifying these particular interventions;
3. Its ineffectiveness in addressing fundamental structural problems with social media that give rise to the problems the government says it wants to address;
4. The focus on regulating social media companies overlooks the necessity of regulating the broader digital economy, including online marketplaces and the financial technology industry.

Based on these concerns, which we outline below, we call on the government:

1. To undertake substantive, open consultations to determine how Canada should address these and other related issues.
2. To present research and evidence outlining the problems being addressed and justifying the government’s chosen approach.
3. To pursue a regulatory framework that involves structural reforms to address incentives baked into social media companies’ advertising-dependent and data-fuelled business models.
4. To consider deep institutional reforms to regulate the digital economy, including regulation to address monopolistic behaviour and institutional reforms to strengthen and promote in-house digital policymaking expertise.

So, we are seeing a continuation of the criticism that the consultation process itself was inherently flawed. A smaller theme echoed here is that no evidence accompanied the proposal. In other words, the government was unable to justify its approach. The comments also suggest that the proposal, as it stands now, does not solve the problems it set out to solve in the first place.

Two other experts, Darryl Carmichael and Emily Laidlaw, also weighed in on the consultation. In their response, they argue that the scope of online harms “can include anything and everything posted online”. From their submission (PDF):

All blog posts include a caveat that the analysis is not fulsome, but it seems crucial to emphasize that here. The scope of online harms is broad and can include anything and everything posted online, and the regulatory environment is global, even if what is discussed is domestic law. Indeed, the broad scope of this proposal is a point of criticism, with scholars such as Cynthia Khoo arguing that this should be broken down into subject-matter specific legislation. All this to say that what is offered here are the highlights of some of the key issues of debate and concern.

The Department of Heritage is open for feedback on the proposal until September 25th, depending on the election result. Therefore, this post is organized to provide such feedback. The analysis focuses on some of the major points of reform: scope and definitions, the proposed regulator, proactive monitoring, 24-hour takedown requirements, website blocking, mandatory reporting, and transparency obligations. Each point is explained and critiqued, and recommendations are made to address the criticisms. Because of the nature of this post and the breadth of the proposal, many of the recommendations are relatively general and have the same theme: implementation of this proposal needs to be slowed down and significant consultation undertaken.

By way of introduction, the proposal aims to regulate social media platforms concerning how they moderate certain types of harmful content: terrorist content, content that incites violence, hate speech, non-consensual sharing of intimate images, and child sexual exploitation content. It proposes the creation of a new regulator, the Digital Safety Commission, which would provide recourse concerning specific items of content, oversee and investigate platforms concerning their moderation systems, and enable major administrative penalties to be levied against non-complying platforms. The proposal would also impose significant new obligations on platforms, such as to action content within 24 hours of it being flagged and to proactively monitor content on their services.

This continues the theme that all forms of “harmful content” (whether already illegal or something new being added to the list) are put in the same basket and subject to the same kind of repercussion. There’s also the continued criticism of the 24 hour takedowns, as well as the criticism about the lack of consultation that we’ve seen many times before, such as from civil liberties organizations and organizations that represent marginalized communities.

After that, we saw the response from Fenwick McKelvey. In McKelvey’s submission, there is concern with what this proposal means on a number of fronts:

At present, the technical paper too often frames online harms as a policing problem at a time when the biases and oversight of Canada’s policing services are evident and calls for reforms clear and needed. The distinction between online harms and criminal activities remains ambiguous in the technical paper. Conflating harm and criminal acts risks deputizing OSCPs with both enforcement and police reporting for criminal activities (1). Proposal 20 specifically requires a separate consultation phase and it should not be assumed that because automated content takedowns are happening that automated content takedowns are an effective or central instrument to address online harms. Furthermore, the technical paper’s overall focus on OSC regulation is diluted with its discussion of new measures for Internet services and new blocking/filtering obligations for Telecommunications Service Providers. These powers are out of scope and arguably within the power of the CRTC to implement if needed already.

The five online harms need further definition. Furthermore, the nuances of each online harm, such as the national and international dimensions of terrorist activities, for example, may not be well suited for an omnibus framework (2). Protecting Canada’s democracy, ostensibly another online harm, has been addressed through reforms to Canada’s Elections Act.

More accountability to commercial content moderation has not and will not resolve the root causes of online [harms] (3). Rather, better regulation of already-existing content moderation is enough of a regulatory accomplishment without the added challenge of suggesting that content moderation as a first response to systemic injustice.

With these primary concerns in mind, I move to the administrative aspects. I acknowledge that content moderation is a needed part of inclusive communication systems, but certainly not more important than matters of access, affordability, and inclusion.

The present risk is that the 24-hr takedown requirement along with a lack of penalties for false positives may encourage OSCPs to further invest in automated content moderation, especially artificial intelligence as a form of media governance.

The consideration of automated content regulation is lacking in the current working paper and needs substantive consideration. The technical paper does not address its responsibility nor its legitimization of artificial intelligence as used by OSCPs to classify, filter, and demote harmful content. The technical paper proposed a regime legitimating automated content regulation at scale without sufficient records of the efficacy of the systems in Canada’s both official languages and in Canada’s multicultural society. The technical paper needs a substantively expanded discussion of AI accountability including times when the potential risks require the prohibition of automated solutions (8).

The DSC may need powers to designate standards for content moderation work that then prohibit AI as high-risk applications and better accountability mechanisms. Inversely, outsourcing and ghost labour in commercial content moderation require better labour standards and safer working environments. At present, the labour of moderation is assumed to be automatable and without long-term harm to the workers.

Definitely a number of major themes are getting hit here. This includes concerns about the 24 hour takedowns, the lack of clarity on what is considered “harmful content”, and the risks of automation. We are also seeing some other interesting ideas being brought forward. This includes enhancing inclusivity by making the Internet more affordable and increasing opportunities for access. It’s actually interesting how that is tied into this debate: if this online harms proposal is supposed to tackle injustices, then increasing access and affordability would be a better and more effective way forward.

Another interesting point being raised is the subject of outsourcing moderation. Labour standards are extremely weak to non-existent in this industry in the first place. Bringing this point up is certainly valid because automation isn’t the only way moderation can be tackled as envisioned by this proposal. There are underlying issues with outsourced moderation such as mental health problems, pay, and a whole lot more. This topic often gets left in the shadows because companies don’t like to admit that they outsource content moderation in the first place. Is there a system in place today that can handle this level of moderation? This adds another reason to say that there probably is not.

Finally, there is the submission by Valerie Webber and Maggie MacDonald. In their submission, they raise serious concerns about the proposal:

When detection and removal requirements are unrealistic, this encourages a chilling effect among platforms and providers to simply blanket-prohibit a wide range of content, rather than tighten their own moderation standards around what is being posted. Both human and automated systems for flagging content as unsafe disproportionately impact sex workers, activists and organizers, sexual health educators, 2SLGBTQ+ people and the queer community at large, as well as other purportedly protected classes and communities who are routinely technologically marginalized on the basis of race, sexuality, and gender presentation.

Given that content moderation has been proven to disproportionately target marginalized populations–indeed, the same populations this framework claims to protect–the requirement that regulated entities contact law enforcement over perceived infractions is extremely concerning for freedom of expression being stratified based on identity signifiers. Whatever the threshold for triggering such a reporting obligation, history has shown that faced with similar legislation, regulated entities will err on the side of caution around sexual material of all stripes and proactively moderate their platforms in order to avoid steep penalties. This will result in the disproportionate criminal pursuit of already targeted and marginalized people, without requiring any actual criminal offence to take place. Regulation of user content already targets non-normative sexualities and acts disproportionately and has the potential for devastating consequences on the lives of Canadians who do not conform to whiteness, able-bodiedness, or normative gender presentation, such that POC, queer, disabled, and especially sex-working Canadians will face the greatest burden of scrutiny under the new measures. That the regulated entities would be required to retain data related to these potential cases could further produce innumerable privacy and confidentiality concerns for these populations. Finally, that the regulated entities could be required to block entire online communication platforms in Canada–platforms that many sex workers use to earn a safe living–raises a tumult of free speech and human rights concerns.

We share the Government’s concerns regarding the rise of white supremacist, fascist hate groups. However, we are concerned that the expansion of CSIS powers to monitor “Online Ideologically-Motivated Violent Extremist communities” will also result in context collapse that conflates those dangerous activities with sex workers, who are too frequently painted as ungovernable or amoral by antiporn and religious groups. If this framing holds without clear distinctions, it will be used to target any number of groups or associations around the 2SLGBTQ+ community and sexual subcultures, as well as workers and activist efforts around sex work who are exercising their democratic right to criticize government policy and practice.

There is a distressing trend among governments to consult primarily with groups that seek to conflate all manner of sex work with abuse and “human trafficking”, and go on to develop prohibitive and ill-informed regulatory measures in response. These testimonies are not based on reliable research findings, or even meaningful consultation with industry players, and have led to mistrials in recent platform regulation. Canada has the benefit of getting to witness how similar legislative attempts to regulate online communication service providers have failed in the United States. We do not need to make the same mistakes, but have the opportunity to lead regulatory movements with evidence and consultation-based strategy. The United States Government Accountability Office recently published a report documenting the complete failure of FOSTA, the Fight Online Sex Trafficking Act of 2017. FOSTA was ostensibly intended to protect people from sexual exploitation by holding platform operators responsible for user-generated content facilitating sex trafficking. As a response, platforms instead adopted widespread censorship of all forms of sexual content, including advertising and other resources sex workers used to ensure their own safety while working. Even more potently, FOSTA has only been used a single time since its passage, and furthermore the loss of cooperative online platforms and the migration of abusers to platforms hosted overseas has made it even more difficult for the government to pursue cases of sexual exploitation and human trafficking. The conflation of sex work and abuse fails to respect and protect the consensual choice of many individuals to earn a living through sex work, while also failing to address the actual sources of violence.

Indeed, Canada can easily learn from the mistakes made in the US. With respect to the SESTA/FOSTA debate, we can confirm that, although the idea was to make the Internet safer, it actually made the Internet less safe for those working in the sex industry. Essentially, tools that were used to protect sex workers were taken offline over concerns that they would trigger legal liability for so-called “human trafficking”. The end result was that tools such as those that help identify problem clients became inaccessible. SESTA/FOSTA was long criticized for creating more problems than it solves, but the government chose to ignore those concerns. Those concerns then became reality.

In that light, there are disturbing similarities between the SESTA/FOSTA debate and the online harms proposal. The government is marching ahead with its own plans, potentially ignoring expertise and failing to consult with various organizations, and risks causing a lot of damage to the open Internet as a result. The problem on Canada’s side is that this online harms proposal is much broader than the vague “human trafficking” concept that was part of the SESTA/FOSTA debate. So, raising this angle is certainly a good move.

Along with that, we are seeing the continuing themes of over-moderation, the further marginalization of already marginalized communities, and the threat of over-policing.

So, there is definitely quite the wide array of interesting points in this series. As of now, we can safely say that, based on the evidence we have seen, almost everyone disagrees with the online harms proposal as it stands in the technical paper. There are serious concerns about freedom of expression, over-moderation, automated moderation, moderation being used to further crack down on marginalized communities, the 24 hour takedowns, the lack of clarity on what constitutes each kind of harmful content, the fact that all harmful content is placed in the same basket and treated identically, the ineffectiveness of site blocking in the first place, the implications for network neutrality, the inability of smaller operations to comply with such regulations due to a lack of funding and/or resources, the fact that this proposal could stymie competition and reinforce the dominant players’ positions in the tech world, the way this consultation was even handled in the first place, and, really, a huge array of other concerns.

An overarching theme among many submissions is whether the government ever really intended to solicit feedback. The fact that many of the positions were already set out in political platforms, that no alternatives were offered in the technical paper (it was a case of “this is the plan, period.”), and that this “consultation” was held during an election, among other things, all contributed to the sense that this is not a serious consultation in the first place. The obvious hope is that the government actually takes this public feedback seriously and admits that it needs to go back to the drawing board; that would be a huge win. As things stand now, that looks like a very distant possibility.

Still, it was great to see this amount of feedback. When I published my response, I felt like I was alone in opposing this online harms proposal, which raised the concern that, if I were alone on this, would anyone take my position seriously? After seeing all of this, my submission felt more like part of a global consensus. It’s easy to ignore one voice in a process. It’s a whole lot harder to ignore a massive number of voices acting in unison.

Drew Wilson on Twitter: @icecube85 and Facebook.
