Big Publisher Article Conflates Online Harms With Moderation

Big publishers are at it again. One is trying to push for online harms legislation, but seems to be confusing the issue.

We’ve been following the online harms debate for quite a while now, and with good reason: it represents a threat to online journalism outlets such as our site. In our efforts to track the submissions in the consultation process, we noted that News Media Canada was the only organization we could find that actively supports the idea without question. As we noted in our coverage, this represents a problem because the media has the largest bullhorn in the debate to push its agenda.

… and use it, they have.

We’ve already documented instances of the big publishers pushing misleading information and even outright disinformation in various debates in recent years. So, these days, if the media starts pushing disinformation, it wouldn’t necessarily be anything new. Still, as responsible journalists, we need to periodically set the record straight on the issues at play.

Today, we are learning that outlets have already started drumming up articles that can very easily be considered misleading. Take, for instance, an “opinion” piece in the Toronto Star. The piece kicked things off with the sensationalized headline of “Enough is enough. Online harm threatens our democracy” (note that the article doesn’t really do a good job of showing how it is a threat to democracy). The article then says this:

Death threats. Racial slurs. Misogynistic attacks. Every day, Canadian media professionals encounter a barrage of online hate. This abuse harms our journalists and crews, and it directly targets a cherished democratic ideal — the freedom of the press.

Over the past two weeks, media professionals and experts gathered to tackle online harassment at the #NotOk forum, hosted by CBC/Radio-Canada. We heard demoralizing stories from journalists about the attacks they face online: rape threats, racial insults and threats against their families. All with the aim of “shutting them up.”

Joanna Chiu, a journalist at the Toronto Star, received so many death threats she lost track of them. Tristan Péloquin from La Presse spoke of the stress of receiving messages that suggested a bomb may be placed in his car. Hearing these harrowing accounts, I wondered: who will still want to work in media today?

These anecdotes are not the exception. This week, Ipsos released the first Canadian survey of online harassment against journalists and media professionals. More than 72 per cent experienced some form of harassment in the past year, and 65 per cent experienced online harassment. One in five were harassed online on a weekly or even daily basis.

This is a classic case of misleading by omission. On the surface and at first blush, there doesn’t seem to be anything inherently wrong with any of this. Of course, as soon as any reader engages in any sort of critical thinking, the holes become abundantly clear.

For instance, the problem being identified is harassment. People are harassing journalists, and much of it is coming from online sources. The first question that comes to mind is: what tools are available today to stop this? One that comes to mind is that problem accounts can be reported. Did these accounts get reported? What was the response? The excerpt doesn’t make that clear. Were problem accounts reported to police where there was a threat of violence or another crime? Again, not clear. Did these reporters opt to block these people? If you CTRL+F for the word “block” in the article, no results come up. Tools like this exist today. They can be used today.

Now, this leads to the next question: what is the nature of the harassment? We’re not talking about whether it is racist, misogynistic, etc. What we mean is whether it comes from random anonymous individuals around the world or from people the journalist knows. That actually has a huge impact on how the issue gets treated.

In fact, I’ll use myself as an example. I have been the target of threats and personal attacks online. Of course, a lot of it isn’t exactly a recent phenomenon. Almost all of it, if not all of it, came from people who either followed me or even worked with me. In other words, those people knew me to varying degrees. I worked with other colleagues at the time to straighten it out. Some people have been banned for their behaviour while others got warned. I’ve been the recipient of legal threats as well. To this day, all of it has resulted from the work I do. When you expose lies or tell the truth about something, sooner or later, you’re going to receive pushback from those who have ill intent. Sometimes, those people happen to be particularly powerful or wealthy and are willing to unleash the SLAPP lawyers.

So, where does social media come into play in all of this? Actually, none of it involved social media. A lot of this came from message boards, comments sections of articles, and e-mails (a big reason why I don’t directly offer a public-facing e-mail address these days). The thing is, if it isn’t social media, that harassment is probably going to come from somewhere else. The point is, online harassment is not exclusively tied to social media – not by any stretch of the imagination. The bottom line is this: when you enter the realm of journalism, chances are you are holding people accountable for their actions somewhere along the line, and that is going to ruffle feathers. Sadly, this is the nature of quality journalism today.

As a result, there are two really big holes judging by this excerpt: the nature of the harassment and the way it got dealt with. Because of this, the article is already quite weak coming out of the gate. The article then offers this:

Newsrooms everywhere are grappling with this ugly reality. At CBC/Radio-Canada, we have a task force pushing back against online hate. We are supporting our employees, and responding more quickly when they are targeted. We are sharing what we learn with other media. And we are talking with social media companies, government and police.

Social media platforms must take swifter action to enforce their own “anti-harm” guidelines. When journalists receive hundreds of hateful messages as part of a co-ordinated campaign, platforms need to act in hours, not days. Governments can enact regulations requiring platforms to be more transparent about their algorithms, which enable attacks to go viral.

Right now, many local police are not equipped to deal with online abuse that can come from anywhere in the world. Media companies need more effective, co-ordinated ways to track threats and to get action when needed.

So, the first paragraph suggests that the outlets are already taking action on this. Talking to the platforms and talking to the police are the two big constructive ways to solve these problems. How they go about this might affect the results, but those are two steps in the right direction. Is someone being mean to you on a platform? Report and block. Is a crime being committed? Report it to police.

Of course, once you get to the second paragraph, that’s where the article suddenly pivots in a bizarre direction. Paragraph 2 in this excerpt, specifically, makes absolutely no sense. On the one hand, it calls for platforms to better enforce their community guidelines. The second sentence then makes reference to the online harms proposal, which actually has little to no relevance to the issues being raised. The third sentence then calls on governments to mandate more transparency for the platforms’ algorithms, which is a different topic entirely.

The first sentence specifically calls for better enforcement of community guidelines. The question is, in what way? Moderation on a large platform was always going to be a tall order to begin with. The way it is handled – especially with the nature of outsourcing the work – obviously doesn’t help. So, maybe a debate can be had about how platforms moderate their content. What can users do to protect themselves if they see users breaking community guidelines? What are reasonable turnaround times for handling complaints? Do the platforms have a policy in place when it comes to crimes being committed on their platform? Do they forward everything they know to police if the situation warrants it? This is actually a legitimate debate to have, but we need to call it what it actually is: moderation.

Now, the second sentence tries to suggest that this is an online harms issue. It is not, because online harms is a completely different topic. When we talk about online harms (as referenced in the online harms proposal), we’re talking about, for example, someone encouraging people to murder people of a specific ethnicity. Another example might be posting child pornography on social media (which is obviously a crime). As anyone who follows the submissions knows, experts largely agree that the way the online harms proposal was framed is not the way to move forward with this. Examples include how “harmful content” was poorly defined and the fact that it requires 24 hour takedowns (which many point out will cause more harm than it solves).

What’s more is that, even with the online harms proposal, the issue of harassment will be largely unchanged. The absolute best case scenario (and an unlikely one at that) is that the harassment simply continues in other forms. In fact, even journalists will be negatively impacted by what will inevitably come from this: abusing the new levers of power to silence journalists – the very thing the article complains about in the first place. It encourages the behaviour of reporting everything posted by a journalist in an effort to silence them. You don’t stop someone who is threatening to shoot you by handing them a loaded shotgun and saying, “come at me, bro!”

As for the last sentence in that second paragraph of the excerpt, it’s just plain confused at best. In the real world, an algorithm of this sort is a piece of code that tries to predict what might be of interest to you. So, for instance, if you go on YouTube and look up poker videos, you’ll see the recommendations list fill up with more poker videos. YouTube is actively looking for videos that might be of interest to you by the very nature of you looking at that specific video. If you look up hockey bloopers, YouTube might recommend a bunch of other hockey compilation clips.

Now, if you look at that, you’ll very obviously see nothing inherently wrong with it. In fact, it enhances the user experience because content is specifically tailored to your tastes. Algorithms can obviously take things a step further and remember what you viewed in the past, making further recommendations if you pop back onto that site a day or so later.
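To illustrate the point, here is a minimal sketch of that kind of recommendation logic. The video names, tags, and scoring below are entirely made up for illustration; this is not any platform’s actual algorithm, just the general “recommend more of what you just watched” idea:

```python
# Toy, hypothetical illustration of tag-based recommendations. The catalogue,
# tags, and scoring are invented for the example; real platforms use far more
# signals, but the basic principle is the same.

CATALOGUE = {
    "poker_wsop_final": {"poker", "cards", "tournament"},
    "poker_bluff_tips": {"poker", "strategy"},
    "hockey_bloopers_2021": {"hockey", "bloopers", "compilation"},
    "hockey_top_saves": {"hockey", "goalies", "compilation"},
    "cooking_pasta": {"cooking", "recipe"},
}

def recommend(watched, top_n=3):
    """Rank unwatched videos by how many tags they share with the watch history."""
    watched_tags = set()
    for video in watched:
        watched_tags |= CATALOGUE.get(video, set())
    scores = {video: len(tags & watched_tags)
              for video, tags in CATALOGUE.items() if video not in watched}
    # Highest tag overlap first; unrelated videos score zero and sink to the bottom.
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Watch a poker video, get more poker; watch hockey bloopers, get more hockey.
print(recommend(["poker_wsop_final"]))
print(recommend(["hockey_bloopers_2021"]))
```

Nothing in that logic reaches out and makes anyone post anything, which is exactly the distinction the next point turns on.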

What algorithms don’t actually do, contrary to what the article insinuates, is make someone send threatening messages to a journalist. The article doesn’t appear to establish the nature of these messages, nor does it explain how algorithms specifically enable this. The best case scenario for this point is that content is being generated and the algorithms promote that content. If that’s the case, this more or less circles back to the previous point: did that content get reported? Was it taken down? Did the user get suspended or permanently banned? If it’s a crime, was the incident reported to police?

If anything, it looks as though the term was just randomly thrown out there in the hopes of making the article sound intelligent and thoughtful when it is really a shallow piece full of holes. What’s more, if the whole point of the article is to point out how bad moderation practices are, slamming algorithms in this light is a peripheral issue at best.

The last paragraph in the excerpt seems to pivot again, this time to making vague references to anonymous online speech. Once again, the subject changes to something completely different. As a general rule, social media platforms do everything they can to make sure that you, as an individual, can be properly identified. They probably have your IP address and probably have your real name and address to boot. The reason for this is so they can target advertising at the individual person. They are already financially motivated to find out who is really using their platforms as it is.

So, the problem is that the complaint surrounds anonymous speech on a site like Facebook, where anonymous speech is often non-existent. In fact, CBS got caught making a similar mistake in their 60 Minutes propaganda piece when they accidentally admitted that the “anonymous speech” wasn’t so anonymous and that the platform actually took action against the individual – this as they tried (and failed) to argue that Section 230 was at fault for all of this.

At any rate, the piece never really elaborates on the point it is trying to make.

The piece then ends by pivoting (somewhat) back to the core argument that media companies, platforms, and police need to better coordinate to track down people making online threats. So, after meandering from the unrelated online harms proposal to government involvement to anonymous speech, we finally get back on topic.

The problem with this piece is that it struggles to make a clear point. It throws in a bunch of completely unrelated debates, but it looks like it is actually trying to demand change on moderation. Conflating moderation with online harms (among other things) didn’t really help their case here. What’s more, this lends support to the perspective that the media is not only biased, but has a specific agenda.

If media outlets truly believe that the issue that needs to be addressed is harassment of journalists, there is space for that. It’s a legitimate debate to have. A major problem is that none of what the government has proposed in the past even comes close to addressing these problems. In fact, the online harms proposal would make matters much worse because it hands even more powerful censorship tools to the very people the media claims are perpetrating this in the first place. As we said when analyzing News Media Canada’s submission, you need to properly identify the problem that needs to be solved. Apparently, it is up to us responsible journalists to do that work for them and point out that the problem being identified is actually moderation.

The next step is to start having a discussion about what will actually start the process of fixing said problem. What are reasonable expectations for moderation on the social media front? What kind of information should the platforms have access to? Under what circumstances should those platforms hand that information to law enforcement? What standards are already in place and what can be enforced? If a platform fails to properly protect its users, through negligence or otherwise, what repercussions are there and what needs to be put in place?

That is a start to a proper debate on this issue. It would be exceedingly helpful if the media actually started following this line of thinking if this problem really does matter to them.

Drew Wilson on Twitter: @icecube85 and Facebook.
