The Kids Are Alright: Study Finds YouTube Isn’t Radicalizing People

Yet another study concludes that social media like YouTube isn’t the monolithic public safety threat the mainstream media would have you believe it is.

If you follow the mainstream media, you’ll probably have seen an off-again, on-again drumbeat about how social media is causing all sorts of social harm. Whether it’s articles proclaiming that social media is fuelling anxiety and aggression or claims that it is radicalizing people into committing acts of terrorism, the mainstream media would have you think that this is all long-settled science: it’s an established fact that social media has nothing but negative impacts on people, and it’s up to politicians to do something about this supposedly growing health crisis caused by these networks.

This drumbeat has been felt even more strongly in the US, where American mainstream media have long touted the theory that social media is causing immense harm on multiple levels. Last year, the US Surgeon General released a report on social media, and the American mainstream media showcased it as yet more evidence that social media causes teen anxiety and depression.

Of course, there is just one teeny tiny little problem with all of the above. It’s not what the research actually found.

You know that US Surgeon General report? It turns out it actually pointed out that social media has benefits for teenagers who are suffering from anxiety and depression. From Techdirt:

So I was curious earlier this week when Surgeon General Vivek Murthy released his Social Media and Youth Mental Health report. Nearly all the reporting I saw on it suggested that it was like the opposite of the APA release, and that it talked up how social media was absolutely putting kids at risk and something needed to be done.

But… that’s not exactly what the report says.

Indeed, somewhat bizarrely, it reads kinda like the off-brand version of the APA report, with fewer details, less nuance, and a less clear plan. It cites some of the same studies.

Like the APA report, it also says the evidence of a causal impact is lacking, and (like the APA report) it says that it appears social media is good for some and not good for others. Like the APA report, it clearly lays out the benefits of social media for kids:

Social media can provide benefits for some youth by providing positive community and connection with others who share identities, abilities, and interests. It can provide access to important information and create a space for self-expression. The ability to form and maintain friendships online and develop social connections are among the positive effects of social media use for youth. These relationships can afford opportunities to have positive interactions with more diverse peer groups than are available to them offline and can provide important social support to youth. The buffering effects against stress that online social support from peers may provide can be especially important for youth who are often marginalized, including racial, ethnic, and sexual and gender minorities. For example, studies have shown that social media may support the mental health and well-being of lesbian, gay, bisexual, asexual, transgender, queer, intersex and other youths by enabling peer connection, identity development and management, and social support. Seven out of ten adolescent girls of color report encountering positive or identity-affirming content related to race across social media platforms. A majority of adolescents report that social media helps them feel more accepted (58%), like they have people who can support them through tough times (67%), like they have a place to show their creative side (71%), and more connected to what’s going on in their friends’ lives (80%). In addition, research suggests that social media-based and other digitally-based mental health interventions may also be helpful for some children and adolescents by promoting help-seeking behaviors and serving as a gateway to initiating mental health care.

Yeah, that is different from the narrative portrayed by the mainstream media by… a lot.

Another notable example was a study by the APA, which actually pointed out that social media isn’t inherently harmful or beneficial for adolescents. From Techdirt:

But it seems that the media is so bought into the moral panic narrative, that they’re completely misrepresenting the study, claiming it supports the moral panic.

The core finding, similar to what we’ve been saying all along, and supported by multiple other studies, is that social media is not inherently bad for kids. For the vast majority, it’s neutral or positive. There is a small percentage who seem to have issues with it, and we should focus our attention on dealing with those cases rather than pushing for things like outright bans. From the findings of the APA report:

Using social media is not inherently beneficial or harmful to young people. Adolescents’ lives online both reflect and impact their offline lives. In most cases, the effects of social media are dependent on adolescents’ own personal and psychological characteristics and social circumstances—intersecting with the specific content, features, or functions that are afforded within many social media platforms. In other words, the effects of social media likely depend on what teens can do and see online, teens’ preexisting strengths or vulnerabilities, and the contexts in which they grow up.

Adolescents’ experiences online are affected by both 1) how they shape their own social media experiences (e.g., they choose whom to like and follow); and 2) both visible and unknown features built into social media platforms.

Not all findings apply equally to all youth. Scientific findings offer one piece of information that can be used along with knowledge of specific youths’ strengths, weaknesses, and context to make decisions that are tailored for each teen, family, and community.

So, another swing and a miss for the mainstream media, who were basically busted for lying about the research. Of course, there is much more research that continues to disprove any link between social media and harm to society in general.

Gee, for something that is supposedly settled science (according to mainstream media), there sure is a lot of scientific evidence that says otherwise.

Today, we are learning of another study that adds to the growing body of evidence contradicting the mainstream media narrative. That research concludes that social media like YouTube is not radicalizing people. The researchers tested the YouTube algorithm using bots to determine whether the algorithm is actually funnelling people towards radicalizing content. From TechXplore:

In recent years, there has been a popular narrative in the media that videos from highly partisan, conspiracy theory-driven YouTube channels radicalize young Americans and that YouTube’s recommendation algorithm leads users down a path of increasingly radical content.

However, a new study from the Computational Social Science Lab (CSSLab) at the University of Pennsylvania finds that users’ own political interests and preferences play the primary role in what they choose to watch. In fact, if the recommendation features have any impact on users’ media diets, it is a moderating one.

“On average, relying exclusively on the recommender results in less partisan consumption,” says lead author Homa Hosseinmardi, associate research scientist at the CSSLab.

During two experiments, the bots, each with its own YouTube account, went through a “learning phase”—they watched the same sequence of videos to ensure that they all presented the same preferences to YouTube’s algorithm.

Next, bots were placed into groups. Some bots continued to follow the watching history of the real-life users they were trained on; others were assigned to be experimental “counterfactual bots”—bots following specific rules designed to separate user behavior from algorithmic influence.

In experiment one, after the learning phases, the control bot continued to watch videos from the user’s history, while counterfactual bots deviated from users’ real-life behavior and only selected videos from the list of recommended videos without taking the user preferences into account.

Some counterfactual bots always selected the first (“up next”) video from the sidebar recommendations; others randomly selected one of the top 30 videos listed in the sidebar recommendations; and others randomly selected a video from the top 15 videos in the homepage recommendations.

The researchers found that the counterfactual bots, on average, consumed less partisan content than the corresponding real user—a result that is stronger for heavier consumers of partisan content.

“This gap corresponds to an intrinsic preference of users for such content relative to what the algorithm recommends,” Hosseinmardi says. “The study exhibits similar moderating effects on bots consuming far-left content, or when bots are subscribed to channels on the extreme side of the political partisan spectrum.”

Generally speaking, the research concluded that if the algorithm is recommending extremist material, it’s because the users are choosing to watch it. Recommendations shift when users change their consumption habits. If you are consuming extreme partisan material, the recommendation system is going to recommend more of that content. If you shift towards more moderate content, the recommendation system will start recommending more moderate material.
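To picture how these counterfactual bots separate user choice from algorithmic influence, here is a minimal toy simulation in Python. To be clear, this is my own illustrative sketch, not the study’s code and not YouTube’s actual API: videos are reduced to a single partisanship score, and the ToyRecommender simply assumes a pull back toward neutral so that the toy reproduces the moderating effect the researchers measured.

```python
import random

# A hypothetical stand-in for YouTube's recommender (an assumption, not the
# real system): it serves videos represented only by a partisanship score in
# [-1, 1], pulled halfway back toward neutral relative to the watch history.
class ToyRecommender:
    def _videos(self, watched, n):
        bias = sum(watched) / len(watched) if watched else 0.0
        return [0.5 * bias + random.uniform(-0.5, 0.5) for _ in range(n)]

    def sidebar(self, watched):   # sidebar ("up next") recommendations
        return self._videos(watched, 30)

    def homepage(self, watched):  # homepage recommendations
        return self._videos(watched, 15)

def run_bot(history, learn_len, steps, rule, rec):
    # Learning phase: every bot watches the same prefix of the real user's
    # history so they all present identical preferences to the algorithm.
    watched = list(history[:learn_len])
    for _ in range(steps):
        if rule == "up_next":           # always take the first sidebar video
            watched.append(rec.sidebar(watched)[0])
        elif rule == "sidebar_random":  # random pick from top 30 sidebar
            watched.append(random.choice(rec.sidebar(watched)[:30]))
        elif rule == "homepage_random": # random pick from top 15 homepage
            watched.append(random.choice(rec.homepage(watched)[:15]))
    return watched

# The control bot replays the real user's (heavily partisan) history, while
# the counterfactual bots deviate and follow only the recommendations.
user_history = [0.9] * 40  # toy user who consistently watches partisan content
rec = ToyRecommender()
mean = lambda xs: sum(xs) / len(xs)
for rule in ("up_next", "sidebar_random", "homepage_random"):
    bot = run_bot(user_history, learn_len=20, steps=20, rule=rule, rec=rec)
    # Positive gap = the real user chose more partisan content than the
    # recommender alone would have served -- the "moderating" effect.
    print(f"{rule}: partisanship gap = {mean(user_history) - mean(bot):.2f}")
```

Run this and the printed gap comes out positive for every rule: left to the recommender alone, each bot drifts toward less partisan content than the user it was trained on, which is the direction of the study’s finding.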

All of this lines up with my own personal experience with YouTube. There have been many instances where I’ve seen people complain about how far right conspiracy theories are constantly being jammed down their throats by the YouTube algorithm. Yet, I never experience this. If I’m looking at political content, the occasional far right conspiracy theory video pops up, but I choose not to watch it. If I see it crop up in Shorts, I downvote it. That is generally enough for the algorithm to recommend something different.

Of course, more importantly, I seek out content on the platform that doesn’t have anything to do with right wing conspiracy theories. For instance, the other day, I checked out some financial videos about investing. I wouldn’t necessarily recommend getting your financial advice from YouTube since it’s probably much better to speak with a financial advisor who can figure out what is suited for you personally (after all, everyone’s financial situation is different and a one-size-fits-all approach isn’t likely to work), but thanks to checking out those videos, I get recommended… more financial advice videos. Shocking, I know.

At one point, I saw some professional eating videos and the YouTube algorithm recommended to me… restaurant food eating challenge videos. The algorithm also happens to know that I watch gaming videos, and its recommendation system recommended… video game content. The algorithm noticed that I have a history of watching poker videos and recommended… poker videos. The algorithm noticed I like comedy videos and it recommended… comedy videos. I know, this is pretty mind-blowing stuff, but all of this is what I see on the YouTube home page thanks to my watch history.

It’s almost as if to say that if you don’t like certain kinds of content… stop watching it. If you want to watch different kinds of content, start watching that instead. The recommendation system is supposed to learn what kinds of content you are into and… it does exactly what it says on the tin. If you have this problem of constant conspiracy theory recommendations, then maybe use the search system to look for completely unrelated content and get into the habit of watching videos you actually want to watch. I’m willing to bet that some people who have this problem actively looked this stuff up for a laugh and now have it in their watch history. As a result, the algorithm thinks they are into these videos when, in fact, they are not.

Either way, it can be as simple as ignoring videos you don’t like and watching videos you actually want to watch. If you want to make something like YouTube an even better experience, use the search features to find other kinds of content you’ll actually enjoy and get to watching those videos as well. This idea that the algorithm is simply funnelling people into watching extremist material without input from the user is, as of today, a myth.

Drew Wilson on Twitter: @icecube85 and Facebook.

3 thoughts on “The Kids Are Alright: Study Finds YouTube Isn’t Radicalizing People”

  1. This is a restored comment to this post. If this is yours, feel free to copy the contents and paste it into a new comment below with your e-mail credentials to restore e-mail notifications. Sorry for the inconvenience this might’ve caused:

    (Originally posted February 25, 2024)
    DB wrote:

    Books, comic books, music, lyrics, movies, TV, video games, and pinball have all been accused of corrupting young people. Now it’s social media’s turn to face the wrath of the morality police, when they aren’t cheating on their spouses.

    A lot of time and energy will be wasted demanding action to protect children. In the end, we will end up with useless warnings, like “viewer discretion is advised,” on social media.

    1. This is a restored comment to this post. If this is yours, feel free to copy the contents and paste it into a new comment below with your e-mail credentials to restore e-mail notifications. Sorry for the inconvenience this might’ve caused:

      (Originally posted February 27, 2024)
      DB wrote:

      Why is your homepage blank?

  2. YouTube is shoving trending and “people also watched” videos into the search results on YT. Thanks to what YouTube has been doing, people are no longer just seeing stuff they like, except in their Subs tab.

    There’s a problem in the gaming sphere of YouTube. People looking for game news or criticism keep coming across far right dirtbags who intersperse their game news with alt-right views about how game companies are trying to “indoctrinate” people and push “real gamers” out. It’s all garbage, but I watch gaming content from people I like and then have horrid people like Asmongold pop up in my recommendations. The radicalization attempts are there; it’s just that you try to handwave it away with an “it works fine on my machine” style explanation.
