The “algorithmic radicalization” trope remains a pervasive force in our conceptions of online platforms — a fact made clear in the commentary surrounding Gonzalez v. Google.
Take, for example, the argument made by a group of former national security officials. Their brief, featured in a Tech Policy Press symposium, argues that amplification of terrorist content “appreciably increases the risk” of radicalization and violence. But it does a rather poor job of supporting that point, focusing on Facebook and the broader literature on “filter bubbles” (itself unsatisfactory). When the authors do address radicalization on YouTube, they rely on journalistic anecdotes and a single paper as evidence.1
The brief spends more time arguing for a broader causal connection between social media and radicalization. This is understandable, because the specific evidence for algorithm-driven, transformational experiences online is pretty equivocal. If that evidence concludes anything, it’s that radicalization is rare, largely user-driven, and facilitated by broader sociopolitical trends.
In fact, the same week SCOTUS heard oral arguments in Gonzalez, a group of researchers at EPFL in Switzerland released a preprint reinforcing the need for nuanced studies of “algorithmic amplification” that actually examine the role users play in their content consumption choices.
Manoel Ribeiro, the primary author of the paper, previously wrote a study documenting clear “pathways” between fringe YouTube channels. Yet he struggled to find evidence that this movement was caused by the recommender system.2
Other authors have found evidence that the algorithm serves progressively more extreme content. In the new paper, the authors describe this as a “paradox” in the rabbit hole literature:
“blindly following recommendations leads users to increasingly partisan, conspiratorial, or false content. At the same time, studies using real user traces suggest that recommender systems are not the primary driver of attention toward extreme content”
To address this “paradox,” they explore how different prototypical users along the political spectrum would be served theoretical content. The authors construct two scenarios — in the first, they simulate users who choose a topic, then blindly follow recommendations for that topic. In the second, they simulate users who choose videos according to the “utility” of that video, as defined by their political leanings.
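To make the passive-versus-active distinction concrete, here is a minimal sketch in Python. It is not the authors’ simulation: the paper works with a real recommender and empirically grounded utilities, while everything below — the invented catalog, the `recommend()` heuristic that nudges similar-but-more-niche items upward, and the made-up utility function — is a hypothetical stand-in meant only to show the shape of the two regimes.

```python
import random

# Toy illustration only. The catalog, recommender heuristic, and utility
# function are hypothetical stand-ins, not the paper's actual models.

random.seed(0)

# Each "video" has a political leaning in [-1, 1] and a popularity score.
CATALOG = [{"leaning": random.uniform(-1, 1),
            "popularity": random.random()} for _ in range(5000)]

def recommend(current, k=10):
    """Return k videos similar in leaning to the current one,
    mildly biased toward less popular (more niche) items."""
    scored = sorted(
        CATALOG,
        key=lambda v: abs(v["leaning"] - current["leaning"]) + 0.1 * v["popularity"],
    )
    return scored[1:k + 1]  # skip the current video itself

def passive_session(start, steps=50):
    """Scenario 1: blindly take the top recommendation at every step."""
    current, trace = start, [start]
    for _ in range(steps):
        current = recommend(current)[0]
        trace.append(current)
    return trace

def active_session(start, user_leaning, steps=50):
    """Scenario 2: pick the recommended video with the highest utility,
    modeled here (hypothetically) as closeness to the user's own leaning
    plus a mild preference for popular content."""
    def utility(v):
        return -abs(v["leaning"] - user_leaning) + 0.5 * v["popularity"]
    current, trace = start, [start]
    for _ in range(steps):
        current = max(recommend(current), key=utility)
        trace.append(current)
    return trace

start = random.choice(CATALOG)
for name, trace in [("passive", passive_session(start)),
                    ("active", active_session(start, user_leaning=0.3))]:
    mean_pop = sum(v["popularity"] for v in trace) / len(trace)
    print(f"{name}: mean popularity of watched videos = {mean_pop:.2f}")
```

Comparing the mean popularity of the two traces is the toy analogue of the paper’s question: does the drift toward niche content survive once user choice enters the loop?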
Ribeiro et al. find their recommendation algorithm did suggest increasingly niche content to passive users; however, these items were deamplified by the recommender system for active users. They conclude that the interplay between algorithms and users, along with the obscurity of fringe content, could explain why real users rarely experience the rabbit hole effect.
I find myself agreeing with Brian Fishman’s assessment that these are “findings that ought to be intuitive.” Users do not sit idly on the conveyor belt of algorithmic extremism; that notion not only reinforces the image of omnipotent algorithms, but also suggests that radical political views are reached through a relatively linear series of logical steps. But I was happy to read the paper, which hopefully signals a larger departure from clichéd understandings of radicalization online.3
Footnotes
Here we find a prototypical invocation of the rabbit hole narrative in the wild: “As users get inured to mainstream content and interested in more extreme views, more radical information is served to them and, before they know it, they have fallen down the rabbit hole, each piece of extremist content validating their evolving world view, creating an echo chamber or ‘filter bubble,’ which mainstream content can no longer penetrate”↩︎
Ribeiro also discusses issues related to the algorithm on his blog, including the “supply and demand framework for YouTube politics” proposed by Kevin Munger.↩︎
An interesting study cited by the authors does just that, clarifying the various ways users conceive of the TikTok recommendation algorithm. As a sidenote, I’m wary of analogizing too heavily between YouTube and TikTok — anecdotally, a large portion of my YouTube consumption comes from subscriptions and external links, as well as the algorithm. On the other hand, I ignored the “Following” tab on TikTok when I used the app, from late 2018 to early 2021.↩︎