Name: Causally estimating the effect of YouTube’s recommender system using counterfactual bots
Authors: Homa Hosseinmardi, Amir Ghasemian, Miguel Rivera-Lanas, Manoel Horta Ribeiro, Robert West, and Duncan J. Watts
Overview and Key Results
This research paper seeks to causally estimate the effect of YouTube’s recommendation algorithm on the consumption of “partisan content” on the platform. The authors accomplish this by comparing bots that replicate the YouTube consumption patterns of real users with what they call “counterfactual bots,” whose consumption preferences “rely exclusively on recommendations” from YouTube’s algorithm. Here are some key findings:
- Relying solely on YouTube’s recommendation system “results in a more moderate experience on YouTube relative to the real user.”
- When YouTube users shift from consuming partisan content to more moderate content, the sidebar is quick to reflect the change in content preferences, while “homepage recommendations react more slowly.”
Takeaway: While recommendation algorithms may shape content exposure and user preferences on online platforms like YouTube, this paper suggests that narratives about widespread algorithmic manipulation may be overstated.
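The replica-versus-counterfactual design can be sketched as a toy simulation. Everything here is an illustrative assumption, not the paper’s actual implementation: `recommend` is a hypothetical recommender that nudges content “slant” toward the moderate center (per the headline finding), and `run_bot` replays a user’s real choices until a switch point, after which the bot follows recommendations exclusively.

```python
import random

def recommend(slant, pull=0.3):
    """Toy recommender (assumption): pulls the next video's slant toward
    the moderate center (slant 0), with a little noise."""
    return slant * (1 - pull) + random.gauss(0, 0.02)

def run_bot(user_slants, switch_step):
    """Replay a real user's watch choices up to switch_step, then rely
    solely on recommendations (the 'counterfactual bot'). A replica bot
    is the special case switch_step >= len(user_slants)."""
    slant = user_slants[0]
    trajectory = [slant]
    for t in range(1, len(user_slants)):
        slant = user_slants[t] if t < switch_step else recommend(slant)
        trajectory.append(slant)
    return trajectory

random.seed(0)
user = [0.9] * 30                               # a heavily partisan real user
replica = run_bot(user, switch_step=30)         # mirrors the user throughout
counterfactual = run_bot(user, switch_step=5)   # hands control to the recommender
print(abs(replica[-1]), abs(counterfactual[-1]))
```

Under these assumed dynamics, the counterfactual bot drifts toward moderate content while the replica bot stays partisan; the causal estimate in the paper is, loosely, the gap between the two trajectories.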
Why is it important?
With over 2.5 billion monthly active users, YouTube is one of the biggest online platforms in the world. While the platform has in some ways democratized video sharing and consumption, it has also been criticized for hosting radical content, much like other Big Tech platforms such as Meta’s. But is the consumption and proliferation of radical content a consequence of user choice, or of recommendation algorithms optimized to drive engagement? Disentangling the effects of algorithmic amplification from user intentions is difficult, and this paper provides a framework for doing so.
By causally estimating the role of recommendation algorithms in driving partisan content consumption, this paper’s findings have important implications for policymakers seeking to hold platforms accountable for the content they host.