YouTube’s Recommendation Algorithm

https://www.theguardian.com/technology/2018/feb/02/how-youtubes-algorithm-distorts-truth

 

In this article, Paul Lewis criticizes YouTube’s recommendation system, arguing that its algorithm has been promoting disturbing content, fringe subcultures, and misinformation across the web.

YouTube is a global media platform with 1.5 billion users, and in the article, Lewis asserts that a large part of the site’s success comes from its recommendation algorithm. The system sifts through billions of videos and suggests 20 “up next” videos, identified either as content similar to the current video or as content related to the individual’s preferences. The algorithm’s deep and complex network incorporates huge amounts of data on videos and individuals, and its recommendations are what keep the user engaged. Engineers explain that the recommendations constantly change based on the user’s viewing patterns, such as how long a video is watched before the user clicks on another.
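The article does not describe the model’s internals, but the behavior Lewis summarizes, ranking candidate videos by how long they are expected to keep a viewer watching, can be illustrated with a minimal sketch. All field names, weights, and the scoring formula below are hypothetical assumptions, not YouTube’s actual system:

```python
# Hypothetical sketch of engagement-driven ranking. Every field name and
# weight here is an assumption for illustration, not YouTube's real model.

def engagement_score(video, user):
    """Score a candidate by how long it is expected to hold this user."""
    # Fraction of the video a typical viewer watches before clicking away.
    completion = video["avg_watch_seconds"] / video["duration_seconds"]
    # Overlap between the video's topics and the user's watch history.
    affinity = len(set(video["topics"]) & set(user["watched_topics"])) / max(
        len(video["topics"]), 1
    )
    # The objective is expected time on site, not accuracy or diversity.
    return completion * video["duration_seconds"] * (1 + affinity)

def up_next(candidates, user, k=20):
    """Return the top-k "up next" suggestions by engagement score."""
    ranked = sorted(candidates, key=lambda v: engagement_score(v, user), reverse=True)
    return ranked[:k]
```

Note that nothing in such an objective rewards truthfulness or variety, which is exactly the skew Lewis goes on to criticize.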

Lewis believes, however, that the algorithm is skewed: it focuses only on maximizing the time users spend online in order to generate greater advertising revenue, rather than on promoting truthful, ethical content. Two key concerns raised in the article are the loss of diversity among videos and the controversial content arising on YouTube. Users are shown content similar to their preferences, which creates “filter bubbles” that reinforce personal perspectives but fail to expose them to new, diverse content. Connecting this idea with concepts from class, YouTube’s recommendation system can essentially be seen as a graph, with videos as nodes and edges as the relationships between videos. Similar to how we discussed PageRank, YouTube also has a ranking algorithm for the videos it chooses to list in the “Up Next” section and “Channel Suggestions.” However, this article gave new insight into the different factors that ranking algorithms can use on different sites. While PageRank is based on the number of links pointing to a webpage and the quality of those links, the rankings in YouTube’s algorithm account for a wide range of factors, such as the number of clicks, the time spent watching a video, and the videos people choose to watch next.
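For comparison, here is the power-iteration form of PageRank as we covered it in class, where a page’s rank comes from the ranks of the pages that link to it; the example graph, damping factor, and iteration count are illustrative:

```python
# Standard power-iteration PageRank; the graph below is a toy example.

def pagerank(graph, damping=0.85, iterations=50):
    """graph maps each node to the list of nodes it links to."""
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        # Every node keeps a baseline (1 - damping) share of rank.
        new_rank = {node: (1 - damping) / n for node in nodes}
        for node, out_links in graph.items():
            if out_links:
                # A node passes its damped rank evenly to its out-links.
                share = damping * rank[node] / len(out_links)
                for target in out_links:
                    new_rank[target] += share
            else:
                # A dangling node spreads its rank evenly over all nodes.
                for other in nodes:
                    new_rank[other] += damping * rank[node] / n
        rank = new_rank
    return rank

# "A" is linked to by both "B" and "C", so it ends up ranked highest.
print(pagerank({"A": ["B"], "B": ["A"], "C": ["A"]}))
```

YouTube’s ranking would replace this pure link structure with the behavioral signals mentioned above, but the underlying graph framing is the same.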

The main issue Lewis discusses is the controversial content exploding on media sites such as YouTube. He references Chaslot’s software, Algotransparency.org, which investigates the biases in YouTube’s algorithm. The software clears its search history and compiles data about YouTube’s suggestions given a seed video, or single node. Based on the concepts from class, we can regard this software as a way to identify the twenty edges that a single node links to. A major claim resulting from Algotransparency.org is that YouTube is promoting conspiracy theories, although YouTube has denied such claims. Based on the program’s findings, YouTube was six times more likely to recommend videos that favored Trump than Clinton, and many of these videos contained edgy and hateful content. Instinctively clicking on a disturbing, misinformative video, Lewis argues, can lead “people down hateful rabbit holes.” Thus, seeing the processes behind Algotransparency shows how algorithms that study links can reveal surprising results about major multi-channel networks and how their processes affect the type of information people are fed. While YouTube attempts to remove harmful or disturbing content, new nodes constantly appear, and a link to one video can open onto a mass network of new nodes with similar content. Thus, a single disturbing video that a person instinctively clicks on can pull the user into a network of thousands of new nodes carrying misinformation.
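A rough sketch of the kind of crawl described here, expanding outward from a seed node along its twenty recommended edges, might look like the following; `get_up_next` is a hypothetical stand-in for however Algotransparency actually collects the “up next” list with a cleared history:

```python
# Sketch of a recommendation crawl in the spirit of Algotransparency:
# start at a seed video (node) and follow "up next" suggestions (edges).
# `get_up_next` is a hypothetical placeholder for the real data collection.

from collections import deque

def crawl_recommendations(seed, get_up_next, max_depth=2):
    """Breadth-first expansion of the recommendation graph from one seed."""
    edges = []            # (video, recommended_video) pairs observed
    seen = {seed}
    queue = deque([(seed, 0)])
    while queue:
        video, depth = queue.popleft()
        if depth == max_depth:
            continue
        for rec in get_up_next(video):   # up to 20 suggestions per node
            edges.append((video, rec))
            if rec not in seen:
                seen.add(rec)
                queue.append((rec, depth + 1))
    return edges
```

With twenty edges per node, even a depth-two crawl from a single seed can surface hundreds of videos, which is how one click can open onto the mass network of similar content described above.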
