The Toxic Potential of YouTube’s Feedback Loop

WIRED | 7/13/2019 | Guillaume Chaslot

Using recommendation algorithms, YouTube’s AI is designed to increase the time that people spend online. Those algorithms track and measure the previous viewing habits of the user—and users like them—to find and recommend other videos that they will engage with.
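As a rough illustration of the mechanism described above (not YouTube's actual system), this kind of recommender can be sketched as scoring candidate videos against the watch history of the user and of similar users, then ranking by predicted engagement. All names and data below are invented:

```python
# Toy sketch of an engagement-driven recommender. The dataset and
# scoring rule are invented for illustration; real systems use learned
# models over vastly richer signals.
from collections import Counter

# Hypothetical watch histories: user -> list of video tags watched.
histories = {
    "u1": ["cats", "cats", "diy"],
    "u2": ["cats", "news"],
    "u3": ["diy", "diy", "news"],
}

def recommend(user, candidates, histories):
    """Score candidates by how often the user, and users with
    overlapping tastes, watched the same kind of video."""
    own = Counter(histories[user])
    # Pool the histories of users who share at least one tag.
    pool = Counter()
    for other, watched in histories.items():
        if other != user and own.keys() & set(watched):
            pool.update(watched)
    # Rank candidates by combined own + similar-user watch counts.
    return sorted(candidates, key=lambda v: own[v] + pool[v], reverse=True)

print(recommend("u1", ["cats", "news", "diy", "gardening"], histories))
```

The key property, as the article notes, is that the score depends only on what kept similar users watching, not on what the content actually is.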

In the case of the pedophile scandal, YouTube's AI was actively recommending suggestive videos of children to users who were most likely to engage with those videos. The stronger the AI becomes—that is, the more data it has—the more efficient it will become at recommending specific user-targeted content.


Here’s where it gets dangerous: As the AI improves, it will be able to more precisely predict who is interested in this content; thus, it's also less likely to recommend such content to those who aren't. At that stage, problems with the algorithm become exponentially harder to notice, as content is unlikely to be flagged or reported. In the case of the pedophilia recommendation chain, YouTube should be grateful to the user who found and exposed it. Without him, the cycle could have continued for years.
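The effect can be seen with a back-of-the-envelope calculation using invented numbers: reports mostly come from viewers who were not interested in the content, so the more precisely the system targets likely-engaged users, the fewer potential reporters ever see it.

```python
# Back-of-the-envelope sketch of why better targeting hides a problem.
# All numbers (impressions, precision, report rate) are invented.
def expected_reports(impressions, precision, report_rate=0.05):
    """Assume only uninterested viewers report, and a hypothetical 5%
    of them do. Higher targeting precision means fewer such viewers."""
    uninterested = impressions * (1 - precision)
    return uninterested * report_rate

print(expected_reports(10_000, 0.70))  # cruder targeting: many chances to be flagged
print(expected_reports(10_000, 0.99))  # sharp targeting: almost none
```

Under these toy assumptions, improving targeting precision from 70 to 99 percent cuts expected reports by a factor of 30, which is the sense in which problems become "exponentially harder to notice."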

But this incident is just a single example of a bigger issue.


Earlier this year, researchers at Google's DeepMind examined the impact of recommender systems, such as those used by YouTube and other platforms. They concluded that “feedback loops in recommendation systems can give rise to ‘echo chambers’ and ‘filter bubbles,’ which can narrow a user’s content exposure and ultimately shift their worldview.”

Their model, however, didn't take into account how the recommendation system influences the kind of content that gets created. In the real world, AI, content creators, and users heavily influence one another. Because the AI aims to maximize engagement, hyper-engaged users are treated as “models to be reproduced,” and the algorithm then favors the content those users produce.
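One way to see the dynamic is a toy simulation with made-up parameters: each round, the algorithm boosts the content that hyper-engaged users favor, creators copy what gets boosted, and that content's share of recommendations compounds.

```python
# Minimal sketch of the creator/algorithm feedback loop described above.
# The starting share, boost factor, and round count are all invented.
def run_feedback_loop(share=0.10, boost=1.5, rounds=5):
    """share: fraction of recommendations that are 'hyper-engaging'
    content; boost: extra exposure that content earns each round."""
    history = [share]
    for _ in range(rounds):
        exposure = share * boost                      # algorithm amplifies it
        share = exposure / (exposure + (1 - share))   # renormalize shares
        history.append(round(share, 3))
    return history

print(run_feedback_loop())
```

Even with a modest boost, the favored content's share of recommendations grows every round, which is the narrowing the DeepMind researchers describe as echo chambers and filter bubbles.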


The feedback loop works like this: (1) People who spend more time on...
(Excerpt) Read more at: WIRED