Four ways social media platforms could stop the spread of hateful content in aftermath of terror attacks

phys.org | 3/19/2019 | Staff

The deadly attack on two mosques in Christchurch, New Zealand, in which 50 people were killed and many others critically injured, was streamed live on Facebook by the man accused of carrying it out. It was then quickly shared across social media platforms.

Versions of the livestream attack video stayed online for a worrying amount of time. A report by the Guardian found that one video stayed on Facebook for six hours and another on YouTube for three. For many, the quick and seemingly unstoppable spread of this video typifies everything that is wrong with social media: toxic, hate-filled content which goes viral and is seen by millions.


But we should avoid scapegoating the big platforms. All of them (Twitter, Facebook, YouTube, Google, Snapchat) are signed up to the European Commission's #NoPlace4Hate programme. They are committed to removing illegal hateful content within 24 hours, a time period which is likely to come down to just one hour.

Aside from anything else, they are aware of the reputational risks of being associated with terrorism and other harmful content (such as pornography, suicide, paedophilia) and are devoting increasingly considerable resources to removing it. Within 24 hours of the Christchurch attack, Facebook had removed 1.5m copies of the attack video – 1.2m of which it blocked at the point of upload.


Monitoring hateful content is always difficult, and even the most advanced systems miss some of it. But during terrorist attacks the big platforms face particularly significant challenges. As research has shown, terrorist attacks precipitate huge spikes in online hate, overwhelming platforms' reporting systems. Many of the people who upload and share this content also know how to deceive the platforms and get round their existing checks.

So what can platforms do to take down extremist and hateful content immediately after terrorist attacks? I propose four special measures which...
(Excerpt) Read more at: phys.org