Published: Tue, May 15, 2018
Markets | By Otis Pena

Facebook reports spike in violent content

Facebook says it is often asked how it decides what is allowed on the platform - and how much bad content is out there. Three weeks ago, for the first time, the company published the internal guidelines it uses to enforce those standards. Continued conflict in Syria may be one factor behind the rise in graphic material, said Alex Schultz, vice president of data analytics: "Whenever a war starts, there's a big spike in graphic violence". The company stressed that the report is very much a work in progress and that its methodology will likely change as it learns what is important and what works.

Numbers for other hot-button issues caught in the cull include: adult nudity and sexual activity (21 million posts), graphic violence (3.4 million), hate speech (2.5 million), and terrorist propaganda (1.9 million). Facebook also revealed that more than half a billion fake accounts have been removed so far this year. The company said the report has data on "how much content people saw that violates our standards; how much content we removed; and how much content we detected proactively using our technology - before people who use Facebook reported it".

The 583 million accounts disabled is down from the 694 million accounts Facebook disabled in the fourth quarter of 2017. Facebook estimates that out of every 10,000 pieces of content viewed, seven to nine contained material that violated its pornography and nudity rules.


Facebook's automated systems flagged nearly 100 percent of spam and terrorist propaganda, and 99 percent of fake accounts, before users reported them.

Facebook took down or applied warning labels to about 3.5 million pieces of violent content in Q1 2018 - 86% of which, the company said, was identified by its technology before users reported it.

Rosen noted that the technology for spotting hate speech is still not up to scratch, so human review teams check for violations. Of the 2.5 million hate speech posts removed, only 38 percent were pulled by Facebook's technology before users reported them. Compare that with the 95.8 percent of nudity and 99.5 percent of terrorist propaganda that Facebook purged automatically.
