- Mark Zuckerberg at a Facebook conference in San Francisco, April 12, 2016. (Jim Wilson/The New York Times)
By SHEERA FRENKEL
© 2018 New York Times News Service
SAN FRANCISCO — Facebook has been under pressure for its failure to remove violent content, nudity, hate speech and other inflammatory material from its site. Government officials, activists and academics have long pushed the social network to disclose more about how it deals with such posts.
On Tuesday, the Silicon Valley company published numbers for the first time detailing how much and what type of content it takes down from the social network. In an 86-page report, Facebook revealed that it deleted 865.8 million posts in the first quarter of 2018, the vast majority of which were spam, with a minority of posts related to nudity, graphic violence, hate speech and terrorism.
Facebook also said it removed 583 million fake accounts in the same period, or the equivalent of 3 to 4 percent of its monthly users.
Guy Rosen, Facebook’s vice president of product management, said the company had substantially increased its efforts over the past 18 months to flag and remove inappropriate content.
On Monday, as part of an attempt to improve protection of its users’ information, Facebook said it had suspended roughly 200 third-party apps that collected data from its members while it undertook a thorough investigation.
Jillian York, the director for international freedom of expression at the Electronic Frontier Foundation, said she welcomed Facebook’s report.
“It’s a good move and it’s a long time coming,” she said. “But it’s also frustrating because we’ve known that this has needed to happen for a long time. We need more transparency about how Facebook identifies content and what it removes going forward.”
According to Tuesday’s report, about 97 percent of all the content that Facebook removed from its site in the first quarter was spam. About 2.4 percent of the deleted content had nudity, Facebook said, with even smaller percentages of posts removed for graphic violence, hate speech and terrorism.
Facebook attributed the increase in content removal in the first quarter to improved artificial intelligence programs that can detect and flag offensive content. Mark Zuckerberg, the chief executive, has long held up AI as the main solution for helping Facebook sift through the billions of pieces of content that people post to its site every day.