Facebook, Aiming for Transparency, Details Removal of Posts and Fake Accounts

Entrance to Facebook's Menlo Park office

Improved detection technology also helped Facebook take action against 1.9 million posts containing terrorist propaganda, a 73 percent increase.

The increased transparency comes as the Menlo Park, California, company tries to make amends for a privacy scandal triggered by loose policies that allowed a data-mining company with ties to President Donald Trump's 2016 campaign to harvest personal information on as many as 87 million users.

Facebook pulled or slapped warnings on almost 30 million posts containing sexual or violent images, terrorist propaganda or hate speech in the first three months of 2018, the social media giant said Tuesday.

It attributed the increase to the enhanced use of photo detection technology.

It took down 21 million pieces of content featuring adult nudity and sexual activity in Q1 2018; the company's systems had flagged 96% of that material before users reported it.

On Tuesday, the Silicon Valley company published numbers for the first time detailing how much and what type of content it takes down from the social network.

Facebook "took action" on 3.4 million pieces of content that contained graphic violence. During the first quarter, Facebook found and flagged about 86% of such content before it was reported.

Hate speech: In Q1, the company took action on 2.5 million pieces of such content, up about 56% from 1.6 million during Q4. It found and flagged just 38% of that material before users reported it, by far the lowest proactive rate of the six content types.

Responses to rule violations include removing content, adding warnings to content that may be disturbing to some users while not violating Facebook standards, and notifying law enforcement in case of a "specific, imminent and credible threat to human life". The company said its progress is "especially true where we've been able to build artificial intelligence technology that automatically identifies content that might violate our standards". "Hate speech content often requires detailed scrutiny by our trained reviewers to understand context", the report explains, "and decide whether the material violates standards, so we tend to find and flag less of it".

Facebook has faced a storm of criticism for what critics have said was a failure to stop the spread of misleading or inflammatory information on its platform ahead of the US presidential election and the Brexit vote to leave the European Union, both in 2016.

The social network says that when action is taken on flagged content, it does not necessarily mean the content has been taken down. Current estimates by the firm suggest 3-4% of active Facebook accounts on the site between October 2017 and March 2018 were fake. It says it found and flagged almost 100% of spam content in both Q4 and Q1.

Facebook shares slid as much as 2% Tuesday morning after the company announced it had disabled 583 million fake accounts over the last three months. Those tools worked particularly well for content such as fake accounts and spam: the company said it used them to find 98.5% of the fake accounts it shut down, and "nearly 100%" of the spam.
