
Facebook closed 583m fake accounts in first three months of 2018

16 May 2018

Facebook said Tuesday it took down 21 million "pieces of adult nudity and sexual activity" in the first quarter of 2018, 96 percent of which was discovered and flagged by the company's technology before it was reported. It attributed that increase in takedowns to "improvements in our ability to find violating content using photo-detection technology, which detects both old content and newly posted content". Among the most noteworthy numbers: Facebook said it took down 583 million fake accounts in Q1 2018, down from 694 million in Q4 2017.

Some 3.4 million pieces of graphically violent content were either removed or labelled with a warning during the period covered by the report, with Facebook's improved detection systems picking up 85.6 percent of the content it subsequently took action on.

Facebook said in a written report that of every 10,000 pieces of content viewed in the first quarter, an estimated 22 to 27 pieces contained graphic violence, up from an estimate of 16 to 19 in the final quarter of 2017. The company estimates that between 0.22 percent and 0.27 percent of content violated Facebook's standards for graphic violence in the first quarter of 2018.

The company has a policy of removing content that glorifies the suffering of others.

These releases come in the wake of the Cambridge Analytica scandal, which has left the company battling to restore its reputation with users and developers - though employees have said the decision to release the Community Standards was not driven by recent events.

Facebook's vice president of product management, Guy Rosen, said the company's detection systems are still in development for some categories of content.

However, Facebook declined to say how many minors - users aged 13 to 17, the youngest permitted on the platform - saw the offending content.

In four of the six categories, deletions rose over the previous quarter: spam (up 15%), violent content (up 65%), hate speech (up 56%) and terrorist content (up 73%). Deletions of fake accounts fell 16%, while deletions of nudity and sexual activity were unchanged.

Because the photo-detection technology scans existing as well as newly posted material, old violating content was taken down alongside new.

But hate speech is a problem for Facebook today, as the company's struggle to stem the flow of fake news and content meant to encourage violence against Muslims in Myanmar has shown.

With regard to graphic violence, 3.5 million items were removed in Q1 2018, 86% of which were flagged before being reported.

Facebook also disclosed that it disabled almost 1.3 billion fake accounts in the six months ending in March. "Hate speech content often requires detailed scrutiny by our trained reviewers to understand context", explains the report, "and decide whether the material violates standards, so we tend to find and flag less of it".

In most other categories, however, artificial intelligence technology now does much of that work.

Facebook plans to continue publishing enforcement reports, and will refine its methodology for measuring how much violating content circulates on the platform.

Facebook also said it found and flagged almost 100% of spam content in both Q1 and Q4 - nearly all of it before anyone had reported it - and that removing fake accounts is the key to combating that type of content.
