
Facebook shuts down 583 million fake accounts as it reveals it is packed with abusive content

Facebook shut down 583 million fake accounts in the first quarter of 2018, the social media giant said as it revealed the huge scale of abuse on the platform.
For the first time, the company has revealed just how much malicious content was being published and shared on the platform.
It comes as Facebook has been criticised for the way the platform has been abused, with accusations that it has helped change the course of elections and enabled criminal behaviour.
It said it had taken down 837 million pieces of spam content between January and March of this year.
The social network said “nearly 100 per cent” of this content was found and removed before it was reported, and was accompanied by the removal of around 583 million fake accounts, most of which were disabled within minutes of being activated.
However, the social network admitted its automated tools were still struggling to pick up hate speech, with only 38 per cent of the more than 2.5 million posts removed having been spotted by the firm’s technology.
Facebook’s vice president of product management, Guy Rosen, said more work needed to be done to improve such detection tools.
“We have a lot of work still to do to prevent abuse. It’s partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important,” he said.
“For example, artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue.
“In addition, in many areas – whether it’s spam, porn or fake accounts – we’re up against sophisticated adversaries who continually change tactics to circumvent our controls, which means we must continuously build and adapt our efforts.
“It’s why we’re investing heavily in more people and better technology to make Facebook safer for everyone.”
The figures also revealed that Facebook believes between 3 and 4 per cent of its more than 2 billion monthly active users were fake accounts.
The site also said it took down 21 million pieces of nudity and sexual activity-related content, 96 per cent of which was found and flagged by its systems before being reported.
In terms of graphic violent content, Facebook said more than 3.4 million posts were either taken down or given warning labels, 86 per cent of which were spotted by its detection tools.
This was up from 1.2 million at the end of 2017, and while the social network said the majority of the increase was down to improvements in its detection technology, some of the rise was due to an increase in such content appearing on the platform.
Mr Rosen said making the figures public would “push” the company to improve more quickly.
“We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too,” he said.
“This is the same data we use to measure our progress internally – and you can now see it to judge our progress for yourselves.”
However, it comes after a House of Commons committee labelled written evidence from the firm to its fake news inquiry disappointing.
Culture committee chair Damian Collins said Facebook failed to provide “a sufficient level of detail and transparency” in its response following an appearance by chief technology officer Mike Schroepfer.
The committee has also urged Facebook boss Mark Zuckerberg to appear before them, adding that it would be open to taking evidence from the billionaire company founder via video link if he would not attend in person.

The Independent
