Facebook admits 4% of accounts were fake

Small doses of nudity and graphic violence still make their way onto Facebook, even as the company gets better at detecting some objectionable content, according to a new report. Tuesday's report said Facebook disabled 583 million fake accounts during the first three months of this year, down from 694 million during the previous quarter. The company attributed rises in takedowns of other kinds of content, such as graphic violence, to improvements in detection.

Facebook also took down 837 million pieces of spam in the first three months of the year, "nearly 100% of which we found and flagged before anyone reported it", said Guy Rosen, Facebook's vice president of product management.

The figures are contained in an updated transparency report published by the company, which for the first time includes data on content that breaches Facebook's community standards. Facebook said users were more aggressively posting images of violence in places like war-torn Syria.

That's not to say, of course, that such content never shows up, just that, at scale, Facebook is able to remove most of it, often before its 2.2 billion users ever see it.

"We're not releasing that in this particular report", said Alex Schultz, the company's vice president of data analytics.

Rosen said that the company's detection systems are still in development for some categories of content; the rest of the removals came after Facebook users flagged the offending content for review.

The report did not directly cover the spread of false news, which Facebook has previously said it is trying to stamp out by increasing transparency around who buys political ads, strengthening enforcement and making it harder for so-called "clickbait" to show up in users' feeds.

The first of what will be quarterly reports on standards enforcement should be as notable to investors as the company's quarterly earnings reports.

To distinguish the many shades of offensive content, Facebook separates it into six categories: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam and fake accounts.

Facebook took down or applied warning labels to 3.4 million pieces of graphic violence during the first three months of this year, a 183 per cent increase on the 1.2 million it acted on during the final quarter of 2017.

Facebook also took action on 1.9 million pieces of terrorist propaganda.

"We use a combination of technology, reviews by our teams and reports from our community to identify content that might violate our standards", the report says. The report also doesn't cover how much inappropriate content Facebook missed.


Rosen said technology such as artificial intelligence is still years away from effectively detecting most bad content because context is so important.

"Our metrics can vary widely for fake accounts acted on", the report notes, "driven by new cyberattacks and the variability of our detection technology's ability to find and flag them". Most recently, the scandal involving digital consultancy Cambridge Analytica, which allegedly improperly accessed the data of up to 87 million Facebook users, has put the company's content moderation into the spotlight.