Published: Wed, May 16, 2018
World News | By Sandy Lane

Facebook, Aiming for Transparency, Details Removal of Posts and Fake Accounts

Facebook has revealed that more than half a billion fake accounts have been removed so far this year.

While Facebook uses what it calls "detection technology" to root out offending posts and profiles, the software still has difficulty detecting hate speech. It has also let prohibited activity slip through: why, for example, were people able to sell opiates on the site, even though Facebook says such content is banned?

Improved detection technology also helped Facebook take action against 1.9 million posts containing terrorist propaganda, a 73 percent increase.

Facebook also said that it removed 21 million pieces of content depicting adult nudity and sexual activity, 96 percent of which was discovered by its technology before a user reported it. Facebook said that for every 10,000 pieces of content viewed on the service, seven to nine views were of content that in some way violated its nudity and pornography rules.

The report did not directly cover the spread of false news, which the company has previously said it was trying to stamp out by increasing transparency on who buys political ads, strengthening enforcement, and making it harder for so-called "clickbait" to show up in users' feeds.

Facebook Inc. said it took down 583 million fake profiles in the first three months of the year, usually within minutes of their creation.

The report covers the period from October 2017 to March 2018 and deals with content removed for graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts.

In the first quarter of 2018, Facebook removed 2.5 million pieces of hate speech from its social network.

The information from Facebook comes a few weeks after the company unveiled internal guidelines about what is - and isn't - allowed on the social network.

The report also covers fake accounts, an issue that has gotten more attention in recent months after it was revealed that Russian agents used fake accounts to buy ads in an effort to manipulate Facebook users in the US and elsewhere.

"For serious issues like graphic violence and hate speech, our technology still doesn't work that well and so it needs to be checked by our review teams", Mr Rosen said. The estimate is taken from a global sampling of all content in the first quarter, weighted by popularity of that content. Facebook doesn't yet have a metric for prevalence of other types of content. Instead, Facebook's approach is to have bigger groups residing in "centres of excellence" in order to review the content on its platform, he explained.

The report and the methods it details are Facebook's first step toward sharing how it plans to safeguard the news feed in the future.

"All of this is under development". By releasing these numbers, Facebook can claim that it's getting a grip on its community. To that end, the company is scheduling summits around the globe to discuss this topic, starting Tuesday in Paris.

Facebook intends to hold a series of public engagements this year to get people's feedback on its community standards, and Singapore will hold one of these "this year", according to one of its executives.
