Facebook Reports Progress In Cracking Down On Hate Speech, Other Violations
UPDATED with comments from Mark Zuckerberg
Facebook has issued an update on its progress in combating hate speech, depictions of graphic violence, terrorist propaganda and other violations of its community standards.
The social network said it is making strides in proactively identifying hate speech, saying it’s able to remove the majority of posts before users report them. It’s also gotten better at finding violent or graphic content.
CEO Mark Zuckerberg published an extensive Facebook post, which he labeled his blueprint for content governance and enforcement, laying out his strategy for keeping people safe on the company’s platforms. Facebook issued its report a day after a bombshell story in the New York Times revealed how the company attempted to deny and deflect blame for Russia’s manipulation of its platform.
“The past two years have shown that without sufficient safeguards, people will misuse these tools to interfere in elections, spread misinformation, and incite violence,” Zuckerberg wrote. “One of the most painful lessons I’ve learned is that when you connect two billion people, you will see all the beauty and ugliness of humanity.”
Zuckerberg said Facebook has gotten better at proactively identifying problematic content, as artificial intelligence helps flag content for its team of reviewers. The system prioritizes content that can cause real-world harm: terrorist content, or indications that a user might be suicidal. Facebook also is working aggressively to stop the spread of hate in countries in crisis, such as Myanmar, where Zuckerberg said the social network proactively identified 63% of the hate speech it removed before users reported it, thanks to investments in technology and the addition of 100 Burmese language experts reviewing posts.
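Facebook has not published how that triage actually works; purely as an illustration, a review queue that surfaces the highest-severity flags first, assuming hypothetical category weights and classifier scores, might look like this Python sketch:

    import heapq
    from dataclasses import dataclass, field

    # Hypothetical severity weights -- Facebook has not disclosed its actual
    # ranking; the categories echo the article (terrorism and suicide risk first).
    SEVERITY = {"terrorism": 3, "suicide_risk": 3, "hate_speech": 2,
                "graphic_violence": 2, "clickbait": 1}

    @dataclass(order=True)
    class FlaggedPost:
        priority: float
        post_id: str = field(compare=False)
        category: str = field(compare=False)

    class ReviewQueue:
        """Surfaces the flagged posts most likely to cause real-world harm first."""
        def __init__(self):
            self._heap = []

        def flag(self, post_id: str, category: str, model_score: float):
            # heapq is a min-heap, so negate: higher severity x confidence pops first.
            priority = -(SEVERITY.get(category, 1) * model_score)
            heapq.heappush(self._heap, FlaggedPost(priority, post_id, category))

        def next_for_review(self) -> FlaggedPost:
            return heapq.heappop(self._heap)

    queue = ReviewQueue()
    queue.flag("post-1", "clickbait", 0.95)
    queue.flag("post-2", "terrorism", 0.80)
    print(queue.next_for_review().post_id)  # "post-2" is reviewed first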
“Overall, we know we have a lot more work to do when it comes to preventing abuse on Facebook,” said Guy Rosen, the company’s vice president of product management.
Facebook said it took action on more than 15.4 million pieces of violent and graphic content in the third quarter, whether by removing it, putting a warning screen over it, disabling the offending account and/or contacting law enforcement. It also took down more than 1.5 billion fake accounts.
The company also has stepped up its removal of terrorist content, using technology that can proactively detect photos that are extremely similar to old images of terrorist propaganda that had been removed previously.
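Facebook hasn’t detailed that matching technology, but a standard building block for flagging near-duplicate images is a perceptual hash compared by Hamming distance. The sketch below uses the simple average-hash variant via the Pillow library; the filenames and match threshold are illustrative assumptions, not Facebook’s actual system:

    from PIL import Image  # pip install Pillow

    def average_hash(path: str, size: int = 8) -> int:
        """64-bit perceptual hash: shrink to 8x8 grayscale, threshold at the mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p >= mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")  # number of differing bits

    # Hashes of previously removed propaganda images (hypothetical file name).
    known_hashes = {average_hash("removed_propaganda.jpg")}

    def is_near_duplicate(path: str, max_distance: int = 5) -> bool:
        # Small Hamming distance means the images are visually near-identical.
        h = average_hash(path)
        return any(hamming(h, k) <= max_distance for k in known_hashes)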
Facebook also started reporting instances of bullying and harassment and of child nudity and exploitation. In the last quarter, the company took action on 2.1 million pieces of content that violated its policies on bullying and harassment, removing 15% of it before it was reported.

The company removed 8.7 million pieces of content that violated its policies on child nudity or the sexual exploitation of children; 99% were identified before anyone reported them.
Zuckerberg said Facebook’s next priority is discouraging “borderline” content: sensational and provocative posts that attract disproportionate engagement.
“Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average — even when they tell us afterwards they don’t like the content,” Zuckerberg wrote.
Facebook will begin penalizing clickbait and misinformation so that such posts get less traction on the social network.
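Zuckerberg’s post says distribution will fall as content approaches the policy line, but gives no formula. Purely as a sketch, assuming a hypothetical classifier score for how close a post sits to that line, a demotion curve that inverts the natural engagement pattern could look like this:

    def distribution_multiplier(violation_prob: float, policy_line: float = 0.9) -> float:
        """Hypothetical demotion: the closer a post scores to the policy line,
        the less distribution it gets, inverting the engagement curve."""
        if violation_prob >= policy_line:
            return 0.0  # over the line: removed outright, not just demoted
        return 1.0 - (violation_prob / policy_line)  # linear ramp toward zero

    for p in (0.1, 0.5, 0.85):
        print(f"score={p:.2f} -> reach x{distribution_multiplier(p):.2f}")
    # score=0.10 -> reach x0.89; score=0.50 -> reach x0.44; score=0.85 -> reach x0.06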
Lastly, the social network is creating an appeals process, to be handled by an independent body, to deal with errors.
“This matters,” Zuckerberg wrote, “both for ensuring we’re not mistakenly stifling people’s voices or failing to keep people safe, but also for building a sense of legitimacy in the way we handle enforcement and community governance.”