As Facebook works to become more transparent about its strategy to combat hate speech, one of its public policy leaders published a lengthy explainer on the process Tuesday. For all of the post’s enlightening details, though, it felt as if Facebook was saying, “Sorry, guys, it’s hard.”
Richard Allan, Facebook’s VP of Public Policy in Europe, the Middle East, and Asia, took to Facebook’s blog to lay it all out as part of the platform’s “Hard Questions” series.
There’s a lot of satisfying context and background here, especially in the wake of the platform’s recent agreement to fight online extremism alongside Twitter and YouTube as well as the Online Civil Courage Initiative it unveiled earlier this year.
And while Facebook’s still dealing with untangling a complex issue, it’s a welcome example of transparency for a platform that hasn’t had the best track record in terms of how it handles offensive content.
Defining and contextualizing hate speech
There’s a lot of discussion about the boundaries of hate speech and how the platform decides on what, exactly, constitutes hate speech. Allan even compares the issue in Germany, which we’ve touched on before, to that in America.
In Germany, for example, laws forbid incitement to hatred; you could find yourself the subject of a police raid if you post such content online. In the US, on the other hand, even the most vile kinds of speech are legally protected under the US Constitution.
It makes sense, in a way, to name-check Germany here; after all, the raids Allan refers to have actually happened.
And Germany is one of the countries that have gone all-in on forcing Facebook to deal with the hate speech issue (and also the reason that a VP from Facebook EMEA is addressing this as opposed to an American exec).
What’s especially interesting is that Allan offers up actual numbers as an example of how Facebook removes hate speech.
Over the last two months, on average, we deleted around 66,000 posts reported as hate speech per week, or around 288,000 posts a month globally. (This includes posts that may have been reported for hate speech but deleted for other reasons, although it doesn't include posts reported for other reasons but deleted for hate speech.*)
See that asterisk at the very end, though? It refers to all the caveats that come with reporting those numbers. Here’s a complete screenshot of all of those.
And that’s really the issue here.
Facebook is working to fight hate speech (which is good!) and, in this report, there are actually statistics (including the move to add “3,000 people to our community operations team around the world”). Plus, there’s a reference to AI experiments which sounds an awful lot like what Alphabet has been doing to screen toxic comments.
But, guys, it’s, like, super hard and stuff. And Facebook really, really wants us to know that to the point of over-explaining the problem.
There are so many qualifiers attached to one set of numbers that it's easy to lose track of whether any ground was gained at all. For all the transparency, the actual progress remains clouded, both for us as users and, apparently, for the platform itself.
Time to get going
Allan is exhaustive in the examples he lists and in his explanation of how "context" and "intent" matter. And with good reason: as the platform has grown, the monitoring of content has become an even bigger issue, as evidenced by The Guardian's recent "Facebook Files" series, which highlighted numerous issues, from suicides streamed on Facebook Live to the ins-and-outs of Facebook's moderators.
Look, this stuff is hard and good on Facebook for continuing to work on it (we’re looking at you, Twitter) and sharing this long (looooooong) post about it.
Free speech has always been a real sticky wicket, particularly here in the United States, and Facebook is still a relatively new platform (you know, compared to, say, the printing press). Nothing about this is going to be smooth and graceful, and whatever solution Facebook comes up with is going to be a bit more complicated than flooding ISIS' Facebook page with niceties to combat the hate.
But we're also smart enough to understand that words carry different meanings in different cultures. We also understand that it's a messy thing, that it's going to take time and (a lot of) work, and that, in the end, someone's going to be mad.
There are now 2 billion Facebook users globally, so these issues are only going to keep circulating until the platform implements its solutions, and even then there will still be work to do.
So hurrah to Facebook for being more open about the process, but it’s time to move forward with more concrete solutions.