Facebook is finding out the hard way that it’s not much fun being at the forefront of the political and cultural wars.
Following the deadly 2017 demonstrations in Charlottesville, Va., in which a counterprotester was killed by a white supremacist, the social media giant made efforts to crack down on hate groups and hate speech appearing on its network.
But it’s one thing to say you’ll crack down on hate speech. It’s another thing entirely to identify what qualifies. As it turns out, Facebook may have bitten off more than it can chew: its staff may not be well equipped to grapple with a question as complex and weighty as what constitutes “hate speech.”
For example, internal documents show moderators were instructed after Charlottesville to be on the lookout for language specifically praising, supporting, or representing white supremacy. Confusingly enough, moderators were told not to worry about white nationalism and white separatism. As The New Republic’s Matt Ford asked Friday, “How did ethnic cleansing make the cut when Jim Crow-style apartheid didn’t?”
White nationalism is an “extreme right movement and ideology, but it doesn’t seem to be always associated with racism (at least not explicitly),” reads the document, which was obtained and published by Motherboard.
The document continues, explaining that “some white nationalists carefully avoid the term supremacy because it has negative connotations.”
Though the document shows Facebook made an effort to explain the distinctions, it mostly reveals the social media company doesn’t have a clear grasp of the issue.
In a section on “challenges” for white supremacy, the document warns moderators that the ideology “overlaps with white nationalism/separatism, even orgs and individuals define themselves inconsistently.”
“Can you say you’re a racist on Facebook?” the document asks.
“No,” it responds. “By definition, as a racist, you hate on at least one of our characteristics that are protected.”
Facebook explained Friday in a statement to Motherboard that it evaluates “whether an individual or group should be designated as a hate figure or organization based on a number of different signals, such as whether they carried out or have called for violence against people based on race, religion or other protected categories.”
The social media company added the following (oddly resorting to British orthography): “Our policies against organised hate groups and individuals are longstanding and explicit — we don’t allow these groups to maintain a presence on Facebook because we don’t want to be a platform for hate. Using a combination of technology and people we work aggressively to root out extremist content and hate organisations from our platform.”
OK, that sounds noble and all, but how does the company plan to do this effectively and consistently when it apparently hasn’t made it past the definition-of-terms stage?
