Facebook has acknowledged that it needs to do more to combat hate speech in a blog post for its “Hard Questions” series.
Facebook currently defines hate speech as content that directly attacks people on the basis of a “protected characteristic”: their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.
Richard Allen, VP of Public Policy for Europe, the Middle East and Africa, noted that this definition is not universal.
Countries like South Africa and Germany have their own definitions due to their history, while the US protects all kinds of speech under its constitution.
The company has committed to participating in academic work like the Free Speech Debate and Dangerous Speech Project that seeks to define the boundaries of speech online.
Over the last two months, Facebook has deleted around 66 000 posts reported as hate speech, though Allen admits that this isn’t enough.
“It’s clear we’re not perfect when it comes to enforcing our policy. Often there are close calls — and too often we get it wrong.”
He mentions an incident last year in which Facebook temporarily blocked activist Shaun King after he posted a racist message he’d been sent. The company says that at first it did not recognise that King was condemning the message.
For now, Facebook is experimenting with technology that can “filter the most obviously toxic language in comments so they are hidden from posts.” Allen reiterates that Facebook will add 3 000 people to its community operations team, which handles all reports and removals, including terrorist content.
The company says it is also working on making more of this data available to the public.