Facebook, Twitter, Microsoft and YouTube have all agreed to a new EU code of conduct that will require them to review hateful content online, putting enforcement of the law into the hands of private companies.
The new ruling follows the establishment of the EU Internet Forum in December 2015, which aimed to bring together EU Interior Ministers, high-level representatives of major internet companies, Europol, the EU Counter Terrorism Co-ordinator and the European Parliament, to discuss matters relating to harmful material online.
The rule, which was passed earlier today by the European Commission, is part of a new ‘code of conduct’ in response to earlier terror attacks in the European Union. It states that the companies have “a collective responsibility and pride in promoting and facilitating freedom of expression throughout the online world.”
By signing the code, they agree to have a clear procedure in place for removing hate speech within 24 hours, while also “strengthening ongoing partnerships with civil society organisations.”
This includes co-operating with the Member State authorities quickly and effectively to provide information on content that could endanger the lives of others.
Say no more
While there’s no arguing that certain types of content need some form of policing, the implications of leaving it to private firms are a little worrying. European Digital Rights (EDRi) and Access Now, two not-for-profit associations dedicated to preserving net neutrality within the EU, have already addressed concerns in a joint statement released earlier today.
Both organisations argued that civil society groups were left in the dark during discussions, meaning they had no say in the matter whatsoever. Further to this, they also stated that the ruling effectively downgrades the law to a “second-class status”, leaving the private companies free to internally govern what they deem to be controversial content.
We’ve already seen the lines between hate speech and acceptable content blurred in the past, when Twitter revoked the ‘verified’ status of controversial user Milo Yiannopoulos, an outspoken men’s rights campaigner. In an interview with CNNMoney, he argued that the reasons for his removal were unfair.
“Ridicule and criticism are being re-branded abuse and harassment,” he told the website.
“They don’t like the jokes I make so they’re coming at me.”
In the case of Yiannopoulos, it was likely a combination of public outcry and media response that resulted in his removal, and there’s every chance that future decisions on ‘hateful content’ could be made in the interests of the private companies themselves, rather than according to what is legally correct. Kantar Consumer Insight Director Imran Choudhary believes it’s a matter of how strictly these companies stick to the ruling. “The potential downside here is that the nature of the subject matter might be open to debate.”
“Only time will tell how this ruling is executed and how rigidly the letter of the ruling is interpreted.”
A preliminary assessment of the ruling is due to be reported to the High Level Group by the end of 2016.