Countering online hate speech: How does human rights due diligence impact terms of service?

Eva Nave*, Lottie Lane

*Corresponding author for this work

Research output: Academic, peer reviewed

2 Citations (Scopus)
64 Downloads (Pure)


The Internet is a global forum largely governed by private actors driven by profit concerns, often disregarding the human rights of historically marginalised communities. Increased attention is being paid to the corporate human rights due diligence (HRDD) responsibilities applicable to online platforms countering illegal online content, such as hate speech. At the European Union (EU) level, cross-sector initiatives regulate the rights of marginalised groups and establish HRDD responsibilities for online platforms to expeditiously identify, prevent, mitigate, remedy and remove online hate speech. These initiatives include the Digital Services Act, the Audiovisual Media Services Directive, the proposed Directive on Corporate Sustainability Due Diligence, the proposed Artificial Intelligence Act and the Code of conduct on countering illegal hate speech online. Nevertheless, the HRDD framework applicable to online hate speech has focused mostly on the platforms' responsibilities throughout the course of their operations; guidance regarding HRDD requirements concerning the regulation of hate speech in the platforms' Terms of Service (ToS) is missing. This paper employs the conceptualisation of criminal hate speech set out in the Council of Europe Committee of Ministers' Recommendation CM/Rec(2022)16, Paragraph 11, to develop specific HRDD responsibilities. We argue that online platforms should, as part of emerging preventive HRDD responsibilities within Europe, respect the rights of historically oppressed communities by aligning their ToS with the conceptualisation of criminal hate speech in European human rights standards.
Original language: English
Number of pages: 18
Journal: Computer Law & Security Review
Status: Published - Nov 2023