HateBERT: Retraining BERT for Abusive Language Detection in English

Tommaso Caselli, Valerio Basile, Jelena Mitrović, Michael Granitzer

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

136 Citations (Scopus)
136 Downloads (Pure)

Abstract

In this paper, we introduce HateBERT, a re-trained BERT model for abusive language detection in English. The model was trained on RAL-E, a large-scale dataset of Reddit comments in English from communities banned for being offensive, abusive, or hateful, which we have collected and made publicly available. We present the results of a detailed comparison between a general pre-trained language model and the abuse-inclined version obtained by retraining on posts from the banned communities, evaluated on three English datasets for offensive language, abusive language, and hate speech detection. On all datasets, HateBERT outperforms the corresponding general BERT model. We also discuss a battery of experiments comparing the portability of the generic pre-trained language model and its abusive language-inclined counterpart across the datasets, indicating that portability is affected by the compatibility of the annotated phenomena.
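The retraining step described above continues BERT's masked-language-model (MLM) objective on the RAL-E corpus. As an illustration only (not the authors' code), the sketch below shows BERT's standard MLM corruption scheme: 15% of tokens are selected for prediction, and of those, 80% are replaced with a [MASK] id, 10% with a random token, and 10% left unchanged. The function name, ids, and parameters are hypothetical.

```python
import random

def mask_for_mlm(token_ids, vocab_size, mask_id, special_ids=(), mlm_prob=0.15, rng=None):
    """BERT-style MLM corruption (illustrative sketch, not the HateBERT code).

    Returns (inputs, labels): labels hold the original token id at corrupted
    positions and -100 (the usual ignore index) everywhere else.
    """
    rng = rng or random.Random(0)
    inputs, labels = list(token_ids), []
    for i, tok in enumerate(token_ids):
        # Skip special tokens; select ordinary tokens with probability mlm_prob.
        if tok in special_ids or rng.random() >= mlm_prob:
            labels.append(-100)  # not selected: the loss ignores this position
            continue
        labels.append(tok)       # selected: the model must predict the original id
        r = rng.random()
        if r < 0.8:
            inputs[i] = mask_id                    # 80%: replace with [MASK]
        elif r < 0.9:
            inputs[i] = rng.randrange(vocab_size)  # 10%: replace with a random token
        # remaining 10%: keep the original token unchanged
    return inputs, labels
```

Retraining then consists of minimizing cross-entropy between the model's predictions at the selected positions and the labels, over batches of in-domain text.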
Original language: English
Title of host publication: Proceedings of the 5th Workshop on Online Abuse and Harm
Editors: Aida Mostafazadeh Davani, Douwe Kiela, Mathias Lambert, Bertie Vidgen, Vinodkumar Prabhakaran, Zeerak Waseem
Publisher: Association for Computational Linguistics (ACL)
Pages: 17-25
Number of pages: 9
DOIs
Publication status: Published - 27-Jul-2021

Keywords

  • hate speech
  • offensive language
  • language models
