I Feel Offended, Don’t Be Abusive! Implicit/Explicit Messages in Offensive and Abusive Language

Tommaso Caselli*, Valerio Basile, Jelena Mitrović, Inga Kartoziya, Michael Granitzer

*Corresponding author for this work

    Research output: Contribution to conference › Paper › Academic

    105 Citations (Scopus)
    159 Downloads (Pure)

    Abstract

    Abusive language detection is an unsolved and challenging problem for the NLP community. Recent literature suggests various approaches to distinguish between different language phenomena (e.g., hate speech vs. cyberbullying vs. offensive language) and factors (degree of explicitness and target) that may help to classify different abusive language phenomena. There are data sets that annotate the target of abusive messages (i.e., OLID/OffensEval (Zampieri et al., 2019a)). However, there is a lack of data sets that take into account the degree of explicitness. In this paper, we propose annotation guidelines to distinguish between explicit and implicit abuse in English and apply them to OLID/OffensEval. The outcome is a newly created resource, AbuseEval v1.0, which aims to address some of the existing issues in the annotation of offensive and abusive language (e.g., explicitness of the message, presence of a target, need for context, and interaction across different phenomena).
    Original language: English
    Pages: 1-11
    Number of pages: 12
    Publication status: Published - 2020
    Event: 12th Language Resources and Evaluation Conference (LREC 2020) - Marseille, France
    Duration: 11-May-2020 - 16-May-2020
    https://lrec2020.lrec-conf.org/en/

    Conference

    Conference: 12th Language Resources and Evaluation Conference
    Country/Territory: France
    City: Marseille
    Period: 11/05/2020 - 16/05/2020
    Internet address: https://lrec2020.lrec-conf.org/en/
