LCT-1 at SemEval-2023 Task 10: Pre-training and Multi-task Learning for Sexism Detection and Classification

Konstantin Chernyshev*, Ekaterina Garanina*, Duygu Bayram, Qiankun Zheng, Lukas Edman

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

Misogyny and sexism are growing problems on social media. Advances have been made in online sexism detection, but the resulting systems are often uninterpretable. SemEval-2023 Task 10 on Explainable Detection of Online Sexism aims to increase the explainability of sexism detection, and our team participated in all of its subtasks. Our system is based on further domain-adaptive pre-training (Gururangan et al., 2020). Building on domain-adapted Transformer-based models, we compare fine-tuning with multi-task learning and show that each subtask requires a different system configuration.
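
As a rough illustration of the domain-adaptive pre-training step mentioned in the abstract, the sketch below continues masked language modelling of a Transformer encoder on unlabelled in-domain text before task-specific fine-tuning. The model name, data file, and hyperparameters are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of domain-adaptive pre-training (continued MLM on in-domain text).
# Model name, file path, and hyperparameters are assumptions for illustration only.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "roberta-base"  # assumed base encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Unlabelled in-domain corpus (e.g., social media posts), one example per line.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Standard MLM objective: 15% of tokens are masked.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="dapt-checkpoint",
        num_train_epochs=1,
        per_device_train_batch_size=16,
    ),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
# The adapted encoder can then be fine-tuned per subtask or in a multi-task setup.
```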

Original language: English
Title of host publication: 17th International Workshop on Semantic Evaluation, SemEval 2023 - Proceedings of the Workshop
Editors: Atul Kr. Ojha, A. Seza Dogruoz, Giovanni Da San Martino, Harish Tayyar Madabushi, Ritesh Kumar, Elisa Sartori
Publisher: Association for Computational Linguistics, ACL Anthology
Pages: 1573-1581
Number of pages: 9
ISBN (Electronic): 9781959429999
DOIs
Publication status: Published - 2023
Event: 17th International Workshop on Semantic Evaluation, SemEval 2023, co-located with the 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023 - Hybrid, Toronto, Canada
Duration: 13-Jul-2023 – 14-Jul-2023

Conference

Conference: 17th International Workshop on Semantic Evaluation, SemEval 2023, co-located with the 61st Annual Meeting of the Association for Computational Linguistics, ACL 2023
Country/Territory: Canada
City: Hybrid, Toronto
Period: 13/07/2023 – 14/07/2023
