Unmasking Contextual Stereotypes: Measuring and Mitigating BERT’s Gender Bias

Marion Bartl, Malvina Nissim, Albert Gatt

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


    Abstract

    Contextualized word embeddings have been replacing standard embeddings as the representational knowledge source of choice in NLP systems. Since a variety of biases have previously been found in standard word embeddings, it is crucial to assess biases encoded in their replacements as well. Focusing on BERT (Devlin et al., 2018), we measure gender bias by studying associations between gender-denoting target words and names of professions in English and German, comparing the findings with real-world workforce statistics. We mitigate bias by fine-tuning BERT on the GAP corpus (Webster et al., 2018), after applying Counterfactual Data Substitution (CDS) (Maudslay et al., 2019). We show that our method of measuring bias is appropriate for languages such as English, but not for languages with a rich morphology and gender-marking, such as German. Our results highlight the importance of investigating bias and mitigation techniques cross-linguistically, especially in view of the current emphasis on large-scale, multilingual language models.
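
    The abstract describes measuring associations between gender-denoting target words and profession names with BERT's masked language model. Below is a minimal sketch of how such an association score can be obtained, assuming the HuggingFace transformers library; the template sentence, the choice of target words, and the use of bert-base-uncased are illustrative assumptions, not the authors' exact measurement protocol.

```python
# Sketch: probability BERT assigns to a gender-denoting target word at the
# [MASK] position of a profession template. Illustrative only, not the
# paper's exact procedure or templates.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def target_probability(template: str, target: str) -> float:
    """Return the model's probability for `target` at the [MASK] slot."""
    inputs = tokenizer(template, return_tensors="pt")
    # Locate the masked position in the tokenized input.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits[0, mask_pos].softmax(dim=-1)
    target_id = tokenizer.convert_tokens_to_ids(target)
    return probs[0, target_id].item()

# Hypothetical example: compare gendered targets for one profession sentence.
sentence = "[MASK] is a nurse."
for word in ("he", "she"):
    print(word, target_probability(sentence, word))
```

    Scores of this kind, aggregated over professions, could then be set against real-world workforce statistics, as the abstract outlines.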
    Original language: English
    Title of host publication: Proceedings of the Second Workshop on Gender Bias in Natural Language Processing
    Subtitle of host publication: COLING 2020
    Editors: Marta R. Costa-jussà, Christian Hardmeier, Will Radford, Kellie Webster
    Publisher: Association for Computational Linguistics (ACL)
    Number of pages: 16
    Publication status: Published - 2020
    Event: COLING Workshop on Gender Bias in Natural Language Processing - Online, Barcelona, Spain
    Duration: 13-Dec-2020 → …
    Conference number: 2

    Workshop

    Workshop: COLING Workshop on Gender Bias in Natural Language Processing
    Country/Territory: Spain
    City: Barcelona
    Period: 13/12/2020 → …
