
Visually Grounded Speech Models have a Mutual Exclusivity Bias

    Research output: Working paper › Preprint › Academic


    Abstract

    When children learn new words, they employ constraints such as the mutual exclusivity (ME) bias: a novel word is mapped to a novel object rather than a familiar one. This bias has been studied computationally, but only in models that use discrete word representations as input, ignoring the high variability of spoken words. We investigate the ME bias in the context of visually grounded speech models that learn from natural images and continuous speech audio. Concretely, we train a model on familiar words and test its ME bias by asking it to select between a novel and a familiar object when queried with a novel word. To simulate prior acoustic and visual knowledge, we experiment with several initialisation strategies using pretrained speech and vision networks. Our findings reveal the ME bias across the different initialisation approaches, with a stronger bias in models with more prior (in particular, visual) knowledge. Additional tests confirm the robustness of our results, even when different loss functions are considered.
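    As a rough illustration of the mutual exclusivity test described in the abstract (not the authors' released code), the sketch below assumes the model exposes separate audio and image encoders that map a spoken word and an image into a shared embedding space, and that a cosine-similarity comparison decides which object the query word is matched to; the names `audio_encoder` and `image_encoder` are assumptions.

    ```python
    import torch
    import torch.nn.functional as F

    def me_trial(audio_encoder, image_encoder,
                 novel_word_audio, novel_image, familiar_image):
        """Score one mutual-exclusivity trial: is a novel spoken word
        mapped to the novel object rather than the familiar one?"""
        with torch.no_grad():
            query = F.normalize(audio_encoder(novel_word_audio), dim=-1)    # speech embedding
            novel = F.normalize(image_encoder(novel_image), dim=-1)         # novel-object embedding
            familiar = F.normalize(image_encoder(familiar_image), dim=-1)   # familiar-object embedding
        # Cosine similarity of the spoken query against each candidate image.
        sim_novel = (query * novel).sum(-1)
        sim_familiar = (query * familiar).sum(-1)
        # The trial is ME-consistent if the novel object scores higher.
        return bool(sim_novel > sim_familiar)
    ```

    Averaging this decision over many novel-word trials gives an ME score, where values above chance (0.5) indicate a mutual exclusivity bias.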
    Original language: English
    Publisher: Association for Computational Linguistics (ACL)
    Number of pages: 15
    DOIs
    Publication status: Submitted - 20-Mar-2024

    Keywords

    • cs.CL
    • eess.AS
