Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer

    Research output: Academic, peer reviewed

    30 Citations (Scopus)
    131 Downloads (Pure)

    Abstract

    Scarcity of parallel data causes formality style transfer models to have limited success in preserving content. We show that fine-tuning pre-trained language (GPT-2) and sequence-to-sequence (BART) models boosts content preservation, and that this is possible even with limited amounts of parallel data. By augmenting these models with rewards that target style and content, the two core aspects of the task, we achieve a new state-of-the-art.
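
    As a concrete illustration of the fine-tuning step mentioned in the abstract, the sketch below fine-tunes BART on a few toy informal-to-formal pairs with Hugging Face transformers. This is not the authors' implementation: the checkpoint name, example data, and hyperparameters are illustrative assumptions, and the style and content rewards the paper adds on top of plain fine-tuning are omitted.

    # Minimal sketch (not the paper's code): supervised fine-tuning of BART on
    # informal -> formal pairs. The style/content rewards from the paper are omitted.
    import torch
    from torch.optim import AdamW
    from transformers import BartForConditionalGeneration, BartTokenizer

    tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")  # assumed checkpoint
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

    # Hypothetical toy parallel data (informal source, formal target).
    pairs = [
        ("gotta go, ttyl", "I have to leave now; I will talk to you later."),
        ("that movie was sooo good", "That movie was very good."),
    ]

    optimizer = AdamW(model.parameters(), lr=3e-5)  # illustrative learning rate
    model.train()

    for informal, formal in pairs:
        # Tokenize source and target; text_target= populates the labels tensor.
        batch = tokenizer(informal, text_target=formal,
                          return_tensors="pt", padding=True, truncation=True)
        loss = model(**batch).loss      # standard seq2seq cross-entropy loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # Inference: rewrite an informal sentence in a more formal style.
    model.eval()
    inputs = tokenizer("u should def check it out", return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=40, num_beams=4)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
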
    Original language: English
    Title: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
    Editors: Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
    Place of publication: Bangkok, Thailand
    Publisher: Association for Computational Linguistics, ACL Anthology
    Pages: 484-494
    Number of pages: 11
    Volume: 2
    DOIs
    Status: Published - 2021
