Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer

    Research output: Chapter in Book/Report/Conference proceeding: Conference contribution (Academic, peer-reviewed)

    29 Citations (Scopus)
    124 Downloads (Pure)

    Abstract

    Scarcity of parallel data causes formality style transfer models to have limited success in preserving content. We show that fine-tuning pre-trained language (GPT-2) and sequence-to-sequence (BART) models boosts content preservation, and that this is possible even with limited amounts of parallel data. Augmenting these models with rewards that target style and content, the two core aspects of the task, we achieve a new state of the art.
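
    As a rough illustration of the fine-tuning setup the abstract describes (not the authors' released code), the sketch below fine-tunes BART as a sequence-to-sequence formality transfer model on a handful of hypothetical informal-formal pairs using the Hugging Face transformers API; the style and content rewards from the paper are omitted, and all data and hyperparameters are placeholders.

    ```python
    # Minimal sketch: fine-tune BART for informal -> formal rewriting.
    # Data, hyperparameters, and model choice ("facebook/bart-base") are
    # illustrative assumptions, not the paper's exact configuration.
    import torch
    from transformers import BartTokenizer, BartForConditionalGeneration

    tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

    # Hypothetical parallel data: informal sentences paired with formal rewrites.
    pairs = [
        ("gotta say, that movie was kinda awesome",
         "I must say that the movie was quite impressive."),
    ]

    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
    model.train()
    for informal, formal in pairs:
        inputs = tokenizer(informal, return_tensors="pt")
        labels = tokenizer(formal, return_tensors="pt").input_ids
        # Plain cross-entropy fine-tuning; the paper additionally adds
        # reward terms targeting style and content, not shown here.
        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # Inference: generate a formal rewrite for a new informal sentence.
    model.eval()
    out = model.generate(
        **tokenizer("thx for the help, u rock", return_tensors="pt"),
        max_length=40, num_beams=4)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
    ```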
    Original language: English
    Title of host publication: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
    Editors: Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
    Place of publication: Bangkok, Thailand
    Publisher: Association for Computational Linguistics, ACL Anthology
    Pages: 484-494
    Number of pages: 11
    Volume: 2
    DOIs
    Publication status: Published - 2021
