Comparing Intrinsic and Extrinsic Evaluation of MT Output in a Dialogue System

Ielka van der Sluis, S. Luz, A. Schneider

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    6 Citations (Scopus)
    129 Downloads (Pure)


    We present an exploratory study to assess machine translation output for application in a dialogue system using an intrinsic and an extrinsic evaluation method. For the intrinsic evaluation we developed an annotation scheme to determine the quality of the translated utterances in isolation. For the extrinsic evaluation we employed the Wizard of Oz technique to assess the quality of the translations in the context of a dialogue application. Results differ between the two methods, and we discuss the possible reasons for this outcome.
    Original language: English
    Title of host publication: Proceedings of the 7th International Workshop on Spoken Language Translation
    Subtitle of host publication: IWSLT 2010
    Editors: Marcello Federico
    Place of publication: Paris
    Number of pages: 7
    Publication status: Published - 2010
    Event: 7th International Workshop on Spoken Language Translation - Paris, France
    Duration: 2-Dec-2010 to 3-Dec-2010


    Conference: 7th International Workshop on Spoken Language Translation
