Comparing Intrinsic and Extrinsic Evaluation of MT Output in a Dialogue System

Ielka van der Sluis, S. Luz, A. Schneider

    Research output: Conference contribution in book/report/conference proceeding (academic, peer-reviewed)


    Abstract

    We present an exploratory study to assess machine translation output for application in a dialogue system using an intrinsic and an extrinsic evaluation method. For the intrinsic evaluation we developed an annotation scheme to determine the quality of the translated utterances in isolation. For the extrinsic evaluation we employed the Wizard of Oz technique to assess the quality of the translations in the context of a dialogue application. The results of the two evaluations differ, and we discuss possible reasons for this outcome.
    Original language: English
    Title of host publication: Proceedings of the 7th International Workshop on Spoken Language Translation
    Subtitle of host publication: IWSLT 2010
    Editors: Marcello Federico
    Place of publication: Paris
    Pages: 329-336
    Number of pages: 7
    Publication status: Published - 2010
    Event: 7th International Workshop on Spoken Language Translation - Paris, France
    Duration: 2 Dec 2010 - 3 Dec 2010

    Conference

    Conference: 7th International Workshop on Spoken Language Translation
    Country/Territory: France
    City: Paris
    Period: 02/12/2010 - 03/12/2010
