A New Model for Generating Multimodal Referring Expressions

    Research output: Academic, peer reviewed



    We present a new algorithm for the generation of multimodal referring expressions (combining language and deictic gestures). The approach differs from earlier work in that we allow for various gradations of preciseness in pointing, ranging from unambiguous to vague pointing gestures. The model predicts that the linguistic properties realized in the generated expression are co-dependent on the kind of pointing gesture included. The decision to point is based on a trade-off between the costs of pointing and the costs of linguistic properties, where both kinds of costs are computed in empirically motivated ways. The model has been implemented using a graph-based generation algorithm.
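    The cost trade-off the abstract describes can be illustrated with a minimal sketch. All cost values, property names, and gesture labels below are hypothetical placeholders, not the paper's empirically derived costs; the sketch only shows the general idea of choosing the cheapest distinguishing combination of linguistic properties and a pointing gesture.

    ```python
    # Hypothetical sketch of the cost trade-off between pointing and
    # linguistic properties. Costs are illustrative, not from the paper.

    # More precise pointing is assumed to be more costly than vague pointing.
    POINTING_COST = {"precise": 3.0, "imprecise": 2.0, "vague": 1.0}
    # Each linguistic property carries its own (assumed) cost.
    PROPERTY_COST = {"type": 0.5, "color": 1.0, "size": 1.5, "relation": 2.0}

    def expression_cost(properties, gesture=None):
        """Total cost of a multimodal referring expression:
        sum of property costs plus the cost of the gesture, if any."""
        cost = sum(PROPERTY_COST[p] for p in properties)
        if gesture is not None:
            cost += POINTING_COST[gesture]
        return cost

    def cheapest(candidates):
        """Pick the distinguishing description with the lowest total cost."""
        return min(candidates, key=lambda c: expression_cost(*c))

    # Candidate distinguishing descriptions for some target referent:
    # many properties with no gesture, or fewer properties plus a gesture.
    candidates = [
        (["type", "color", "size", "relation"], None),   # purely linguistic, cost 5.0
        (["type", "color"], "vague"),                    # vague gesture, cost 2.5
        (["type"], "precise"),                           # precise gesture, cost 3.5
    ]
    print(cheapest(candidates))  # → (['type', 'color'], 'vague')
    ```

    Under these assumed costs, a short description combined with a vague gesture wins over both the purely linguistic description and the precise gesture, mirroring the co-dependence between gesture precision and linguistic content that the model predicts.
    
    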
    Original language: English
    Title: Proceedings of the 9th European Workshop on Natural Language Generation (ENLG'03)
    Place of production: Budapest, Hungary
    Pages: 47-54
    Status: Published - 2003
