Read my points: Effect of animation type when speech-reading from EMA data

Kristy James, Martijn Wieling

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    1 Citation (Scopus)
    245 Downloads (Pure)

    Abstract

    Three popular vocal-tract animation paradigms were tested for intelligibility when displaying videos of pre-recorded Electromagnetic Articulography (EMA) data in an online experiment. EMA tracks the position of sensors attached to the tongue. The conditions were dots with tails (where only the coil location is presented), 2D animation (where the dots are connected to form 2D representations of the lips, tongue surface and chin), and a 3D model with coil locations driving facial and tongue rigs. The 2D animation (recorded in VisArtico) showed the highest identification of the prompts.
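    To make the "dots with tails" condition concrete, below is a minimal sketch (not the authors' code, which used VisArtico and a 3D rig for the other conditions): each EMA coil is drawn as a dot at its current midsagittal (x, y) position, trailed by a short fading history of recent positions. The coil names, the synthetic trajectories, and all parameters are illustrative assumptions standing in for real EMA recordings.

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.animation import FuncAnimation

    COILS = ["tongue tip", "tongue body", "tongue dorsum"]  # hypothetical sensor set
    N_FRAMES, TAIL = 200, 15  # total frames; trail length in frames

    # Synthetic stand-in for EMA data: shape (frames, coils, 2), arbitrary units.
    t = np.linspace(0, 4 * np.pi, N_FRAMES)
    traj = np.stack(
        [np.stack([2 * k + np.cos(t + k), np.sin(2 * t + k)], axis=-1)
         for k in range(len(COILS))],
        axis=1,
    )

    fig, ax = plt.subplots()
    ax.set_xlim(-2, 8)
    ax.set_ylim(-2, 2)
    dots = ax.scatter(traj[0, :, 0], traj[0, :, 1], s=60, zorder=3)
    tails = [ax.plot([], [], lw=1, alpha=0.5)[0] for _ in COILS]

    def update(frame):
        dots.set_offsets(traj[frame])      # current coil positions
        start = max(0, frame - TAIL)       # window of recent positions
        for c, line in enumerate(tails):
            line.set_data(traj[start:frame + 1, c, 0],
                          traj[start:frame + 1, c, 1])
        return [dots, *tails]

    anim = FuncAnimation(fig, update, frames=N_FRAMES, interval=40, blit=True)
    plt.show()

    The 2D condition in the paper differs in that the dots are additionally connected into contours for the lips, tongue surface, and chin; the 3D condition instead uses the coil positions to drive facial and tongue rigs.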
    Original language: English
    Title of host publication: Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, August 2016, Berlin, Germany
    Publisher: Association for Computational Linguistics (ACL)
    Pages: 87-92
    Number of pages: 6
    Publication status: Published - 2016
