Abstract
Three popular vocal-tract animation paradigms were tested for intelligibility when displaying videos of pre-recorded Electromagnetic Articulography (EMA) data in an online experiment. EMA tracks the position of sensor coils attached to the tongue and other articulators. The conditions were dots with tails (where only the coil locations are presented), 2D animation (where the dots are connected to form 2D representations of the lips, tongue surface, and chin), and a 3D model in which the coil locations drive facial and tongue rigs. The 2D animation (recorded in VisArtico) yielded the highest identification rates for the prompts.
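For illustration, the dots-with-tails condition can be approximated with a few lines of plotting code. The sketch below is a minimal, hypothetical example (the coil trajectories, coil count, and tail length are invented for the demo, not taken from the paper's data): each coil is drawn as a dot at its current midsagittal position, with its recent trajectory drawn as a faded tail.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical EMA recording: frames x coils x (x, y) midsagittal positions.
# Shapes and coil count are illustrative only, not from the paper's data.
rng = np.random.default_rng(0)
n_frames, n_coils = 200, 5           # e.g. three tongue coils, lip, chin
traj = np.cumsum(rng.normal(0, 0.2, size=(n_frames, n_coils, 2)), axis=0)

frame, tail = 150, 10                # current frame and tail length (frames)

fig, ax = plt.subplots()
for c in range(n_coils):
    # "Tail": the coil's recent trajectory, drawn as a faded line.
    ax.plot(traj[frame - tail:frame + 1, c, 0],
            traj[frame - tail:frame + 1, c, 1], alpha=0.4)
    # "Dot": the coil's current position.
    ax.plot(traj[frame, c, 0], traj[frame, c, 1], "o")
ax.set_xlabel("x (mm)")
ax.set_ylabel("y (mm)")
ax.set_title("Dots-with-tails rendering of EMA coil positions")
plt.show()
```

The 2D condition described in the abstract would additionally connect the tongue dots into a surface outline and the lip dots into a lip contour, which is how VisArtico presents EMA data.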
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, August 2016, Berlin, Germany |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 87-92 |
| Number of pages | 6 |
| Publication status | Published - 2016 |