Gestures and speech are clearly synchronized in many ways. However, previous studies have shown that the semantic similarity between gestures and speech breaks down as people approach transitions in understanding. Explanations for these gesture-speech mismatches, which focus on gestures and speech expressing different cognitive strategies, have been criticized for disregarding the integration and synchronization of gesture and speech. In the current study, we applied three different perspectives to investigate gesture-speech synchronization in an easy and a difficult task: temporal alignment, semantic similarity, and complexity matching. Participants engaged in a simple cognitive task and were assigned to either an easy or a difficult condition. We automatically measured pointing gestures, and we coded participants' speech, to determine the temporal alignment and semantic similarity between gestures and speech. Multifractal Detrended Fluctuation Analysis (MFDFA) was used to determine the extent of complexity matching between gestures and speech. We found that task difficulty indeed influenced gesture-speech synchronization in all three domains. We thereby extended the phenomenon of gesture-speech mismatches to difficult tasks in general. Furthermore, we investigated how temporal alignment, semantic similarity, and complexity matching were related in each condition, and how they predicted participants' task performance. Our study illustrates how combining multiple perspectives originating from different research areas (i.e., coordination dynamics, complexity science, cognitive psychology) provides novel understanding about cognitive concepts in general, and about gesture-speech synchronization and task difficulty in particular.
Publication status: Accepted/In press, 2021
- gesture-speech mismatches
- complexity matching
- Multifractal Detrended Fluctuation Analysis
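The complexity-matching analysis above rests on MFDFA, which estimates generalized Hurst exponents h(q) from the scaling of detrended fluctuations. The sketch below is a minimal, generic implementation of the standard MFDFA procedure (integrated profile, segment-wise polynomial detrending, q-order fluctuation functions), not the authors' actual analysis pipeline; the function name, scale choices, and q-values are illustrative assumptions.

```python
import numpy as np

def mfdfa(x, scales, qs, order=1):
    """Minimal MFDFA sketch: returns generalized Hurst exponents h(q).

    Illustrative only; not the paper's pipeline. Uses non-overlapping
    forward segments and polynomial detrending of the given order.
    """
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())            # step 1: integrated profile
    Fq = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n_seg = len(profile) // s
        F2 = np.empty(n_seg)                     # per-segment detrended variance
        t = np.arange(s)
        for v in range(n_seg):
            seg = profile[v * s:(v + 1) * s]
            coef = np.polyfit(t, seg, order)     # local polynomial trend
            F2[v] = np.mean((seg - np.polyval(coef, t)) ** 2)
        for i, q in enumerate(qs):
            if q == 0:                           # q = 0 uses the log-average
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(F2)))
            else:
                Fq[i, j] = np.mean(F2 ** (q / 2.0)) ** (1.0 / q)
    # slope of log F_q(s) vs log s gives h(q) for each q
    log_s = np.log(scales)
    return np.array([np.polyfit(log_s, np.log(Fq[i]), 1)[0]
                     for i in range(len(qs))])

# Usage: white noise should yield h(q) near 0.5 across q
rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
h = mfdfa(noise, scales=[16, 32, 64, 128, 256], qs=[-2, 0, 2])
```

A spectrum of h(q) that varies strongly with q indicates multifractality; complexity matching between two signals (here, gesture and speech series) is then assessed by comparing their multifractal spectra.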