Virtual reality facial emotion recognition in social environments: An eye-tracking study

C.N.W. Geraets*, S. Klein Tuente, B.P. Lestestuiver, M. van Beilen, S.A. Nijman, J.B.C. Marsman, W. Veling

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › Peer-reviewed

41 Citations (Scopus)
299 Downloads (Pure)

Abstract

Background: Virtual reality (VR) enables the administration of realistic and dynamic stimuli within a social context for the assessment and training of emotion recognition. We tested a novel VR emotion recognition task by comparing emotion recognition across a VR, video and photo task, investigating covariates of recognition and exploring visual attention in VR.

Methods: Healthy individuals (n = 100) completed three emotion recognition tasks: a photo, a video and a VR task. During the VR task, participants rated the emotions of virtual characters (avatars) in a VR street environment while eye-tracking was recorded in VR.

Results: Recognition accuracy in VR (overall 75%) was comparable to that of the photo and video tasks. However, there were some differences: disgust and happiness had lower accuracy rates in VR, whereas surprise and anger were recognized more accurately in VR than in the video task. Participants spent more time identifying disgust, fear and sadness than surprise and happiness. In general, attention was directed to the eye and nose areas for longer than to the mouth.

Discussion: Immersive VR tasks can be used for training and assessment of emotion recognition. VR enables easily controllable avatars within environments relevant for daily life. Validated emotional expressions and tasks will be of relevance for clinical applications.

Original language: English
Article number: 100432
Number of pages: 8
Journal: Internet Interventions
Volume: 25
Publication status: Published - Sept-2021
