Multimodal Emotion Recognition from Art Using Sequential Co-Attention

Tsegaye Misikir Tashu*, Sakina Hajiyeva, Tomas Horvath

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

24 Citations (Scopus)
52 Downloads (Pure)

Abstract

In this study, we present a multimodal emotion recognition architecture that uses both feature-level attention (sequential co-attention) and modality attention (weighted modality fusion) to classify the emotions evoked by art. The proposed architecture helps the model learn informative and refined representations for both feature extraction and modality fusion. The resulting system can be used to categorize artworks according to the emotions they evoke; to recommend paintings that accentuate or balance a particular mood; or to search for paintings of a particular style or genre that represent custom content in a custom state of impact. Experimental results on the WikiArt emotion dataset demonstrate the efficiency of the proposed approach and the usefulness of combining three modalities for emotion recognition.
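The weighted modality fusion mentioned in the abstract can be sketched roughly as follows: each modality's feature vector is scored, the scores are normalized into weights via a softmax, and the fused representation is the weighted sum. This is only an illustrative sketch, not the authors' implementation; the modality names, dimensions, and the random stand-in for a learned scoring vector are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical feature vectors for three modalities; the names and
# the dimension d are illustrative, not taken from the paper.
d = 8
features = {
    "image": rng.normal(size=d),
    "title": rng.normal(size=d),
    "genre": rng.normal(size=d),
}

# Weighted modality fusion: score each modality (here with a random
# vector standing in for a learned projection), softmax the scores
# into modality weights, and fuse via a weighted sum.
w = rng.normal(size=d)
scores = np.array([w @ f for f in features.values()])
weights = softmax(scores)
fused = sum(a * f for a, f in zip(weights, features.values()))
```

In a trained model the scoring step would be a learned layer conditioned on the attended features, so the weights adapt per input rather than being fixed.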
Original language: English
Article number: 157
Number of pages: 12
Journal: Journal of Imaging
Volume: 7
Issue number: 8
DOIs
Publication status: Published - 21 Aug 2021
Externally published: Yes
