Multimodal Emotion Recognition from Art Using Sequential Co-Attention

Tsegaye Misikir Tashu*, Sakina Hajiyeva, Tomas Horvath

*Corresponding author for this work

Research output: Academic, peer-reviewed

24 Citations (Scopus)
52 Downloads (Pure)

Abstract

In this study, we present a multimodal emotion recognition architecture that uses both feature-level attention (sequential co-attention) and modality attention (weighted modality fusion) to classify emotions evoked by art. The proposed architecture helps the model learn informative and refined representations both for feature extraction and for modality fusion. The resulting system can be used to categorize artworks according to the emotions they evoke, to recommend paintings that accentuate or balance a particular mood, and to search for paintings of a particular style or genre that depict specific content with a particular emotional impact. Experimental results on the WikiArt Emotions dataset demonstrate the effectiveness of the proposed approach and the usefulness of all three modalities for emotion recognition.
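
The abstract names the two attention mechanisms but not their exact form. The sketch below (PyTorch) illustrates one plausible reading: sequential attention passes over three generic modality feature sequences, followed by a learned softmax weight per modality for fusion. The module names, dimensions, modality roles, and order of attention passes are illustrative assumptions, not the published architecture.

# Minimal illustrative sketch (not the authors' implementation): sequential
# co-attention over three modality feature sequences, then weighted modality fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttendedSummary(nn.Module):
    """Attend over one modality's feature sequence, optionally guided by a context vector."""
    def __init__(self, dim):
        super().__init__()
        self.proj_feat = nn.Linear(dim, dim)
        self.proj_ctx = nn.Linear(dim, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, feats, ctx=None):
        # feats: (batch, seq_len, dim); ctx: (batch, dim) or None
        h = self.proj_feat(feats)
        if ctx is not None:
            h = h + self.proj_ctx(ctx).unsqueeze(1)
        alpha = F.softmax(self.score(torch.tanh(h)), dim=1)  # attention weights over the sequence
        return (alpha * feats).sum(dim=1)                     # (batch, dim) summary vector

class SequentialCoAttentionFusion(nn.Module):
    """Sequentially co-attend over three modalities, then fuse with learned modality weights."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.att_a = AttendedSummary(dim)   # e.g. image features
        self.att_b = AttendedSummary(dim)   # e.g. one textual modality
        self.att_c = AttendedSummary(dim)   # e.g. another textual modality
        self.modality_gate = nn.Linear(3 * dim, 3)  # scores for weighted modality fusion
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, feats_a, feats_b, feats_c):
        # Sequential attention: each modality is summarized, guided by the previous summary.
        v_a = self.att_a(feats_a)
        v_b = self.att_b(feats_b, ctx=v_a)
        v_c = self.att_c(feats_c, ctx=v_b)
        # Re-attend the first modality guided by the others (the "co-" direction).
        v_a = self.att_a(feats_a, ctx=v_b + v_c)
        # Weighted modality fusion: one learned weight per modality, softmax-normalized.
        w = F.softmax(self.modality_gate(torch.cat([v_a, v_b, v_c], dim=-1)), dim=-1)
        fused = w[:, 0:1] * v_a + w[:, 1:2] * v_b + w[:, 2:3] * v_c
        return self.classifier(fused)

# Usage with random features: batch of 2, sequence lengths 49/12/4, feature dim 256, 9 emotion classes.
model = SequentialCoAttentionFusion(dim=256, num_classes=9)
logits = model(torch.randn(2, 49, 256), torch.randn(2, 12, 256), torch.randn(2, 4, 256))
print(logits.shape)  # torch.Size([2, 9])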
Original language: English
Article number: 157
Number of pages: 12
Journal: Journal of Imaging
Volume: 7
Issue number: 8
DOIs
Status: Published - 21 Aug 2021
Published externally: Yes
