Fusion of domain-specific and trainable features for gender recognition from face images

George Azzopardi, Antonio Greco, Alessia Saggese, Mario Vento

Research output: Academic, peer reviewed

30 Citations (Scopus)
270 Downloads (Pure)

Abstract

The popularity and appeal of systems that can automatically determine gender from face images are growing rapidly. This great interest arises from the wide variety of applications, especially in the fields of retail and video surveillance. In recent years there have been several attempts to address this challenge, but a definitive solution has not yet been found. In this paper we propose a novel approach that fuses domain-specific and trainable features to recognize gender from face images. In particular, we use SURF descriptors extracted from 51 facial landmarks related to the eyes, nose and mouth as domain-dependent features, and COSFIRE filters as trainable features. The proposed approach turns out to be very robust with respect to the well-known face variations, including different poses, expressions and illumination conditions. It achieves state-of-the-art recognition rates on the GENDER-FERET (94.7%) and LFW (99.4%) datasets, which are two of the most popular benchmarks for gender recognition. We further evaluated the method on UNISA-Public, a new dataset acquired in real scenarios and recently made publicly available. It consists of 206 training (144 male, 62 female) and 200 test (139 male, 61 female) images acquired with a real-time indoor camera capturing people in regular walking motion. This experiment aims to assess the capability of the algorithm to deal with face images extracted from videos, which are definitely more challenging than the still images available in the standard datasets. On this dataset we also achieved a high recognition rate of 91.5%, which confirms the generalization capabilities of the proposed approach. Of the two types of features, the trainable COSFIRE filters are the most effective and, given their trainable character, they can be applied to any visual pattern recognition problem.
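The fusion step described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the function name `fuse_features`, the per-part z-score normalisation, and the feature dimensionalities (64-D SURF descriptors at each of the 51 landmarks, and an assumed number of COSFIRE responses) are assumptions made for the sake of the sketch.

```python
import numpy as np

def fuse_features(surf_descriptors, cosfire_responses):
    """Concatenate landmark SURF descriptors with COSFIRE filter responses
    into one feature vector, z-score normalising each part separately so
    that neither feature family dominates the fused representation.
    (Hypothetical sketch; normalisation choice is an assumption.)"""
    surf_vec = np.asarray(surf_descriptors, dtype=float).ravel()
    cosfire_vec = np.asarray(cosfire_responses, dtype=float).ravel()

    def zscore(v):
        s = v.std()
        return (v - v.mean()) / s if s > 0 else v - v.mean()

    return np.concatenate([zscore(surf_vec), zscore(cosfire_vec)])

# Toy usage: 51 landmarks x 64-D SURF descriptors, plus an assumed
# bank of 180 COSFIRE filter responses, filled with random values.
rng = np.random.default_rng(0)
surf = rng.normal(size=(51, 64))       # 51 * 64 = 3264 values
cosfire = rng.normal(size=180)
fused = fuse_features(surf, cosfire)
print(fused.shape)                     # (3444,)
```

The fused vector would then be fed to a standard classifier (e.g. a linear SVM) trained to separate the two gender classes.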
Original language: English
Pages (from-to): 24171-24183
Journal: IEEE Access
Volume: 6
DOIs
Status: Published - 24 May 2018
