Abstract
© 2016 IEEE. Most research in image classification has focused on applications such as face, object, scene and character recognition. This paper presents a comparative study between deep convolutional neural networks (CNNs) and bag of visual words (BOW) variants for recognizing animals. We developed two variants of the bag of visual words (BOW and HOG-BOW) and examined the use of gray and color information as well as different spatial pooling approaches. We combined the final feature vectors extracted from these BOW variants with an L2-regularized support vector machine (L2-SVM) to distinguish between classes within our datasets. We modified two existing deep CNN architectures, AlexNet and GoogLeNet, by reducing the number of neurons in each layer of the fully connected layers and the last inception layer, for both scratch-trained and pre-trained versions. Finally, we compared the existing CNN methods, our modified CNN architectures and the proposed BOW variants on our novel wild-animal dataset (Wild-Anim). The results show that the CNN methods significantly outperform the BOW techniques.
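As a rough illustration of the BOW pipeline summarized above, the sketch below builds a visual vocabulary with k-means, encodes each image as a histogram of visual words, and trains an L2-regularized linear SVM. It is a minimal sketch assuming pre-extracted local descriptors (e.g. HOG blocks for a HOG-BOW variant); the function names, vocabulary size and scikit-learn components are illustrative assumptions, not the authors' implementation.

```python
# Minimal bag-of-visual-words + L2-regularized linear SVM sketch.
# Assumes local descriptors have already been extracted per image;
# all names and parameter values here are illustrative.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

def build_codebook(descriptors, n_words=256, seed=0):
    """Cluster local descriptors (n_samples, descriptor_dim) into a visual vocabulary."""
    kmeans = MiniBatchKMeans(n_clusters=n_words, random_state=seed)
    kmeans.fit(descriptors)
    return kmeans

def encode_image(image_descriptors, codebook):
    """Hard-assign each descriptor to its nearest visual word and build a normalized histogram."""
    words = codebook.predict(image_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)

# Example usage (train_desc: list of per-image descriptor arrays, train_labels: class ids):
# codebook = build_codebook(np.vstack(train_desc))
# X_train = np.array([encode_image(d, codebook) for d in train_desc])
# svm = LinearSVC(penalty='l2', C=1.0)   # L2-regularized linear SVM
# svm.fit(X_train, train_labels)
```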
| Original language | English |
| --- | --- |
| Title of host publication | 2016 IEEE Symposium Series on Computational Intelligence, SSCI 2016 |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| ISBN (Print) | 9781509042401 |
| DOIs | |
| Publication status | Published - 9-Feb-2017 |
Publication series
| Name | 2016 IEEE Symposium Series on Computational Intelligence, SSCI 2016 |
| --- | --- |
Datasets
- Wild-Anim Dataset
Okafor, E. (Contributor), Schomaker, L. (Contributor) & Wiering, M. (Contributor), DataverseNL, 25-Mar-2019
DOI: 10.34894/mwe8s8