TY - JOUR
T1 - Place and Object Recognition by CNN-Based COSFIRE Filters
AU - Lopez-Antequera, Manuel
AU - Vallina, Maria Leyva
AU - Strisciuglio, Nicola
AU - Petkov, Nicolai
PY - 2019
Y1 - 2019
N2 - COSFIRE filters are an effective means for detecting and localizing visual patterns. In contrast to a convolutional neural network (CNN), such a filter can be configured by presenting a single training example and it can be applied to images of any size. The main limitation of COSFIRE filters so far has been the use of only Gabor and difference-of-Gaussians (DoG) contributing filters in the configuration of a COSFIRE filter. In this paper, we propose to use a much broader class of contributing filters, namely filters defined by intermediate CNN representations. We apply the proposed method to the MNIST data set, the butterfly data set, and a garden data set for place recognition, obtaining accuracies of 99.49%, 96.57%, and 89.84%, respectively. Our method outperforms a CNN baseline method in which the full CNN representation at a certain layer is used as input to an SVM classifier. It also outperforms traditional non-CNN methods for the studied applications. In the case of place recognition, our method outperforms NetVLAD when only one reference image is used per scene, and the two methods perform similarly when many reference images are used.
AB - COSFIRE filters are an effective means for detecting and localizing visual patterns. In contrast to a convolutional neural network (CNN), such a filter can be configured by presenting a single training example and it can be applied to images of any size. The main limitation of COSFIRE filters so far has been the use of only Gabor and difference-of-Gaussians (DoG) contributing filters in the configuration of a COSFIRE filter. In this paper, we propose to use a much broader class of contributing filters, namely filters defined by intermediate CNN representations. We apply the proposed method to the MNIST data set, the butterfly data set, and a garden data set for place recognition, obtaining accuracies of 99.49%, 96.57%, and 89.84%, respectively. Our method outperforms a CNN baseline method in which the full CNN representation at a certain layer is used as input to an SVM classifier. It also outperforms traditional non-CNN methods for the studied applications. In the case of place recognition, our method outperforms NetVLAD when only one reference image is used per scene, and the two methods perform similarly when many reference images are used.
KW - COSFIRE filter
KW - CNN
KW - object recognition
KW - place recognition
KW - vessel delineation
U2 - 10.1109/ACCESS.2019.2918267
DO - 10.1109/ACCESS.2019.2918267
M3 - Article
SN - 2169-3536
VL - 7
SP - 66157
EP - 66166
JO - IEEE Access
JF - IEEE Access
ER -