Learning representations of sound using trainable COPE feature extractors

Nicola Strisciuglio*, Mario Vento, Nicolai Petkov

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

16 Citations (Scopus)
157 Downloads (Pure)


Sound analysis research has mainly focused on speech and music processing. The methodologies developed for those domains are not suitable for analysing sounds with varying background noise, in many cases with a very low signal-to-noise ratio (SNR). In this paper, we present a method for the detection of patterns of interest in audio signals. We propose novel trainable feature extractors, which we call COPE (Combination of Peaks of Energy). The structure of a COPE feature extractor is determined from a single prototype sound pattern in an automatic configuration process, which is a form of representation learning. We construct a set of COPE feature extractors, each configured on one of a number of training patterns, and use their responses to build feature vectors, which we combine with a classifier to detect and classify patterns of interest in audio signals. We carried out experiments on four public data sets: the MIVIA audio events, MIVIA road events, ESC-10 and TU Dortmund data sets. The results that we achieved (recognition rates of 91.71% on MIVIA audio events, 94% on MIVIA road events, 81.25% on ESC-10 and 94.27% on TU Dortmund) demonstrate the effectiveness of the proposed method and are higher than those obtained by existing approaches. The COPE feature extractors are highly robust to variations of SNR, and real-time performance is achieved even when a large number of feature values is computed.
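The configuration-and-response scheme described in the abstract can be illustrated with a minimal sketch: detect local energy peaks in a time-frequency representation of a prototype sound, record their positions relative to a reference peak (the "configuration"), and compute a response on a new signal by combining the energy found at the expected peak positions. This is a loose simplification of the paper's method, not its implementation; the function names, the simple local-maximum peak detector and the geometric-mean combination are illustrative assumptions.

```python
import numpy as np

def energy_peaks(spec, thresh=0.1):
    """Local maxima of a time-frequency energy map (simplified peak detector)."""
    peaks = []
    for t in range(1, spec.shape[1] - 1):
        for f in range(1, spec.shape[0] - 1):
            v = spec[f, t]
            # A point is a peak if it dominates its 3x3 neighbourhood.
            if v > thresh and v == spec[f - 1:f + 2, t - 1:t + 2].max():
                peaks.append((t, f, v))
    return peaks

def configure_cope(prototype_spec):
    """Configuration step: store peak offsets relative to the strongest peak
    of a single prototype pattern (hypothetical simplification)."""
    peaks = energy_peaks(prototype_spec)
    t0, f0, _ = max(peaks, key=lambda p: p[2])
    return [(t - t0, f - f0) for t, f, _ in peaks]

def cope_response(model, spec, anchor_t, anchor_f):
    """Response step: combine the energy at the expected peak positions
    around an anchor point (here, a plain geometric mean)."""
    scores = []
    for dt, df in model:
        t, f = anchor_t + dt, anchor_f + df
        if 0 <= t < spec.shape[1] and 0 <= f < spec.shape[0]:
            scores.append(spec[f, t])
        else:
            scores.append(0.0)
    scores = np.maximum(np.array(scores), 1e-12)
    return float(np.prod(scores) ** (1.0 / len(scores)))
```

In this sketch, sliding the anchor over an input energy map yields a response curve that is high where the configured constellation of peaks recurs and low elsewhere; the maxima of several such extractors would then populate the feature vector passed to the classifier.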

Original language: English
Pages (from-to): 25-36
Number of pages: 12
Journal: Pattern Recognition
Early online date: 21 Mar 2019
Publication status: Published - Aug 2019


Keywords

  • Audio analysis
  • Event detection
  • Peaks of energy
  • Representation learning
  • Trainable feature extractors
  • TIME
