Understanding the causes and consequences of variability in infant ERP editing practices

Claire Monroy*, Estefanía Domínguez-Martínez, Benjamin Taylor, Oscar Portolés Marin, Eugenio Parise, Vincent M. Reid

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

The current study examined the effects of variability in infant event-related potential (ERP) data editing methods. A widespread approach for analyzing infant ERPs is a trial-by-trial editing process: researchers identify electroencephalogram (EEG) channels containing artifacts and reject trials judged to contain excessive noise. This process can be performed manually by experienced researchers, partially automated with specialized software, or fully automated using an artifact-detection algorithm. Here, we compared the editing process of four different editors—three human experts and an automated algorithm—on the final ERP from an existing infant EEG dataset. Findings reveal that agreement between editors was low, for both the number of included trials and the number of interpolated channels. Critically, this variability produced differences in the final ERP morphology and in the statistical results of the target ERP that each editor obtained. We also analyzed sources of disagreement by estimating the EEG characteristics that each human editor considered when accepting an ERP trial. In sum, our study reveals significant variability in ERP data editing pipelines, with important consequences for the final ERP results. These findings represent an important step toward developing best practices for ERP editing methods in infancy research.
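To make the editing pipeline described above concrete, the following is a minimal sketch of a fully automated variant: each trial is rejected when more than a fixed number of channels exceed an absolute amplitude threshold, and editor agreement is scored as the fraction of trials on which two editors made the same keep/reject decision. The function names, the 200 µV threshold, and the agreement measure are illustrative assumptions, not the algorithm or parameters used in the study.

```python
import numpy as np

def reject_trials(epochs, abs_threshold_uv=200.0, max_bad_channels=10):
    """Threshold-based artifact rejection (illustrative, not the study's algorithm).

    epochs: array of shape (n_trials, n_channels, n_samples), in microvolts.
    A channel is "bad" in a trial if its absolute amplitude exceeds the
    threshold; trials with more than max_bad_channels bad channels are
    rejected, otherwise kept with their bad channels flagged for interpolation.
    """
    kept, to_interpolate = [], []
    for i, trial in enumerate(epochs):
        bad = np.where(np.abs(trial).max(axis=1) > abs_threshold_uv)[0]
        if bad.size <= max_bad_channels:
            kept.append(i)
            to_interpolate.append(set(bad.tolist()))
    return kept, to_interpolate

def editor_agreement(kept_a, kept_b, n_trials):
    """Fraction of trials on which two editors made the same keep/reject call."""
    a = np.zeros(n_trials, dtype=bool)
    a[list(kept_a)] = True
    b = np.zeros(n_trials, dtype=bool)
    b[list(kept_b)] = True
    return float((a == b).mean())
```

In practice, human editors weigh many signal characteristics beyond a single amplitude threshold, which is one reason the study could observe low agreement even among experts following the same nominal procedure.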

Original language: English
Article number: e22217
Number of pages: 12
Journal: Developmental Psychobiology
Volume: 63
Issue number: 8
DOIs
Publication status: Published - Dec 2021

Keywords

  • artifact rejection
  • data editing
  • ERP methodology
  • infant EEG
  • infant event-related potential
