Can you trust predictive uncertainty under real dataset shifts in digital pathology?

Jeppe Thagaard*, Søren Hauberg, Bert van der Vegt, Thomas Ebstrup, Johan D. Hansen, Anders B. Dahl

*Corresponding author for this work

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    24 Citations (Scopus)
    22 Downloads (Pure)

    Abstract

    Deep learning-based algorithms have shown great promise for assisting pathologists in detecting lymph node metastases when evaluated on predictive accuracy. However, for clinical adoption, we need to know what happens when the test distribution differs dramatically from the training distribution. In such settings, we should estimate the uncertainty of the predictions, so we know when to trust the model (and when not to). Here, we i) investigate current popular methods for improving the calibration of predictive uncertainty, and ii) compare the performance and calibration of these methods under clinically relevant in-distribution dataset shifts. Furthermore, we iii) evaluate their performance on out-of-distribution detection of a different histological cancer type not seen during training. Of the investigated methods, we show that deep ensembles are more robust with respect to both performance and calibration under in-distribution dataset shifts and allow us to better detect incorrect predictions. Our results also demonstrate that current methods for uncertainty quantification are not necessarily able to detect all dataset shifts, and we emphasize the importance of monitoring and controlling the input distribution when deploying deep learning for digital pathology.
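    The core ideas the abstract evaluates can be illustrated in a few lines. The following is a minimal NumPy sketch, not the paper's implementation: it averages the softmax outputs of several ensemble members (the deep-ensemble prediction), scores uncertainty with predictive entropy, and computes the expected calibration error (ECE) metric commonly used in this line of work. All function names are illustrative.

    ```python
    import numpy as np

    def ensemble_predict(member_probs):
        """Average the softmax outputs of M ensemble members.
        member_probs has shape (M, N, C): members x samples x classes."""
        return np.mean(member_probs, axis=0)

    def predictive_entropy(probs, eps=1e-12):
        """Entropy of the averaged prediction; higher means more uncertain."""
        return -np.sum(probs * np.log(probs + eps), axis=-1)

    def expected_calibration_error(probs, labels, n_bins=10):
        """ECE: confidence-vs-accuracy gap, weighted over confidence bins."""
        conf = probs.max(axis=-1)
        pred = probs.argmax(axis=-1)
        correct = (pred == labels).astype(float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (conf > lo) & (conf <= hi)
            if mask.any():
                ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
        return ece
    ```

    In this scheme, a well-calibrated model assigns confidences that match its empirical accuracy (low ECE), and high predictive entropy flags inputs, such as shifted or out-of-distribution slides, whose predictions should not be trusted.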

    Original language: English
    Title of host publication: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 - 23rd International Conference, Proceedings
    Editors: Anne L. Martel, Purang Abolmaesumi, Danail Stoyanov, Diana Mateus, Maria A. Zuluaga, S. Kevin Zhou, Daniel Racoceanu, Leo Joskowicz
    Publisher: Springer Science and Business Media Deutschland GmbH
    Pages: 824-833
    Number of pages: 10
    ISBN (Print): 9783030597092
    DOIs
    Publication status: Published - 2020
    Event: 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020 - Lima, Peru
    Duration: 4 Oct 2020 - 8 Oct 2020

    Publication series

    Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Volume: 12261 LNCS
    ISSN (Print): 0302-9743
    ISSN (Electronic): 1611-3349

    Conference

    Conference: 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020
    Country/Territory: Peru
    City: Lima
    Period: 04/10/2020 - 08/10/2020

    Keywords

    • Deep learning
    • Digital pathology
    • Predictive uncertainty
