Know What You do Not Know: Verbalized Uncertainty Estimation Robustness on Corrupted Images in Vision-Language Models

Mirko Borszukovski, Ivo Pascal de Jong*, Matias Valdenegro-Toro*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

To leverage the full potential of Large Language Models (LLMs), it is crucial to have some information about the uncertainty of their answers. This means the model must be able to quantify how certain it is that a given response is correct. Poor uncertainty estimates can lead to overconfident wrong answers, undermining trust in these models. While considerable research has addressed uncertainty in language models that take text inputs and produce text outputs, visual capabilities were added to these models only recently, and the uncertainty of Vision-Language Models (VLMs) remains largely unexplored. We tested three state-of-the-art VLMs on corrupted image data. We found that increasing corruption severity degraded the models' ability to estimate their uncertainty, and the models were overconfident in most of the experiments.
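The sketch below illustrates the kind of evaluation the abstract describes, not the authors' actual code: corrupt an image at increasing severity, ask a VLM the same question, and parse its verbalized confidence. The corruption type, severity scale, prompt wording, and the `query_vlm` stand-in are all assumptions for illustration.

```python
# Hedged sketch of a verbalized-uncertainty probe on corrupted images.
# `query_vlm` is a hypothetical placeholder for the VLM under test.
import re
import numpy as np
from PIL import Image


def gaussian_noise(img: Image.Image, severity: int) -> Image.Image:
    """Additive Gaussian noise; severity 1-5 maps to an increasing std (assumed scale)."""
    std = [0.04, 0.08, 0.12, 0.18, 0.26][severity - 1]
    x = np.asarray(img).astype(np.float32) / 255.0
    x = np.clip(x + np.random.normal(0.0, std, x.shape), 0.0, 1.0)
    return Image.fromarray((x * 255).astype(np.uint8))


# Assumed prompt format asking the model to verbalize its confidence.
PROMPT = (
    "Answer the question about the image, then state how confident "
    "you are in your answer as a percentage between 0 and 100.\n"
    "Question: {question}\n"
    "Format: Answer: <answer> Confidence: <percent>%"
)


def parse_confidence(reply: str) -> float | None:
    """Extract the verbalized confidence (0-1) from the model's reply, if present."""
    m = re.search(r"Confidence:\s*(\d{1,3})\s*%", reply)
    return int(m.group(1)) / 100.0 if m else None


def query_vlm(image: Image.Image, prompt: str) -> str:
    """Placeholder: replace with a call to the actual VLM API being evaluated."""
    raise NotImplementedError


def probe(image: Image.Image, question: str):
    """Yield (severity, verbalized confidence) pairs for one image-question pair."""
    for severity in range(1, 6):
        reply = query_vlm(gaussian_noise(image, severity),
                          PROMPT.format(question=question))
        yield severity, parse_confidence(reply)
```

Comparing the parsed confidences against answer correctness across severities is what would reveal the overconfidence trend the abstract reports.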
Original language: English
Title of host publication: TrustNLP Workshop @ NAACL 2025
Subtitle of host publication: Fifth Workshop on Trustworthy Natural Language Processing
Publisher: Association for Computational Linguistics, ACL Anthology
Publication status: Submitted - 4-Apr-2024

Keywords

  • VLMs
  • Uncertainty
  • Computer Vision
  • Visual Question Answering
