
Comparison of a Specialized Large Language Model with GPT-4o for CT and MRI Radiology Report Summarization

Sunyi Zheng, Nannan Zhao, Jing Wang, Tao Yu, Dongsheng Yue, Wenjia Zhang, Shuxuan Fan, Xiaolei Wang, Guilin Tang, Yuxuan Sun, Hongwei Wang, Shui Liu, Jiaxin Liu, Keyi Bian, Yuwei Zhang, Geertruida H. de Bock, Matthijs Oudkerk, Xiaonan Cui*, Rozemarijn Vliegenthart, Zhaoxiang Ye

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

6 Citations (Scopus)

Abstract

Background: Although the general-purpose large language model (LLM) GPT-4o (OpenAI) has shown promise in radiology language processing, it remains unclear whether the performance of GPT-4o in report summarization is better than that of an LLM specifically designed for this task. 

Purpose: To compare the performance of a specialized LLM with that of GPT-4o in the comprehensive summarization of radiology reports. 

Materials and Methods: A specialized LLM for report summarization (LLM-RadSum) was developed using retrospectively collected reports from a hospital, divided into training and internal test sets (9:1 ratio). The F1 scores based on the longest common subsequences were evaluated on the internal test set and an external test set of reports from four other hospitals. Only CT and MRI reports containing findings and impressions sections were included. For comparison with GPT-4o, a human evaluation set included 1800 reports randomly selected from the internal and external test sets, ensuring balanced coverage of imaging modalities (CT, MRI) and anatomic sites (chest, neck, head, pelvis, abdomen, breast). Three senior radiologists and two clinicians assessed this set, focusing on factual consistency, impression coherence, medical safety, and clinical use. A t test was performed to compare F1 scores between models. 
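The evaluation metric described above, an F1 score based on the longest common subsequence (LCS) between the generated and reference impressions, can be sketched as follows. This is a minimal illustration of the general LCS-F1 idea (as used in ROUGE-L); whitespace tokenization and the exact precision/recall definitions are assumptions, since the abstract does not specify the implementation details.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists,
    computed with a standard dynamic-programming table."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]


def lcs_f1(reference, candidate):
    """LCS-based F1: precision and recall are the LCS length divided by
    the candidate and reference lengths, respectively (assumed
    whitespace tokenization for illustration)."""
    ref_tokens, cand_tokens = reference.split(), candidate.split()
    lcs = lcs_length(ref_tokens, cand_tokens)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand_tokens)
    recall = lcs / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, comparing a reference impression "the lung nodule is stable" with a generated impression "lung nodule stable" yields an LCS of 3 tokens, precision 1.0, recall 0.6, and thus F1 = 0.75.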

Results: The training, internal test, and external test sets were composed of 956 219, 106 247, and 17 091 reports, respectively. The developed LLM-RadSum achieved median F1 scores for report summarization of 0.75 and 0.44 on the internal and external test sets and 0.58 on the human evaluation set (n = 1800). More than 81.5% (1467 of 1800) of outputs from LLM-RadSum met the standards of senior radiologists and clinicians regarding factual consistency, impression coherence, medical safety, and clinical use. In contrast, at least 27.8% (501 of 1800) of outputs from GPT-4o required adjustments in these aspects. Overall, LLM-RadSum achieved a higher median F1 score for report summarization compared with GPT-4o (0.58 [IQR: 0.44–0.77] vs 0.30 [IQR: 0.23–0.37]; P < .001); superior performance from LLM-RadSum was observed across anatomic regions, modalities, sexes, ages, and impression lengths (all P < .001). 

Conclusion: A specialized LLM for report summarization had better performance than GPT-4o, a general-purpose LLM, in generating radiology report summaries.

Original language: English
Article number: e243774
Number of pages: 10
Journal: Radiology
Volume: 316
Issue number: 2
Publication status: Published - Aug-2025
