
Stress detection through prompt engineering with a general-purpose LLM

  • Nima Esmi*
  • Asadollah Shahbahrami
  • Yasaman Nabati
  • Bita Rezaei
  • Georgi Gaydadjiev
  • Peter de Jonge

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

1 Citation (Scopus)
31 Downloads (Pure)

Abstract

Advancements in large language models (LLMs) have opened new avenues for mental health monitoring through social media analysis. In this study, we present an iterative prompt engineering framework that significantly enhances the performance of a general-purpose LLM, GPT-4, for stress detection in social media posts, leveraging psychologist-informed hints. This approach achieved a substantial accuracy improvement of 17 percentage points, from 72% to 89%, for the January 2025 version of GPT-4, alongside an 80% reduction in false positives compared to baseline zero-shot prompting. Our method not only surpassed domain-specific models such as Mental-RoBERTa by 5% but also uniquely generates human-readable rationales. These rationales are crucial for mental health professionals, helping them understand and validate the model’s outputs, a key benefit for sensitive mental health applications. These results highlight prompt engineering as a resource-efficient, transparent strategy for adapting general-purpose LLMs to specialized tasks, offering a scalable solution for mental health monitoring without the need for costly fine-tuning.
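To make the contrast between baseline zero-shot prompting and hint-augmented prompting concrete, the following is a minimal sketch in Python. It is not the authors' code: the function names, prompt wording, and the example hints are all invented placeholders standing in for the paper's psychologist-informed hints, and the sketch only builds prompt strings (it does not call any LLM API).

```python
# Illustrative sketch (hypothetical, not the paper's implementation):
# contrasting a zero-shot prompt with a hint-augmented prompt for
# stress detection in social media posts.

def build_zero_shot_prompt(post: str) -> str:
    """Baseline: ask the model directly, with no domain guidance."""
    return (
        "Classify the following social media post as 'stressed' or "
        f"'not stressed'.\n\nPost: {post}\nLabel:"
    )

def build_hinted_prompt(post: str, hints: list[str]) -> str:
    """Refined prompt: prepend psychologist-informed hints and ask for a
    human-readable rationale alongside the label."""
    hint_block = "\n".join(f"- {h}" for h in hints)
    return (
        "You are assisting with stress detection in social media posts.\n"
        "Consider these clinical hints before deciding:\n"
        f"{hint_block}\n\n"
        "Classify the post as 'stressed' or 'not stressed' and give a "
        "one-sentence rationale.\n\n"
        f"Post: {post}\nAnswer:"
    )

# Placeholder hints, invented for illustration only.
HINTS = [
    "Venting about a single deadline is not necessarily sustained stress.",
    "Weigh physical symptoms (insomnia, exhaustion) alongside negative affect.",
]

if __name__ == "__main__":
    post = "I haven't slept in days and I can't cope anymore."
    print(build_zero_shot_prompt(post))
    print(build_hinted_prompt(post, HINTS))
```

In an iterative framework of this kind, the hint list would be revised across rounds based on the model's errors, which is where the reported gains over the zero-shot baseline would come from.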
Original language: English
Article number: 105462
Number of pages: 9
Journal: Acta Psychologica
Volume: 260
DOIs
Publication status: Published - Oct-2025

