TY - UNPB
T1 - Developing Large Language Models for Quantum Chemistry Simulation Input Generation
AU - Jacobs, Pieter Floris
AU - Pollice, Robert
PY - 2024/9/2
Y1 - 2024/9/2
N2 - Scientists across domains are often challenged to master domain-specific languages (DSLs) for their research, which are merely a means to an end but are pervasive in fields like computational chemistry. Automated code generation promises to overcome this barrier, allowing researchers to focus on their core expertise. While large language models (LLMs) have shown impressive capabilities in synthesizing code from natural language prompts, they often struggle with DSLs, likely due to their limited exposure during training. In this work, we investigate the potential of foundational LLMs for generating input files for the quantum chemistry package ORCA by establishing a general framework that can be adapted to other DSLs. To improve upon GPT-3.5 Turbo as our base model, we explore the impact of prompt engineering, retrieval-augmented generation, and finetuning via synthetically generated datasets. We find that finetuning, even with synthetic datasets as small as 500 samples, significantly improves performance. Additionally, we observe that finetuning shows synergism with advanced prompt engineering such as chain-of-thought prompting. Consequently, our best finetuned models outperform the formally much more powerful GPT-4o model. All tools and datasets are made openly available for future research. We believe that this research lays the groundwork for a wider adoption of LLMs for DSLs in chemistry and beyond.
KW - Large Language Models
KW - Domain-Specific Languages
KW - Quantum Chemistry
KW - Finetuning
KW - Prompt Engineering
KW - Retrieval-Augmented Generation
KW - Input File Generation
U2 - 10.26434/chemrxiv-2024-9g2w2
DO - 10.26434/chemrxiv-2024-9g2w2
M3 - Preprint
BT - Developing Large Language Models for Quantum Chemistry Simulation Input Generation
PB - ChemRxiv
ER -