Developing Large Language Models for Quantum Chemistry Simulation Input Generation

Pieter Floris Jacobs, Robert Pollice*

*Corresponding author for this work

Research output: Preprint › Academic


Abstract

Scientists across domains are often challenged to master domain-specific languages (DSLs) for their research, which are merely a means to an end but are pervasive in fields like computational chemistry. Automated code generation promises to overcome this barrier, allowing researchers to focus on their core expertise. While large language models (LLMs) have shown impressive capabilities in synthesizing code from natural language prompts, they often struggle with DSLs, likely due to their limited exposure during training. In this work, we investigate the potential of foundational LLMs for generating input files for the quantum chemistry package ORCA by establishing a general framework that can be adapted to other DSLs. To improve upon GPT-3.5 Turbo as our base model, we explore the impact of prompt engineering, retrieval-augmented generation, and finetuning via synthetically generated datasets. We find that finetuning, even with synthetic datasets as small as 500 samples, significantly improves performance. Additionally, we observe that finetuning shows synergism with advanced prompt engineering such as chain-of-thought prompting. Consequently, our best finetuned models outperform the nominally much more powerful GPT-4o model. All tools and datasets are made openly available for future research. We believe that this research lays the groundwork for a wider adoption of LLMs for DSLs in chemistry and beyond.
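
As an illustration of the task the abstract describes, the sketch below shows how one might prompt a chat model to emit an ORCA input file with a chain-of-thought-style instruction. The model name, system prompt, request wording, and example geometry are illustrative assumptions for this sketch, not the authors' actual prompts or finetuned models.

```python
# Illustrative sketch only: prompting a chat model to generate an ORCA
# input file, in the spirit of the workflow described in the abstract.
# The prompts and the example request below are assumptions; the paper's
# actual prompt templates and finetuned model identifiers differ.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Chain-of-thought-style instruction: ask the model to reason about the
# required keywords before emitting the final input file.
system_prompt = (
    "You are an assistant that writes input files for the ORCA quantum "
    "chemistry package. First reason step by step about which keywords "
    "are needed, then output only the final input file in a code block."
)

user_request = (
    "Generate an ORCA input for a B3LYP/def2-SVP geometry optimization "
    "of a water molecule (charge 0, singlet multiplicity)."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the base model the paper starts from
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_request},
    ],
)
print(response.choices[0].message.content)

# A correct answer would resemble this minimal ORCA input:
# ! B3LYP def2-SVP Opt
# * xyz 0 1
#   O   0.000   0.000   0.000
#   H   0.000   0.000   0.960
#   H   0.930   0.000  -0.240
# *
```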
Original language: English
Publisher: ChemRxiv
Number of pages: 12
DOIs
Status: Submitted - 2 Sep 2024
