ReqGPT: a fine-tuned large language model for generating requirements documents

Research output: Journal contributions › Conference article in journal › Research › peer-review

Abstract

Effective product development relies on a requirements document that defines the product's technical specifications, yet traditional methods of creating one are labor-intensive and depend heavily on expert input. Large language models (LLMs) offer the potential for automation but are limited by prompt-engineering demands and contextual sensitivity. To overcome these challenges, we developed ReqGPT, a domain-specific LLM fine-tuned from Mistral-7B-Instruct-v0.2 on 107 curated requirements lists. ReqGPT employs a standardized prompt to generate high-quality documents and outperformed GPT-4 and Mistral on multiple criteria based on ISO 29148. Our results underscore ReqGPT's efficiency, accuracy, cost-effectiveness, and alignment with industry standards, making it an ideal choice for localized use and for safeguarding data privacy in technical product development.
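For orientation, the snippet below is a minimal sketch of how an instruction model such as Mistral-7B-Instruct-v0.2 can be fine-tuned on prompt/requirements-list pairs. The base model name and the idea of a standardized prompt come from the abstract; the LoRA configuration, prompt wording, example data, and hyperparameters are illustrative assumptions and not the settings reported in the paper.

# Sketch: parameter-efficient fine-tuning of Mistral-7B-Instruct-v0.2 on
# prompt/requirements-list pairs (assumed setup, not the authors' pipeline).
import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from peft import LoraConfig, get_peft_model

BASE = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)

# LoRA keeps the fine-tuning run feasible on modest local hardware
# (assumed choice; the paper does not specify the adaptation method here).
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Hypothetical training pair: a standardized prompt plus a curated
# requirements list (the paper uses 107 such lists).
PROMPT = "Generate an ISO 29148-conformant requirements list for: {product}"
examples = [{"product": "a cordless electric screwdriver",
             "requirements": "1. The device shall ...\n2. The device shall ..."}]

def to_tokens(ex):
    # Mistral-Instruct chat format: instruction in [INST] ... [/INST], answer after.
    text = (f"[INST] {PROMPT.format(product=ex['product'])} [/INST] "
            f"{ex['requirements']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=2048)

dataset = Dataset.from_list(examples).map(
    to_tokens, remove_columns=["product", "requirements"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="reqgpt-lora", num_train_epochs=3,
                           per_device_train_batch_size=1, learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

After training, the adapted model can be prompted with the same standardized instruction format to generate a requirements list for a new product; running it locally is what supports the data-privacy argument made in the abstract.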

Original language: English
Journal: Proceedings of the Design Society
Volume: 5
Pages (from-to): 2741-2750
Number of pages: 10
Publication status: Published - 01.08.2025
Event: 25th International Conference on Engineering Design, ICED 2025 - Dallas, United States
Duration: 11.08.2025 - 14.08.2025

Bibliographical note

Publisher Copyright:
© The Author(s) 2025.

Research areas

• fine-tuning, large language models, machine learning, new product development, requirements
• Engineering
