doi: 10.17706/jsw.21.1.14-32
Preference-Driven Refinement of Prompts: A Systematic Prompt Engineering Method for Helping to Automate Software Engineering
2. Department of Computer Science, William & Mary, Williamsburg, VA, USA
*Corresponding author. Email: ashraf.elnashar@vanderbilt.edu
Manuscript submitted December 29, 2025; accepted January 10, 2026; published March 17, 2026
Abstract—Rapid gains in Large Language Model (LLM)-based tools are transforming software engineering, from auto-completing function stubs to drafting architectural RFCs. However, current use often depends on ad hoc prompting, resulting in brittle code snippets, inconsistent style guides, and unpredictable test coverage. To enable scalable and repeatable automation, systematic prompt engineering is essential for generating high-quality software artifacts (such as unit tests, refactor patches, and Application Programming Interface (API) documentation) from the same underlying model. To address this need, we propose the Preference-Driven Refinement (PDR) method for prompt engineering, designed to support automated software engineering workflows. PDR introduces an iterative loop where developers specify preferences (e.g., naming conventions, performance constraints, or security rules) after each generation. These preferences—typically captured by editing prompt phrasing or including curated examples—are encoded into subsequent prompts, enabling the model to produce outputs that adhere to project-specific standards and practices. This refinement loop creates a more automated, policy-aware interface between developers and generative models, supporting onboarding, code review, and other software lifecycle tasks. We present empirical evaluations demonstrating how PDR leverages in-context learning and synthetic example generation to systematically improve prompt quality. Our results show that PDR reduces trial-and-error iterations and yields higher-quality outputs, though with modest increases in refinement time. These findings highlight how structured prompt refinement can help automate manual tasks in software engineering, thereby enhancing consistency, efficiency, and developer experience in AI-assisted development environments.
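The refinement loop the abstract describes can be illustrated with a minimal sketch. This is an illustrative assumption, not the authors' implementation: the `generate` function below is a placeholder for any LLM completion API, and the prompt format (a task plus an accumulated "Constraints" list) is a hypothetical encoding of developer preferences.

```python
# Minimal sketch of a preference-driven refinement (PDR) loop.
# Assumptions: `generate` stands in for a real LLM call, and preferences
# are encoded as a plain bulleted "Constraints" section of the prompt.

def generate(prompt: str) -> str:
    # Placeholder LLM call: echoes the prompt so the loop is runnable
    # without a real model. A real system would call a completion API here.
    return f"[model output for prompt]\n{prompt}"

def pdr_loop(task: str, preference_rounds: list[list[str]]) -> str:
    """After each generation, fold newly stated developer preferences
    (naming rules, performance limits, security policies) back into
    the next prompt, accumulating them across rounds."""
    preferences: list[str] = []
    output = generate(task)  # initial, unconstrained generation
    for new_prefs in preference_rounds:
        preferences.extend(new_prefs)  # accumulate project-specific rules
        prompt = task + "\nConstraints:\n" + "\n".join(
            f"- {p}" for p in preferences
        )
        output = generate(prompt)  # regenerate under the refined prompt
    return output

result = pdr_loop(
    "Write a unit test for parse_date()",
    [["use snake_case names"], ["avoid network calls"]],
)
```

Each round re-issues the full accumulated preference set rather than only the newest preference, which is what lets later outputs remain consistent with earlier feedback.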
Keywords—Large Language Models (LLMs), prompt engineering, generative Artificial Intelligence (AI) for automating software engineering
Cite: Ashraf Elnashar, Jules White, and Douglas C. Schmidt, "Preference-Driven Refinement of Prompts: A Systematic Prompt Engineering Method for Helping to Automate Software Engineering," Journal of Software, vol. 21, no. 1, pp. 14-32, 2026.
Copyright © 2026 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.