Systematic Prompt Optimization for LLM-Based Backend API Generation: An Empirical Study in NestJS
Abstract
Large Language Models (LLMs) are increasingly used as developer productivity tools for generating backend application programming interfaces (APIs). However, prompt engineering is typically performed in an ad hoc manner, limiting reliability and code quality. This study systematically evaluates prompt design strategies for NestJS-based API endpoint generation across five realistic backend tasks. We compared baseline prompting against persona-based, structured-reasoning, constraint-driven, and self-review strategies using automated functional, security, architectural, and completeness metrics. Our results show that structured and reflective prompting significantly improves code quality, achieving up to a 24% relative improvement over baseline prompts. These findings demonstrate that prompt design is a critical engineering lever for production-ready, AI-assisted software development.