An Empirical Evaluation of LLM-Assisted Sketch-Based Requirements Elicitation and Prototyping

Abstract

Requirements elicitation is often described as the most challenging and error-prone phase of software engineering. Misinterpretations between stakeholders and system analysts frequently result in incomplete, ambiguous, or inconsistent requirements, which can propagate downstream into project delays, rework, and system failure. In this paper, we investigate how Large Language Models (LLMs) can act as intelligent co-pilots in requirements engineering by transforming hand-drawn sketches into both (1) natural-language functional requirements and (2) HTML/CSS-based software prototypes. We ground this work in Communication Theory and Cognitive Fit Theory, positioning LLMs as mediating agents that enhance shared understanding and narrow the representational gap between stakeholders and developers. Through experimental prototyping and comparative evaluation, we assess requirement coverage, semantic correctness, prototype alignment, and perceived stakeholder alignment. Our results indicate that LLM-assisted elicitation improves efficiency, reduces ambiguity, and enables earlier validation by giving stakeholders concrete, interactive prototypes. We conclude by discussing practical implications and outlining strategies for integrating LLMs into requirements engineering toolchains, particularly during early elicitation activities such as stakeholder interviews and on-site requirements gathering.
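
As a rough illustration of the pipeline the abstract describes, the sketch below shows one way a multimodal LLM could be prompted with a hand-drawn UI image to produce both natural-language functional requirements and an HTML/CSS prototype. The OpenAI client, the model name, the prompts, and the `elicit_from_sketch` helper are illustrative assumptions for this sketch, not the authors' actual toolchain.

```python
"""Minimal sketch (not the paper's implementation): send a hand-drawn UI
sketch to a multimodal LLM and ask for (1) functional requirements and
(2) a self-contained HTML/CSS prototype."""
import base64

from openai import OpenAI  # assumes the `openai` Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def elicit_from_sketch(sketch_path: str) -> tuple[str, str]:
    """Return (functional_requirements, html_prototype) derived from one sketch image."""
    with open(sketch_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    def ask(instruction: str) -> str:
        # One chat-completion call combining the text instruction and the sketch image.
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any multimodal chat model could be substituted
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": instruction},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                ],
            }],
        )
        return response.choices[0].message.content

    requirements = ask(
        "List the functional requirements implied by this hand-drawn UI sketch, "
        "one per line, in the form 'The system shall ...'."
    )
    prototype = ask(
        "Generate a single self-contained HTML file with inline CSS that "
        "implements the screen shown in this hand-drawn sketch."
    )
    return requirements, prototype


if __name__ == "__main__":
    reqs, html = elicit_from_sketch("sketch.png")
    print(reqs)
    with open("prototype.html", "w") as out:
        out.write(html)
```

In such a setup, the generated `prototype.html` can be opened directly in a browser during an elicitation session, giving stakeholders an immediate, concrete artifact to validate against their intent.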
