Aligning Prompts with Ranking Goals: A Technical Review of Prompt Engineering for LLM-Based Recommendations
Abstract
Recent advances in Large Language Models (LLMs) have enabled a paradigm shift in recommender systems, moving from modular pipelines toward instruction-driven recommendation via prompt engineering. While the existing literature has explored LLMs for tasks such as candidate generation, conversational recommendation, and re-ranking, there remains no systematic understanding of how prompts can be designed to optimize for ranking objectives beyond relevance, such as diversity, novelty, serendipity, and fairness. In this survey, we present a comprehensive review of prompt engineering techniques tailored to LLM-based recommender systems, with a focus on ranking optimization under multi-objective settings. We first introduce a taxonomy of prompt design strategies, ranging from zero-shot instruction templates to few-shot exemplars and chain-of-thought prompting, across the different stages of recommendation (generation, ranking, re-ranking). We then examine how these prompts can be aligned with specific ranking goals and evaluate the trade-offs among static prompting, prompt tuning, and fine-tuning approaches. We review recent empirical studies and identify open challenges in prompt generalization, robustness, prompt evaluation protocols, and the absence of standardized benchmarks for multi-objective recommendation tasks. The survey concludes with actionable research directions and proposes a unified framework for evaluating prompt effectiveness across ranking objectives in LLM-based recommender systems.
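To ground the idea of objective-aligned prompting, the Python sketch below assembles a zero-shot re-ranking instruction that pairs a primary relevance criterion with a secondary objective clause (diversity, novelty, serendipity, or fairness). The template wording, the OBJECTIVE_CLAUSES mapping, and the build_rerank_prompt helper are hypothetical illustrations, not templates taken from the works surveyed.

TEMPLATE = """\
You are a recommendation assistant.
The user has previously interacted with:
{history}

Re-rank the candidate items below from most to least suitable.
Rank primarily by relevance to the user's interaction history.
Secondary objective: {objective_clause}

Candidates:
{pool}

Answer with the candidate numbers in ranked order, one per line.
"""

# Illustrative phrasings for secondary ranking objectives.
OBJECTIVE_CLAUSES = {
    "diversity": "prefer orderings that cover distinct genres or topics.",
    "novelty": "prefer items the user is unlikely to have encountered before.",
    "serendipity": "favor surprising items that still fit the user's tastes.",
    "fairness": "avoid over-representing any single provider or item group.",
}

def build_rerank_prompt(user_history, candidates, objective="diversity"):
    """Assemble a zero-shot re-ranking prompt aligned with one ranking objective."""
    history = "\n".join(f"- {item}" for item in user_history)
    pool = "\n".join(f"{i}. {item}" for i, item in enumerate(candidates, start=1))
    return TEMPLATE.format(
        history=history,
        objective_clause=OBJECTIVE_CLAUSES[objective],
        pool=pool,
    )

# Example: the resulting string can be sent to any instruction-tuned LLM.
print(build_rerank_prompt(
    user_history=["The Matrix", "Blade Runner", "Inception"],
    candidates=["Arrival", "Minority Report", "Her", "Paddington"],
    objective="serendipity",
))

Keeping the objective clause as a swappable string fragment is what makes this static prompting in the sense used above; prompt tuning and fine-tuning would instead learn the instruction representation or the model weights rather than editing the template text.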