A Novel Retrieval-Augmented Generation Framework Using Large Language Models for Lyrics and Song Composition
Abstract
Artificial intelligence (AI) is now widely used in music composition, yet existing models often lack lyrical coherence, stylistic consistency, and vocal realism. This paper presents a novel AI-driven framework for automated song composition that integrates Natural Language Processing (NLP), Large Language Models (LLMs), and multimodal synthesis techniques. The system employs a hybrid retrieval pipeline that combines BM25 for sparse lexical matching with FAISS for dense semantic search. This Retrieval-Augmented Generation (RAG) approach, powered by GPT-4o-mini, generates lyrics that reflect the thematic consistency and stylistic nuances of an artist's previous compositions. For vocal synthesis, speaker embeddings are extracted with models such as Wav2Vec2, ECAPA-TDNN, and DeepSpeaker, enabling high-fidelity voice cloning. Synthesized vocals are further refined with the Bark and Suno models to produce expressive singing voices. Instrumental backgrounds are generated by text-to-audio (TTA) models with synchronized beats-per-minute (BPM) and melodic patterns to maintain musical harmony. The final output undergoes noise reduction, vocal-instrumental alignment, and automated mixing to produce studio-quality audio. Experimental evaluation shows improved lyrical relevance, fluency, and vocal expressiveness, with a higher BERTScore, lower perplexity, and a stylistic alignment accuracy of 92.1%. The system represents a significant advance in AI-assisted songwriting, offering a scalable and musically coherent solution for end-to-end song generation.
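As a concrete illustration of the hybrid retrieval step described in the abstract, the sketch below fuses BM25 and FAISS scores over a toy lyrics corpus. The library choices (rank_bm25, faiss, sentence-transformers), the all-MiniLM-L6-v2 encoder, and the linear score fusion with weight alpha are assumptions made for illustration; the paper does not specify these implementation details.

```python
# A minimal sketch of hybrid sparse + dense retrieval over prior lyrics.
# Libraries and fusion weighting are illustrative assumptions, not the
# authors' exact implementation.
import faiss
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer

lyrics_corpus = [
    "city lights fading as we drive into the night",
    "hold on to the summer, let the memories play",
    "broken strings and whiskey, singing to the rain",
]

# Sparse lexical index: BM25 over whitespace-tokenized lyric lines.
bm25 = BM25Okapi([doc.split() for doc in lyrics_corpus])

# Dense semantic index: FAISS inner-product search over normalized
# sentence embeddings (inner product == cosine similarity here).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(lyrics_corpus, normalize_embeddings=True)
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

def hybrid_retrieve(query: str, k: int = 2, alpha: float = 0.5) -> list[str]:
    """Blend min-max-scaled BM25 scores with dense cosine scores;
    alpha weights the dense (semantic) side."""
    sparse = np.asarray(bm25.get_scores(query.split()))
    sparse = sparse / (sparse.max() + 1e-9)  # scale to [0, 1]

    q_emb = encoder.encode([query], normalize_embeddings=True)
    scores, ids = index.search(np.asarray(q_emb, dtype="float32"),
                               len(lyrics_corpus))
    dense = np.zeros(len(lyrics_corpus))
    dense[ids[0]] = scores[0]  # map FAISS results back to corpus order

    fused = alpha * dense + (1 - alpha) * sparse
    top = np.argsort(fused)[::-1][:k]
    return [lyrics_corpus[i] for i in top]

# Retrieved lines would then be packed into the LLM prompt as
# style exemplars for the RAG lyric-generation stage.
print(hybrid_retrieve("driving at night under city lights"))
```

In a pipeline like the one described, the top-k retrieved lines would serve as in-context exemplars so the generator can imitate the artist's themes and phrasing rather than producing generic lyrics.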