Benchmarking Large Language Models for Data Pipeline Code Generation and Execution


Abstract

In today’s data-driven landscape, organizations face mounting pressure to accelerate data processing and analysis while minimizing manual engineering effort. This paper investigates the application of Large Language Models (LLMs) to automate the creation of data pipelines, evaluating their efficacy across code-based (Apache Airflow), low-code (Azure Data Factory), and hybrid (Databricks) platforms. Through systematic experimentation, we demonstrate that LLMs such as GPT-4o, Qwen 2.5-Max 72B, and DeepSeek-V3 37B can successfully generate functional pipelines for tasks ranging from basic API interactions to multi-step ETL processes, reducing development time by an estimated 40–60% in code-centric environments.

Our findings reveal stark contrasts in platform adaptability: while LLMs excel in code-based systems, resolving ambiguous requirements through logical constructs such as Python operators, they struggle with low-code platforms such as Azure Data Factory, where implicit configuration dependencies and JSON syntax constraints lead to systemic failures. This highlights a critical trade-off between the flexibility of code-based orchestration and the accessibility of low-code tools.

The study further underscores the importance of prompt engineering in guiding LLM outputs, particularly for complex transformations and platform-specific idiosyncrasies. While LLMs show promise in accelerating routine data engineering tasks, challenges persist in ensuring security, governance, and consistency in advanced patterns such as error handling. This research contributes to the evolving discourse on AI-augmented data engineering, illustrating how LLMs can transform pipeline development, not by replacing human engineers, but by enhancing productivity and enabling focus on high-value architectural innovation.
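As a minimal sketch of the kind of multi-step ETL task the evaluated LLMs were asked to generate, the following plain-Python pipeline illustrates the extract–transform–load pattern that a code-based orchestrator such as Airflow would schedule. The function names and sample data here are illustrative assumptions, not tasks drawn from the paper's benchmark.

```python
# Illustrative multi-step ETL pipeline (hypothetical example, not from the
# paper's benchmark): parse an API-style JSON payload, clean it, load it.
import json


def extract(raw: str) -> list[dict]:
    """Parse raw JSON records, e.g. the payload of a basic API call."""
    return json.loads(raw)


def transform(records: list[dict]) -> list[dict]:
    """Drop records with missing amounts and normalize the numeric field."""
    return [
        {"id": r["id"], "amount": round(float(r["amount"]), 2)}
        for r in records
        if r.get("amount") is not None
    ]


def load(records: list[dict], sink: list) -> None:
    """Append transformed records to a destination; a list stands in for a table."""
    sink.extend(records)


# Wiring the three steps into one pipeline run:
raw = '[{"id": 1, "amount": "10.456"}, {"id": 2, "amount": null}]'
sink: list = []
load(transform(extract(raw)), sink)
print(sink)  # → [{'id': 1, 'amount': 10.46}]
```

In Airflow, each of these functions would typically become a task (e.g. via a Python operator), with the orchestrator handling ordering, retries, and scheduling between them.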
