How Good is my Scheduling Algorithm? A Benchmark to Compare Production Scheduling Algorithms for Real-World Production Environments
Abstract
Production scheduling in modern manufacturing faces escalating complexity due to mass customization, frequent disruptions, and stringent lead-time requirements, rendering classical benchmarks inadequate. This paper introduces a realistic, multi-instance benchmark to evaluate and compare scheduling algorithms under real-world shop-floor constraints. Guided by a problem-centered design-science methodology, we derive high-level requirements—sufficient complexity, differentiation capability, objective-function variants, and resistance to overfitting—and implement ten features relevant to real-world production, including sequence-dependent changeovers, limited buffers, personnel constraints, transportation resources, and job release dates. The benchmark models an environment with two shop floors connected by buffers and transportation resources, features five material types that follow hybrid routing patterns, and comprises 100 randomly generated instances per variant, all encoded in JSON for seamless integration. Performance is assessed via the average relative improvement over a "random" priority-rule baseline, which mitigates the effect of randomness and discourages overfitting. We demonstrate and validate the benchmark by applying FIFO, due-date, and changeover heuristics as well as a reinforcement-learning algorithm; the results clearly differentiate these methods and show that the benchmark can assess the solution quality of production scheduling algorithms. The benchmark and all data required for its use are provided in the supplements. We thereby contribute to research and practice a benchmark for comparing the solution quality of production scheduling algorithms in a standardized manner for real-world production environments.
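The evaluation metric described above can be sketched in a few lines; the following is an illustrative example only, not the authors' implementation, and the function names and objective values are hypothetical assumptions for a minimization objective such as makespan or total tardiness.

```python
# Illustrative sketch (not the authors' code): average relative improvement of a
# scheduling algorithm over the "random" priority-rule baseline, averaged across
# all benchmark instances of a variant.

from statistics import mean

def relative_improvement(baseline_value: float, algorithm_value: float) -> float:
    """Relative improvement on one instance, assuming a minimization objective."""
    return (baseline_value - algorithm_value) / baseline_value

def average_relative_improvement(baseline_values: list[float],
                                 algorithm_values: list[float]) -> float:
    """Average the per-instance improvements over all instances of a variant
    (e.g., the 100 randomly generated instances)."""
    return mean(relative_improvement(b, a)
                for b, a in zip(baseline_values, algorithm_values, strict=True))

# Hypothetical objective values of the random baseline vs. a heuristic on three instances.
baseline = [120.0, 95.0, 110.0]
heuristic = [100.0, 90.0, 99.0]
print(f"Average relative improvement: {average_relative_improvement(baseline, heuristic):.1%}")
```

Averaging relative rather than absolute improvements keeps instances with different objective magnitudes comparable, which is why a per-instance normalization against the random baseline is used here.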