The Potential of Large Language Models in Solving Optimization Problems: An Empirical Study

Abstract

This work investigates the potential of Large Language Models (LLMs) to automate the formulation and solution of mathematical programming problems. Specifically, we evaluate the effectiveness of two distinct prompting strategies: One-Stage prompting, in which LLMs directly generate solver code, and Two-Stage prompting, in which the formulation of a mathematical model precedes its implementation. Our empirical study examines the performance of multiple LLMs, both open-source and proprietary, across three optimization problem categories: resource allocation, blending, and vehicle routing. For each problem, we assess the LLMs' ability to generate accurate mathematical formulations and executable Python code. The results provide actionable guidelines for selecting prompting strategies when deploying LLMs as decision-support co-pilots in operations research.
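The two prompting strategies contrasted in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual protocol: `query_llm` is a hypothetical stand-in for any chat-completion API call (stubbed here so the example runs offline), and the sample problem is an invented resource-allocation instance.

```python
def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client in practice."""
    return f"[LLM response to: {prompt[:50]}...]"


# Invented example problem for illustration only.
PROBLEM = (
    "A factory makes products A and B with profits 3 and 5 per unit. "
    "Machine time is limited to 40 hours; A needs 1 hour/unit, B needs "
    "2 hours/unit. Maximize total profit."
)


def one_stage(problem: str) -> str:
    # One-Stage: ask the LLM directly for executable solver code.
    prompt = (
        "Write Python code using an LP solver that solves the following "
        f"optimization problem and prints the optimal solution:\n{problem}"
    )
    return query_llm(prompt)


def two_stage(problem: str) -> tuple[str, str]:
    # Two-Stage: first elicit a mathematical formulation, then ask the
    # LLM to implement that formulation in code.
    formulation = query_llm(
        "Formulate the following problem as a mathematical program, "
        f"stating decision variables, objective, and constraints:\n{problem}"
    )
    code = query_llm(
        "Implement this mathematical program in Python with an LP solver "
        f"and print the optimal solution:\n{formulation}"
    )
    return formulation, code


print(one_stage(PROBLEM))
formulation, code = two_stage(PROBLEM)
print(code)
```

The key design difference is that Two-Stage prompting exposes an intermediate artifact (the mathematical model) that can be inspected or corrected before any code is generated, at the cost of a second LLM call.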
