A Deterministic Global Optimization Framework via Monte Carlo Region Integration: Rigorous Convergence, KKT Equivalence, and Multi-Objective Extension

Abstract

This paper introduces the Monte Carlo Stochastic Optimization Technique (MOST), a global optimization framework based on region-wise integral comparison. Unlike classical pointwise methods, MOST evaluates candidate regions through aggregated objective values, enabling a structured and global exploration of the search space. We establish a unified theoretical foundation. Deterministic geometric shrinking of regions ensures that their diameters converge to zero, while a non-circular integral separation principle guarantees global convergence. Incorporating Monte Carlo estimation, we derive exponential concentration bounds and prove almost sure convergence under suitable sampling schedules. For constrained problems, we introduce an extended functional whose minimizers are equivalent to Karush–Kuhn–Tucker (KKT) points, allowing constraint handling without projection or penalty tuning. The framework is further extended to multi-objective optimization, where convergence to Pareto–KKT stationary points is established. Numerical experiments on multimodal benchmark functions confirm the theoretical results. Overall, MOST provides a derivative-free, deterministic–probabilistic framework for global optimization that extends naturally to constrained and multi-objective settings.
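The abstract's core mechanism, comparing candidate regions by an aggregated (integral) objective estimated via Monte Carlo sampling, then geometrically shrinking toward the winning region, can be illustrated with a minimal one-dimensional sketch. This is a hypothetical toy implementation of the idea, not the authors' MOST code; the function `region_mc_minimize`, its parameters, and the test function are all illustrative assumptions.

```python
import math
import random

def region_mc_minimize(f, lo, hi, n_regions=4, n_samples=64, n_iters=40, seed=0):
    """Toy sketch of region-wise Monte Carlo minimization (1-D).

    Each iteration partitions [lo, hi] into n_regions subintervals,
    estimates the mean objective over each by Monte Carlo sampling
    (a region-wise aggregated value rather than a pointwise one),
    keeps the subinterval with the smallest estimate, and recurses.
    The region diameter shrinks geometrically by a factor of n_regions
    per iteration, mirroring the deterministic shrinking in the paper.
    """
    rng = random.Random(seed)
    for _ in range(n_iters):
        width = (hi - lo) / n_regions
        best_idx, best_mean = 0, float("inf")
        for k in range(n_regions):
            a = lo + k * width
            # Monte Carlo estimate of the mean of f over [a, a + width].
            mean = sum(f(a + rng.random() * width) for _ in range(n_samples)) / n_samples
            if mean < best_mean:
                best_idx, best_mean = k, mean
        # Shrink to the winning region.
        lo = lo + best_idx * width
        hi = lo + width
    return 0.5 * (lo + hi)

# Multimodal test function with global minimum at x = 0:
# f(x) = x^2 + 0.5 sin^2(8x) >= 0, with many shallow local minima.
x_star = region_mc_minimize(lambda x: x * x + 0.5 * math.sin(8 * x) ** 2, -5.0, 5.0)
```

Because regions are ranked by their mean objective rather than by single point evaluations, shallow local minima contribute little to a region's aggregate, which is the intuition behind the global-exploration claim in the abstract.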