FormalizerInsideLLM: A Constraint-Based Reasoning Framework for Large Language Models via Axiomatic Control

Abstract

We introduce FormalizerInsideLLM, a constraint-based reasoning framework that rigorously enforces symbolic logic constraints within large language models (LLMs). By explicitly restricting inference to human-defined axiom sets, the framework reliably eliminates hallucinations, enabling verifiable formal reasoning and experimentation with undecidable or axiom-dependent statements. In comparative evaluations of GPT-3.5, GPT-4o, and Gemini Pro across number theory, geometry, and combinatorics, GPT-4o and Gemini Pro significantly outperform GPT-3.5 in adhering to symbolic constraints, identifying undecidable problems, and correctly deriving provable statements.
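The abstract does not specify the paper's implementation, but the core idea of restricting inference to a human-defined axiom set can be illustrated with a minimal sketch: a forward-chaining checker over Horn-clause axioms that labels a candidate statement as provable only if it is derivable from the axioms, and as undecidable relative to the axioms otherwise. All names here (Axiom, closure, classify) and the toy axioms are hypothetical, not taken from the paper.

```python
# Minimal sketch (assumed, not the authors' code): constrain "reasoning"
# to a fixed axiom set by forward-chaining over Horn clauses and refusing
# any statement that is not derivable from those axioms.

from dataclasses import dataclass

@dataclass(frozen=True)
class Axiom:
    premises: frozenset[str]  # facts that must all hold for the rule to fire
    conclusion: str           # fact derived when the premises hold

def closure(facts: set[str], axioms: list[Axiom]) -> set[str]:
    """Compute every fact derivable from the axiom set, and nothing else."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for ax in axioms:
            if ax.premises <= derived and ax.conclusion not in derived:
                derived.add(ax.conclusion)
                changed = True
    return derived

def classify(statement: str, facts: set[str], axioms: list[Axiom]) -> str:
    """Label a candidate model output relative to the axiom set."""
    if statement in closure(facts, axioms):
        return "provable"
    return "undecidable-from-axioms"

# Toy usage with a number-theory-flavoured axiom set.
axioms = [
    Axiom(frozenset({"n_is_even"}), "n_squared_is_even"),
    Axiom(frozenset({"n_squared_is_even"}), "n_squared_is_not_odd"),
]
facts = {"n_is_even"}
print(classify("n_squared_is_not_odd", facts, axioms))  # provable
print(classify("n_is_prime", facts, axioms))            # undecidable-from-axioms
```

In such a setup, hallucination is prevented by construction: any statement outside the deductive closure of the axioms is flagged rather than asserted, which matches the paper's goal of verifiable, axiom-dependent reasoning.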
