Towards Interpretable and Consistent Multi-Step Mathematical Reasoning in Large Language Models

Abstract

Mathematical reasoning, particularly within the K–12 education context, demands models that provide not only correct answers but also transparent, interpretable solution paths. Existing large language models (LLMs) often struggle with multi-step math problems due to their limited capacity for symbolic manipulation and structured reasoning. To address these challenges, we propose MetaMath-LLaMA, a novel metacognitive modular framework designed to enhance the reasoning abilities of LLMs through dynamic task orchestration. This framework integrates three core components: a Transformer-based metacognitive scheduler that learns to allocate reasoning subtasks adaptively; a symbolic parser with semantic grounding that fuses syntactic structure with contextual embeddings; and a hybrid symbolic-neural computation unit that seamlessly transitions between deterministic symbolic logic and neural approximation. The entire model is optimized through a multi-task training scheme coupled with curriculum learning and multi-tiered self-validation to mitigate reasoning errors and improve interpretability. We expect MetaMath-LLaMA to improve classroom usability by producing clearer step-by-step solution paths, aiding educators in assessment and supporting student conceptual understanding. Our approach offers a more modular, explainable, and effective solution for handling diverse mathematical tasks in K–12 education, and it outperforms traditional monolithic reasoning systems in logical fidelity and conceptual clarity.
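To make the described pipeline concrete, the following is a minimal Python sketch of how a metacognitive scheduler might route reasoning subtasks between a deterministic symbolic unit and a neural approximator while keeping an interpretable step-by-step trace and a self-validation pass. All class names, routing heuristics, and the validation logic are illustrative assumptions for exposition, not the authors' implementation of MetaMath-LLaMA.

    # Illustrative sketch only: a toy modular pipeline in the spirit of the abstract.
    # Every name and heuristic here is a hypothetical stand-in.

    from dataclasses import dataclass
    from typing import Callable


    @dataclass
    class Step:
        description: str   # natural-language / symbolic form of the subtask
        result: float      # numeric result produced for this step
        produced_by: str   # "symbolic" or "neural", kept for interpretability


    def symbolic_unit(expression: str) -> float:
        """Deterministic symbolic computation (here: restricted arithmetic evaluation)."""
        # Stand-in for a real symbolic engine (e.g., a CAS); eval is limited to a
        # bare namespace purely to keep this toy example self-contained.
        return float(eval(expression, {"__builtins__": {}}, {}))


    def neural_unit(expression: str) -> float:
        """Placeholder for a learned approximator handling non-symbolic subtasks."""
        # A real system would query an LLM or trained model here.
        return float(len(expression))  # dummy output to keep the sketch runnable


    def scheduler(subtask: str) -> Callable[[str], float]:
        """Toy 'metacognitive' routing: arithmetic goes to the symbolic unit,
        everything else to the neural unit."""
        is_arithmetic = all(c.isdigit() or c in "+-*/(). " for c in subtask)
        return symbolic_unit if is_arithmetic else neural_unit


    def self_validate(step: Step) -> bool:
        """Self-validation stub: re-run symbolic steps and compare results."""
        if step.produced_by == "symbolic":
            return abs(symbolic_unit(step.description) - step.result) < 1e-9
        return True  # neural steps would need a separate consistency check


    def solve(subtasks: list[str]) -> list[Step]:
        """Run each subtask through the scheduler and keep an interpretable trace."""
        trace: list[Step] = []
        for task in subtasks:
            unit = scheduler(task)
            step = Step(task, unit(task),
                        "symbolic" if unit is symbolic_unit else "neural")
            assert self_validate(step), f"validation failed on: {task}"
            trace.append(step)
        return trace


    if __name__ == "__main__":
        # Example: "3 apples at $2 each, plus $1 tax" decomposed into two subtasks.
        for s in solve(["3 * 2", "6 + 1"]):
            print(f"[{s.produced_by}] {s.description} = {s.result}")

In a full system, the routing decision would itself be learned (the Transformer-based scheduler described above) rather than rule-based, and validation would span multiple tiers rather than a single recomputation check.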
