EPMORE: Explainable Process Mixture-of-Experts

Abstract

Large language models (LLMs) are primarily built on the Transformer architecture, in which all hidden layers share a fixed-dimensional representation space. This homogeneity constrains representational capacity, impedes interpretability, and induces computational redundancy. We propose EPMORE (Explainable Process Mixture-of-Experts), a novel architecture that models inference as a process of dimensional elevation, making the intermediate states of the entire inference/training process observable and explainable, and keeping the whole process traceable end-to-end. EPMORE decomposes the inference/training process into a hierarchical sequence of representation spaces — from a semantic space (128 dimensions), to one or more logical spaces (512 dimensions each), and finally to fact-expert representation spaces (1024 dimensions) — allowing deeper network stages to encode progressively richer and more abstract features. A core component, Middle Output Reuse (MOR), enables each layer to produce interpretable intermediate predictions. Theoretically, forward propagation can be interpreted as representation-space expansion, while backward propagation corresponds to dimensional contraction. Experiments show that, compared with dense and conventional mixture-of-experts (MoE, e.g., DeepSeek) baselines, EPMORE improves interpretability, activation sparsity, parameter independence, and inference performance while reducing computational cost. These findings suggest that hierarchical dimensional elevation is a promising alternative to standard Transformer design.
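The sketch below is a minimal PyTorch reading of the dimensional-elevation pipeline described in the abstract, assuming a per-stage linear "elevation" projection and a per-stage readout head for Middle Output Reuse (MOR); the module names, layer choices, and vocabulary size are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of EPMORE-style dimensional elevation with MOR.
# Stage dimensions (128 -> 512 -> 1024) follow the abstract; everything
# else (module names, attention settings, heads) is assumed for illustration.
import torch
import torch.nn as nn

class ElevationStage(nn.Module):
    """Elevates hidden states to a higher-dimensional space and exposes
    an interpretable intermediate prediction via a MOR readout head."""
    def __init__(self, in_dim: int, out_dim: int, vocab_size: int):
        super().__init__()
        self.elevate = nn.Linear(in_dim, out_dim)            # dimensional elevation
        self.block = nn.TransformerEncoderLayer(out_dim, nhead=8,
                                                batch_first=True)
        self.mor_head = nn.Linear(out_dim, vocab_size)        # intermediate readout

    def forward(self, x):
        h = self.block(self.elevate(x))
        return h, self.mor_head(h)                             # hidden state + MOR logits

class EPMORESketch(nn.Module):
    """Semantic (128) -> logical (512) -> fact-expert (1024) spaces."""
    def __init__(self, vocab_size: int = 32000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 128)             # semantic space
        self.stages = nn.ModuleList([
            ElevationStage(128, 512, vocab_size),              # semantic -> logical
            ElevationStage(512, 1024, vocab_size),             # logical -> fact-expert
        ])
        self.head = nn.Linear(1024, vocab_size)                # final prediction

    def forward(self, tokens):
        h = self.embed(tokens)
        mor_outputs = []                                       # per-stage MOR predictions
        for stage in self.stages:
            h, logits = stage(h)
            mor_outputs.append(logits)
        return self.head(h), mor_outputs

# Usage: each element of mor_outputs can be decoded and inspected,
# making every stage's intermediate state observable.
logits, mor_outputs = EPMORESketch()(torch.randint(0, 32000, (1, 16)))
```

In this reading, interpretability comes from the fact that every stage emits token-level logits, so intermediate states can be decoded and audited rather than only the final layer's output.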
