Sculpting Large Language Models: Probing Structural Asymmetries in Layers and Embedding Spaces

Abstract

Large Language Models (LLMs) are highly complex functions composed of massive numbers of parameters organized into layers and embedding dimensions. Yet, whether they exhibit internal structure—akin to biological systems or engineered constructs—remains poorly understood. In this work, we introduce a novel empirical approach to dissect LLM architectures through systematic ablation studies. Our experiments reveal that layers and embedding dimensions are not equal: some significantly influence model behavior, while others are largely redundant. These findings provide actionable insights for optimizing LLM efficiency and performance. We further propose a sculpting paradigm, in which strategic modifications to the model architecture, guided by its internal structure, help create new state-of-the-art models that are both powerful and efficient.
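The ablation idea described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the paper's actual method or models): it builds a toy stack of residual layers, ablates one layer at a time by replacing it with the identity, and scores each layer's importance by how much the output changes. All names (`layers`, `forward`, `importance`) are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a stack of residual layers f_i(x) = x + W_i @ x,
# a hypothetical stand-in for transformer blocks.
layers = [rng.normal(scale=0.1, size=(8, 8)) for _ in range(6)]

def forward(x, ablate=None):
    """Run the stack, optionally replacing layer `ablate` with the identity."""
    for i, W in enumerate(layers):
        if i == ablate:
            continue  # identity: drop this layer's residual contribution
        x = x + W @ x  # residual update
    return x

x = rng.normal(size=8)
baseline = forward(x)

# Per-layer importance: how far the output moves when that layer is ablated.
importance = [np.linalg.norm(forward(x, ablate=i) - baseline)
              for i in range(len(layers))]
```

In this framing, layers with near-zero importance scores are candidates for redundancy, while high-scoring layers carry structure the model depends on.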
