Small Language Models as Graph Classifiers: Evaluating and Improving Permutation Robustness

Abstract

Graph classification is dominated by permutation-invariant graph neural networks. We revisit this problem from a different perspective: can small language models (SLMs) act as graph classifiers when graphs are serialized as text? Unlike GNNs, sequence-based transformers do not encode permutation invariance by construction, raising a fundamental question about structural stability under node relabeling. We provide the first systematic study of permutation robustness in small graph-as-text models. We introduce an evaluation protocol based on Flip Rate and KL-to-Mean divergence to quantify prediction instability across random node permutations. To enforce structural consistency, we propose Permutation-Invariant Training (PIT), a multi-view regularization scheme that aligns predictions across relabeled graph views, and examine its interaction with degree-aware token embeddings as a minimal inductive bias. Across benchmark datasets using parameter-efficient fine-tuning, we show that SLMs achieve competitive classification accuracy, yet standard fine-tuning exhibits non-trivial permutation sensitivity. PIT consistently reduces instability and, in most evaluated settings, improves accuracy, demonstrating that structural invariance in sequence-based graph models can emerge through explicit regularization.
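
The abstract's evaluation protocol quantifies instability across random node relabelings. As a rough illustration of how such metrics could be computed, the sketch below relabels an adjacency matrix several times, collects the model's class distributions per view, and reports the fraction of views whose predicted label flips away from the majority label (Flip Rate) alongside the average KL divergence of each view's distribution from the mean distribution (KL-to-Mean). The function name `permutation_robustness`, the `predict_proba` interface, and the exact metric definitions are assumptions for illustration; the paper's protocol may differ in detail.

```python
import numpy as np

def permutation_robustness(predict_proba, adj, n_views=8, seed=0):
    """Estimate Flip Rate and KL-to-Mean over random node relabelings.

    predict_proba: hypothetical callable mapping an (n, n) adjacency
    matrix (serialized internally by the model) to a probability vector
    over classes. Metric definitions here are assumed, not the paper's.
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    probs = []
    for _ in range(n_views):
        perm = rng.permutation(n)
        view = adj[np.ix_(perm, perm)]  # relabel nodes by permuting rows/cols
        probs.append(predict_proba(view))
    probs = np.stack(probs)             # (n_views, n_classes)

    # Flip Rate: fraction of views disagreeing with the majority label.
    labels = probs.argmax(axis=1)
    majority = np.bincount(labels).argmax()
    flip_rate = float(np.mean(labels != majority))

    # KL-to-Mean: average KL(view distribution || mean distribution).
    mean_p = probs.mean(axis=0)
    kl = np.sum(probs * (np.log(probs + 1e-12) - np.log(mean_p + 1e-12)), axis=1)
    return flip_rate, float(kl.mean())
```

A structure-insensitive dummy model, e.g. `permutation_robustness(lambda A: np.array([0.7, 0.3]), np.eye(4))`, yields a Flip Rate and KL-to-Mean of zero, which is the behavior a perfectly permutation-invariant classifier would exhibit.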
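PIT is described as aligning predictions across relabeled views of the same graph. One plausible form of such a multi-view regularizer is sketched below in PyTorch under assumptions: the function `pit_loss`, the weight `lam`, and the choice of a KL-to-mean consistency term with a stop-gradient target are illustrative, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def pit_loss(view_logits, labels, lam=1.0):
    """Sketch of a Permutation-Invariant Training objective (assumed form).

    view_logits: (V, B, C) logits for the same batch of B graphs under V
    random node relabelings; labels: (B,) class indices.
    """
    V = view_logits.shape[0]
    # Supervised term: cross-entropy averaged over all views.
    ce = F.cross_entropy(view_logits.flatten(0, 1), labels.repeat(V))

    # Consistency term: pull each view's distribution toward the
    # (detached) mean distribution across views.
    log_p = F.log_softmax(view_logits, dim=-1)   # (V, B, C)
    p = log_p.exp()
    mean_p = p.mean(dim=0).detach()              # (B, C), stop-gradient target
    kl = (p * (log_p - torch.log(mean_p + 1e-12))).sum(-1)  # (V, B)
    return ce + lam * kl.mean()
```

Detaching the mean distribution keeps the consistency target fixed within each step, so every view is pulled toward the current consensus rather than the views chasing one another; whether the paper uses this particular stop-gradient design is not stated in the abstract.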
