Active Abstract Finding for Generalization in LLM-Based AI Systems

Abstract

Identifying the key factors that determine whether a method can be applied across different scenarios is important for improving the generalization ability of large language model (LLM)-based agents. However, existing approaches typically rely on passive observation or static datasets, making it difficult to discover the underlying abstract structures that allow methods to transfer across diverse contexts. In this paper, we propose an active abstract finding framework that enables AI systems to discover common factors across different observations and derive generalized reasoning structures. The framework models abstraction as the process of identifying factors shared among observations while discarding instance-specific ones. To increase the likelihood of discovering such structures, the AI system actively collects observations from different times, places, and contexts, and compares them for similarity to extract common factors. The discovered abstractions are then represented symbolically and can be reused to derive reasoning methods applicable to new scenarios. Experimental results demonstrate that abstraction-oriented observation strategies improve the ability of LLMs to identify method applicability. In our experiments, the proposed strategy achieves an average accuracy of 0.64, outperforming the random observation baseline (0.50) and the place-diverse strategy (0.61), confirming the effectiveness of active abstraction in identifying key factors for generalization.
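The abstraction process described above (keep factors shared across actively collected observations, drop instance-specific ones) can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the observation format, factor names, and `extract_abstraction` helper are all assumptions made here for clarity.

```python
def extract_abstraction(observations):
    """Keep only the (factor, value) pairs common to every observation;
    factors that vary between observations are treated as instance-specific
    and removed, leaving the shared abstract structure."""
    shared = set(observations[0].items())
    for obs in observations[1:]:
        shared &= set(obs.items())
    return dict(shared)

# Hypothetical observations actively collected at different times and places,
# as the framework prescribes; only the diverse contexts let the shared
# structure stand out from incidental detail.
observations = [
    {"place": "kitchen", "time": "morning", "tool": "lever", "effect": "opens"},
    {"place": "garage",  "time": "evening", "tool": "lever", "effect": "opens"},
    {"place": "office",  "time": "noon",    "tool": "lever", "effect": "opens"},
]

abstraction = extract_abstraction(observations)
# abstraction == {"tool": "lever", "effect": "opens"}
```

Note that with only same-place observations, `place` would survive the intersection and be mistaken for a shared factor, which is why the paper's observation strategy deliberately varies time, place, and context.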
