Stop blaming the model: Rules, memory, and knowledge as a framework for AI-assisted research

Abstract

Researchers across the social sciences and humanities are rapidly adopting large language models for tasks ranging from text coding to data analysis. When these tools underperform, the instinct is to blame the model. Yet the more consequential source of failure is the research environment itself: disorganized projects that deny AI agents the context they need. This is not merely a matter of tidiness. Large language models are essentially epistemically opaque, meaning that no human can fully inspect how they reach their outputs. This opacity breaks the networks of trust that ordinarily underwrite scientific collaboration. This Comment proposes a three-part framework to compensate for that absent trust. Rules configure the boundaries of human and machine action. Memory preserves a traceable record of decisions, failed attempts, and the evolving research direction. Curated knowledge grounds the agent's reasoning in the project's actual theoretical commitments, preventing the silent substitution of generic training data for the researcher's own concepts. Without this infrastructure, AI agents may behave inconsistently, repeat mistakes across sessions, and produce work that is technically polished but intellectually shallow. It calls on researchers to invest in project organization before investing in better prompts.