The Price of Inference: A Longitudinal Economic Analysis of Hierarchical LLM Credential Leakage

Abstract

The integration of Large Language Models (LLMs) into the software supply chain has fundamentally altered the nature of credential leakage. Unlike static secrets (e.g., database passwords), LLM API keys are liquid economic assets—direct bearers of computational reasoning and token quotas. This paper introduces the Theory of Economic Credential Stratification, identifying a critical divergence between how these assets are managed and their utility. Using CHRONOS, a custom-built forensic instrument, we analyzed exposed artifacts across GitHub. We report a paradoxical “Protection Gap”: Tier 1 (GPT-4) credentials—despite having high abuse potential for LLMjacking [5]—exhibit a mean survival time (t̄_surv) of 48 hours in non-production artifacts, significantly longer than lower-value keys. Furthermore, we identify “Dataset Poisoning” as a systemic blind spot: valid credentials embedded in .jsonl and .parquet training files often persist indefinitely, becoming part of the model’s latent knowledge.
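The “Dataset Poisoning” blind spot described above can be illustrated with a minimal sketch of a scanner for credentials embedded in .jsonl training files. The key pattern below is a hypothetical, OpenAI-style `sk-` prefix regex chosen for illustration; it is not the detection logic of CHRONOS, and real scanners typically combine provider-specific regexes with entropy checks and live-key validation.

```python
import json
import re

# Hypothetical pattern for OpenAI-style secrets ("sk-" prefix);
# illustrative only, not the actual CHRONOS detection rule.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9]{20,}")

def scan_jsonl(lines):
    """Yield (line_number, candidate_key) for each credential-like
    string found inside a line of a .jsonl dataset."""
    for line_no, line in enumerate(lines, start=1):
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed rows rather than failing the scan
        # Serialize the record back to text so keys nested anywhere
        # in the JSON structure are matched.
        for match in KEY_PATTERN.findall(json.dumps(record)):
            yield line_no, match

# Example: a training row whose prompt field leaks a (fake) key.
sample = ['{"prompt": "call the API with sk-abc123def456ghi789jkl000", "completion": "ok"}']
print(list(scan_jsonl(sample)))  # [(1, 'sk-abc123def456ghi789jkl000')]
```

Because such rows are treated as data rather than code, they evade conventional secret-scanning hooks aimed at source files, which is why keys in training artifacts can persist indefinitely.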
