Operationalizing Credible AI-Assisted Carbon Footprinting: A Framework and Empirical Case Study

Abstract

The rapid scaling of corporate product-level emissions accounting has created a data scalability crisis in which traditional manual verification methods cannot keep pace. While Large Language Models (LLMs) offer promising automation capabilities, their non-deterministic nature creates credibility challenges for auditors and practitioners accustomed to deterministic traceability. This paper addresses the gap through two contributions. First, we propose a hierarchical framework of credibility criteria for AI-assisted carbon footprinting (AI-CF), distinguishing between system-level defensibility (benchmarking, consistency, repeatability) and material-level transparency (match quality indicators, reasoning traces). These criteria are grounded in the ISO definition of data quality as fitness for use and were developed through iterative stakeholder elicitation with verification firms and corporate practitioners. Second, we operationalize this framework through empirical evaluation of two deployed AI systems: an Auto-Mapper achieving a 91% defensible mapping rate on non-vague inputs (dropping to 60% for ambiguous inputs) against expert ground truth, and an Advanced Modeling System achieving a median error of 33% relative to 269 Environmental Product Declarations across 9 product categories. We demonstrate that AI output entropy correlates with input ambiguity, suggesting that non-determinism can serve as a diagnostic signal for data quality rather than solely a liability. The framework enables a shift toward system-level validation, in which auditors verify the AI process rather than randomly sampling across all individual outputs.