Elusive Quality in the Age of AI: Extending the Tendering Gap Model for Public ICT and AI-Driven Systems

Abstract

Public-sector ICT procurements frequently emphasize the importance of “quality,” yet rarely define what that quality means in operational or measurable terms. This ambiguity often leads to mismatches between buyer expectations and supplier interpretations, undermining requirement clarity, evaluation rigor, and delivery outcomes. In our earlier work, we analyzed public ICT tender documents and introduced the concepts of quality proxies—symbolic expressions of quality without clear operational definitions—and the tendering gap, which describes the structural misalignment between clients’ intentions and suppliers’ interpretations. This extended study revisits and generalizes these concepts in the context of AI-driven software systems, where quality specification is further complicated by probabilistic behavior, data dependencies, and evolving regulatory expectations. Drawing on twelve public procurement cases and conceptual analysis, we demonstrate how emerging AI-related quality attributes—such as explainability, fairness, robustness, and transparency—often function as AI quality proxies: abstract normative goals that lack shared operational meaning across stakeholders. To address this challenge, we extend the tendering gap model by introducing the notion of an epistemic gap, which reflects fundamental differences in stakeholders’ understanding of what constitutes measurable quality in AI systems. We discuss how this gap arises from knowledge asymmetry, measurement uncertainty, and cross-disciplinary communication barriers, and examine its implications for requirements engineering, procurement practices, and software quality models. The paper contributes a unified conceptual framework for understanding quality ambiguity across both traditional ICT and AI-driven systems, and outlines practical directions for improving quality specification through scenario-based requirements, calibrated evaluation rubrics, and enhanced stakeholder alignment. These findings support the development of more operational, transparent, and accountable approaches to specifying and evaluating software quality in an increasingly AI-driven world.
