AIShield – A Framework for QA in Software Development to Detect AI Generated Code

Abstract

The rapid adoption of AI-generated code introduces new challenges in software quality assurance (QA). Current static analysis tools fail to address AI-specific anti-patterns such as hallucinated APIs, contextual discontinuities, and architectural inconsistencies. This paper presents AIShield Sentinel, a novel validation framework that combines abstract syntax tree (AST) analysis with pattern-based detection to identify and quantify risks unique to AI-generated code. The system introduces three key innovations: (1) a context cohesion metric (0–100 scale) that measures logical flow between code segments, (2) architectural smell detection for AI-induced design flaws, and (3) an interpretable AI-probability score (0–100%) based on weighted pattern matching. Evaluated against 1,200 code samples (GPT-4, Claude, and human-written), AIShield Sentinel achieves 92.3% precision in detecting AI-generated code artifacts, a 41% improvement over Pylint. The tool's structured JSON output enables seamless integration with CI/CD pipelines while providing research-grade metrics for AI code quality analysis. This work bridges a critical gap in software development by offering a specialized validation framework for AI-assisted development environments.
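The combination of AST analysis and weighted pattern matching described above can be sketched as follows. The pattern names, weights, and detector logic here are illustrative assumptions only; the abstract does not specify AIShield Sentinel's actual patterns or scoring formula:

```python
import ast
import json

# Hypothetical pattern weights -- placeholders, not the paper's values.
PATTERN_WEIGHTS = {
    "hallucinated_api": 0.5,   # e.g. calls to APIs that do not exist
    "generic_naming": 0.2,     # placeholder identifiers like temp_x, result1
}

def detect_patterns(source: str) -> set:
    """Toy AST-based detector for one illustrative pattern (generic naming)."""
    found = set()
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Flag identifiers with generic, AI-typical placeholder prefixes.
        if isinstance(node, ast.Name) and node.id.startswith(("temp_", "result")):
            found.add("generic_naming")
    return found

def ai_probability(source: str) -> dict:
    """Combine detected patterns into a 0-100 score and a JSON-ready report."""
    found = detect_patterns(source)
    score = round(min(100.0, 100.0 * sum(PATTERN_WEIGHTS[p] for p in found)), 1)
    return {"patterns": sorted(found), "ai_probability": score}

report = ai_probability("temp_value = 1\nresult1 = temp_value + 1\n")
print(json.dumps(report, indent=2))
```

A structured report like this is what would feed a CI/CD gate, e.g. failing the pipeline when `ai_probability` exceeds a configured threshold.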
