Ontology-Driven Graph Framework for Prioritizing K-12 LLM Security Mitigations
Abstract
Large language models (LLMs) are increasingly integrated into enterprise and institutional workflows that process sensitive and confidential information, including in K-12 systems. While security frameworks and guidance documents describe prominent LLM risks and recommended mitigations, K-12 schools still struggle to prioritize controls under real-world constraints. Existing approaches often provide qualitative checklists without a defensible method for selecting the mitigations that most effectively reduce high-impact risk.

This exploratory research presents an ontology-driven, graph-theoretic framework for prioritizing K-12 LLM security mitigations, with a specific focus on confidentiality impact. Using a source LLM security ontology, we construct a multi-layered directed graph linking LLM functions, products, security risks, adversarial techniques, and mitigations. High-confidentiality functions serve as seeds in a function-seeded Personalized PageRank (PPR) analysis that identifies the attack techniques most structurally exposed to sensitive workflows. To evaluate mitigation effectiveness, we apply eight complementary graph analyses spanning combinatorial optimization, centrality, flow-based separation, stochastic modeling, and community detection.

Across analyses, results converge on a small subset of mitigations, most notably AI telemetry logging and generative AI guardrails, as structurally dominant defenses. These controls collectively intercept a disproportionate share of PPR-weighted technique exposure, form low-cost barriers in cut-based analyses, and exhibit high probabilistic interception. Convergence across distinct objectives suggests stable structural bottlenecks rather than artifacts of graph size or algorithm choice. By integrating the semantic richness of security ontologies with established attack-graph analytics, this exploratory research provides an approach for selecting high-impact mitigations in confidentiality-sensitive K-12 LLM deployments.
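The function-seeded PPR step described above can be sketched with NetworkX. The node names and edges below are hypothetical illustrations of the layered function → product → risk → technique → mitigation graph, not the paper's actual ontology; `alpha=0.85` is an assumed damping factor.

```python
import networkx as nx

# Hypothetical multi-layered directed graph: LLM functions, products,
# security risks, adversarial techniques, and mitigations.
G = nx.DiGraph()
G.add_edges_from([
    ("fn:student_records", "prod:chatbot"),
    ("fn:lesson_planning", "prod:chatbot"),
    ("prod:chatbot", "risk:data_leakage"),
    ("risk:data_leakage", "tech:prompt_injection"),
    ("risk:data_leakage", "tech:training_data_extraction"),
    ("tech:prompt_injection", "mit:genai_guardrails"),
    ("tech:prompt_injection", "mit:ai_telemetry_logging"),
    ("tech:training_data_extraction", "mit:ai_telemetry_logging"),
])

# Seed the personalization vector on high-confidentiality functions,
# so random-walk restarts concentrate exposure around those workflows.
seeds = {"fn:student_records": 1.0}
scores = nx.pagerank(G, alpha=0.85, personalization=seeds)

# Rank techniques by structural exposure to the seeded functions.
techniques = {n: s for n, s in scores.items() if n.startswith("tech:")}
ranked = sorted(techniques, key=techniques.get, reverse=True)
print(ranked)
```

In a real deployment, the graph would be instantiated from the source ontology and the mitigation layer scored with the additional analyses (cuts, centrality, interception probability) described in the abstract.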