Democratizing Deep Expertise: A Framework for Extracting and Codifying Tacit Knowledge Using Large Language Models
Abstract
The proliferation of large language models (LLMs) opens new possibilities for capturing and scaling human expertise. Yet most applications focus on synthesizing large, documented corpora (papers, reports, manuals, articles, logs). This paper presents a novel approach for extracting and codifying undocumented, deep expertise—the “golden nuggets” of human insight—and making it accessible via a Retrieval-Augmented Generation (RAG) system. Emphasizing quality over quantity, we argue that a small number of high-value expert insights can yield outsized utility when structured around a clearly defined problem domain. Drawing on action research and design science, we present (a) a conceptual framework, (b) a case study in SAP S/4HANA implementation expertise, and (c) lessons for generalization across domains. We conclude that combining SME insight with LLMs can democratize scarce knowledge and generate significant value for organizations.
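For readers unfamiliar with the RAG pattern referenced in the abstract, the following is a minimal illustrative sketch, not the authors' implementation: a handful of hypothetical expert "golden nugget" insights are ranked against a user query and the best matches are assembled into a prompt for an LLM. The `embed`, `cosine`, and `retrieve` helpers, the toy bag-of-words embedding, and the example nuggets are placeholders introduced only for illustration; a production system would use an LLM embedding model and a vector store.

```python
# Illustrative sketch of retrieval over a small set of expert "golden nugget"
# insights. The embedding is a toy bag-of-words vector for self-containment;
# real systems would substitute an LLM embedding model.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, nuggets: list[str], k: int = 2) -> list[str]:
    """Return the k expert insights most similar to the query."""
    q = embed(query)
    ranked = sorted(nuggets, key=lambda n: cosine(q, embed(n)), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    # Hypothetical expert insights (placeholders, not from the case study).
    nuggets = [
        "Freeze the chart of accounts before data migration rehearsals.",
        "Pilot custom Fiori apps with power users before integration testing.",
        "Budget extra cutover time for open-item conversion.",
    ]
    context = retrieve("How should we plan the data migration cutover?", nuggets)
    prompt = "Answer using only these expert insights:\n" + "\n".join(context)
    print(prompt)  # This grounded prompt would then be sent to an LLM.
```

The design point the sketch illustrates is the paper's quality-over-quantity claim: because the corpus is a small, curated set of expert insights rather than a large document collection, even simple retrieval can surface the relevant nugget and keep the LLM's answer grounded in it.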