Enhancing Security Operations Center Efficiency through Multi-Model Integration of Large Language Models and SIEM Systems

Abstract

The rising rate of cyberattacks necessitates advanced solutions within Security Operations Centers (SOCs). This research explores the integration of Large Language Models (LLMs) with Security Information and Event Management (SIEM) systems to enhance the triage processes of Tier 1 SOC analysts. We evaluate five LLMs (GPT-4, GPT-3.5, LLaMA 3, Mixtral 8x22B, and OpenHermes 2.5 Mistral 7B) on their ability to classify alerts as 'interesting' or 'not interesting'. Our proposed framework, consisting of an alert generation module, an LLM agent, and a reporting module, was tested using the Wazuh SIEM integrated with CALDERA for adversary emulation. The results demonstrate that GPT-4 achieved the highest accuracy, with a precision of 94%, recall of 92%, and an F1-score of 93%, significantly outperforming the other models. The integration of LLMs accelerated preliminary triage, reducing the average processing time per alert by 40%, and decreased the cognitive load on analysts by automating up to 60% of repetitive tasks. However, challenges were observed, including hallucinations (occurring in approximately 5% of cases), integration complexities, and privacy risks associated with handling sensitive data. To address these issues, we propose a hybrid approach in which LLMs act as co-pilots alongside analysts, incorporating strategies for model transparency, bias detection, and compliance with data privacy regulations. These findings offer practical insights for enhancing SOC efficiency and resilience against evolving cyber threats, emphasizing the importance of prompt optimization, continuous learning, and industry collaboration for large-scale alert training in future studies.
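To make the framework's triage step concrete, the following is a minimal sketch of how an LLM agent might be prompted to label a Wazuh-style alert. The alert field names, prompt wording, and the `parse_verdict` helper are illustrative assumptions for this sketch, not the paper's exact implementation; the call to a real LLM API is omitted, with only the prompt construction and reply normalization shown.

```python
import json

# Hypothetical system instructions for the triage task described in the abstract:
# binary classification of a SIEM alert into 'interesting' or 'not interesting'.
TRIAGE_INSTRUCTIONS = (
    "You are assisting a Tier 1 SOC analyst. Classify the following SIEM alert "
    "as 'interesting' (warrants escalation) or 'not interesting' (benign or noise). "
    "Reply with exactly one of those two labels."
)

def build_triage_prompt(alert: dict) -> str:
    """Serialize the alert as JSON and prepend the triage instructions."""
    return TRIAGE_INSTRUCTIONS + "\n\nAlert:\n" + json.dumps(alert, indent=2)

def parse_verdict(llm_reply: str) -> str:
    """Normalize a free-text model reply to one of the two expected labels."""
    reply = llm_reply.strip().lower()
    return "interesting" if reply.startswith("interesting") else "not interesting"

# Example Wazuh-style alert (fields are illustrative assumptions)
alert = {
    "rule": {"level": 12, "description": "Possible credential dumping (LSASS access)"},
    "agent": {"name": "host-17"},
}
prompt = build_triage_prompt(alert)
# prompt would then be sent to the chosen LLM; its reply is normalized:
verdict = parse_verdict("Interesting: suspicious LSASS access on host-17")
```

In practice the reporting module would attach the verdict and the model's rationale to the alert record; normalizing the reply as above guards against the free-form outputs (and occasional hallucinated labels) noted in the abstract.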
