SecFlow: An Agentic LLM-Based Framework for Modular Cyberattack Analysis and Explainability

Title: SecFlow: An Agentic LLM-Based Framework for Modular Cyberattack Analysis and Explainability
Publication Type: Conference Paper
Year of Publication: 2025
Authors: Blefari, F., Cosentino, C., Furfaro, A., Marozzo, F., Pironti, F. Aurelio
Conference Name: Generative Code Intelligence Workshop (GeCoIN)
Conference Location: Bologna, Italy
Abstract

Large Language Models (LLMs) are rapidly reshaping the software engineering landscape, with applications
ranging from code synthesis to intelligent assistance in debugging and testing. In this work, we propose SecFlow,
a modular agentic framework designed for intelligent classification and contextual explanation of web-based
cyberattacks. Unlike traditional detection pipelines, SecFlow integrates fine-tuned LLM-based classifiers and
a Retrieval-Augmented Generation (RAG) component into a cohesive architecture, enabling contextualized
reasoning and transparent report generation from raw logs. We focus on a single attack vector, Server-Side
Template Injection (SSTI), to illustrate the benefits of a dynamic and explainable security pipeline. The system
autonomously identifies malicious payloads, retrieves relevant domain knowledge, and produces human-readable
reports detailing the nature of the attack, its potential impact, and recommended mitigation strategies. Extensive
evaluations show that our approach outperforms monolithic models in accuracy, robustness, and interpretability.
Furthermore, the architecture is fully extensible, allowing integration with existing security infrastructures
and future inclusion of additional classifiers or automated defense routines. Our findings highlight that LLMs,
even without directly generating code, can serve as reliable reasoning agents in software workflows where
transparency, explainability, and modular decision-making are critical.
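
The abstract outlines a three-stage flow: classify the payload, retrieve relevant domain knowledge, and generate a human-readable report. The following is a minimal, hypothetical Python sketch of that kind of pipeline; the keyword-based classifier, the in-memory knowledge base, and all function names are illustrative placeholders of ours, not the paper's fine-tuned LLM classifier or RAG implementation.

    # Hypothetical sketch of a SecFlow-style pipeline: classify a raw log entry,
    # retrieve related domain knowledge, and emit a human-readable report.
    # The classifier and retriever below are toy stand-ins for the fine-tuned
    # LLM classifier and the RAG component described in the abstract.

    from dataclasses import dataclass

    @dataclass
    class Report:
        verdict: str
        context: list[str]
        mitigation: str

    # Placeholder for the fine-tuned LLM classifier: a simple keyword heuristic.
    def classify_payload(log_line: str) -> str:
        ssti_markers = ("{{", "${", "<%=")
        return "SSTI" if any(m in log_line for m in ssti_markers) else "benign"

    # Placeholder for the RAG retriever: a static in-memory knowledge base.
    KNOWLEDGE_BASE = {
        "SSTI": [
            "Server-Side Template Injection occurs when attacker input reaches the template engine.",
            "Impact can range from data exposure to remote code execution.",
        ],
    }

    def retrieve_context(label: str) -> list[str]:
        return KNOWLEDGE_BASE.get(label, [])

    def generate_report(log_line: str) -> Report:
        label = classify_payload(log_line)
        context = retrieve_context(label)
        mitigation = (
            "Use sandboxed template rendering and validate user input."
            if label == "SSTI" else "No action required."
        )
        return Report(verdict=label, context=context, mitigation=mitigation)

    if __name__ == "__main__":
        sample = "GET /search?q={{7*7}} HTTP/1.1"
        report = generate_report(sample)
        print(report.verdict, report.mitigation, sep="\n")

In the architecture the abstract describes, classify_payload would be backed by a fine-tuned LLM classifier and retrieve_context by the RAG component over a security knowledge corpus; the report stage would additionally explain the attack's nature and potential impact.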