Segev Shlomov, Xiang Deng, et al.
AAAI 2025
The complexity of modern computing environments and the growing sophistication of cyber threats necessitate a more robust, adaptive, and automated approach to security enforcement. In this paper, we present a framework leveraging large language models (LLMs) to automate attack mitigation policy compliance through a combination of in-context learning and retrieval-augmented generation (RAG). We begin by describing how our system collects and manages both tool and API specifications, storing them in a vector database to enable efficient retrieval of relevant information. We then detail the architectural pipeline, which first decomposes high-level mitigation policies into discrete tasks and then translates each task into a set of actionable API calls. Our empirical evaluation, conducted on publicly available cyber threat intelligence (CTI) policies in STIXv2 format and Windows API documentation, demonstrates significant improvements in precision, recall, and F1-score when employing RAG compared to a non-RAG baseline.
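The retrieval step the abstract describes, embedding API specifications into a vector store and fetching the most relevant ones for each decomposed task, can be sketched as follows. This is a minimal illustration only: the function and variable names (`embed`, `retrieve`, `api_specs`) are hypothetical, the bag-of-words similarity stands in for a real neural embedding model, and the specs shown are paraphrased Windows API descriptions, not the paper's actual corpus.

```python
# Toy sketch of RAG-style retrieval over API specifications.
# A production system would use a neural embedding model and a vector DB;
# here, a bag-of-words cosine similarity keeps the example self-contained.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; stands in for a neural encoder."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# In-memory "vector store": (embedding, spec) pairs for API documentation.
api_specs = [
    "TerminateProcess terminates a process and its threads",
    "RegSetValueExW sets the data for a registry value",
    "NetUserSetInfo sets user account parameters on a server",
]
store = [(embed(s), s) for s in api_specs]


def retrieve(task: str, k: int = 1) -> list[str]:
    """Return the k API specs most similar to a decomposed task description."""
    q = embed(task)
    ranked = sorted(store, key=lambda e: cosine(q, e[0]), reverse=True)
    return [spec for _, spec in ranked[:k]]


print(retrieve("terminate a running process"))
```

The retrieved specs would then be placed in the LLM's context so the model grounds its generated API calls in real documentation rather than guessing signatures.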
Phuong Nguyen, Vatche Isahagian, et al.
IJCAI 2022
Neelamadhav Gantayat, Avirup Saha, et al.
CODS 2025
Santosh Palaskar, Vijay Ekambaram, et al.
IAAI 2024