Zhiyuan He, Yijun Yang, et al.
ICML 2024
AI systems face a growing number of security threats that are increasingly exploited in the real world. Shared AI incident reporting practices are therefore emerging in industry as best practice and are increasingly mandated by regulatory requirements. Although non-AI cybersecurity reporting and non-security AI reporting have matured into industrial and policy norms, existing collections of practices do not meet the specific requirements of AI security reporting. In this position paper, we argue that established processes are poorly aligned with AI security reporting because of fundamental shortcomings in handling the distinctive characteristics of AI systems. Some of these shortcomings are immediately addressable, while others remain unresolved, either technically or within social systems, such as the treatment of intellectual property or the ownership of a vulnerability. Based on this position, we examine the limitations of current AI security incident reporting proposals. We conclude that the advent of AI agents will further reinforce the need for specialized AI security incident reporting.
Teryl Taylor, Frederico Araujo, et al.
Big Data 2020
Anisa Halimi, Leonard Dervishi, et al.
PETS 2022
Nikita Janakarajan, Irina Espejo Morales, et al.
NeurIPS 2024