Workshop paper

Formalizing Observability in Agentic AI Systems

Abstract

A system can be more than the sum of its parts. Agentic systems are central to AI development because they exhibit emergent capabilities that cannot be inferred from studying individual agents alone. However, these systems are challenging to analyze: components such as agents, the LLMs powering them, and their associated tools often function as black boxes. Moreover, the diversity of implementations makes a universal approach to characterizing network properties impractical. We propose that initial studies of emergent behavior in agentic systems should focus on systems where each agent initiates the action of at most one other agent. To support this, we present a theoretical model of agentic systems that emphasizes the role of observability layers in monitoring both agent–agent and agent–environment interfaces. We further discuss how these layers facilitate the study of system-level behavior and constitute a fundamental component in the design of agentic AI systems.