Agentic AI for Autonomous Cyber Defense
Abstract
In the face of increasingly diverse, large-scale, and fast-moving threats, conventional reactive approaches that depend heavily on human monitoring and policy-based automation are no longer sufficient. Attackers are exploiting automation, artificial intelligence, and highly dynamic cloud-native infrastructure; as a result, security operations struggle to maintain current visibility and respond effectively. This article analyzes the future of autonomous cyber defense through agentic AI, which can observe the environment, reason under uncertainty, plan and execute defensive actions, and learn from diverse attacks. Current AI-enabled cyber defense solutions remain largely confined to playbook-based automation and predefined, static rule-based detections. Agentic AI, by virtue of its higher level of autonomy, can hypothesize malicious intent, simulate attack strategies, and implement risk-aligned defensive measures in near real time. The paper presents an architectural model integrating layers of perception, reasoning, planning, action, and learning for proactive and contextualized defense of distributed, heterogeneous environments. It further discusses the role of advanced reasoning methods in balancing decision accuracy with explainability, and the role of automated planning and action layers in translating high-level security goals into operational controls while preserving accountability and governance. The article also identifies the risks of over-automation, unexpected side effects, model inaccuracies, adversarial behavior, and a lack of trust, transparency, and interpretability. It argues that strong governance frameworks, including prioritized human control, compliance with legal and regulatory requirements, continuous monitoring, and ethical design, are essential for the safe, transparent, and accountable deployment of autonomous systems.
This article advocates human-and-AI security models that largely preserve the human responsibility and trust of existing approaches while enabling scalable and adaptable cyber defenses, framing agentic AI as a force multiplier rather than a replacement for human defenders. Such models can become a foundational cyber capability if responsibly designed, governed, and continuously aligned with organizational and societal goals and values.