

AI is moving from something you query to something that acts. That shift is the real inflection point for security teams, because it changes what “access” means, what “identity” means, and how quickly risk can propagate across systems.
During The Road Ahead for Agentic AI and Security Operations webinar, Anomali CEO Ahmed Rubaie and Sr. Advisor Christian Karam made a clear distinction: the future of enterprise AI is not assistance, it’s agency. As AI systems begin acting across tools and environments, security can no longer focus solely on monitoring outputs. It must account for autonomous decision-making embedded directly into operational workflows.
Many enterprise teams are still mentally anchored to copilots: AI as a helper that accelerates work someone already decided to do. But agentic AI pushes beyond suggestion into execution.
That matters because execution implies permissions, integrations, tool access, and the ability to move laterally. When systems can act, they can also be abused.
Christian noted, “We no longer just have or will have in the future human employees. We will have agents as employees.” Those “digital employees” will not just live inside a single app. They will touch identity systems, data sources, ticketing, cloud infrastructure, and business processes.
And once you accept that agents operate like employees, the natural next question becomes: how do you secure them like employees?
Human workers come with a mature set of security assumptions: onboarding, least privilege, monitoring, segmentation, audits, offboarding. Agentic systems require the same rigor, but the failure modes look different.
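The employee analogy can be made concrete. Here is a minimal Python sketch of what an agent "employee record" might look like, with onboarding, least-privilege scopes, and offboarding that revokes everything. All names and fields here are hypothetical illustrations, not from any particular product:

```python
# Illustrative sketch: treating an agent like an employee record with a
# lifecycle. All field names are hypothetical, not from a real framework.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                                  # accountable human, as with any hire
    scopes: set = field(default_factory=set)    # least privilege: start with nothing
    onboarded_at: datetime = field(default_factory=datetime.utcnow)
    offboarded_at: Optional[datetime] = None

    def offboard(self) -> None:
        """Offboarding must revoke every scope the agent accumulated."""
        self.scopes.clear()
        self.offboarded_at = datetime.utcnow()

bot = AgentIdentity("soc-triage-01", owner="alice@example.com")
bot.scopes.add("read:alerts")
bot.offboard()
assert bot.scopes == set()          # nothing survives offboarding
```

The point of the sketch is the shape, not the code: every agent has an accountable owner, starts with no access, and has an offboarding path that actually removes what it gathered.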
Christian flagged the practical implications: “They will have a different behavior, they will have their own identities, they will have to be governed.” If an agent has an identity, it can accumulate access. If it accumulates access, it becomes a target. And if it is compromised, it can move faster than a human ever could.
This is where many programs need to evolve. Traditional identity and access governance was built for humans and service accounts. “Digital employees” blur those lines.
Christian also warned that unrestricted access is a real risk:
“The ability of AI to find its way into an organization to map data, to map relationships, identities, is phenomenal. And if you're not restrictive by default, it can be abusive.”
This highlights a structural truth: agents are built to discover, connect, and operate across systems. That capability is precisely what makes them valuable, and also what makes them dangerous without guardrails.
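What “restrictive by default” means in practice is simple to express: nothing is permitted unless it was explicitly granted. A minimal, hypothetical sketch (the agent names and actions are invented for illustration):

```python
# Illustrative sketch: deny-by-default access policy for AI agents.
# Agent IDs and action names are hypothetical examples.

ALLOWED_ACTIONS = {
    "triage-agent": {"read:tickets", "comment:tickets"},
}

def is_permitted(agent_id: str, action: str) -> bool:
    """Return True only if the action is explicitly granted; default is deny."""
    return action in ALLOWED_ACTIONS.get(agent_id, set())

assert is_permitted("triage-agent", "read:tickets")        # explicitly granted
assert not is_permitted("triage-agent", "delete:tickets")  # never granted
assert not is_permitted("unknown-agent", "read:tickets")   # unknown agents get nothing
```

Under this posture, an agent’s “phenomenal” ability to map data and relationships is bounded by an allow-list rather than by whatever it can reach.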
Security teams are trained to detect malicious artifacts. They are less practiced at detecting intent.
Christian described a control gap that emerges in agentic environments: “Toxicity, harmfulness, mis-intent is not something that our security team is focused on.” It’s a major shift in what defenders must evaluate.
In an agentic world, the “what happened” (indicators, logs, signatures) still matters. But the “why” behind actions matters more, because agents can execute legitimate actions for illegitimate purposes. As Ahmed noted, “Agents must be intelligent with context. Otherwise, you’re going to wreak havoc across the entire system.”
This is why agentic security needs to incorporate business context, not just technical telemetry. Christian framed it as a redesign of what defending even means: “We have to really redesign the logic of what does it mean to become a defender, become an enterprise defender in the core business of the enterprise.”
But that’s not a call for security teams to become business analysts. It is a call to treat business context as a first-class input to detection and response, especially as AI begins to execute actions that look “normal” in isolation.
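As a toy illustration of business context as a detection input, consider scoring an action not by what it is but by the context attached to it. Every field and threshold below is invented for illustration; real programs would draw these signals from their own identity and data-classification systems:

```python
# Illustrative sketch: a "normal" action becomes suspicious once business
# context is attached. Fields and weights are hypothetical.

def risk_score(action: dict) -> int:
    score = 0
    if action["data_class"] == "restricted":
        score += 2          # business context: sensitivity of the data touched
    if action["actor_type"] == "agent" and action["purpose"] is None:
        score += 3          # an agent acting with no stated business purpose
    if action["hour"] not in range(8, 19):
        score += 1          # outside the owning team's working hours
    return score

# A data export that looks legitimate in isolation scores high with context.
export = {"data_class": "restricted", "actor_type": "agent",
          "purpose": None, "hour": 3}
assert risk_score(export) == 6
```

The technical telemetry alone (“an export happened”) is the “what”; the data class, the missing purpose, and the timing supply the “why.”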
One practical theme from the conversation was sequencing. Agentic capability needs to be earned over time.
Christian described the emerging pattern: “The principles of allowing it to gain more access over time has been followed right now. It's becoming kind of de facto.” In other words, avoid the temptation to deploy agents with broad access on day one. Treat them like a new hire, not a seasoned admin.
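One way to operationalize “earned access” is to tie an agent’s permission tier to its audited track record. The tiers and thresholds below are hypothetical examples, not recommendations:

```python
# Illustrative sketch: permission tiers that grow only with a clean,
# audited track record. Thresholds and scope names are hypothetical.

TIERS = [
    (0,   {"read"}),                        # day one: read-only, like a new hire
    (100, {"read", "write"}),               # after 100 incident-free runs
    (500, {"read", "write", "provision"}),  # after 500 incident-free runs
]

def permissions_for(clean_runs: int) -> set:
    granted = set()
    for threshold, perms in TIERS:          # TIERS is ordered by threshold
        if clean_runs >= threshold:
            granted = perms                 # keep the highest tier reached
    return granted

assert permissions_for(0) == {"read"}
assert permissions_for(250) == {"read", "write"}
assert permissions_for(1000) == {"read", "write", "provision"}
```

A real deployment would also reset or freeze the tier on any security finding, mirroring how human access reviews work.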
This is not about slowing innovation, but about making innovation sustainable.
Agentic AI changes the unit of security planning. Instead of securing a tool, you are securing an actor.
If you are in a security leadership role, the conversation to have internally is not “Should we adopt AI?” It is: What identity will each agent carry, and who owns it? What access does an agent get on day one, and how does it earn more? And how will agent behavior be governed, monitored, and eventually offboarded?
The organizations that answer those questions early will move faster later, because they will not be forced to redesign everything mid-flight. Ahmed encourages security leaders to lean into the AI reality and, “Spend less time chasing the past, spend more time adopting where you’re headed in the future.”
Want the full discussion, including how agents change governance, identity, and defensive thinking? Go listen to the on-demand webinar The Road Ahead for Agentic AI and Security Operations.