

Many teams talk about “AI security” as if it were one problem. In reality, it is three problems that look similar on a slide but require different controls, owners, and success metrics.
In the webinar The Road Ahead for Agentic AI and Security Operations, the discussion landed on a simple, usable framework: securing from AI, securing the AI, and securing with AI.
The power of this model is that it maps directly to how risk shows up in real environments.
This is the category most people instinctively recognize: threat actors gain speed, scale, and precision with AI. Social engineering and automated reconnaissance improve, and attack variants proliferate faster.
Anomali CEO Ahmed Rubaie underscored the severity and complexity of AI-driven threats: “Machine-driven AI precision threats are a very different game… We haven’t scratched the surface yet.” The implication is straightforward: attackers will compress time, and defenders must keep pace without relying on human bottlenecks.
Practically, securing from AI means:
This is where many programs are immature, and where stakes rise quickly in agentic environments.
If AI and agents become embedded into workflows, then models, pipelines, and agent permissions become part of your critical infrastructure.
Christian Karam, Anomali Sr. Advisor, described how the market is maturing here, with more standardized governance: “There is now a track for governance, what needs to be done, how you need to govern it, how you document the deployments, the models, the partners, the integrations, the third parties, the supply chain. There's a procedure.”
That is encouraging, but it also highlights the work required. “Securing the AI” is not a single control. It is a program that typically includes:
This is the most actionable category for many security teams today, because it translates into operational improvements that reduce load and improve resilience.
Ahmed framed it as a shift away from legacy alerting: “You're actually driving outcomes. This is not the days of an alert engine and so on. Those days are gone.” That is the point. If AI is used only to generate more alerts, you have not improved security. You have increased noise.
Using AI “with” security should center on outcomes like:
The common thread across all three categories is governance. Without governance, autonomy becomes risk.
Ahmed was straightforward about where the industry is headed: “Whatever we're going to do with agentic AI... the responsible thing to do is to make sure that there is a tremendous amount of governance and auditability.”
This is not just compliance theater. In regulated industries, auditability is part of operational safety. And in any enterprise, auditability is what turns AI from “black box automation” into a controllable system.
If you want to operationalize this quickly, treat it like a portfolio across the three categories: securing from AI, securing the AI, and securing with AI.
Then apply a sequencing rule: start with bounded use cases and expand.
Christian described how organizations are moving from bottom-up experimentation to disciplined qualification: “We’ve gone into a different era where we’re focused on the qualification of the use case, unless it has a business case.” This is the right direction. It prevents random AI deployments from becoming ungoverned attack surfaces.
This framework matters because AI is becoming embedded into enterprise systems. That means “AI security” will stop being a side project and start being a core operating requirement.
That is the call to action for security leaders. Not to block AI, but to enable it responsibly, with a model that is clear enough to govern and practical enough to execute.
If you want the full context, examples, and leadership perspectives behind this framework, go listen to the on-demand webinar The Road Ahead for Agentic AI and Security Operations.

