
Securing the Three Pillars of AI: A Practical Framework for Security Leaders

Published on
March 23, 2026

Many teams talk about “AI security” as if it is one problem. In reality, it is three problems that look similar on a slide, but require different controls, owners, and success metrics.

In the webinar The Road Ahead for Agentic AI and Security Operations, the discussion landed on a simple, usable framework:

  1. Secure from AI
  2. Secure the AI
  3. Secure with AI

The power of this model is that it maps directly to how risk shows up in real environments.

1) Securing from AI: Defend Against AI-Powered Threats

This is the category most people instinctively recognize. Threat actors gain speed, scale, and precision with AI. Social engineering and automated reconnaissance improve, and attack variants proliferate faster.

Anomali CEO Ahmed Rubaie underscored the severity and complexity of AI-driven threats: “Machine-driven AI precision threats are a very different game…We haven’t scratched the surface yet.” The implication is straightforward: attackers will compress time, and defenders must keep pace without relying on human bottlenecks.

Practically, securing from AI means:

  • Detecting and responding faster than human-only processes allow
  • Increasing automation where the risk is bounded
  • Prioritizing threats by business impact, not just technical severity

2) Securing the AI: Protect Models, Agents, and Decisioning

This is where many programs are immature, and where stakes rise quickly in agentic environments.

If AI and agents become embedded into workflows, then models, pipelines, and agent permissions become part of your critical infrastructure.

Christian Karam, Anomali Sr. Advisor, described how the market is maturing here, with more standardized governance: “There is now a track for governance, what needs to be done, how you need to govern it, how you document the deployments, the models, the partners, the integrations, the third parties, the supply chain. There's a procedure.”

That is encouraging, but it also highlights the work required. “Securing the AI” is not a single control. It is a program that typically includes:

  • Model and integration inventories
  • Data lineage and provenance
  • Access controls for agents and pipelines
  • Logging, auditability, and decision traceability
  • Third-party and supply chain governance

3) Securing with AI: Apply AI to Improve Defensive Outcomes

This is the most actionable category for many security teams today, because it translates into operational improvements that reduce load and improve resilience.

Ahmed framed it as a shift away from legacy alerting: “You're actually driving outcomes. This is not the days of an alert engine and so on. Those days are gone.” That is the point. If AI is used only to generate more alerts, you have not improved security. You have increased noise.

Using AI “with” security should center on outcomes like:

  • Faster triage with better context
  • Reduced time-to-detect and time-to-contain
  • Better correlation across large-scale data
  • More consistent operational workflows

The Governance Layer Is Not Optional

The common thread across all three categories is governance. Without governance, autonomy becomes risk.

Ahmed was straightforward about where the industry is headed: “Whatever we're going to do with agentic AI... the responsible thing to do is to make sure that there is a tremendous amount of governance and auditability.”

This is not just compliance theater. In regulated industries, auditability is part of operational safety. And in any enterprise, auditability is what turns AI from “black box automation” into a controllable system.

Turning the Framework Into an Execution Plan

If you want to operationalize this quickly, treat it like a portfolio:

  • From AI: update threat modeling and response for AI-enabled attacks
  • The AI: build your inventory, controls, and audit layer for models and agents
  • With AI: focus on measurable security outcomes and workflow improvements

Then apply a sequencing rule: start with bounded use cases and expand.

Christian described how organizations are moving from bottom-up experimentation to disciplined qualification, saying: “We’ve gone into a different era where we’re focused on the qualification of the use case unless it has a business case.” This is the right direction. It prevents random AI deployments from becoming ungoverned attack surfaces.

Why This Matters Now

This framework matters because AI is becoming embedded into enterprise systems. That means “AI security” will stop being a side project and start being a core operating requirement.

That is the call to action for security leaders. Not to block AI, but to enable it responsibly, with a model that is clear enough to govern and practical enough to execute.

If you want the full context, examples, and leadership perspectives behind this framework, go listen to the on-demand webinar The Road Ahead for Agentic AI and Security Operations.
