

Security leaders are operating inside a widening gap, driven by the usual suspects: rising alert volume, investigations that take too long, and headcount that never grows to match.
AI is often positioned as the bridge across that gap, promising productivity gains without additional headcount. But for many organizations, that promise doesn’t materialize. New platforms are deployed, dashboards multiply, and activity increases, but day-to-day operations don’t change much. Analysts remain overloaded, investigations stay slow, and leadership struggles to point to clear evidence that risk has meaningfully decreased.
During a recent webinar, Chris Vincent, Chief Commercial Officer at Anomali, framed the problem bluntly. “Everyone’s promising AI-driven productivity,” he said, “but most security teams are not seeing it.”
Why Security Productivity Breaks in Practice
At the executive level, productivity initiatives often begin with misaligned expectations. Boards push for transformational improvements while funding models assume incremental change. George Moser, former Fortune 1000 CISO and now Chief Growth Officer at Anomali, described this tension as structural rather than tactical.
Organizations expect dramatic reductions in risk and response time, but don’t create the space required to redesign how work actually flows through the SOC. AI initiatives are launched as overlays on top of existing processes instead of as opportunities to remove steps, eliminate handoffs, or redefine ownership.
Most SOC teams are already operating at or near capacity. New platforms introduce additional work, including tuning detections, labeling data, supervising models, and rewriting playbooks. That work falls on the same senior analysts the organization already relies on for daily operations.
“Teams don’t fail from lack of tools,” Moser said. “They fail from lack of time.”
The result is that AI speeds up individual tasks without reducing the total amount of work that reaches human analysts. Investigations still require the same validation steps. Alerts still need to be double-checked, so productivity gains disappear under operational friction.
If AI isn’t helping security teams make meaningful gains, it may be getting in the way while attackers use the same technology to accelerate their own operations. The median time for an attacker to move laterally after initial access is measured in minutes, not hours or days. In many cases, defenders are still triaging alerts long after meaningful damage has already occurred.
From a practitioner standpoint, productivity issues often originate earlier than most teams expect: at the data layer.
For years, security analytics strategy emphasized ingesting as much data as possible. More data was assumed to produce better visibility. Patrick Holt, Senior Principal Product Manager at Anomali, explained why that logic fails at scale.
“As you bring in more and more data, you also bring more and more noise,” he said. Without a clear plan for how data supports decisions, volume becomes a liability rather than an asset.
This challenge is compounded by inconsistency across data sources. Logs are defined by vendors, not by analytic needs. Windows logs differ from Unix logs. Identity providers use their own field names. Even within a single platform, formats evolve over time.
“If you’re looking at vendor logs without doing anything to standardize them,” Holt said, “it’s garbage in and garbage out.”
AI systems rely on structure to reason effectively. When schemas are inconsistent, correlation becomes unreliable and trust erodes. Teams end up trying to correlate apples and oranges. Superficial similarities exist, but meaningful comparison does not.
Standardized schemas provide the foundation that allows analytics to function predictably. When logs conform to a consistent, well-documented structure, AI systems can interpret events across sources without guesswork. Decisions become faster not because the model is more advanced, but because the context is stable.
“The foundation of AI isn’t the AI itself,” Holt said. “It’s the data context it’s given.”
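To make the “apples and oranges” problem concrete, here is a minimal sketch of field-level normalization. The source names and field mappings below are illustrative, not any vendor’s actual log format; real programs typically adopt a published standard such as OCSF or the Elastic Common Schema rather than inventing their own.

```python
# Illustrative mappings from vendor-specific field names to one common
# schema. These names are hypothetical examples, not real log formats.
FIELD_MAPS = {
    "windows_security": {"TargetUserName": "user.name", "IpAddress": "source.ip"},
    "unix_auth": {"acct": "user.name", "rhost": "source.ip"},
}

def normalize(source: str, raw_event: dict) -> dict:
    """Rename vendor-specific fields into one consistent schema so
    downstream correlation compares like with like."""
    normalized = {"event.source": source}
    for vendor_field, common_field in FIELD_MAPS[source].items():
        if vendor_field in raw_event:
            normalized[common_field] = raw_event[vendor_field]
    return normalized

# Two differently shaped login events now share one structure:
win = normalize("windows_security", {"TargetUserName": "jdoe", "IpAddress": "10.0.0.5"})
nix = normalize("unix_auth", {"acct": "jdoe", "rhost": "10.0.0.5"})
assert win["user.name"] == nix["user.name"]  # apples to apples at last
```

Once every source lands in the same shape, correlation becomes a lookup instead of a guessing game, which is exactly the stable context Holt describes.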
Productivity improves when analytics reduce uncertainty early in the workflow. That does not happen by automating everything indiscriminately.
Holt described three principles that consistently separate effective programs from noisy ones.
Speed reinforces all of this. Fast, flexible search enables analysts to ask simple questions and get answers without mastering complex query languages. When results return in seconds instead of minutes or hours, investigations continue without interruption.
Speed alone does not create productivity, but without it, even well-designed workflows stall.
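As a rough illustration of why speed changes analyst behavior, the toy index below answers simple field/value questions with set intersections rather than full scans. It reuses the illustrative schema from the earlier sketch; a production deployment would of course sit on a purpose-built search backend.

```python
from collections import defaultdict

# Normalized events in the illustrative schema from the earlier sketch.
events = [
    {"user.name": "jdoe", "source.ip": "10.0.0.5", "event.source": "unix_auth"},
    {"user.name": "asmith", "source.ip": "10.0.0.9", "event.source": "windows_security"},
]

# Build an inverted index: each (field, value) pair points at matching events.
index = defaultdict(set)
for i, event in enumerate(events):
    for field, value in event.items():
        index[(field, value)].add(i)

def search(**criteria):
    """Answer simple questions ("which events involve this user?") with
    set intersections, no query language required. Underscores in the
    keyword arguments stand in for the dots in schema field names."""
    hits = None
    for field, value in criteria.items():
        matches = index[(field.replace("_", "."), value)]
        hits = matches if hits is None else hits & matches
    return [events[i] for i in sorted(hits or [])]

print(search(user_name="jdoe"))  # an intersection of small sets, not a scan
```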
From a leadership perspective, productivity must be observable and defensible. Feature lists are not evidence.
“Scale only happens when work is removed, not just shifted around,” Moser said. He urged leaders to look for outcomes that indicate real progress, including fewer alerts requiring human review, shorter investigations, reduced escalation to senior engineers, and faster time to decision.
Time-to-decision is especially critical. Faster alerting has limited value if analysts still need hours to determine whether an incident is real. IBM’s Cost of a Data Breach Report consistently shows that longer detection and containment times materially increase breach impact, reinforcing the importance of early, confident decisions rather than reactive investigation.
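Time-to-decision is also easy to instrument. A minimal sketch, assuming hypothetical incident records with alert and decision timestamps: compute the median interval from first alert to a confirmed real-or-benign call, and track it as the kind of observable evidence described above.

```python
from datetime import datetime, timedelta

# Hypothetical incident records: when the alert fired, and when an
# analyst reached a confident real-or-benign decision.
incidents = [
    {"alerted": datetime(2024, 5, 1, 9, 0),  "decided": datetime(2024, 5, 1, 9, 42)},
    {"alerted": datetime(2024, 5, 1, 11, 0), "decided": datetime(2024, 5, 1, 11, 7)},
]

def median_time_to_decision(records) -> timedelta:
    """Median interval from first alert to a confirmed decision."""
    deltas = sorted(r["decided"] - r["alerted"] for r in records)
    mid = len(deltas) // 2
    if len(deltas) % 2:
        return deltas[mid]
    return (deltas[mid - 1] + deltas[mid]) / 2  # average the middle pair

print(median_time_to_decision(incidents))  # 0:24:30 -- trend this, not features
```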
Trust sits at the center of this equation, and its effect is obvious once it is earned: analysts act on automated conclusions instead of re-verifying every output by hand.
Without trust, automation increases cognitive load instead of reducing it.
Even when teams recognize these challenges, modernization introduces another layer of risk. Once a platform is deployed, organizations are effectively locked in. Large-scale rip-and-replace migrations are rare for good reason.
“Big bang migrations mostly always fail,” Moser said. Detection coverage becomes fragmented, workflows change mid-response, and blind spots appear in unpredictable ways.
A more survivable approach treats modernization as coexistence rather than replacement. New platforms operate alongside existing systems with clearly defined boundaries around which workloads move first. Success criteria are established early, and rollback is treated as a design requirement rather than a failure condition.
Time-to-value matters. If early phases do not demonstrate measurable improvement, modernization efforts stall. Progress needs to be deliberate. Small, validated steps reduce risk and build confidence; rushed transitions amplify complexity.
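One way to express that discipline in code is to route workloads explicitly and make rollback a one-line, first-class operation. A minimal sketch, with hypothetical workload names and success thresholds:

```python
# Workloads deliberately moved to the new platform, and the success
# criteria each one must meet to stay there. All values are hypothetical.
MIGRATED = {"phishing_triage"}
SUCCESS_CRITERIA = {"phishing_triage": {"max_error_rate": 0.02}}

def route(workload: str) -> str:
    """Coexistence: only deliberately migrated workloads hit the new
    platform; everything else stays on the legacy system."""
    return "new_platform" if workload in MIGRATED else "legacy_platform"

def enforce_criteria(workload: str, observed_error_rate: float) -> str:
    """Rollback as a design requirement, not a failure condition."""
    limit = SUCCESS_CRITERIA[workload]["max_error_rate"]
    if observed_error_rate > limit:
        MIGRATED.discard(workload)  # instant, reversible rollback
        return f"{workload}: rolled back ({observed_error_rate:.1%} > {limit:.1%})"
    return f"{workload}: criteria met, stays on new platform"

print(route("phishing_triage"))                   # -> new_platform
print(enforce_criteria("phishing_triage", 0.05))  # exceeds threshold
print(route("phishing_triage"))                   # -> legacy_platform
```

The point is not the mechanism but the posture: every move is deliberate, measured against criteria agreed before migration, and trivially reversible.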
Technology is abundant, but security teams still lack alignment between data foundations, workflows, and execution discipline.
Productivity does not emerge from buying a platform labeled with “AI.” It emerges when leaders redesign how work flows through the SOC, reduce the volume of low-value tasks reaching analysts, and demand measurable outcomes rather than aspirational promises.
As Moser concluded, platform decisions are not shortcuts to transformation. “If we stop treating platform decisions like outcomes,” he said, “we’d all be in a better place.”
Listen to the full conversation here.