Tuning Detections Without a Backlog

How AI Turns Triage Outcomes Into Better Rules

Katie Campisi

Most detection engineers have a to-do list somewhere. Rules that fire too much, patterns that need scoping down, and detections that should have been updated three months ago. The list doesn't get shorter because tuning is project work, and project work never gets prioritized when there’s a long alert queue.

This post covers the structural reason triage speed alone hits a ceiling, and what it looks like when triage outcomes actually feed back into detection logic instead of disappearing into a ticket.

The Triage Ceiling

Cutting investigation time from 30 minutes to 3 is a real improvement, and it has a real effect on how much work a team can absorb. But alert volume isn't a function of investigation speed; it's a function of detection quality.

If the same false positive fires 40 times a week, making each dismissal faster doesn't change the rate at which it fires. The alert still exists, the analyst still processes it, and the detection never learns that this pattern, in this environment, means nothing.
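To make that concrete, here is a minimal Panther-style rule, a hypothetical sketch rather than a shipped detection, that flags every AWS console login without MFA. The CloudTrail field names are real; the over-broad scoping is the point.

```python
# A hypothetical Panther-style rule: alert on any AWS console login
# without MFA. The CloudTrail field names are real; the scoping is
# deliberately over-broad.
def rule(event):
    if event.get("eventName") != "ConsoleLogin":
        return False
    # Fires on every non-MFA login, including a known automation account
    # that signs in on a schedule, producing dozens of benign alerts a week.
    return event.get("additionalEventData", {}).get("MFAUsed") != "Yes"
```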

This is the ceiling: as long as triage outcomes aren't feeding back into detection logic, every analyst action is a one-off event. The judgment that this alert is benign, and specifically why, stays with whoever handled it. It doesn't propagate. The next time the same rule fires against the same pattern, the process starts over.

The only way to reduce alert volume over time is to close that loop.

What Closing the Loop Actually Means

Every triage outcome contains information. A confirmed true positive tells you the detection is working. A false positive tells you something about how the rule is scoped, and often points to a specific change that would prevent the same pattern from firing again. An override tells you that organizational context matters here, and that future instances should account for it.

In Panther, that information doesn't stop at disposition. Triage outcomes feed back into the detection engine natively, because the AI agents have direct access to the detection code: the same Python-based rules that generated the alert in the first place. When the AI identifies a benign pattern, it can trace back to the specific rule that fired and propose a targeted fix. The same false positive doesn't come back.
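Continuing the hypothetical rule above, a targeted fix might scope the detection with an allowlist. The ARN and field names are illustrative, not a prescribed fix:

```python
# Hypothetical tuned version of the same rule. The triage outcome is
# encoded directly in the detection: logins from the known automation
# principal are expected, so the rule is scoped down instead of the
# same alert being dismissed again next week.
KNOWN_AUTOMATION_ARNS = {
    # Labeled benign during triage: hourly scheduled sign-in.
    "arn:aws:iam::123456789012:user/ci-deploy-bot",
}

def rule(event):
    if event.get("eventName") != "ConsoleLogin":
        return False
    actor = event.get("userIdentity", {}).get("arn", "")
    if actor in KNOWN_AUTOMATION_ARNS:
        return False  # the triage outcome, now part of the detection logic
    return event.get("additionalEventData", {}).get("MFAUsed") != "Yes"
```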

That proposed fix surfaces as a recommended next step, actionable with one click. The proposal includes the code change itself, an explanation of why it was suggested, and automated unit tests. It even connects with GitHub, integrating AI-suggested changes directly into existing CI/CD workflows. The analyst who identified the false positive doesn't need to file a request, follow up with an engineer, or remember to revisit it six weeks later when the queue clears. The feedback loop runs without a handoff.
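In Panther's ecosystem, rule tests normally live alongside the detection and run via panther_analysis_tool; the plain-Python sketch below just illustrates what a proposed change has to prove before it merges. The module name and event payloads are hypothetical:

```python
# Illustrative regression tests for the tuned rule above, written as
# plain pytest-style Python. The module name is hypothetical.
from tuned_console_login_rule import rule  # hypothetical module name

def test_known_automation_login_no_longer_fires():
    event = {
        "eventName": "ConsoleLogin",
        "userIdentity": {"arn": "arn:aws:iam::123456789012:user/ci-deploy-bot"},
        "additionalEventData": {"MFAUsed": "No"},
    }
    assert rule(event) is False  # the recurring false positive stays closed

def test_unlisted_non_mfa_login_still_fires():
    event = {
        "eventName": "ConsoleLogin",
        "userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"},
        "additionalEventData": {"MFAUsed": "No"},
    }
    assert rule(event) is True  # coverage for real risk is preserved
```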

This is what Tealium's team saw in practice: after integrating Panther’s AI alert triage into their workflow, they reduced total alert volume by 85% and cut detection creation time from four to five hours down to ten minutes. Alert volume at that scale only comes down when detections improve continuously, with outcomes encoded back into their logic.

Detection Engineering Without the Backlog

There's a version of detection engineering that's reactive by necessity: someone escalates a noisy rule, a senior engineer carves out time, the fix gets made, and a few months later the same cycle repeats for a different rule. The backlog of tuning work grows faster than it can be addressed because every new log source, environment change, and expanding attack surface creates new opportunities for drift.

Panther's scheduled AI prompts address this proactively. Rather than waiting for analysts to flag a problematic detection, you can configure Panther AI to analyze alert volume patterns on a regular cadence: reviewing which rules are generating noise, identifying candidates for tuning, and surfacing findings before the noise hardens into analyst fatigue. The output is a structured review, and it runs without anyone initiating it.
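The shape of that review is easy to sketch. The function below is a hypothetical stand-in rather than Panther's implementation: it ranks rules by how often their alerts were dismissed as benign over a window, and the record fields are assumptions.

```python
# A rough sketch of the review a scheduled prompt can produce: rank rules
# by benign-dismissal rate. The alert records and field names here are
# hypothetical stand-ins for whatever the data lake actually stores.
from collections import Counter

def tuning_candidates(alerts, min_alerts=20, fp_rate=0.8):
    fired, benign = Counter(), Counter()
    for alert in alerts:
        fired[alert["rule_id"]] += 1
        if alert["disposition"] == "false_positive":
            benign[alert["rule_id"]] += 1
    # Surface rules whose alerts are overwhelmingly dismissed as benign.
    noisy = [
        (rule_id, benign[rule_id] / fired[rule_id])
        for rule_id in fired
        if fired[rule_id] >= min_alerts
        and benign[rule_id] / fired[rule_id] >= fp_rate
    ]
    return sorted(noisy, key=lambda pair: pair[1], reverse=True)
```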

Loglass's two-person IT team was running investigations manually across Google Workspace, Slack, and Notion, checking each platform separately and piecing together what happened across disconnected audit logs. After consolidating into Panther and tuning detections week over week using Panther's MCP server and Cursor, the team eliminated high-severity alerts entirely (a 100% reduction) and cut medium-severity alerts by 96% within a single month, with about 80% of alerts now resolved automatically.

How the Feedback Loop Works Architecturally

This capability requires native access to detection logic at the code level. Panther AI can read and modify detection rules directly because the agents, the data lake, and the detection engine share the same foundation. When a triage outcome points to a rule change, the AI can trace back to the specific detection, propose a fix, and surface it as a recommended next step.
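As a conceptual sketch only (every identifier below is hypothetical, not Panther's actual API), the trace-back step works because the alert carries a durable reference to the rule that produced it:

```python
# Conceptual sketch of the trace-back step; every identifier here is
# hypothetical. The architectural point: the alert carries the ID of the
# rule that produced it, so a proposed fix can target that exact
# detection instead of landing in a ticket queue.
def propose_fix(alert, rule_store, suggest_patch):
    source = rule_store.get_source(alert["rule_id"])  # the rule's Python body
    proposal = suggest_patch(source, alert)           # e.g. an AI-generated change,
                                                      # rationale, and unit tests
    return {"rule_id": alert["rule_id"], **proposal}  # one reviewable unit
```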

Infoblox saw this play out in the tuning process specifically. When their team onboarded a new log source, Panther's AI identified that a cluster of alerts was coming from an IAM role associated with Kubernetes workloads running on a regular hourly interval: expected automated behavior that a new rule wouldn't inherently know about. That context, surfaced immediately, let the team tune the detection with confidence rather than spending hours manually reviewing the pattern. Their detection tuning time dropped by 70%.

What Changes When the System Gets Smarter Over Time

The practical effect of compounding detection intelligence is that coverage decisions change. Teams that spend a lot of time managing alert noise tend to be conservative about what they monitor, since each new log source is another source of potential false positives to absorb. When detections continuously improve based on outcomes, that calculus shifts.

Tealium's Donald Scherer described the shift: "We went from not wanting to monitor any more log sources to actively searching for more logs to bring in." The coverage anxiety that keeps teams from expanding visibility goes away when the team has evidence that new sources won't stay noisy.

Detection quality is a compounding asset. Every outcome that feeds back into a rule is one less false positive the team will have to process again. Over time, analysts spend less time clearing recurring noise and more time investigating threats that warrant human attention.

Want to see how Panther's closed-loop architecture handles detection tuning in your environment? Book a demo or read how Tealium and Loglass are running this in production today.
