
The promise of AI in the SOC has mostly been framed around speed: triage alerts faster, close tickets faster, respond faster. Speed matters. But speed alone doesn't change the structural problem. It just gets you to the bottom of the same broken queue a little quicker.
Here's a deeper look at how agentic alert triage works inside Panther — what it actually does during an investigation, how triage outcomes feed back into detection logic, and what changes when analysts stop spending their days copying indicators between tabs.
What an Agentic Investigation Actually Looks Like
Most AI-assisted triage works like a summarizer. An alert fires, the AI pulls context, writes a summary, and hands it back with a probability score. The analyst still decides. The analyst still does the work.
Panther AI Alert Triage works differently. When an alert fires, Panther launches an autonomous investigation. It pivots across your data lake, reviews historical alert patterns for the same entity, and queries connected tools via MCP — your identity provider, your code repositories, your ticketing system — to pull live context from across your environment. It then evaluates what it finds against your organization's runbooks and delivers a definitive classification of risky, benign, or inconclusive, along with transparent reasoning that shows every step it took and every piece of evidence it considered.
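To make the flow concrete, here is a minimal sketch of that investigation loop in Python. Every name here is illustrative — this is not Panther's API, just the shape of the process: gather evidence from several sources, record each step, then classify against a runbook.

```python
# Illustrative sketch of an agentic investigation loop.
# All function and field names are assumptions, not Panther internals.

def investigate(alert, query_data_lake, alert_history, mcp_lookups, runbook):
    """Gather evidence from several sources, then classify with full reasoning."""
    reasoning = []
    evidence = []

    # 1. Pivot across the data lake for related activity by the same entity.
    related = query_data_lake(alert["entity"])
    reasoning.append(f"Data lake: {len(related)} related events for {alert['entity']}")
    evidence += related

    # 2. Review historical alerts for that entity.
    history = alert_history(alert["entity"])
    reasoning.append(f"History: {len(history)} prior alerts for this entity")
    evidence += history

    # 3. Pull live context from connected tools (identity provider, repos, tickets).
    for name, lookup in mcp_lookups.items():
        context = lookup(alert["entity"])
        reasoning.append(f"{name}: {context}")
        evidence.append(context)

    # 4. Evaluate the evidence against the organization's runbook.
    classification = runbook(alert, evidence)  # "risky" | "benign" | "inconclusive"
    reasoning.append(f"Runbook verdict: {classification}")
    return {"classification": classification, "reasoning": reasoning}

# Toy example: a login alert for a known service account.
alert = {"id": "a-1", "entity": "svc-ci"}
result = investigate(
    alert,
    query_data_lake=lambda e: [{"event": "login", "entity": e}],
    alert_history=lambda e: [],
    mcp_lookups={"okta": lambda e: f"{e} is a registered service account"},
    runbook=lambda a, ev: "benign"
    if any("service account" in str(item) for item in ev)
    else "inconclusive",
)
print(result["classification"])  # prints "benign"
```

The point of the sketch: the output is not a score, it's a verdict plus the ordered list of steps and evidence behind it — which is exactly what the analyst reviews.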
The analyst gets a judgment backed by evidence, not a probability score to act on. Their job shifts from running the investigation to reviewing and confirming it.
The results are concrete. HealthEquity cut mean time to triage from 30–45 minutes to under 5 minutes for Tier 1 and Tier 2 alerts. Cresta saw 50% faster triage. Infoblox achieved 50% faster triage and investigation across the board.
The Loop That Makes the Problem Smaller
There's an architectural reason most AI tools don't shrink the alert problem over time. They sit on top of your SIEM as an overlay. They can triage faster, but they can't touch the detection logic underneath. When an analyst identifies a false positive, closes the alert, and moves on, the detection that fired it stays exactly the same. Tomorrow it fires again.
Panther AI has native access to the data lake, the detection code, and the alert history. That access is what makes closing the loop possible.
When an analyst confirms a benign alert, that outcome is captured along with the reasoning and context. When a pattern emerges across similar alerts, Panther AI traces it back to the specific detection rule and proposes a targeted fix: a GitHub pull request with a diff, an explanation, and unit tests. The same false positive doesn't come back, and detection quality improves continuously.
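The mechanics of that loop can be sketched in a few lines. This is a hedged illustration of the idea — triage outcomes accumulate per rule, and a recurring benign pattern yields a proposed fix — not Panther's actual implementation or data model.

```python
# Sketch of the feedback loop: confirmed-benign outcomes accumulate per
# (rule, entity) pattern; a repeat offender yields a proposed detection fix.
# Structures and thresholds here are illustrative, not Panther internals.
from collections import Counter

def propose_fixes(triage_outcomes, min_benign=3):
    """Suggest an exclusion once the same benign pattern repeats min_benign times."""
    benign = Counter(
        (o["rule_id"], o["entity"])
        for o in triage_outcomes
        if o["verdict"] == "benign"
    )
    fixes = []
    for (rule_id, entity), count in benign.items():
        if count >= min_benign:
            fixes.append({
                "rule_id": rule_id,
                "change": f"add '{entity}' to the rule's allow-list",
                "evidence": f"{count} confirmed-benign alerts for this pattern",
            })
    return fixes  # in Panther, a fix becomes a PR with a diff and unit tests

outcomes = [
    {"rule_id": "AWS.Console.Login", "entity": "svc-ci", "verdict": "benign"},
] * 3 + [
    {"rule_id": "AWS.Console.Login", "entity": "alice", "verdict": "risky"},
]
fixes = propose_fixes(outcomes)
```

Note what doesn't trigger a fix: the single risky outcome for `alice` is untouched. Only a confirmed, repeated benign pattern gets proposed back into the detection code.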
Alert volume decreases because your detections are getting more accurate over time. Every triage is an investment in a smarter system.
Tealium's experience makes this tangible. Alert volume dropped by 80–90% through continuous detection improvement. Detection build time fell from 4–5 hours per rule to around 10 minutes. And as detection coverage became something they could expand confidently, the team stopped avoiding new log sources and started actively seeking them out.
As Jason, the Lead Security Engineer at Tealium, put it: "When you look at the thinking steps of the AI in the platform, it's doing all of the things that a sophisticated engineer would do on their best day — and it's doing it on every alert, every time, 24 hours a day, no fatigue."
Auto-Close and 24/7 Coverage Without a Night Shift
When Panther AI's confidence on a benign classification meets a user-configured threshold, the alert closes automatically with a complete audit trail. Every auto-closed alert includes the AI's full reasoning, the evidence considered, the confidence score, and a timestamp. Nothing is a black box.
Teams configure this conservatively to start: low-severity alerts only, scoped to specific detection tags they've already validated. As accuracy is confirmed over time, the scope expands. The system earns autonomy gradually.
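A conservative policy like that might look as follows. The field names and threshold are assumptions for illustration, not Panther's configuration schema — the point is that auto-close requires every gate to pass, not just a high confidence score.

```python
# Hedged sketch of a conservative auto-close policy.
# Field names and values are assumptions, not Panther's config schema.
AUTO_CLOSE_POLICY = {
    "min_confidence": 0.95,          # user-configured confidence threshold
    "severities": {"LOW", "INFO"},   # start with low-severity alerts only
    "tags": {"validated-benign"},    # scoped to already-validated detections
}

def should_auto_close(alert, classification, confidence, policy=AUTO_CLOSE_POLICY):
    """Auto-close only when every gate passes: verdict, confidence, severity, scope."""
    return (
        classification == "benign"
        and confidence >= policy["min_confidence"]
        and alert["severity"] in policy["severities"]
        and bool(policy["tags"] & set(alert["tags"]))  # at least one validated tag
    )
```

Expanding scope over time then means widening `severities` and `tags`, or lowering `min_confidence`, one validated step at a time.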
Tealium built this into what they call their T-SOC. The architecture runs in tiers: Tier 1 handles initial triage. Tier 2 audits that analysis. Tier 3 evaluates both, escalating to a human when the lower tiers disagree or confidence falls below threshold. When they agree and confidence is high, alerts close automatically.
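The Tier 3 arbitration step, as described, reduces to a small decision function. This is a rough sketch of the pattern, with an illustrative threshold — not Tealium's or Panther's actual code.

```python
# Sketch of the T-SOC tiering pattern as described: Tier 1 triages, Tier 2
# audits, Tier 3 arbitrates. Names and the 0.9 threshold are illustrative.
def tier3_decision(t1_verdict, t2_verdict, confidence, threshold=0.9):
    """Escalate on disagreement or low confidence; auto-close on agreed benign."""
    if t1_verdict != t2_verdict or confidence < threshold:
        return "escalate_to_human"
    if t1_verdict == "benign":
        return "auto_close"
    return "open_for_review"  # agreed risky/inconclusive still reaches the team
```

Disagreement anywhere in the stack is a feature here: it is the signal that a human should look.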
The result: a five-person team responding to AWS anomalies within 10 minutes, around the clock, without a night shift. For compliance teams, the audit trail on every auto-closed alert is complete and human-readable — the full documentation is there without any extra work.

Teaching the AI Your Environment
Generic AI triage and AI that knows your environment are different tools. An AI that doesn't know your CI/CD service accounts will flag them every time they do something unusual. An AI that doesn't know your executive travel patterns will escalate logins your team has already reviewed a dozen times. Generic AI creates noise. Contextual AI reduces it.
Organization Profiles encode what "normal" looks like in your specific environment: approved geographies, known service accounts, maintenance windows, executive access patterns. Panther AI applies this context to every alert, on every shift. The institutional knowledge that used to live in your most experienced analyst's head is encoded in the system and available at 2 AM.
Detection Runbooks go a layer deeper. For specific alert types, they define what questions to ask, what context to gather, and what conditions make something definitively benign or risky for your organization. Investigation quality stops depending on who's handling the alert.
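Together, a profile and a runbook might be sketched like this. The profile fields, runbook shape, and check logic are all assumptions made for illustration — the real Panther features are richer — but the structure shows how encoded context turns a generic login alert into an organization-specific verdict.

```python
# Hedged sketch: an organization profile encodes "normal"; a runbook walks
# ordered questions against it. All fields and names are illustrative.
ORG_PROFILE = {
    "approved_geos": {"US", "DE"},
    "service_accounts": {"svc-ci", "svc-backup"},
    "maintenance_windows": [("Sat", 2, 6)],  # day, start hour, end hour (UTC)
}

LOGIN_RUNBOOK = [
    # (question to ask, check against the profile, verdict if the check passes)
    ("Is this a known service account?",
     lambda ev, p: ev["user"] in p["service_accounts"], "benign"),
    ("Is the login from an approved geography?",
     lambda ev, p: ev["geo"] in p["approved_geos"], "benign"),
]

def triage_login(event, profile=ORG_PROFILE, runbook=LOGIN_RUNBOOK):
    """Walk the runbook in order; return the first verdict plus its rationale."""
    for question, check, verdict in runbook:
        if check(event, profile):
            return verdict, question
    return "risky", "No benign condition matched"
```

A CI service account logging in from an unfamiliar geography resolves benign on the first question; an unknown user from an unapproved geography falls through to risky. Same alert type, different answers, because the context is encoded rather than tribal.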
Loglass, a two-person IT team, tuned their detection logic to their specific environment across Google Workspace, Slack, and Notion. Today, approximately 80% of their alerts are resolved automatically. They also configured their entire workflow in Japanese using Panther's MCP server — alert summaries, detection creation, investigation — eliminating the translation overhead that had been adding meaningful delay to every response. In total, they reduced investigation workloads by 70%.
At Tealium, context changed the detection engineering experience entirely. Before, their primary detection engineer was doing solo research before every rule build — working through structure decisions and edge cases largely alone. With Panther’s AI Detection Builder and the environment context in place, it became a collaboration. "Once he had access to Panther AI," said Donald Scherer, VP of Platform and Infrastructure Security, "it felt like he had a peer collaborating with him. That's a powerful thing when you felt like you were solo on something before."

From Alert Queue to Security Thinking
Jason put it plainly: "There was a time I was so concerned about solving the next thing in front of me that I rarely had time to spend the mental resources on creative, strategic thinking. Now I have tools that can think creatively with me. Everybody else is getting gray hair because of their SIEM. With Panther, we get to be excited about ours."
At Loglass, junior team members who previously needed significant support can now run incident investigations independently using natural language.
Ready to see how Panther AI Alert Triage works in your environment? Book a demo or read the full Tealium case study and Loglass case study to learn how teams are running this in production today.