Investigating Alerts Without Switching Tools
How AI Brings Full Context to Every Investigation
Katie Campisi

Ask any analyst what the worst part of their job is. Most won't say "too many alerts." They'll say something closer to: "I have to check five different places just to figure out if an alert is real."
That's the context fragmentation problem. And it's not just a workflow inefficiency; it's what determines whether investigations are fast or slow, thorough or incomplete, consistent or wildly variable depending on who's on shift.
This post digs into how Panther’s MCP integration and in-console AI change the investigation experience: what it looks like when your AI has full-stack context instead of just the alert payload, and why that changes the quality of every triage your team runs.
The Hidden Cost of Context-Switching
A typical alert investigation starts the same way for most teams. An alert fires, you open it, copy the IP or username, and start pivoting across tools. VirusTotal to check threat intel. Okta to see who the user is. CloudTrail to understand what the account did. Your ticketing system to check for open incidents. A Confluence doc or Notion page to find the relevant runbook. By the time you have enough context to make a call, 20 to 30 minutes have passed on a routine alert.
The deeper issue is that this process depends entirely on the analyst knowing what to look for and where to find it. A senior analyst with years of environment-specific knowledge moves through these steps quickly. A junior analyst covering an unfamiliar part of the stack figures it out in real time, and the quality of that investigation reflects it. The gap between those two outcomes isn't about effort or skill so much as it is about access to context, and right now most of that context lives in people's heads rather than in the system they're working in.
The rest of this post covers what changes when that context is available directly inside the investigation.
What "Full-Stack Context" Actually Means
Panther's AI SOC agent has access to more than just the alert. It sees the detection code that fired the alert, the alert history for the same entity, enrichments from your data lake, and the organizational knowledge your team has encoded in runbooks and profiles. That combination is what makes it possible to investigate thoroughly rather than just summarize.
MCP integration extends that context further. Panther connects directly to the tools your team already uses — GitHub, Okta, PagerDuty, Atlassian, Notion, and more — and pulls live context from those systems as part of an investigation. It doesn't copy data between tools. It queries them in real time, the same way a senior analyst would, without the tab-switching.
In practice, a single alert investigation can now include: the user's recent Okta authentication history, open PagerDuty incidents for the same asset, relevant GitHub activity from the same timeframe, and runbook documentation your team has written about this alert type — all surfaced in one operation, with transparent reasoning attached.
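To make that concrete, here's a minimal sketch of what gathering that context in one operation could look like. Everything in it is an assumption for illustration: the server names, tool names, and the `call_tool` helper are invented stand-ins, not Panther's actual implementation or the MCP SDK's API.

```python
import asyncio

# Invented stand-in for a real MCP client call; actual tool names and
# arguments depend on how each integration is configured.
async def call_tool(server: str, tool: str, args: dict) -> dict:
    return {}  # stub

async def gather_context(alert: dict) -> dict:
    """Pull live context for one alert from several sources at once."""
    user, asset = alert["user"], alert["asset"]
    okta, pagerduty, github = await asyncio.gather(
        call_tool("okta", "get_user_auth_history", {"user": user}),
        call_tool("pagerduty", "list_open_incidents", {"service": asset}),
        call_tool("github", "recent_user_activity", {"user": user, "window": "24h"}),
    )
    # One structure holding everything the investigation reasons over.
    return {"alert": alert, "okta": okta, "pagerduty": pagerduty, "github": github}
```

The point of the sketch is the shape: three live lookups an analyst would otherwise run by hand, issued in parallel and merged into a single investigation context.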
Cresta's Robert Kugler, Head of Security, IT & Compliance, put it plainly: "Panther's AI Alert Triage puts everything I need in a single place." That's the shift. The investigation doesn't move between tools anymore. The tools come to the investigation.
Live Context From Across Your Stack
When Panther's AI SOC agent queries Okta during an investigation, it isn't looking up a cached snapshot of user data. It's pulling the user's current authentication state, recent login history, MFA status, and group memberships in real time. If that user authenticated from a new location two hours before the alert fired, that context is part of the investigation automatically.
When it queries GitHub, it can surface whether the same user pushed code, opened a pull request, or modified a detection rule around the time of the alert — context that's almost never checked manually because it requires switching to a completely separate tool and knowing to look.
When it queries PagerDuty, it can check whether the affected service already has an open incident, whether the on-call engineer has been notified, and whether this alert is part of a pattern that's already being tracked.
None of this requires the analyst to know in advance that these sources are relevant. The AI gathers them as part of the investigation because it has access to the full picture. The analyst reviews a complete investigation rather than assembling one.
Infoblox saw this translate directly to speed: 50% faster alert triage and investigation. At HealthEquity, mean time to triage dropped from 30–45 minutes to under 5 minutes for Tier 1 and Tier 2 alerts. The time savings aren't coming from a faster analyst. They're coming from removing the assembly work entirely.
For security leaders, there's a compounding benefit here that doesn't show up in individual triage metrics. When investigation quality stops depending on who is working, you stop losing ground every time someone new joins the team or someone experienced moves on. nCino's Product Security Manager described the shift this way: "In the amount of time that it used to take me or another analyst to jump into a log source and start understanding behavior across that one source, Panther AI can take that same context and start applying it across multiple log sources."
Natural Language Investigation, Everywhere in the Platform
The other practical change worth covering: natural language investigation is available from anywhere in the platform, not just a dedicated search page. Analysts can ask questions from within an open alert, while building a detection, or during a threat hunt, and get answers that draw on the full context available to the AI.
This matters especially for junior analysts. Someone who isn't fluent in SQL or PantherFlow can still investigate deeply without knowing query syntax. Senior analysts move faster because they're not writing boilerplate queries for routine lookups. And multi-turn conversation support means an investigation builds on itself — ask a follow-up question, dig into something the AI surfaced, keep building context within a single thread without starting over.
Teammates can pick up where someone left off. The full conversation history is preserved, so a handoff between analysts doesn't mean losing the context built up in the first half of an investigation.
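As a rough sketch of how a multi-turn thread accumulates context, consider the snippet below. Here `ask_agent` is an invented stand-in for however the in-console AI is actually invoked, not Panther's API; the point is that every question carries the thread's full history, which is also what makes a handoff lossless.

```python
# The preserved conversation history is what each new question builds on.
history: list[dict] = []

def ask_agent(messages: list[dict]) -> str:
    """Hypothetical call into the AI agent (stub)."""
    return "..."

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    answer = ask_agent(history)  # the agent sees the whole thread, not one question
    history.append({"role": "assistant", "content": answer})
    return answer

ask("Summarize this alert and the user's recent Okta logins.")
ask("Which of those logins came from a new device?")        # builds on the last answer
ask("Show that device's activity across all log sources.")  # same thread, no restart
```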
Loglass took this further by configuring their entire security workflow in Japanese using Panther's MCP server. Alert summaries, detection creation, incident investigation — all in their native language, with no translation step at any point in the process. For a two-person IT team where fast response is critical, removing that overhead alone reduced their total investigation workload by 70%.
When Context Lives in the System, Not in People
The tab-switching problem is easy to underestimate because it sounds like a productivity inconvenience when it's actually a quality problem. How many alerts get investigated, how thoroughly, and how consistently across shifts all come back to how much institutional knowledge is encoded in your tools versus carried around by individual analysts.
Panther's Organization Profiles and Detection Runbooks are where that knowledge gets captured: what normal looks like in your environment, how your team evaluates specific alert types, which assets are sensitive enough to treat differently. Once that's in the system, the AI applies it consistently on every alert regardless of who's working. A new analyst on their second week gets the same investigation context as your most experienced team member.
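To give a feel for what "encoded knowledge" might look like, here is a hypothetical runbook expressed as structured data. The field names and contents are invented for illustration and don't reflect Panther's actual runbook format:

```python
# Hypothetical detection runbook; every field name here is invented.
impossible_travel_runbook = {
    "alert_type": "Okta.ImpossibleTravel",
    "what_normal_looks_like": [
        "Engineering travels frequently between US-East and EU-West offices",
        "VPN egress IPs rotate monthly; see the network team's IP inventory",
    ],
    "triage_steps": [
        "Check whether both logins used the registered MFA device",
        "Compare source IPs against known VPN and office egress ranges",
    ],
    "sensitive_assets": ["prod-billing", "customer-data-lake"],
    "escalate_if": "Either login touched a sensitive asset without MFA",
}
```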
The downstream effect of that consistency tends to show up in coverage decisions. Tealium grew their monitored log sources by nearly 30% after getting Panther's AI into their workflow. Donald Scherer, VP of Platform and Infrastructure Security, described the shift: "We went from not wanting to monitor any more log sources to actively searching for more logs to bring in." This is what happens when the team has enough confidence in investigation quality to expand coverage without worrying that new sources will generate noise they can't manage.
What Happens When the AI Doesn't Wait for an Alert
Panther's Scheduled Runs extend the same AI investigation capability to proactive, time-based workflows. A daily run can check IAM configurations against your benchmarks and post results to Slack before standup. A weekly run can analyze alert volume by detection rule, surface false-positive candidates, and produce a tuning report without anyone scheduling a project around it. A post-termination monitor can watch a departing employee's activity across all log sources for 30 days automatically, rather than relying on someone to remember to check.
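As a sketch of what the daily IAM check could reduce to, with every helper an invented stand-in for the real integration points:

```python
from datetime import date

# Invented helpers standing in for the real integration points.
def check_iam_against_benchmarks() -> list[str]:
    """Return IAM findings that deviate from the team's benchmarks (stub)."""
    return []

def post_to_slack(channel: str, text: str) -> None: ...  # stub

def daily_iam_review() -> None:
    """Runs every morning before standup, e.g. on a cron schedule."""
    findings = check_iam_against_benchmarks()
    summary = f"IAM review for {date.today()}: {len(findings)} deviations"
    post_to_slack("#security-standup", "\n".join([summary, *findings]))
```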
When those scheduled runs are paired with MCP integrations, the loop can close without a human handoff at all. An off-hours anomalous login can trigger an automated identity challenge to the user; the alert auto-closes as benign if they verify, escalates to PagerDuty with a fully enriched incident if they don't, and either way the outcome surfaces in the team's morning review.
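A hedged sketch of that closed loop, again with invented helpers rather than Panther's actual API:

```python
# Every helper below is a hypothetical stand-in for an MCP-backed action.
def send_identity_challenge(user: str) -> bool:
    """Push an identity verification (e.g. an MFA prompt) and await the result."""
    return False  # stub

def close_alert(alert: dict, reason: str) -> None: ...          # stub
def escalate_to_pagerduty(incident: dict, urgency: str) -> None: ...  # stub
def notify_channel(channel: str, text: str) -> None: ...        # stub

def handle_offhours_login(alert: dict) -> str:
    """Triage an off-hours anomalous login without a human handoff."""
    if send_identity_challenge(alert["user"]):
        close_alert(alert, reason="user verified the login as their own")
        outcome = "auto-closed as benign"
    else:
        escalate_to_pagerduty(incident=alert, urgency="high")
        outcome = "escalated to PagerDuty with enriched context"
    # Either way, the outcome lands in the team's morning review.
    notify_channel("#soc-morning-review", f"{alert['id']}: {outcome}")
    return outcome
```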
Want to see how Panther's AI SOC agent handles investigation context in your environment? Book a demo or read how Loglass and Tealium are running this in production today.