
The Expertise Problem
Most security teams face the same constraint: the expertise required to run a modern SOC doesn't scale, and hiring your way out of it stopped being realistic long ago. The challenge starts at the foundation: the number of data sources security teams need to monitor has grown dramatically, and operationalizing all that data remains an unsolved problem.
When we founded Panther, our goal was to let security teams centralize all their security telemetry with sustainable economics, build reliable detections-as-code with version control, and retain structured logs backed by a security data lake. But better infrastructure alone couldn't fix the deeper bottleneck.
Every critical workflow in the SOC, from triage to detection to threat hunting, depends on scarce, senior-level expertise. Your best analyst is responding to a page at 3 AM, mentally assembling context from six different tabs, and making a judgment call on incomplete information. Little of what they do makes it back into the system. Context lives across dozens of tools, and institutional knowledge lives in people's heads. Alert volume grows with the attack surface. Expertise doesn't.
Most AI in security today is bolted onto processes designed for humans. Add an LLM to that loop, and you get faster triage, not a smaller problem. Security operations needs to be rebuilt around a fundamentally different model: one in which AI agents are the primary workers across the SOC lifecycle, and humans shift from doing the work to supervising, guiding, and refining it. And it demands a closed loop, where agent decisions and conclusions feed back into the system so that the next run is smarter than the last. That future is now possible.
Context, Action, and Guidance
We spent the past year building and deploying AI agents across core SOC workflows, and the clearest lesson was this: agent effectiveness comes down to context, action, and guidance.
Context is the foundation. An analyst triaging a suspicious login needs to understand the user's normal behavior, the asset's criticality, and how this pattern compares to prior incidents. That requires agents to access a security data lake, threat intelligence, asset and identity inventories, baselines, and connectivity to your broader IT and security stack. Without the right data, no amount of AI model reasoning compensates.
Action is the productivity engine. Agents must be able to do what your team does: write new log parsers, build detections, and triage alerts end-to-end with the same depth as your most experienced people. The more an agent understands your tools and environment, the broader the actions it can take on your behalf.
Guidance is what aligns agents with your security team. Context and action alone get you a capable tool, not one that reflects your team's judgment. That requires guidance in the form of prompts: risk criteria, investigation playbooks, escalation logic, and organizational context that define how your team works and why. It's the difference between an agent that can triage an alert and an agent that triages aligned with your priorities and mission.
With all three in place, the real shift in operating model begins.
A Platform Built for the Closed Loop
We took these learnings and evolved Panther into the complete AI SOC platform. Rather than bolting agents into manual SIEM workflows, we embedded Panther AI agents throughout the core security operations lifecycle, from data ingestion to alert triage and resolution. Because Panther owns the data lake, detections, and agents, it can create a closed loop in which agents apply learnings guided by human operators.
Four architectural choices make this possible.
First, agents that deeply understand your data. Panther ingests any security data source at any scale and structures everything through a normalization pipeline, so agents work with data they can reason about rather than raw, unstructured logs. That comprehension is what makes correlations across sources meaningful and investigations precise.
Second, agents that speak the language of your security program. Detections are Python, queries are SQL, and workflows are natural-language prompts, not proprietary abstractions. An agent can read a detection's intent, write new logic based on a threat model, and pivot across your entire data lake in a single motion. Your team's expertise becomes something agents can directly learn from and build on. And because detections are now consumed by agents, not just people, they encode richer context (risk criteria, common false positives, prescribed next steps) that tells the next agent not just what to match, but what it means.
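As a sketch of what a detection written for both humans and agents might look like: the rule/title function shape below follows a common detections-as-code style, while the TRIAGE_CONTEXT block is a hypothetical convention for encoding risk criteria, known false positives, and next steps alongside the logic. None of this is Panther's exact schema.

```python
# Illustrative detection on AWS CloudTrail: successful root console logins.
# The TRIAGE_CONTEXT dict is a hypothetical convention that gives the
# next agent (or analyst) the "what it means", not just the "what to match".

TRIAGE_CONTEXT = {
    "risk": "Root console logins bypass SSO and MFA policy controls.",
    "common_false_positives": [
        "Break-glass access during a declared incident",
    ],
    "next_steps": [
        "Confirm a change ticket or incident exists for the login window",
        "Check the source IP against corporate egress ranges",
    ],
}


def rule(event: dict) -> bool:
    """Fire on successful AWS root console logins."""
    return (
        event.get("eventName") == "ConsoleLogin"
        and event.get("userIdentity", {}).get("type") == "Root"
        and event.get("responseElements", {}).get("ConsoleLogin") == "Success"
    )


def title(event: dict) -> str:
    return f"Root console login from {event.get('sourceIPAddress', 'unknown IP')}"
```

Because the risk framing and prescribed next steps live next to the matching logic, an agent reading this file inherits the detection author's intent, not just their pattern.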
Third, agents that improve the system in which they operate. Because context, detections, and investigation all live in one platform, every outcome feeds back into what produced it. Triage sharpens detection logic, improved detections reduce noise, and reduced noise frees capacity for proactive coverage. How far that loop runs is governed by the autonomy your team grants, from human-approved changes to fully autonomous workflows running around the clock.
Fourth, agents that earn trust through isolation and auditability. Panther AI runs entirely within a dedicated AWS account using Amazon Bedrock. No customer data is shared across tenants or used for model training. Every AI run executes under the invoking user's identity and permissions, enforcing the same access restrictions as the rest of the platform. Write operations (creating detections, updating alerts) require explicit human approval before execution. All AI actions are recorded in full audit logs. Trust isn't assumed. It's built into the architecture.
The Compounding Advantage
When you give agents the tools to drive change and the context to understand what matters, something important happens: institutional knowledge stops living in people's heads and starts living in the system. Every investigation begins with what your most senior analyst would know. And every outcome feeds back in, with human review and approval at the decision points that matter.
Here's what that cycle looks like. Panther's triage agent, guided by your team's risk criteria, consistently observes that a particular CloudTrail detection fires during deployment windows and is resolved as benign. That outcome is tagged and labeled in the platform as intelligence that sharpens future triage and informs detection improvements. The pattern feeds back to the detection agent, which tightens the rule. The improved detection reduces noise, sharpening triage reasoning for the remaining alerts. The freed analyst time goes toward expanding coverage into areas the team never had bandwidth for. That decision doesn't disappear. It compounds.
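A hedged sketch of what that tightened rule could look like after the loop runs: repeated benign triage outcomes during deployment windows become an explicit, documented exclusion in the detection logic. The role ARN, event names, and fields here are hypothetical, chosen only to illustrate the shape of the change.

```python
# Illustrative "after" state of the tuning loop: the triage agent kept
# closing matches from the CI deploy role as benign, so the detection
# agent encodes that pattern as an exclusion. All identifiers are
# hypothetical examples.

DEPLOY_ROLES = {"arn:aws:iam::111122223333:role/ci-deploy"}

def rule(event: dict) -> bool:
    """Fire on IAM policy changes, except those made by the CI deploy role."""
    if event.get("eventName") != "PutRolePolicy":
        return False
    actor = event.get("userIdentity", {}).get("arn", "")
    # Exclusion learned from repeated benign triage outcomes during
    # deployment windows; revisit if the deploy pipeline changes.
    return actor not in DEPLOY_ROLES
```

The comment records why the exclusion exists, so the next agent (or human reviewer) can re-evaluate it if the environment changes, which is the point of keeping the loop auditable.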
Over time, agents build a depth of operational context that no individual analyst could maintain, because they continuously monitor every alert, every investigation outcome, and every detection change. The system doesn't just keep pace with your environment. It learns faster than any single person can.
From Doing the Work to Guiding the Work
The shift is tangible. Instead of opening a queue of 200 alerts and working them one by one, your team reviews what the agents handled overnight: which alerts were closed and why, which detections were tuned, and what new patterns surfaced. The senior analyst who used to spend half her day on repetitive triage is now reviewing agent decisions, refining the risk criteria that guide them, and building out coverage the team never had bandwidth for. The junior analyst who would have taken months to ramp is investigating at senior depth from week one, because the agents surface the right context and the right questions automatically.
Workflows that took 30 to 60 minutes now take minutes. Detection engineering shifts from a backlog that never shrinks to a continuous process. And the work that used to drain people (the 3 AM pages, the repetitive false positives, the cognitive load of holding six tools in your head) gets absorbed by agents, so your team can focus on threat modeling, coverage strategy, and the judgment calls that actually require a human.
The expertise required to run a modern SOC still doesn't scale on its own. Most AI in security closes the alert. Panther closes the loop. And every time it runs, your team's expertise compounds.