WEBINAR

John Hammond + Panther: How agentic workflows are redefining the SOC. Save your seat →


BLOG

What Is Cyber Threat Hunting? A Practitioner's Guide to Proactive Threat Detection

Some threats can hide in your environment without triggering any alerts. On average, breaches involving compromised credentials take 292 days to identify and contain, phishing attacks 261 days, and social engineering attacks 257 days.

Attackers who log in with valid credentials look identical to authorized users, which means every day of undetected access increases the damage. They aren't exotic zero-day attacks; they're credential abuse, living-off-the-land techniques, and legitimate tool misuse that blend in with normal activity.

Cyber threat hunting exists to close that gap. This guide covers how threat hunting works in practice, how to turn hunt findings into automated detection rules, and how to build a hunting program on a lean team.

Key Takeaways

  • Cyber threat hunting is a proactive, intelligence-driven process that finds threats that automated defenses miss.

  • Three core methodologies (structured/TTP-based, unstructured/IOC-based, and situational/risk-based) give practitioners flexible approaches to threat hunting.

  • The highest-ROI step in any hunt is converting validated findings into version-controlled, tested detection rules through detection-as-code workflows.

  • You don't need a massive team or budget to start hunting. Procedural maturity is realistic for teams of up to five people within 3 to 6 months, using cloud-native tools.

What Is Cyber Threat Hunting?

Cyber threat hunting is the proactive process of searching for adversary tactics, techniques, and procedures (TTPs) that evade automated detection systems. Rather than waiting for alerts to fire, it starts with informed assumptions about how adversaries might be operating in your environment, then searches for potential threats that automated defenses miss.

The keyword is proactive. Unlike alert-driven security operations, hunting starts with an investigative lead rather than a notification. This matters because the threats that cause the most damage are the ones that don't trigger alerts.

To understand what makes hunting distinct, it helps to compare it to two closely related disciplines: threat detection and threat intelligence.

Threat Hunting vs. Threat Detection

Threat detection is a reactive discipline. You write rules, deploy them, and wait for matches. Threat hunting flips that model. Instead of waiting for known patterns to surface, analysts formulate informed assumptions about what attackers might be doing and then investigate. Detection asks, "Did something match my rules?" Hunting asks, "What's happening that my rules don't cover?"

The two disciplines are complementary, not competing. The best hunting programs feed findings back into detection rules, closing the loop.

Threat Hunting vs. Threat Intelligence

Threat intelligence provides the contextual foundation for security operations. It helps you figure out who is attacking, what they're targeting, and which techniques they prefer. It informs threat hunting but doesn't replace it. Taking a threat intelligence report, forming an investigative lead around whether the described TTPs are present in your environment, and actively searching your logs for evidence: that's hunting.

Threat Hunting Methodologies

Practitioners use three primary approaches to hunt for threats: structured, unstructured, and situational. They often combine them depending on available intelligence.

1. Structured Hunting (TTP-Based)

Structured hunting focuses on adversary behaviors mapped to frameworks like MITRE ATT&CK, hunting for techniques like unusual role assumptions or bulk cloud storage access. This approach has the broadest coverage because techniques persist even when attackers change tools and infrastructure.

2. Unstructured Hunting (IOC-Based)

Unstructured hunting begins with specific indicators of compromise, such as malicious IPs, file hashes, or domains from threat intelligence feeds. Research on TTP-based hunting notes that IOC-based hunting often misses threats, even when the indicators are novel, because indicators are tied to tools and infrastructure that attackers rotate easily. It's useful during active threat campaigns but should supplement, not replace, behavioral approaches.
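At its simplest, an IOC-based hunt is a set-membership check against your logs. The sketch below illustrates the idea with an invented indicator list and CloudTrail-style event fields; a real hunt would pull indicators from a feed and run against your SIEM or data lake.

```python
# Toy IOC-match sketch. The indicator list and events are invented
# examples, not real threat intelligence.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}  # e.g., from a threat intel feed

def ioc_hits(events):
    """Return events whose source IP matches a known indicator."""
    return [e for e in events if e.get("sourceIPAddress") in KNOWN_BAD_IPS]

events = [
    {"eventName": "ConsoleLogin", "sourceIPAddress": "203.0.113.7"},
    {"eventName": "ListBuckets", "sourceIPAddress": "192.0.2.10"},
]
print(ioc_hits(events))  # only the ConsoleLogin event matches
```

The brittleness is visible in the code: the moment the attacker moves to a new IP, the set no longer matches, which is why behavioral (TTP-based) hunts age better.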

3. Situational Hunting (Risk-Based)

Situational hunting is driven by organizational context: a recent acquisition, a new cloud region deployment, or an industry-specific threat campaign. A situational hunt might ask whether unexpected IAM role assumptions are occurring in a newly deployed AWS region.

Threat Hunting Frameworks

Three frameworks dominate cyber threat hunting, each solving a different part of the problem, and many mature hunting programs layer all three together.

MITRE ATT&CK

MITRE ATT&CK is the foundational framework for modern threat hunting, mapping real-world adversary behaviors to specific techniques.

Practitioners use it for:

  • Hunt lead generation: Identify which techniques are most prevalent in your environment. Identity abuse and cloud storage access consistently rank near the top.

  • Coverage mapping: Track which techniques your detection rules cover and identify gaps.

  • Hunt validation: Map findings to ATT&CK techniques to build institutional knowledge.

For cloud-native teams, start with five high-impact techniques:

  1. Valid Cloud Accounts (T1078.004)

  2. Cloud Storage Access (T1530)

  3. Disable or Modify Cloud Firewall (T1562.007)

  4. Account Manipulation (T1098)

  5. Create Account (T1136)

These five techniques cover the most common cloud attack patterns, including unauthorized access, data exposure, defense evasion, and persistence. They give you a practical starting point for mapping your detection coverage gaps.
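Coverage mapping can start as something as simple as a dictionary from technique IDs to the detection rules that address them. A minimal sketch, with hypothetical rule names standing in for your actual detections:

```python
# Minimal coverage map for the five techniques above.
# Rule names are hypothetical placeholders for your own detections.
COVERAGE = {
    "T1078.004": ["aws_console_login_without_mfa"],  # Valid Cloud Accounts
    "T1530": ["s3_bulk_download"],                   # Cloud Storage Access
    "T1562.007": [],                                 # Disable or Modify Cloud Firewall
    "T1098": ["iam_policy_attached_to_user"],        # Account Manipulation
    "T1136": [],                                     # Create Account
}

def coverage_gaps(coverage):
    """Techniques with no detection rules mapped to them: hunt leads."""
    return sorted(t for t, rules in coverage.items() if not rules)

print(coverage_gaps(COVERAGE))  # ['T1136', 'T1562.007']
```

Each gap the function returns is a candidate hunting lead, which is exactly how ATT&CK-driven hunt lead generation works in practice.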

The Cyber Kill Chain

The Cyber Kill Chain models attacks as a sequential series of stages from reconnaissance through actions on objectives. Hunting for reconnaissance activity (unusual API enumeration patterns) catches attacks earlier than hunting for data exfiltration.

PEAK (Prepare, Execute, Act with Knowledge)

PEAK structures the operational workflow.

  1. Prepare covers lead development.

  2. Execute covers investigation.

  3. Act with Knowledge covers response and feeding findings back into detections.

PEAK complements ATT&CK. ATT&CK tells you what to hunt for, while PEAK tells you how to organize the hunt.

The Threat Hunting Process

Executing a hunt requires a repeatable, step-by-step workflow, designed as a continuous loop because attackers never stop, and threat hunting shouldn't either.

1. Identify a Hunting Lead

Every hunt starts with an investigative lead: a testable assumption about potential adversary activity. Good sources include:

  • A new threat intelligence report, for example, news of credential theft via cloud instance metadata services.

  • ATT&CK techniques that your detection rules don't yet cover.

  • Environmental changes, such as when your team just onboarded a new SaaS application.

The goal is to pick a lead you can validate with the data you already have, then refine it as you learn.

2. Collect and Prepare Data

Align your data sources to the hunting lead. Data quality matters more than volume, so audit your log sources before hunting. A hunt for unusual S3 access patterns can still be effective using S3 Server Access Logs even if you haven't enabled CloudTrail data events for S3.

3. Investigate and Validate

Run queries against your data to test the lead. Build timelines, correlate events across sources, and distinguish attacker activity from legitimate but unusual behavior.

4. Respond and Remediate

When a hunt confirms malicious activity, follow your incident response process. Hunts that don't find active threats often uncover misconfigurations, excessive permissions, or logging gaps.

5. Document and Feed Back Into Detections

Every validated finding should produce new automated detection rules. If you manually found credential abuse via a specific CloudTrail API pattern, encode that pattern as a detection rule so it fires automatically next time. This is where threat hunting connects directly to detection engineering.
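Panther detections are Python functions that return a boolean for each event. As a hedged illustration of encoding a hunt finding, the rule below flags CloudTrail `GetCallerIdentity` calls from scripting clients, a common first move after credential theft; the specific user-agent pattern is an invented example, not a production rule.

```python
# Sketch of a Panther-style Python detection encoding a hunt finding.
# The user-agent pattern is illustrative, not a vetted production rule.
SUSPECT_AGENTS = ("curl", "python-requests", "botocore")

def rule(event):
    """Fire on CloudTrail GetCallerIdentity calls from scripted clients,
    a common first move after an attacker steals credentials."""
    if event.get("eventName") != "GetCallerIdentity":
        return False
    agent = event.get("userAgent", "").lower()
    return any(agent.startswith(a) for a in SUSPECT_AGENTS)

def title(event):
    # Alert title shown to the analyst when the rule fires.
    return f"Possible stolen-credential check from {event.get('sourceIPAddress')}"
```

Once deployed, the pattern you found manually fires automatically on every matching event, without an analyst re-running the hunt.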

Essential Threat Hunting Tools

Effective hunting depends on two things: the right tool categories to provide visibility across your environment, and the ability to apply those tools to cloud-native and SaaS contexts where traditional detection approaches fall short.

Core Tool Categories

Threat hunting depends on four capabilities: centralized log querying, endpoint telemetry, threat intelligence context, and scalable data storage.

1. SIEM Platforms

SIEM platforms provide centralized log querying. They aggregate log data from across your environment and provide the query interface hunters use to investigate leads. The key requirement is flexible, ad hoc querying against historical data, not just predefined rule matching.

2. Endpoint Detection and Response (EDR)

EDR provides the endpoint telemetry layer and is the cornerstone of most hunting programs. Its telemetry reveals process execution chains, credential access, and defense evasion techniques that network logs miss.

3. Threat Intelligence Platforms

Threat intelligence platforms supply the contextual layer, aggregating indicators, adversary profiles, and TTP information that inform hunting leads. Even free sources like MITRE ATT&CK, cloud provider security bulletins, and industry sharing groups can provide small teams with enough intelligence to drive effective hunts.

4. Security Data Lakes

A security data lake addresses the need for scalable data storage, storing normalized security telemetry at scale and enabling flexible queries across long retention windows. Panther's Security Data Lake architecture lets teams query historical data across all ingested sources using SQL or PantherFlow.

Hunting Across Cloud-Native and SaaS Environments

Unlike firewall or antivirus logs, cloud audit trails like CloudTrail don't contain inherently "bad" events. Every API call is legitimate in isolation. The challenge is detecting threats based on noncompliant behaviors in the context of how your applications and users normally operate.

Cloud threat hunting depends heavily on behavioral baselines: what normal IAM activity looks like, which API calls are routine, and which are anomalous. In SaaS environments, hunt across identity provider logs for MFA factor manipulation, OAuth token abuse, and bulk data access patterns.
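A crude but useful baseline is the set of API calls each principal has historically made; anything first-seen is worth a look. The sketch below uses CloudTrail-style field names with invented events; production baselining (as in Panther's baselines) is far richer, but the core idea is the same.

```python
from collections import defaultdict

# Toy behavioral-baseline sketch: flag API calls a principal has never
# made before. Field names follow CloudTrail conventions; events are invented.
def build_baseline(history):
    baseline = defaultdict(set)
    for e in history:
        baseline[e["userIdentity"]].add(e["eventName"])
    return baseline

def first_seen_actions(baseline, new_events):
    """Events where the principal performs an action absent from its baseline."""
    return [e for e in new_events
            if e["eventName"] not in baseline[e["userIdentity"]]]

history = [
    {"userIdentity": "ci-deployer", "eventName": "PutObject"},
    {"userIdentity": "ci-deployer", "eventName": "GetObject"},
]
new = [
    {"userIdentity": "ci-deployer", "eventName": "GetObject"},   # routine
    {"userIdentity": "ci-deployer", "eventName": "AssumeRole"},  # deviation
]
print(first_seen_actions(build_baseline(history), new))
```

Every individual event here is "legitimate" in isolation; only the deviation from the principal's own history makes `AssumeRole` interesting, which is the essence of cloud hunting.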

Panther helps SOC teams hunt at this scale. With 60+ native connectors spanning AWS, GCP, Azure, Okta, Google Workspace, and other cloud and SaaS sources, teams can centralize all cloud-native telemetry into a single, highly structured security data lake.

Panther's baselines define what "normal" looks like for specific users and services, so hunters can quickly identify deviations, such as an unusual IAM role assumption or a spike in S3 access, without manually building context from scratch.

And because all ingested data is normalized and queryable through PantherFlow or natural language, analysts can move from hunting leads to investigation in minutes rather than wrestling with query syntax across fragmented log sources.

Turning Hunt Findings Into Automated Detection Rules

The lasting value of hunting comes from turning what you found into detection rules, so you don't have to re-discover the same behavior next month.

Each successful hunt expands your automated detection coverage, freeing analysts to hunt for new unknowns. The best hunting programs create a virtuous cycle: manual hunts get quickly automated, analyst time shifts to new investigations, and detection coverage continuously expands.

Why Detection-as-Code Accelerates This Cycle

The challenge in most hunting programs isn't finding threats; it's operationalizing what you found. If converting a hunt finding into a production detection rule takes days of manual work, the feedback loop stalls, and the same threats go undetected in the meantime.

Detection-as-code solves this by treating detection logic like software: version-controlled, tested, and peer-reviewed. That speeds up the hunt-to-production feedback loop. Panther supports detection-as-code natively: you can write detection rules in Python with unit tests, use the AI Detection Builder to generate initial rules from natural language descriptions, and deploy through CI/CD pipelines.

A hunt that uncovers unusual AssumeRole patterns targeting a sensitive production role, for example, becomes a tested, version-controlled detection rule that fires automatically the next time that behavior occurs. Validated hunt findings become production detection rules in minutes, not days.
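A minimal sketch of what that looks like under detection-as-code: a Python rule for unexpected assumptions of a sensitive role, with its unit tests living beside it. The role ARN and allow-listed principals are hypothetical placeholders.

```python
# Sketch of the AssumeRole hunt finding as a tested, Panther-style rule.
# The role ARN and expected principals are hypothetical placeholders.
SENSITIVE_ROLE_ARN = "arn:aws:iam::123456789012:role/prod-admin"
EXPECTED_PRINCIPALS = {"terraform-runner", "break-glass-admin"}

def rule(event):
    """Fire when an unexpected principal assumes the sensitive prod role."""
    if event.get("eventName") != "AssumeRole":
        return False
    params = event.get("requestParameters") or {}
    if params.get("roleArn") != SENSITIVE_ROLE_ARN:
        return False
    caller = event.get("userIdentity", {}).get("userName")
    return caller not in EXPECTED_PRINCIPALS

# Unit tests travel with the rule, detection-as-code style, and run in CI
# before the rule ever reaches production.
suspicious = {
    "eventName": "AssumeRole",
    "requestParameters": {"roleArn": SENSITIVE_ROLE_ARN},
    "userIdentity": {"userName": "contractor-7"},
}
routine = dict(suspicious, userIdentity={"userName": "terraform-runner"})
assert rule(suspicious) is True
assert rule(routine) is False
```

Because the tests are version-controlled with the rule, a reviewer can see exactly which behavior the hunt validated and which benign cases the rule deliberately ignores.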

How AI Accelerates the Hunting-to-Detection Loop

AI compresses investigation time and surfaces patterns that analysts would otherwise miss, but it works best when teams understand its limits. Panther AI lets analysts describe hunting leads in plain language instead of constructing complex query syntax, automatically generating PantherFlow queries for analysts to review, edit, and execute. Cresta's security team uses Panther AI to cut triage time by at least 50%, and the AI provides full-context explanations of its reasoning rather than operating as a black box.

That said, AI doesn't know that your infrastructure team runs deployments at 2 AM on Thursdays, or that a new engineer's unusual API patterns reflect onboarding rather than compromise. Use AI for data processing, pattern detection, and query generation. Keep humans in the loop for lead formation, contextual interpretation, and response decisions.

Start Hunting

You don't need a large team or a mature program to begin threat hunting.

HMM2, the "procedural" level of the Hunting Maturity Model, where teams hunt with documented playbooks, is a great starting point. For a small team, that means enabling your cloud-native logging, building documented hunting playbooks around your highest-risk techniques, running scheduled hunts weekly, and converting every validated finding into a detection rule.

Within three to six months, you'll have a functioning hunting program that finds threats your automated defenses miss.

The tools matter less than the discipline. Pick a lead, investigate it with the data you have, validate what you find, and turn every confirmed finding into a detection rule that fires automatically next time. That loop of hunt, confirm, automate is what separates teams that react to breaches from teams that find attackers before damage is done.


Bolt-on AI closes alerts. Panther closes the loop.

See how Panther compounds intelligence across the SOC.


Get product updates, webinars, and news

By submitting this form, you acknowledge and agree that Panther will process your personal information in accordance with the Privacy Policy.
