
Security teams write detection rules to catch threats. But some of the most important patterns aren't about how many events you see. They're about how many different values appear. Three failed logins from one user? Probably a typo. Three failed logins from three different users, all from the same IP? That's credential stuffing.
Until now, writing rules like this in Panther meant reaching for the caching helpers or falling back to scheduled rules. Today, that changes with Unique Value Thresholds.
The problem: simple logic, complex code
Detecting a threshold of unique values is one of the most common patterns in security monitoring. Think of cases like multiple users failing to log in from the same IP, a single user authenticating from several different regions, or an API key being used across numerous distinct services.
The logic is simple to describe, but implementing it in Panther previously required one of two workarounds. You could wrestle with the caching helpers library, manually managing cache keys, TTLs, and string sets inside your rule() function. Or you could drop out of streaming rules entirely and write a scheduled query, losing the real-time alerting that makes streaming rules valuable in the first place.
Both paths meant more code, more room for bugs, and configuration that was confusing to read later. The YAML Threshold and DedupPeriodMinutes fields went unused, replaced by hardcoded constants buried in your Python. Anyone reviewing the rule had to read the full implementation to understand what it actually did.
The fix: a single unique() function
Unique Value Thresholds introduce one new, optional function to the Panther rule interface: unique(). If your rule defines it, Panther automatically counts the distinct values it returns, grouped by your dedup() key, and fires an alert when that count hits the Threshold you set in YAML.
Here's what the experience looks like in practice.
Before: caching helpers and manual state management
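A representative sketch of the old pattern, counting distinct usernames per source IP with `panther_detection_helpers.caching`. Event field names, the threshold, and the key format are illustrative, and the in-memory fallback exists only so the sketch runs outside a Panther deployment:

```python
import json
import time

try:
    from panther_detection_helpers.caching import add_to_string_set
except ImportError:
    # In-memory stand-in so this sketch runs outside Panther (illustration only)
    _SETS = {}

    def add_to_string_set(key, value, epoch_seconds=None):
        _SETS.setdefault(key, set()).add(value)
        return _SETS[key]

# The real threshold lives here in Python, not in the rule's YAML
UNIQUE_USER_THRESHOLD = 3
CACHE_TTL_SECONDS = 3600  # the helper's TTL, not DedupPeriodMinutes

def rule(event):
    if event.get("eventName") != "ConsoleLogin" or event.get("outcome") != "FAILURE":
        return False
    ip = event.get("sourceIPAddress")
    user = event.get("userName")
    if not ip or not user:
        return False
    # Manually construct a cache key and track the set of users per IP
    key = f"failed_login_users:{ip}"
    users = add_to_string_set(key, user, epoch_seconds=int(time.time()) + CACHE_TTL_SECONDS)
    # Workaround: under unit tests the helper may be mocked to return a JSON string
    if isinstance(users, str):
        users = json.loads(users)
    return len(users) >= UNIQUE_USER_THRESHOLD

def dedup(event):
    return event.get("sourceIPAddress", "")

# Accompanying YAML:
#   Threshold: 1              # must stay 1 so every True fires; the real threshold is above
#   DedupPeriodMinutes: 60    # effectively unused; the cache TTL governs the window
```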
Notice the disconnect: the YAML says Threshold: 1, but the actual threshold lives in a Python constant. The dedup period in YAML is meaningless because the caching helper manages its own TTL. Anyone reading this rule has to trace through the code to understand what triggers an alert.
After: declarative, readable, done
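The same detection with a `unique()` function. Field names are again illustrative; adjust them to your log schema:

```python
def rule(event):
    # Fire on every failed console login
    return event.get("eventName") == "ConsoleLogin" and event.get("outcome") == "FAILURE"

def dedup(event):
    # Group events by source IP
    return event.get("sourceIPAddress", "")

def unique(event):
    # Count distinct usernames per source IP
    return event.get("userName", "")

# Accompanying YAML:
#   Threshold: 3            # alert when 3 distinct usernames are seen
#   DedupPeriodMinutes: 60  # within a 60-minute window
```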
That's it. The rule() function goes back to doing what it was always meant to do: decide whether an event is interesting. The dedup() function groups events by source IP. The unique() function tells Panther which value to count as distinct. And the YAML Threshold and DedupPeriodMinutes do exactly what their names suggest.
Twenty-plus lines of cache management code collapse into a two-line function. The configuration is honest. A teammate reading this rule for the first time can understand what it does in seconds.
Another example: user logging in from multiple IPs
The pattern applies just as cleanly to other common scenarios. Suppose you want to alert when a single user authenticates from three or more distinct IP addresses within an hour:
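A sketch under assumed field names; note that `dedup()` and `unique()` simply swap roles relative to the credential-stuffing case:

```python
def rule(event):
    # Fire on every successful console login
    return event.get("eventName") == "ConsoleLogin" and event.get("outcome") == "SUCCESS"

def dedup(event):
    # Group events by user
    return event.get("userName", "")

def unique(event):
    # Count distinct source IPs per user
    return event.get("sourceIPAddress", "")

# Accompanying YAML:
#   Threshold: 3
#   DedupPeriodMinutes: 60
```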
No caching library import. No manual key construction. No mock-handling workarounds. Just three small functions and two lines of YAML.
What happens under the hood
Panther's deduplication engine already groups events by dedup() key and counts them against your threshold. Unique Value Thresholds extend that same engine to track distinct values instead of raw event counts when a unique() function is present.
Rather than storing every unique value individually (which would be expensive at scale), Panther uses a time-bucketed hashing approach inspired by probabilistic data structures like Counting Bloom Filters. Each unique value is hashed into a fixed set of buckets, and the system estimates cardinality from the number of occupied buckets within your dedup window.
This means the feature uses a constant, predictable amount of storage per rule regardless of how many unique values flow through it. It achieves roughly 90% accuracy for cardinalities up to 1,000, which is more than sufficient for the threshold-based alerting patterns that security teams rely on. And because it integrates directly into the existing deduplication pipeline, there are no external dependencies, no extra network calls, and no additional infrastructure to manage.
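Panther's exact implementation isn't public, but the occupancy-based idea can be illustrated with a linear-counting sketch, a close relative of the structures described above. Bucket count and hash choice here are arbitrary:

```python
import hashlib
import math

def estimate_distinct(values, num_buckets=4096):
    """Linear-counting sketch: hash each value into a fixed array of
    buckets and estimate cardinality from how many buckets are occupied.
    Storage is constant (num_buckets slots) regardless of input size."""
    occupied = set()
    for value in values:
        digest = hashlib.sha256(value.encode("utf-8")).digest()
        occupied.add(int.from_bytes(digest[:8], "big") % num_buckets)
    empty = num_buckets - len(occupied)
    if empty == 0:
        return num_buckets  # sketch saturated; estimate is capped
    # Maximum-likelihood estimate for linear counting
    return round(num_buckets * math.log(num_buckets / empty))
```

Because the estimate depends only on bucket occupancy, duplicates cost nothing extra, and the error stays small as long as cardinality is well below the bucket count, which matches the accuracy envelope described above.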
Rules that don't define a unique() function continue to work exactly as they always have. There is nothing to migrate and nothing that breaks.
Who should use this
If you've ever found yourself importing panther_detection_helpers.caching to count distinct values, or if you've moved a detection to a scheduled query because counting unique values in a streaming rule was too painful, Unique Value Thresholds is built for you.
Common use cases include detecting credential stuffing (multiple usernames from one IP), identifying lateral movement (one account accessing many distinct hosts), spotting impossible travel (one user logging in from several geographic regions), and flagging API abuse (a single key hitting many different endpoints).
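Each of these maps onto the same three-function shape. For instance, the lateral-movement case might look like the following, with all field names and the threshold hypothetical:

```python
def rule(event):
    # Fire on every remote login event
    return event.get("eventType") == "network_login"

def dedup(event):
    # Group events by account
    return event.get("accountName", "")

def unique(event):
    # Count distinct destination hosts per account
    return event.get("destinationHost", "")

# Accompanying YAML:
#   Threshold: 10            # illustrative: 10 distinct hosts
#   DedupPeriodMinutes: 60
```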
Get started
Unique Value Thresholds are available now in open beta. To see the feature in action, check out the panther-analysis migration PR, which converts seven existing rules from the old caching pattern to the new unique() function. These migrations are a great reference if you want to update your own rules, and they're available in the panther-analysis repo for you to pull into your environment today.
For full documentation on writing rules with unique(), see the Writing Python Detections page in the Panther docs.