Every security engineer knows this feeling… you’ve spent hours, days, or even weeks developing a new detection for your SIEM. Let’s say in this case, to alert you when a user bypasses MFA without an authorized bypass code. You’ve done your research, prepared your team, and are ready to deploy your newly made detection. Little do you know, your colleagues in the IT department have over 100 service accounts that don’t have MFA enabled.
Immediately, your SIEM responds with a flood of alerts bombarding your team. Left and right, false positives continue to pile up in your ticket queue. You spend the next few hours and days investigating and triaging the alerts, then reworking the detection you created, just hoping it will work better next time. If you’ve experienced something like this, you’re not alone.
In a recent survey of security practitioners, nearly a quarter of the respondents said that the biggest challenge they face with their current SIEM is that it generates too many false positives, leading to increasing burnout across security teams. With this in mind, Panther has added a new feature, Data Replay, that allows security teams to test newly built detections prior to release, helping them to avoid “alert storms” like the scenario above.
Data Replay makes it possible to test newly created detections against actual log data from your environment. A security team can use this feature to tune detections during testing to avoid high volumes of false positives, instead of encountering “surprises” following deployment. In turn, this can greatly improve detection fidelity, allowing security teams to trust that Panther alerts will be highly valuable from the start.
We saw earlier that without Data Replay, the MFA bypass detection we originally wrote would fire off a large number of alerts that would need to be investigated and triaged. With Data Replay, this situation can be avoided entirely: running the detection against historical data would surface the unwanted alerts before deployment, so security practitioners can fine-tune new detections beforehand and avoid accidental alert storms.
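To make the scenario concrete, here is a minimal sketch of what such a detection might look like as a Panther-style Python rule. The field names (`outcome`, `actor`, `mfa_used`, `bypass_code`) and both allow-lists are illustrative assumptions, not a real log schema; the service-account exclusion is exactly the kind of tuning a replay run against historical data would have prompted.

```python
# Hypothetical, simplified Panther-style detection: alert when a login
# succeeds without MFA and no authorized bypass code was used.
# All field names and values below are illustrative, not a real schema.

AUTHORIZED_BYPASS_CODES = {"BREAK-GLASS-01"}  # example allow-list
SERVICE_ACCOUNT_PREFIX = "svc-"  # assumed naming convention for service accounts


def rule(event):
    # Only consider successful logins
    if event.get("outcome") != "SUCCESS":
        return False
    # Exclude service accounts that legitimately lack MFA -- the tuning
    # that testing against real historical data would have surfaced
    if event.get("actor", "").startswith(SERVICE_ACCOUNT_PREFIX):
        return False
    # MFA was used, so nothing to flag
    if event.get("mfa_used"):
        return False
    # Fire unless an authorized bypass code was supplied
    return event.get("bypass_code") not in AUTHORIZED_BYPASS_CODES


def title(event):
    # Alert title shown to the analyst triaging the ticket
    return f"MFA bypass without authorized code by {event.get('actor', 'unknown')}"
```

Without the service-account exclusion, every one of those 100+ no-MFA service accounts would trip this rule on each login, producing exactly the alert storm described above.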
This feature can be used once data has been ingested into Panther. All data sent to Panther is first parsed and normalized automatically, then run through our real-time detections engine, which fires alerts as appropriate. Data from up to 30 days back can be replayed, and it is also immediately copied into Snowflake for out-of-the-box one-year retention.
When Data Replay is used, data is pulled from S3 and run through a secondary real-time detections engine dedicated to this feature. This engine mimics the results of the production detections engine and generates findings in four separate fields:
By testing detections with real data before deploying them to production, security teams can avoid accidental false positives triggered by detections that do not behave as expected. Data Replay allows security teams to better understand how new or modified detections behave in their environment and to build confidence that new rules will generate valuable alerts at the right time and in the right manner.
For more information on Data Replay, please see our documentation here. And, if you’re not yet using Panther, you can request a demo here.
Q: How much historical data can I use in a replay?
A: Up to 30 days’ worth of data.
Q: Do I need to do anything to activate the feature?
A: All instances above version 1.33 will have data replay. Nothing more is required.
Q: Does this replace my CI/CD Workflow?
A: We suggest using this feature as a complement to your CI/CD workflow. When uploading new detections with the Panther Analysis Tool (PAT), running a test with Data Replay is a good best practice.
Q: Will Data Replay work with any detection I write for any log type?
A: Yes, it can be used for log types supported by a Panther pre-built integration, and custom log types in Panther.