TL;DR Reduce operational inefficiencies with these alert management best practices: centralize alert management, customize threat detection logic, and prioritize and correlate alerts. Choose a threat detection and response solution that gives you the flexibility and customization options that simplify alert management.
“Endless churn”, “drive you crazy”, and “ocean of noise” are how security practitioners on Reddit’s r/cybersecurity and Y Combinator’s Hacker News describe alerts and the feelings of burnout that come with them. While many factors contribute to burnout, alert fatigue is a well-known problem that security teams can mitigate through practical alert management. Keep reading to learn about four measures your team can take to cut the noise and the fatigue.
Security teams usually rely on multiple tools to get complete visibility into their environment so they can effectively understand their attack surface and work to reduce it.
However, a disparate set of tools can negatively impact security if you do not centralize alert management. For starters, when alerts are managed on different platforms, practitioners are forced to context switch constantly: running separate management workflows on each platform, jumping between tools, and manually keeping toolsets consistent. This causes the operational inefficiencies the industry refers to as “tool fatigue” or “tool sprawl”. At best, your team works inefficiently and struggles to prioritize tasks; at worst, it misses critical threat detection alerts.
Then there’s the problem of siloed data—data that’s only accessible through one threat monitoring tool. When data is siloed, systems cannot automatically correlate security events and consolidate related alerts, which leaves this job for practitioners to manage. This increases alert noise—the volume of alerts that practitioners face—and it also impacts visibility and mean time to resolution (MTTR), the mean time it takes practitioners to address threats.
To centralize alert management and mitigate the negative effects of using multiple threat detection tools, consolidate the alerts from all of your tools into a single platform where they can be triaged, correlated, and resolved in one consistent workflow.
When you customize threat detections to contain specific information and cover all relevant security issues, your alerts become more accurate. The result is fewer false positives and false negatives, and a reduction in overall alert noise.
For example, adding a deduplication period to a detection eliminates duplicate alerts, and fine-tuning alert conditions by specifying a threshold that must be met before an alert fires reduces alert noise. Including a detailed runbook and description in each alert, along with links to relevant documentation, makes your response to the security event more efficient.
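As a concrete illustration, here is a minimal, hypothetical sketch of a tuned detection in Python. The `Detection` fields and the `rule` function are illustrative stand-ins for the equivalent options most rule-based detection platforms expose, not any specific vendor’s API:

```python
from dataclasses import dataclass

FAILED_LOGIN_THRESHOLD = 5  # suppress noise from one-off login failures

@dataclass
class Detection:
    name: str
    severity: str
    dedup_period_minutes: int  # window in which repeat alerts are merged
    runbook: str               # response steps attached to every alert
    description: str

def rule(event: dict) -> bool:
    """Fire only on repeated failed logins, not on every single event."""
    return (
        event.get("action") == "login_failed"
        and event.get("failed_count", 0) >= FAILED_LOGIN_THRESHOLD
    )

brute_force = Detection(
    name="Repeated Failed Logins",
    severity="HIGH",
    dedup_period_minutes=60,  # one alert per hour, not one per failure
    runbook="Verify the source IP, lock the affected account, review auth logs.",
    description="Possible brute-force attempt against a user account.",
)
```

Because the runbook and description travel with the alert, whoever triages it starts with response steps in hand instead of investigating from scratch.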
Customizing detections also means working to cover security gaps in your environment. Keep in mind, though, that customization options vary across threat detection solutions. When you are choosing a solution to consolidate alert management, ask how much control it gives you over detection logic, deduplication, alert thresholds, and the content of each alert.
You can configure all alerts to list a severity level to prioritize and streamline your work. Typical severity levels range from “info” for risk-free events that provide operational insights, to “critical” for the most pressing and potentially damaging security events. In practical terms, severity levels allow you to identify and address the alerts that matter most, and set the rest aside to review later.
You can further enhance your workflow by routing alerts to different destinations based on severity level. Whatever destinations you choose, prioritizing alerts puts you in control of which alerts you see, and where, which helps you manage alert fatigue and keeps your alert triage and response workflow consistent.
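For instance, a simple severity-to-destination map keeps critical alerts in front of on-call responders while low-risk events land in a queue. This is a hypothetical sketch; the destination names and alert fields are assumptions, not a particular product’s configuration:

```python
# Map each severity level to a destination: page on critical issues,
# post to chat for triage, and archive informational events quietly.
ROUTES = {
    "critical": "pagerduty:security-oncall",
    "high": "slack:#security-alerts",
    "medium": "slack:#security-triage",
    "low": "jira:security-backlog",
    "info": "datalake:audit",  # kept for operational insight, no notification
}

def route_alert(alert: dict) -> str:
    """Pick a destination for an alert based on its severity level."""
    return ROUTES.get(alert.get("severity", "info"), "slack:#security-triage")
```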
Alert correlation is the process of identifying patterns and connections between events and log sources to determine when multiple alerts are related to a single attack. Threat detection systems implement alert correlation in two ways: rule-based correlation, which matches log attributes or contextual data, and algorithmic correlation, which uses statistics or machine learning.
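To make the rule-based method concrete, here is a minimal sketch that correlates alerts on one shared attribute, the source IP, within a 30-minute window. Real systems also correlate on users, hosts, and other contextual data; the field names here are assumptions for the example:

```python
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=30)  # alerts closer together than this are related

def correlate(alerts: list[dict]) -> list[list[dict]]:
    """Group alerts that share a source IP and fall within WINDOW of each other."""
    by_ip: dict[str, list[dict]] = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["timestamp"]):
        by_ip[alert["source_ip"]].append(alert)

    groups: list[list[dict]] = []
    for related in by_ip.values():
        group = [related[0]]
        for alert in related[1:]:
            if alert["timestamp"] - group[-1]["timestamp"] <= WINDOW:
                group.append(alert)   # close in time: same incident
            else:
                groups.append(group)  # gap too large: start a new group
                group = [alert]
        groups.append(group)
    return groups
```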
Alert correlation has two important benefits for practitioners: it reduces noise and improves MTTR.
Alert noise is reduced when alerts are packaged together. Instead of dealing with six separate alerts, you’re presented with one group of six correlated alerts. When the threat detection system connects the dots for you, it takes this work off your plate and fights back against alert fatigue.
This dovetails with the other important benefit of alert correlation: improved MTTR. Correlated alerts give you more contextualized information to work with, which helps you identify the attack vector and resolve the threat faster.
The best practices for alert management can be summed up in the concept of “alert tuning”: the process of adjusting—or fine-tuning—alerts to reduce false positives and false negatives, so that you are alerted to genuine threats while minimizing the noise and alert fatigue caused by irrelevant or low-priority alerts. You can tune alerts by centralizing alert management, customizing detections, and prioritizing and correlating alerts.
Notably, alert tuning is an ongoing project that evolves with the threat landscape. Continuous investment in alert tuning is critical to mitigating common stressors like alert fatigue and slow MTTR, while also improving overall security and efficiency. Tracking your changes over time with metrics—like the false positive rate or MTTR—lets you verify your progress and the value of your investment.
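As an illustration of that kind of tracking, here is a hypothetical sketch that computes both metrics from a batch of closed alerts; the `disposition`, `created_at`, and `resolved_at` fields are assumed for the example, not a specific tool’s schema:

```python
from datetime import timedelta

def tuning_metrics(closed_alerts: list[dict]) -> dict:
    """Compute the false positive rate and MTTR from closed alerts."""
    # Assumes at least one closed alert and datetime values for timestamps.
    false_positives = sum(
        alert["disposition"] == "false_positive" for alert in closed_alerts
    )
    total_resolution_time = sum(
        (alert["resolved_at"] - alert["created_at"] for alert in closed_alerts),
        start=timedelta(),
    )
    return {
        "false_positive_rate": false_positives / len(closed_alerts),
        "mttr": total_resolution_time / len(closed_alerts),
    }
```

A falling false positive rate and a shrinking MTTR are direct evidence that your tuning investment is paying off.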
Finally, make sure to pick a threat detection and response solution that supports your team with the flexibility and customization options that simplify alert tuning.
Take a page from Intercom’s book on threat detection and learn how Intercom uses Panther to build high-value detections that reduce alert fatigue.
Curious about Panther? Request a demo.