Mike Saxton is Technical Director of Defensive Cyber Operations at Booz Allen Hamilton. His primary focus is implementing technical solutions that protect against vulnerabilities, software and hardware exploits, data threats, and other emerging risks to critical system operations.
As a federal systems integrator, we are often required to work on the systems our clients already have in place. Where a traditional commercial organization has a set of vendor products, across the federal government there might be a series of systems from multiple vendors, and as every security practitioner is keenly aware, each comes with its own data format. In our support of the federal government at the multi-million-endpoint scale, we've had to rethink how we deliver detection at scale efficiently and effectively to meet the needs of our teams deployed at dozens of locations. Detection-as-code has enabled us to rapidly build, test, share, and deploy detections across some of the country's most critical governmental organizations.
A while back, I sat down with our security teams for a monthly retrospective and noticed that, across our five teams, they had reviewed nearly 200 intel reports a day, resulting in the deployment of over 17,000 detections that month. The teams were tired, our processes weren't working, and we needed a new, sustainable way to accomplish a Herculean task. One of our analysts pitched the start of our detection-as-code process, and we've spent nearly seven years refining it to deliver in environments of nearly 6 million endpoints.
Before going into our journey, it's worth giving an overview of how we view detection-as-code. To us, detection-as-code is the abstraction of the vendor product from the detection. As mentioned earlier, a typical environment across our delivery teams might consist of three different firewall vendors, three different EDRs, two IDS/IPSes, and so on, spread across 12 different sites. This work is typically handled by dozens of people at multiple consolidated locations. To operate efficiently and effectively, we must be able to ingest threat intelligence and deliver detections that work at any site, at any time, without manually converting queries and signatures. Detection-as-code allows us to "write once and deploy everywhere."
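The "write once and deploy everywhere" idea can be sketched as a vendor-neutral rule rendered into backend-specific query syntax. This is a minimal illustration, not our actual tooling; the rule schema, field names, and backend functions below are hypothetical.

```python
# Minimal sketch of "write once, deploy everywhere": one vendor-neutral
# detection rendered into per-backend query syntax. The schema and
# backends are illustrative, not a real product API.

RULE = {
    "title": "Suspicious PowerShell EncodedCommand",
    "field": "process.command_line",
    "contains": "-EncodedCommand",
}

def to_splunk(rule):
    # Splunk-style wildcard search on the field
    return f'search {rule["field"]}="*{rule["contains"]}*"'

def to_kql(rule):
    # Sentinel/KQL-style contains operator (table name is an assumption)
    return f'DeviceProcessEvents | where {rule["field"]} contains "{rule["contains"]}"'

BACKENDS = {"splunk": to_splunk, "kql": to_kql}

def render(rule, backend):
    """Translate one neutral rule into a backend-specific query."""
    return BACKENDS[backend](rule)

if __name__ == "__main__":
    for name in BACKENDS:
        print(name, "->", render(RULE, name))
```

In practice, the open-source Sigma ecosystem fills this role: a rule is authored once in Sigma's YAML format and converted to each vendor's query language by a backend.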
When we first started our detection-as-code journey, we began with Sigma rules focused on building detections around the TTPs we were looking for. Sigma introduced us to versioned authoring and context-driven signature development, which let us capture the originating threat intelligence, authorship, and context such as why a detection was built, while also building in ATT&CK matrix alignment for coverage mapping both when detections are authored and when they fire. This early process was still manual, but it was our first step toward a better system, and it showed near-immediate results.
Following our adoption of Sigma, our internal threat hunt team built a Python library called PyHAL (Python Hunt Analytic Library), which preserved the portability of detection code like Sigma while also enabling computational detections that weren't possible with traditional query languages or Sigma. While we were quickly transforming our detections, our processes weren't keeping up. Tired of scrolling through the Excel spreadsheets where we stored our detections, we moved to git repositories for storing code. Soon, our teams were leveraging a single repository of detection logic and were able to edit, branch, merge, clone, and push changes for others.
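PyHAL is internal to Booz Allen, so as an illustration of what "computational detection" means here, the sketch below flags DGA-like domains by computing Shannon entropy over DNS query labels, a calculation most query languages can't express natively. The event shape and threshold are assumptions for the example.

```python
import math
from collections import Counter

def shannon_entropy(s):
    # Shannon entropy in bits per character of the string
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def detect_dga(events, threshold=3.5):
    """Return domains whose first label is long and high-entropy.

    `events` is assumed to be dicts with a `dns_query` field; the
    length and entropy cutoffs are illustrative, not tuned values.
    """
    hits = []
    for ev in events:
        domain = ev.get("dns_query", "")
        label = domain.split(".")[0]
        if len(label) >= 10 and shannon_entropy(label) >= threshold:
            hits.append(domain)
    return hits
```

Because the logic is ordinary Python, it can be versioned, reviewed, and unit-tested in the same git workflow as the rest of the detection code.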
Once we began transitioning to code repositories, we realized we could do much more than share detection code. We soon formed our Content Development Working Group, which stores detection code, dashboards, report scripts, and everything else our security teams need to perform at their best without rebuilding everything from scratch.
Starting with Sigma, adopting PyHAL, and moving our detection code to a git repository drastically changed the way our teams operate, but it didn't stop there. The challenge of the multi-vendor environment has only grown with broader cloud adoption across the federal government. Scaling to petabytes of data has required us to rethink our approach, resulting in a federated data model where we bring "detection to the data."
To meet this demand, we build, test, and deploy our code repositories as before. In the new federated model, however, we rely on a series of "sensors" that pull detections from our repositories, meaning the 17,000 detections discussed above have been greatly reduced, centrally stored, and made shareable across an entire enterprise in a matter of seconds. Runners and workflows translate data to match the receiving data models, which has helped solve one of our greatest challenges.
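The "translate data to match the receiving data model" step can be sketched as a per-source field map that normalizes vendor events into one common schema, so a single detection runs anywhere. The source names and field names below are hypothetical.

```python
# Sketch of data-model translation in a federated setup: each source's
# vendor-specific field names are remapped onto a shared schema before
# detections run. All names here are illustrative assumptions.

FIELD_MAPS = {
    "vendor_a": {"cmd": "process.command_line", "user_name": "user.name"},
    "vendor_b": {"CommandLine": "process.command_line", "User": "user.name"},
}

def normalize(event, source):
    """Rename an event's keys onto the common schema; unknown keys pass through."""
    mapping = FIELD_MAPS[source]
    return {mapping.get(k, k): v for k, v in event.items()}

if __name__ == "__main__":
    raw = {"CommandLine": "whoami", "User": "alice"}
    print(normalize(raw, "vendor_b"))
```

With events normalized this way, one copy of a detection can be pushed to every site instead of maintaining a vendor-specific variant per data source.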
While our detection push process is rapid and automated, in some cases we still manually develop and test signatures to reduce false positives and tune or refine results. To address this, we are nearly finished with our fully automated detection-as-code pipeline, which ingests threat intelligence, automatically constructs detections, then tests, flags, and deploys them. In December, we spun out a detection engineering company called SnapAttack, which is central to this process. In a rapidly changing environment, we are constantly looking for innovative approaches to detection, and without a doubt, detection-as-code has been the most transformational aspect of our business to date.
Organizations don't need to operate at the multi-million-endpoint scale or have mature security teams to adopt detection-as-code. Panther provides a detection-as-code approach that lets organizations store their detections in a git repository, so security teams can see all of their detections, who wrote them and when, and edit them without losing contextual metadata or version control. Furthermore, with Panther, organizations can write and customize their detections in Python. Learn more about the advantages of using Python for SIEM.
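As a flavor of what a Python detection looks like, here is a minimal Panther-style rule: a module exposing a `rule(event)` function that returns `True` when the event should alert, with an optional `title(event)` for alert context. The CloudTrail field names are assumptions for the example; consult Panther's documentation for the full rule interface.

```python
# Minimal Panther-style detection sketch: alert on failed AWS console
# logins. Field names follow AWS CloudTrail; adjust for your log schema.

def rule(event):
    # Fire when a ConsoleLogin event reports a failure
    return (
        event.get("eventName") == "ConsoleLogin"
        and event.get("responseElements", {}).get("ConsoleLogin") == "Failure"
    )

def title(event):
    # Alert title shown to analysts, carrying contextual metadata
    actor = event.get("userIdentity", {}).get("arn", "unknown")
    return f"Failed console login for {actor}"
```

Because the rule is plain Python in a git repository, the same pull-request review, history, and testing workflow described above applies to it directly.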
Panther can analyze log data from hundreds of systems including AWS, GCP, Microsoft 365, Google Workspace, CrowdStrike, osquery, and more. Contact us for a personalized demo.