DaC-Driven CI/CD: Mastering GitHub Actions and Workflows

The task for security practitioners is steadily becoming more difficult. There are more devices to secure than ever before, systems are highly distributed and complex, and they produce colossal, cloud-scale data. Set against the backdrop of a relentless and evolving threat landscape, workloads are increasing along with the need for automation and agility. 

In this blog, you’ll learn how to harness a modern threat detection workflow that is founded on automation and agility—detection-as-code (DaC) with a continuous integration and continuous delivery (CI/CD) pipeline built with GitHub Actions. In this workflow, you’ll:

  • Write and manage threat detection content using code, implementing DaC
  • Automate testing, linting, and deployment of detection content by creating a CI/CD pipeline
  • Use GitHub and GitHub Actions to manage your code and create the CI/CD pipeline 

Whether you’re familiar with these concepts or not, this blog will guide you through top-level concepts to implementation details. If you’re ready to implement a CI/CD pipeline in your DaC workflow, skip ahead to the section “How to create a CI/CD pipeline to automate DaC workflows”.

The case for modernizing threat detection

It’s well known that cybercrime is big business, a trillion dollar economy that is only projected to grow. Accordingly, attackers have adopted professional business models to develop new methods and tools in order to consistently deliver value—successful attacks that rake in money.

Recent examples include Midnight Blizzard successfully gaining access to Microsoft systems using “low and slow” methods to avoid detection. Meanwhile, Scattered Spider and the Lazarus Group used highly sophisticated social engineering tactics to fool employees into granting system access through credentials or malware.

With AI in the mix, attacker and defender capabilities are increasing across the board. Just as AI is reducing toil and improving detection and response for defenders, it is also increasing the effectiveness of social engineering attacks. So while advanced and persistent attacks become more prevalent, AI capabilities are dramatically scaling up everyday phishing and malware attacks.

On the defender side, workloads are increasing. This is caused by the ongoing cybersecurity workforce shortage as well as the steady increase in the volume, velocity, and variety of security data and the work that is required to manage this data and surface threats with high-fidelity.

Staying ahead in this environment unequivocally requires threat detection and response workflows that meet the demands of increasing workloads. This means workflows that:

  • Scale without compromising performance
  • Automate routine tasks, freeing up security practitioners to perform skilled work
  • Remain flexible in response to rapidly changing attack vectors 
  • Enforce consistency and quality

Next, let’s get into the concepts: detection-as-code, continuous integration and continuous delivery, and GitHub Actions.

What is detection-as-code (DaC)?

With detection-as-code (DaC), security practitioners author detection content using code like Python or YAML, and manage the detections using version control, just like software source code.

But that’s just the mechanism. The reason why security practitioners consider DaC effective is because it applies time-tested software development best practices to threat detection and response workflows that support agility and automation. Detection-as-code has these core features:

  • Expressive languages and code reuse. With DaC, practitioners create detection content using programming languages that are expressive—easy to write, easy to understand, loaded with built-in tools, and flexible for a broad range of applications and code reuse
  • Version control (VC). Like most software, DaC uses a version control system like GitLab or GitHub to store and manage detection content. VC provides highly effective change tracking, auditing, and rollbacks
  • Test-driven development (TDD). In this approach, tests for new detection content are developed alongside the detection code itself, integrating quality assurance (QA) into software development
  • Automation. DaC workflows can be automated with a continuous integration and continuous delivery (CI/CD) pipeline, enforcing QA and consistency, and supporting operational scale by reducing workloads—everything that this blog is about
  • Agile workflows. Widely used in software development, Agile is a methodology that prescribes iterative software development to deliver new features and updates incrementally while remaining open to changing priorities. As you know, change is constant in cybersecurity, and a methodology like Agile and an approach like DaC only serve to strengthen the detection development lifecycle by simplifying and automating change
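To make the idea of detection content as code concrete, here’s a hedged sketch of the YAML metadata for a hypothetical Panther-style detection rule. The field names follow the conventions you’ll see later in this blog; the rule ID, filename, and values are illustrative, not a real rule:

```yaml
# Hypothetical detection rule metadata, managed in version control
AnalysisType: rule
RuleID: "Example.Suspicious.Login"       # unique, user-defined identifier
Enabled: true
Filename: example_suspicious_login.py    # Python file with the detection logic
Severity: High
LogTypes:
  - Okta.SystemLog
Tests:                                   # unit tests live alongside the rule (TDD)
  - Name: Failed login triggers alert
    ExpectedResult: true
    Log:
      eventType: user.session.start
      outcome: FAILURE
```

Because this is just text in a repository, every change to the rule — its logic, severity, or tests — gets the same review, diff, and rollback treatment as any other source code.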

What is continuous integration and continuous delivery (CI/CD)?

Continuous integration and continuous delivery (CI/CD) is a longstanding software development approach that uses automation to improve the speed, efficiency, and reliability of software delivery—or threat detection, in the case of detection-as-code. 

  • Continuous integration (CI) is the practice of integrating code changes from multiple contributors into a central repository multiple times a day. CI tools automatically run tests and lint code to catch bugs before the code is deployed to production
  • Continuous delivery (CD) is the practice of regularly delivering changes to software. A core CD process is the automatic deployment of code to a production environment after passing through CI build and test procedures

Setting up a CI/CD pipeline provides many benefits. Frequent and smaller code changes mean easier troubleshooting, rollbacks, and reduced overall risk. Integration and deployment automation speeds up the development cycle, and frees up practitioners’ time to focus on optimizing threat detection and response.

What are GitHub Actions and workflows?

GitHub Actions is an automation platform that GitHub provides to its users not only to develop CI/CD pipelines, but also to automate labeling GitHub issues, creating release notes, assigning issues to repository collaborators, and much more.

To automate a process using GitHub Actions, you create a “workflow” that executes predefined actions in response to various triggers like a scheduled event, pushing a commit, or creating a pull request. A workflow is one of the five main components that make up GitHub Actions:

  • Workflows: the overarching automation process. Workflows are written in YAML and can be as simple or complex as your project demands
  • Events: the triggers that initiate the workflow
  • Jobs: individual tasks within a workflow that run on specific “runners”
  • Runners: a virtual machine or container that executes jobs and “steps”
  • Steps: the smallest units of work within a job, usually executing a single command or script
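To see how the five components fit together, here’s a minimal, hypothetical workflow with each piece labeled. The filename and echo command are placeholders, not part of any Panther template:

```yaml
# .github/workflows/hello.yml — a minimal example workflow
name: Hello                  # the workflow
on:
  push:                      # the event that triggers it
jobs:
  greet:                     # a job
    runs-on: ubuntu-latest   # the runner that executes it
    steps:                   # the steps within the job
      - name: Say hello
        run: echo "Hello from GitHub Actions"
```

Every real workflow in this blog follows this same shape, just with more jobs and steps.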

Finally, GitHub is just one of many version control and CI/CD platforms that you can use to manage your detection content. Alternatives include GitLab, Jenkins, and CircleCI among others.

Workflow overview

In the next section, you’ll set up a CI/CD pipeline using GitHub Actions to automate a detection-as-code workflow with Panther. After everything is set up, here’s what your typical workflow will look like:

  1. To update detection content, you’ll make changes to a local copy of the detection content on your computer. The production copy that contains your SIEM’s published detection content is located in a remote repo on GitHub.
  2. After changing the local detection content, you’ll follow test-driven development (TDD) to either update and run existing tests to make sure your changes don’t break anything, or by creating and running new tests for brand new detection content.
  3. When the tests are passing, you’ll commit your changes and push them to the remote GitHub repo.
  4. Within the remote GitHub repo, you’ll open a pull request (PR) to start the review process to ultimately deploy the changes to your SIEM. The PR is the first event that triggers an automated GitHub Actions workflow to test and lint the code as part of continuous integration (CI). If any CI check fails, you need to fix the problems that your changes introduced. 
  5. When the CI checks pass, a colleague reviews the PR and either approves it or requests changes. 
  6. Upon PR approval, the changes are merged into the production branch of the detection content. Merging is the next event that triggers another GitHub Actions workflow to deploy the updated detection content to Panther as part of continuous delivery (CD). 

Steps 4 and 6 are fully automated in the CI/CD pipeline. This workflow is repeated for every change to detection content, though your organization may adopt variations, like squashing or rebasing commits before merging. A few other implementation details are still missing, but that’s what the next section is about.

How to create a CI/CD pipeline to automate DaC workflows

Now for hands-on learning. This section will guide you on how to use GitHub Actions to set up a CI/CD pipeline for Panther’s detection-as-code using the command line interface (CLI). 

Panther provides templates for GitHub Actions workflows in the panther-analysis repo to make getting started simple. So in this section, you’ll learn the basics of GitHub Actions and how to use the workflow templates effectively. 

You’ll understand:

  • The tools and prerequisites for getting started with Panther, DaC, and CI/CD pipelines with GitHub Actions
  • How to write a GitHub Actions workflow in YAML
  • How to use Panther’s CI/CD workflow templates
  • What’s next: other ways to use GitHub Actions to automate work

Quick disclaimer: The code snippets in this blog may differ from the current workflow templates in the panther-analysis repo due to later improvements by Panther’s software engineering team. Even so, you’ll still get everything you need to get started with the templates in this blog.

Prerequisites: Getting started with Panther

Here’s what you need to get started with Panther, DaC, and GitHub Actions:

1. Navigating the .github folder

Open up your copy of the panther analysis repo in your preferred code editor, and locate the .github folder within the file tree. 

The .github folder contains anything GitHub related, most often templates and GitHub Actions workflows. 

At the top level, you’ll find two templates, pull.yml and release.yml. Both are used internally by Panther and should be left alone.

You’ll also see a workflows folder. This is where all of the GitHub Actions workflows are saved, and where you’ll find a handful of workflow templates.

The check-packs.yml and release.yml workflows are used internally by Panther and should be left alone. You can use the remaining workflows as-is or further configure them:

  • docker.yml spins up a Docker container when changes are made to the Dockerfile to verify the image works as expected. This is an optional workflow for those who use Docker
  • lint.yml lints source code for syntax or style issues when a PR is opened or updated
  • sync-from-upstream.yml syncs your copy of panther-analysis with the base panther-analysis repo to incorporate updates to Panther-managed detections
  • test.yml runs tests for all detection content using the panther_analysis_tool (PAT) when a PR is opened or updated
  • upload.yml validates and uploads enabled detection content to the Panther Console using PAT when source code is merged into the production branch

Together, lint.yml, test.yml, and upload.yml make up the CI/CD pipeline. The rest of this blog explains how these files work.

2. Creating a new workflow

You can create a new workflow locally using the CLI or within GitHub. To create a new workflow using the CLI, follow these steps:

  1. Create a new YAML file within the .github/workflows folder on a new feature branch
  2. Set a filename that’s descriptive of the workflow
  3. Add the workflow YAML code
  4. Optionally, test your workflow locally before invoking it. There are a few options to do this, like the GitHub Actions Toolkit and act
  5. Commit your changes to git history
  6. Push your feature branch to the remote repo
  7. Open a PR, get it peer reviewed and approved, and merge in your changes to the production branch
  8. If you could not do step 4, test your workflow by invoking it manually

GitHub will automatically discover any workflow in the .github/workflows folder and invoke them when their triggering event occurs, like a push to a branch. Next, let’s dig into the YAML to clarify how this works.

3. CI testing logic in test.yml

test.yml is a continuous integration workflow that runs all tests in the repo when changes are submitted through a pull request. This ensures that new changes work as expected alongside all existing code—detection rules, policies, schemas, and so on.

Take a look at the YAML in test.yml:

on:
  pull_request:

jobs:
  test:
    name: Test
    runs-on: ubuntu-latest

    steps:
      - name: Checkout panther-analysis
        uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b #v4.1.4

      - name: Set python version
        uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d #v5.1.0
        with:
          python-version: "3.11"

      - name: Install pipenv
        run: pip install pipenv

      - name: Setup venv
        run: make venv

      - name: test
        run: |
          pipenv run panther_analysis_tool test

YAML uses indentation to denote hierarchy and dashes to denote list items. Any code that begins with a # is a comment, like #v4.1.4.

At the top level of the hierarchy there are two keys:

  • on: specifies the event that triggers the workflow, in this case pull_request. This means the workflow runs anytime a pull request is opened in the repo or subsequent changes are pushed to an open pull request
  • jobs: lists one or more jobs that will run in the workflow. There’s just one job defined by the user-defined ID test
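As an optional refinement that is not part of the template, GitHub Actions lets you narrow a pull_request trigger with a paths filter, so CI only runs when detection content actually changes. The folder names here are examples:

```yaml
on:
  pull_request:
    paths:
      - "rules/**"      # run only when detection rules change
      - "policies/**"   # or when policies change
```

For a repo that also holds documentation or tooling, this avoids burning runner minutes on changes that can’t affect detections.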

Within the test job, three keys define basic information about the job:

  • name: specifies a user-defined title for the job, in this case Test
  • runs-on: specifies the runner, in this case an Ubuntu Linux virtual machine
  • steps: groups the individual steps, scripts or commands, that make up the job. In this case, there are five steps that handle accessing the repo, installing dependencies like PAT, and running all the unit tests in the repo

Each individual step starts with a dash. Here’s a breakdown of the keys used within steps:

  • name: specifies a human-readable description of the step
  • uses: invokes a built-in GitHub action to perform a task. For example, checkout@v4 handles checking out the repo, and setup-python@v5 handles installing Python on the virtual machine. In the code snippet above, versions are pinned using a commit ID instead of a version number
  • with: specifies configuration for an action, in this case Python version 3.11
  • run: invokes a command or script to run in the virtual machine
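Pinning actions by commit ID rather than tag is a supply-chain hardening practice: a tag like v4 can later be re-pointed to different code, while a commit SHA is immutable. Both forms are valid YAML for the uses key:

```yaml
steps:
  # Pinned to a mutable tag — simpler to read, but the tag can be moved
  - uses: actions/checkout@v4

  # Pinned to an immutable commit SHA, with the tag noted in a comment
  - uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b #v4.1.4
```

The Panther templates use the SHA form, which is why each uses line ends with a version comment like #v4.1.4.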

To summarize, when a pull request is opened or updated, the test.yml workflow will: 

  1. Spin up a virtual machine runner using Ubuntu in which all the steps are executed
  2. Use the checkout GitHub action to access the panther-analysis repo
  3. Install Python and set the version to 3.11 using the setup-python GitHub Action
  4. Install pipenv, a virtual Python environment and package manager
  5. Use pipenv via the venv command to install Pipfile.lock dependencies to the virtual Python environment
  6. Invoke PAT to run test for all detection content
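One optional speed-up that is not part of the template: the setup-python action can cache pipenv dependencies between workflow runs, so the venv step doesn’t reinstall everything from scratch. A sketch of the modified step:

```yaml
- name: Set python version
  uses: actions/setup-python@v5
  with:
    python-version: "3.11"
    cache: "pipenv"   # cache dependencies, keyed on Pipfile.lock
```

On repositories with many dependencies, caching can meaningfully shorten CI feedback loops.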

4. CI linting logic in lint.yml

lint.yml is also a part of continuous integration. Linting discovers code syntax or style issues when changes are submitted through a pull request, verifying that new code integrates into existing code without errors.

Here’s the YAML in lint.yml:

on:
  pull_request:

jobs:
  lint:
    name: Lint
    runs-on: ubuntu-latest

    steps:
      - name: Checkout panther-analysis
        uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b #v4.1.4

      - name: Set python version
        uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d #v5.1.0
        with:
          python-version: "3.11"

      - name: Install pipenv
        run: pip install pipenv

      - name: Setup venv
        run: make venv

      - name: make lint
        run: make lint

Much of the lint.yml workflow is the same as you saw in test.yml. The key differences are that the job ID is lint and the job name is Lint to match the purpose of this workflow, and the last step is linting instead of testing. To summarize, when a pull request is opened or updated, the lint.yml workflow will: 

  1. Spin up a virtual machine runner using Ubuntu
  2. Use the checkout GitHub action to access the panther-analysis repo at the latest commit on the pull request branch
  3. Install Python and set the version to 3.11 using the setup-python GitHub action
  4. Install pipenv, a virtual Python environment and package manager
  5. Run make venv, which uses pipenv to install Pipfile.lock dependencies into the virtual Python environment
  6. Run the lint command to lint the repository code
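Because test.yml and lint.yml run on every pull request update, a burst of quick pushes can queue up redundant runs. An optional addition to either workflow (not in the templates) is a concurrency group that cancels superseded runs for the same branch:

```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true   # cancel outdated runs when a newer push arrives
```

Only the results for the latest commit matter for the PR checks, so canceling stale runs saves runner time without losing any signal.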

5. CD logic in upload.yml

The final part of the pipeline is upload.yml, the continuous delivery (CD) workflow that deploys changes to your Panther instance. 

Here’s the YAML in upload.yml:

on:
  push:
    branches:
      - main

jobs:
  upload:
    name: Upload
    runs-on: ubuntu-latest
    env:
      API_HOST: ${{ secrets.API_HOST }}
      API_TOKEN: ${{ secrets.API_TOKEN }}
    steps:
      - name: Validate Secrets
        if: ${{ env.API_HOST == '' || env.API_TOKEN == '' }}
        run: |
          echo "API_HOST or API_TOKEN not set"
          exit 1

      - name: Checkout panther-analysis
        uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b #v4.1.4

      - name: Set python version
        uses: actions/setup-python@82c7e631bb3cdc910f68e0081d67478d79c6982d #v5.1.0
        with:
          python-version: "3.11"

      - name: Install pipenv
        run: pip install pipenv

      - name: Setup venv
        run: make venv

      - name: validate
        run: |
          pipenv run panther_analysis_tool validate --api-host ${{ env.API_HOST }} --api-token ${{ env.API_TOKEN }}

      - name: upload
        run: |
          pipenv run panther_analysis_tool upload --api-host ${{ env.API_HOST }} --api-token ${{ env.API_TOKEN }}


There are a handful of differences from the previous two workflows. First, take a look at the on key:

  • push: specifies that this workflow triggers on a push event
  • branches: lists one or more branches for the workflow to monitor for push events. In this case, this workflow only runs on a push event to the main branch. You should update the name of the branch (main) to match the name of your production branch

Now check out the jobs key. Here are the key differences from test.yml and lint.yml:

  • There’s one job with the job ID upload and the name Upload
  • env lists environment variables and secrets that need to be loaded into the runner before executing the steps. The two environment variables are the GitHub secrets that correspond to your Panther instance. You set these up previously following the instructions in the “prerequisites” section
  • The first step uses an if key to run a conditional expression. In this case, the conditional checks if the GitHub secrets are empty, and if so, ends the job
  • The step named validate uses pipenv to invoke PAT to validate the API host and token
  • The step named upload handles the actual process of uploading the changed detection content to the Panther Console

To summarize, when a push is made to the main branch, the upload.yml workflow will: 

  1. Spin up a virtual machine runner using Ubuntu
  2. Import the GitHub secrets into the virtual environment
  3. Validate the GitHub secrets
  4. Use the checkout GitHub action to access the panther-analysis repo
  5. Install Python and set the version to 3.11 using the setup-python GitHub action
  6. Install pipenv, a virtual Python environment and package manager
  7. Run make venv, which uses pipenv to install Pipfile.lock dependencies into the virtual Python environment
  8. Run PAT to validate the Panther API token and host name
  9. Run PAT to upload the detection content to the Panther console
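If you ever need to re-run a deployment without pushing a new commit — say, after fixing a secret — GitHub Actions supports adding a manual trigger alongside the push event. This is an optional tweak to the template’s on block:

```yaml
on:
  push:
    branches:
      - main
  workflow_dispatch:   # enables manual runs from the Actions tab
```

With workflow_dispatch in place, the upload workflow gains a “Run workflow” button in the repo’s Actions tab.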

6. Invoking the CI/CD workflow

With this blog’s earlier prerequisites complete and the test.yml, lint.yml, and upload.yml workflow templates within your repository, you have a CI/CD pipeline ready to use with DaC. Let’s invoke the pipeline by working through an example of updating detection logic using the CLI. 

Let’s say that you need to adjust the list of unused AWS regions in aws_unused_region.py. This detection monitors for adversaries trying to evade discovery by operating in these regions. Here’s what you’ll do:

  1. Open your repo in your preferred code editor
  2. Pull any remote changes to your local copy of the repository. If you haven’t downloaded or cloned your copy of panther-analysis to your computer, do so now
  3. Within your repo, create a new feature branch
  4. Create a new custom_rules/ subfolder within the rules/ folder. You’ll save customized detections to this folder, including brand new detections or customizations to Panther-managed detections, in order to avoid possible merge conflicts when syncing updates from the base panther-analysis repo.
  5. Copy the files rules/aws_cloudtrail_rules/aws_unused_region.py and rules/aws_cloudtrail_rules/aws_unused_region.yml into the custom_rules/ folder
  6. Open the original detection metadata file rules/aws_cloudtrail_rules/aws_unused_region.yml and disable this detection by setting Enabled: false and save your changes. Going forward, you’ll use the customized detection rule declared in the custom_rules/ folder.
  7. Within rules/custom_rules/aws_unused_region.yml, set Enabled: true and update the RuleID key to “AWS.Customized.UnusedRegion” to differentiate it from the original rule. Optionally, update the DisplayName as well. Save your changes.

  8. Next, open rules/custom_rules/aws_unused_region.py to update the detection logic. Let’s say that your org does not use the Middle East (UAE) region. On line 5, simply add “me-central-1” to the UNUSED_REGIONS set and save your changes
  9. Any change to detection content needs to be matched with a test, following test-driven development. Tests are declared in the YAML metadata file. Open rules/custom_rules/aws_unused_region.yml and you’ll find three tests that verify that only unused AWS regions trigger an alert. Since the test cases are covered, the next step is to verify that your changes have not broken the existing tests. Run pat test --path rules/custom_rules/ in your terminal to check
  10. Next, commit your changes to version control history and push your feature branch to the remote repository
  11. Within GitHub, open a new PR for your feature branch, and this will trigger the CI workflow. Any failing checks should be addressed before moving to the next step. At this point, a colleague typically reviews your PR to approve it or request further changes
  12. Merge the PR into the production branch. This will trigger the CD workflow to upload enabled detection content to the Panther console
  13. Select the Actions tab within GitHub, and you’ll see the upload workflow queued up, as well as a history of other workflows. Eventually this will update to a green check mark indicating a successful CD workflow, or a red X that means something in the CD workflow needs to be fixed
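If you later wanted an explicit test for the new region, here’s a hedged sketch of an additional test case you might add to the Tests list in aws_unused_region.yml. The log fields below are simplified placeholders rather than a complete CloudTrail event:

```yaml
Tests:
  - Name: Activity in me-central-1 Triggers Alert
    ExpectedResult: true       # an event in the newly unused region should alert
    Log:
      awsRegion: me-central-1
      eventSource: ec2.amazonaws.com
      eventName: RunInstances
```

Adding a test like this locks in the new behavior, so a future edit that drops me-central-1 from UNUSED_REGIONS would fail CI.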

And that’s it! After the CD process is complete, you’ll be able to find your changes deployed in the Panther Console.

Continue automating

Setting up automated testing and deployment with a CI/CD pipeline is just the start of how you can streamline your work with GitHub Actions. Here’s what you should look into next:

  • There are many ways to set up a CI/CD pipeline. Visit the Panther docs to see another example in which both CI and CD jobs are contained in one workflow based around a push event to the production branch. 
  • Panther recommends syncing weekly by tag to keep Panther detections up-to-date. You can sync a public fork of the panther-analysis repo, or a private clone, both of which rely on the sync-from-upstream.yml GitHub Actions workflow. 
  • If you use Pantherlog to parse custom log sources, you can create a job in your CI/CD pipeline that tests your custom schemas. See the Panther documentation for an example.
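As a sketch of the first idea — a single workflow holding both the CI and CD jobs — the structure might look like the following, with needs: ordering the jobs and an if: guard limiting deployment to the production branch. This is a hypothetical outline, not Panther’s exact example:

```yaml
on:
  push:
    branches:
      - main

jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make lint && make test    # hypothetical CI commands

  cd:
    needs: ci                          # runs only after ci succeeds
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "deploy step here"   # placeholder for the PAT upload step
```

The trade-off is simplicity versus granularity: one file is easier to maintain, while separate workflows give separate status checks on each PR.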

Conclusion

Using code to manage detection content in the detection-as-code paradigm enables teams to streamline workflows with time-tested software development best practices: version control, automation with CI/CD pipelines, and test-driven development. These support agile workflows that scale, remain flexible in response to rapidly changing attack vectors, and enforce consistency and quality—essentials for modern security teams to stay ahead of threats.

Panther is the leading cloud-native SIEM that offers highly flexible detection-as-code backed by a serverless security data lake. Ready to try Panther? Request a demo to get started.
