To help practitioners keep up in the rapidly evolving landscape of cloud security, Panther offers robust support for all three major cloud providers: AWS, Azure, and GCP. This post delves into Panther’s integrations with them, highlighting key threat detection use cases made possible via built-in log integrations and detection rules.
Ingest Method: CloudTrail via AWS S3
Onboarding AWS CloudTrail logs into Panther provides a comprehensive way to monitor and analyze AWS activities. Panther supports various AWS data transports, with S3 being a popular choice. Setup involves configuring a new S3 log source in Panther and creating an IAM role for Panther. For Panther to receive notifications of new data, you need to configure your CloudTrail bucket to send event notifications to Panther’s SNS topic, and the role needs trust permissions to access both the topic and the bucket. Once the Role ARN is provided, Panther can start collecting data, and rules like CloudTrail Password Policy Discovery can be applied for advanced analysis.
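As a rough sketch, the bucket-to-SNS notification wiring can be expressed as the payload below. The topic ARN and bucket name here are placeholders for illustration; use the actual values shown in the Panther UI during source setup.

```python
import json

# Hypothetical topic ARN -- Panther's UI displays the real one for your source.
PANTHER_SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:panther-notifications"


def build_notification_config(topic_arn: str) -> dict:
    """Build the S3 event-notification payload that forwards new-object
    events (new CloudTrail log files) to Panther's SNS topic."""
    return {
        "TopicConfigurations": [
            {
                "TopicArn": topic_arn,
                # Fire on every newly created object, i.e. each new log file.
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    }


config = build_notification_config(PANTHER_SNS_TOPIC_ARN)
# With boto3, this payload would be applied via:
#   boto3.client("s3").put_bucket_notification_configuration(
#       Bucket="my-cloudtrail-bucket", NotificationConfiguration=config)
print(json.dumps(config, indent=2))
```

Remember that the IAM role Panther assumes also needs read access to the bucket and subscribe access to the topic, as described above.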
Detection Rule: AWS CloudTrail Password Policy Discovery
Panther’s rule for detecting AWS account password policy discovery fires when a password policy is viewed or updated, which can indicate an actor attempting to understand password requirements as part of reconnaissance for an attack.
The specific CloudTrail event names that can indicate such activity are GetAccountPasswordPolicy, UpdateAccountPasswordPolicy, and PutAccountPasswordPolicy. The rule checks for these event names and ensures they are not routine AWS service events before firing an alert.
from panther_base_helpers import aws_rule_context, deep_get
PASSWORD_DISCOVERY_EVENTS = [
    "GetAccountPasswordPolicy",
    "UpdateAccountPasswordPolicy",
    "PutAccountPasswordPolicy",
]


def rule(event):
    service_event = event.get("eventType") == "AwsServiceEvent"
    return event.get("eventName") in PASSWORD_DISCOVERY_EVENTS and not service_event


def title(event):
    user_arn = deep_get(event, "userIdentity", "arn", default="<MISSING_ARN>")
    return f"Password Policy Discovery detected in AWS CloudTrail from [{user_arn}]"


def alert_context(event):
    return aws_rule_context(event)
Monitoring these activities is critical for maintaining robust password policies and safeguarding against unauthorized access.
As an aside: rules in Panther are meant to be modified — you can modify rules like this to match your specific use case by creating and maintaining your own fork of the panther-analysis repository, cloning rules within Panther’s workflow, or creating inline filters.
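For example, a forked or cloned copy of the password policy rule might suppress alerts from a sanctioned automation principal. The ARN below is hypothetical, and a plain dict stands in for Panther's event object (both support .get() lookups):

```python
PASSWORD_DISCOVERY_EVENTS = [
    "GetAccountPasswordPolicy",
    "UpdateAccountPasswordPolicy",
    "PutAccountPasswordPolicy",
]

# Hypothetical allowlist -- replace with principals your team has sanctioned.
ALLOWED_ARNS = {"arn:aws:iam::123456789012:role/compliance-scanner"}


def rule(event):
    if event.get("eventName") not in PASSWORD_DISCOVERY_EVENTS:
        return False
    if event.get("eventType") == "AwsServiceEvent":
        return False
    # Added filter: suppress alerts from sanctioned automation.
    arn = event.get("userIdentity", {}).get("arn", "")
    return arn not in ALLOWED_ARNS


# Fabricated sample events for a quick sanity check.
sanctioned = {
    "eventName": "GetAccountPasswordPolicy",
    "eventType": "AwsApiCall",
    "userIdentity": {"arn": "arn:aws:iam::123456789012:role/compliance-scanner"},
}
unknown = {
    "eventName": "GetAccountPasswordPolicy",
    "eventType": "AwsApiCall",
    "userIdentity": {"arn": "arn:aws:iam::123456789012:user/mallory"},
}
```

An inline filter in Panther achieves the same suppression without editing the rule body, which keeps the upstream rule easy to update.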
Ingest Method: Azure Sign-in Logs via Azure Blob Storage
Onboarding Azure audit and sign-in logs to Panther is done via Azure Blob Storage. The steps involve creating an Azure Blob Storage source in Panther, exporting Azure audit and sign-in logs to the Blob Storage container, and assigning appropriate access roles in Azure so that Panther can read data from the container. This method seamlessly integrates Panther with Azure, allowing for efficient log management and analysis via detection rules such as Azure Many Failed Signins.
Detection Rule: Azure Many Failed Signins
Panther’s Azure detection rules focus on identifying sign-in attempts with high risk levels and monitoring failed sign-in activity. This rule flags multiple failed sign-in attempts in Azure: the is_sign_in_event helper function identifies sign-in activity, and an error code above 0 in the event’s status indicates the sign-in failed.
from global_filter_azuresignin import filter_include_event
from panther_azuresignin_helpers import (
    actor_user,
    azure_signin_alert_context,
    is_sign_in_event,
)
from panther_base_helpers import deep_get


def rule(event):
    if not is_sign_in_event(event):
        return False
    if not filter_include_event(event):
        return False
    error_code = deep_get(event, "properties", "status", "errorCode", default=0)
    return error_code > 0


def title(event):
    principal = actor_user(event)
    if principal is None:
        principal = "<NO_PRINCIPALNAME>"
    return f"AzureSignIn: Multiple Failed LogIns for Principal [{principal}]"


def dedup(event):
    principal = actor_user(event)
    if principal is None:
        principal = "<NO_PRINCIPALNAME>"
    return principal


def alert_context(event):
    return azure_signin_alert_context(event)
These indicators help identify potential brute-force attacks, compromised credentials, or password-spraying attempts, improving Azure account security.
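To see the error-code check in isolation, here is a minimal standalone sketch of the failure test, with a small stand-in for the deep_get helper. The sample events are fabricated; real Azure sign-in logs carry many more fields:

```python
def deep_get(obj, *keys, default=None):
    # Minimal stand-in for panther_base_helpers.deep_get:
    # walk nested dicts, returning the default on any missing key.
    for key in keys:
        if not isinstance(obj, dict):
            return default
        obj = obj.get(key)
        if obj is None:
            return default
    return obj


def is_failed_sign_in(event):
    # Mirrors the rule's core check: errorCode 0 means success,
    # anything greater indicates a failed sign-in.
    return deep_get(event, "properties", "status", "errorCode", default=0) > 0


# Fabricated sample events.
failed = {"properties": {"status": {"errorCode": 50126}}}  # invalid credentials
success = {"properties": {"status": {"errorCode": 0}}}
```

In the full rule, Panther's deduplication on the principal name groups repeated failures into a single alert thread, which is what surfaces the "many failed sign-ins" pattern.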
Ingest Method: GCP Audit logs via GCS (Google Cloud Storage)
Onboarding GCP audit logs in Panther is done via Pub/Sub or GCS. Setup for GCS involves configuring a new source in Panther and setting up prefixes and schemas according to your data structure. For Panther to receive notifications of new data, you need to configure a GCS bucket to send notifications to a Pub/Sub topic. This involves creating a bucket (if none exists) and a Pub/Sub topic for notifications, configuring the bucket to send notifications, and creating a subscription for Panther to use. Additionally, a new Google Cloud service account is created for Panther’s access. It can be set up manually in the GCP console or via a provided Terraform template. The GCP.Audit logs schema is applied during configuration.
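For those using the Terraform route, the shape of that wiring looks roughly like the fragment below. The resource names are placeholders, and Panther’s provided template may differ in its details:

```hcl
resource "google_pubsub_topic" "panther_notifications" {
  name = "panther-audit-log-notifications"
}

# Send a Pub/Sub message whenever a new audit-log object lands in the bucket.
resource "google_storage_notification" "audit_logs" {
  bucket         = google_storage_bucket.audit_logs.name
  topic          = google_pubsub_topic.panther_notifications.id
  payload_format = "JSON_API_V1"
  event_types    = ["OBJECT_FINALIZE"]
}

# The subscription Panther pulls from.
resource "google_pubsub_subscription" "panther" {
  name  = "panther-audit-log-subscription"
  topic = google_pubsub_topic.panther_notifications.id
}

# Dedicated service account for Panther's read access.
resource "google_service_account" "panther" {
  account_id   = "panther-log-reader"
  display_name = "Panther log reader"
}
```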
This process ensures that you can efficiently monitor and run detections over your GCP log data for comprehensive security and compliance monitoring. One rule example is GCP Workforce Pool Created or Updated.
Detection Rule: GCP Workforce Pool Created or Updated
This rule tracks the creation of, or modifications to, Workforce Identity Pools in GCP, which could be used in unauthorized ways to persist access in an environment. The rule uses string matching on the methodName field and provides necessary context around the actor, workforce pool name, and organization ID in the rule title, allowing for fast triage at the alert destination.
METHODS = [
    "google.iam.admin.v1.WorkforcePools.CreateWorkforcePool",
    "google.iam.admin.v1.WorkforcePools.UpdateWorkforcePool",
]


def rule(event):
    return event.deep_get("protoPayload", "methodName", default="") in METHODS


def title(event):
    actor = event.deep_get(
        "protoPayload", "authenticationInfo", "principalEmail", default="<ACTOR_NOT_FOUND>"
    )
    workforce_pool = event.deep_get(
        "protoPayload", "request", "workforcePool", "name", default="<POOL_NOT_FOUND>"
    ).split("/")[-1]
    resource = event.deep_get("logName", default="<LOG_NAME_NOT_FOUND>").split("/")
    if "organizations" in resource:
        organization_id = resource[resource.index("organizations") + 1]
    else:
        organization_id = "<ORG_NOT_FOUND>"
    return (
        f"GCP: [{actor}] created or updated workforce pool "
        f"[{workforce_pool}] in organization [{organization_id}]"
    )


def alert_context(event):
    return event.deep_get("protoPayload", "request", "workforcePool", default={})
Unusual or unauthorized modifications in these areas could indicate attempts to establish persistent access within the GCP environment, potentially signaling a compromised account or insider threat.
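The title’s organization lookup simply walks the logName path. A quick standalone sketch with a fabricated logName (the IDs are placeholders) shows the extraction:

```python
def extract_org_id(log_name: str) -> str:
    # Organization-level audit logNames look like
    # "organizations/<ORG_ID>/logs/cloudaudit.googleapis.com%2Factivity";
    # the organization ID is the segment after "organizations".
    parts = log_name.split("/")
    if "organizations" not in parts:
        return "<ORG_NOT_FOUND>"
    return parts[parts.index("organizations") + 1]


sample = "organizations/123456789/logs/cloudaudit.googleapis.com%2Factivity"
```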
Integrations Reference
AWS
Azure
GCP
Comparative Analysis: Panther vs. Competing Solutions
As we explore Panther’s integrations and capabilities, it’s also worth understanding how Panther compares to other SIEMs in terms of ease of use and flexibility. For multi-cloud support, it’s important to have a SIEM that is flexible and customizable enough to monitor your entire environment without intensive administrative and operational overhead for your team.
Panther’s ease of ingestion sets it apart from both legacy and modern, cloud-based SIEM solutions. With out-of-the-box integration support for all three major cloud providers, Panther makes it straightforward for your team to start ingesting and monitoring your cloud logs on day one. Because Panther provides this native integration, your team doesn’t need to spend valuable engineering time on pipeline administration or management.
Out-of-the-box detections for all three cloud providers ensure immediate coverage for common use cases, while Panther’s modern detections-as-code approach enables detection engineers to create customized, high-fidelity rules. Legacy SIEM solutions limit threat coverage with a lack of dynamic detections, making broad coverage for your cloud logs more challenging to achieve.
While other solutions might provide a threat detection framework of their own, they tend to lack the powerful operational rigor that Panther provides for enhanced detection tuning and testing. Panther delivers higher efficacy detections, regardless of what cloud logs you need to monitor and secure.
Panther includes out-of-the-box integrations for commonly used alert destinations like Asana, Slack, and PagerDuty, plus a webhook destination to set up any custom destinations your team requires. Other SIEMs tend to lack flexible, native alert destination support, slowing down investigation workflows and response times.
Panther distinguishes itself from the field with ease of cloud log ingestion, both out of the box and customizable detections, and expansive alert destinations. This makes Panther an appealing choice for organizations looking for a flexible cloud-centric SIEM.
Further Exploration
To stay up-to-date on Panther releases, see Panther’s Release Notes. For a deeper dive into Panther’s log ingestion and detection capabilities, visit Panther’s official documentation and panther-analysis repository.