How to monitor and harden S3 buckets with Panther
In the cloud, object storage is a ubiquitous and necessary service. It powers our digital applications and environments and contains financial data, personally identifiable information (PII), user analytics, and more. In the last several years, attackers have repeatedly exploited misconfigured AWS S3 buckets, compromising user trust and costing businesses billions of dollars. Under the cloud’s shared responsibility model, you are always responsible for your service configurations, while the provider is responsible for everything abstracted away from you (hypervisor, hardware, etc.).
In this tutorial, we will walk through strategies to monitor your most sensitive data in S3 using Panther, with the goal of providing complete visibility into how your data is accessed.
At the end of this tutorial, you will have:
A pipeline to detect and alert on unauthorized access to your S3 buckets
An understanding of your S3 security posture
A normalized, centralized, and searchable catalog of S3 log data
The examples below will use Panther’s open source rules and policies.
Getting Started
Prerequisites:
An existing CloudTrail
A set of buckets to monitor
Approach:
Enable S3 access logging
Create Python-based detections
Scan buckets for misconfigurations
An Overview of S3 Bucket Monitoring
The first step is to configure monitoring for all S3 bucket activity, including GET, LIST, and DELETE calls, which can reveal unauthorized access to your data.
Amazon provides two mechanisms for monitoring S3 bucket calls: CloudTrail (via Data Events) and S3 Server Access Logs. CloudTrail is a service that records API calls, focused on infrastructure changes and management activity. S3 Server Access Logs provide more detailed, web-style logs of traffic to your objects and buckets.
Comparing CloudTrail and S3 Server Access Logs
Given the same request, we saw about a 40-minute difference in delivery time, with the S3 Server Access Log entry arriving well after the corresponding CloudTrail event. The logs below show the difference in the data collected:
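For illustration, here is roughly what each looks like (both entries are abbreviated, with placeholder account IDs, buckets, and keys). A CloudTrail data event is structured JSON:

```json
{
  "eventSource": "s3.amazonaws.com",
  "eventName": "GetObject",
  "eventTime": "2020-06-01T21:42:18Z",
  "userIdentity": {
    "type": "AssumedRole",
    "arn": "arn:aws:sts::123456789012:assumed-role/app-role/app"
  },
  "requestParameters": {
    "bucketName": "example-bucket",
    "key": "data/file.txt"
  },
  "sourceIPAddress": "192.0.2.3"
}
```

An S3 Server Access Log entry for the same request is a single space-delimited line (trailing fields truncated here):

```
79a59df900b949e5 example-bucket [01/Jun/2020:21:42:18 +0000] 192.0.2.3 arn:aws:sts::123456789012:assumed-role/app-role/app 3E57427F3EXAMPLE REST.GET.OBJECT data/file.txt "GET /data/file.txt HTTP/1.1" 200 - 2662992 2662992 70 10 "-" "aws-sdk-go/1.31" ...
```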
Choosing an Approach
If you are sensitive to price and need to monitor buckets at high scale, S3 Server Access Logs may be the best option for you. If you need lower latency and overhead, easy centralization of data, and don’t mind spending the extra money, CloudTrail will work best for you.
You can also use a hybrid of both!
Next, let’s walk through the setup and analysis of these logs.
Configuring Data Events in CloudTrail
To configure CloudTrail for monitoring S3 buckets, you will need to add EventSelectors to a new or existing CloudTrail, specifying the bucket(s) to monitor and whether read or write events should be captured.
The CloudFormation template below can be used for creating a new CloudTrail with the proper settings:
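A minimal sketch of such a template (trail and bucket names are placeholders, and the CloudTrail delivery bucket with its bucket policy is assumed to exist already):

```yaml
Resources:
  S3DataEventsTrail:
    Type: AWS::CloudTrail::Trail
    Properties:
      TrailName: s3-data-events            # placeholder name
      S3BucketName: my-cloudtrail-logs     # pre-existing delivery bucket (placeholder)
      IsLogging: true
      IsMultiRegionTrail: true
      EventSelectors:
        - ReadWriteType: All               # capture both read and write data events
          IncludeManagementEvents: true
          DataResources:
            - Type: AWS::S3::Object
              Values:
                # The trailing slash scopes the selector to all objects in the bucket
                - arn:aws:s3:::my-sensitive-bucket/
```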
Once your CloudTrail bucket is configured, walk through the steps here to complete your setup.
Configuring S3 Access Logs
Alternatively, you may use S3 Server Access logging, which provides a slightly different set of data. As a default, we recommend turning on S3 Server Access Logs for most of your buckets.
To configure S3 Access Logging, you’ll need a bucket to receive the data in each region where your buckets exist.
Use the template below to create the bucket for receiving data:
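A sketch of such a template; the bucket name is a placeholder, and the SNS topic ARN stands in for the topic created by your Panther installation:

```yaml
Resources:
  AccessLogBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-s3-access-logs      # placeholder
      AccessControl: LogDeliveryWrite    # allows the S3 log delivery group to write
      NotificationConfiguration:
        TopicConfigurations:
          - Event: s3:ObjectCreated:*    # notify Panther of each new log file
            Topic: arn:aws:sns:us-east-1:123456789012:panther-notifications-topic
```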
Note: We have configured the new bucket’s event notifications to publish to the SNS topic created by your Panther installation. For SaaS customers, the account ID can be gathered from the General Settings page.
For each bucket you plan to capture logs from, add the LoggingConfiguration setting as shown below:
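A sketch, assuming the monitored bucket lives in the same template as the receiving bucket above (otherwise, set DestinationBucketName to the receiving bucket’s name directly):

```yaml
  # Added under the same Resources section as AccessLogBucket
  MonitoredBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-sensitive-bucket        # placeholder
      LoggingConfiguration:
        DestinationBucketName: !Ref AccessLogBucket
        LogFilePrefix: my-sensitive-bucket/  # keeps each bucket's logs separated
```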
Once data begins to flow, you’ll start to see raw logs landing in the access log bucket. Luckily, Panther takes care of normalizing them into JSON:
Format before:
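An illustrative raw entry: a single space-delimited line (wrapped here for readability, with placeholder values):

```
79a59df900b949e5 example-bucket [01/Jun/2020:21:42:18 +0000] 192.0.2.3
arn:aws:sts::123456789012:assumed-role/app-role/app 3E57427F3EXAMPLE
REST.GET.OBJECT data/file.txt "GET /data/file.txt HTTP/1.1" 200 - 2662992
2662992 70 10 "-" "aws-sdk-go/1.31" - ExampleHostId= SigV4
ECDHE-RSA-AES128-SHA AuthHeader example-bucket.s3.amazonaws.com TLSv1.2
```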
Format after:
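The same event after parsing (a subset of fields; names approximate Panther’s AWS.S3ServerAccess schema):

```json
{
  "bucketowner": "79a59df900b949e5",
  "bucket": "example-bucket",
  "time": "2020-06-01T21:42:18Z",
  "remoteip": "192.0.2.3",
  "requester": "arn:aws:sts::123456789012:assumed-role/app-role/app",
  "operation": "REST.GET.OBJECT",
  "key": "data/file.txt",
  "httpstatus": 200,
  "bytessent": 2662992,
  "useragent": "aws-sdk-go/1.31",
  "tlsVersion": "TLSv1.2"
}
```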
To complete this setup, onboard your newly created access logs bucket here.
S3 Analysis with Python Rules
Now that data is flowing, let’s use several built-in detections to alert us if something suspicious happens.
For the context of the rules below, all logs come from S3 Server Access Logs, which tend to provide richer information for security monitoring purposes.
When writing rules, here are several common patterns to monitor for:
Modeling “known-good” traffic flows and alerting on deviations
Finding insecure access to buckets
Detecting access errors on buckets
The examples below use Panther’s open source rules.
Known-Good Traffic Flows
For buckets that should only ever be accessed by automation and applications, you can use the rule below to model that expected behavior:
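A simplified sketch of such a rule; the mapping entries are illustrative, and the open-source rule’s exact contents may differ:

```python
from fnmatch import fnmatch

# Map bucket name patterns to the IAM role patterns allowed to access them.
# These entries are illustrative; replace them with your own traffic flows.
BUCKET_ROLE_MAPPING = {
    "panther-*-processeddata-*": [
        "arn:aws:sts::*:assumed-role/panther-log-analysis-*",
    ],
}


def rule(event):
    bucket = event.get("bucket") or ""
    requester = event.get("requester") or ""
    for bucket_pattern, role_patterns in BUCKET_ROLE_MAPPING.items():
        if not fnmatch(bucket, bucket_pattern):
            continue
        # Alert when the requester matches none of the allowed role patterns
        if not any(fnmatch(requester, rp) for rp in role_patterns):
            return True
    return False
```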
To add your own internal flows, add your bucket pattern to the BUCKET_ROLE_MAPPING with the value being a list of IAM role patterns. By default, Panther monitors itself, and if you try to access the data directly as an IAM user or with another role, an alert will be generated.
Additionally, to monitor on an IP level, the rule below can be used:
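A sketch of an IP-level variant; the bucket name and allowed networks are placeholders for your CDN or VPC ranges:

```python
from ipaddress import ip_address, ip_network

# Illustrative allow-list: the only networks that should reach the bucket
ALLOWED_NETWORKS = [
    ip_network("10.0.0.0/8"),     # internal VPC subnet (placeholder)
    ip_network("192.0.2.0/24"),   # CDN egress range (placeholder)
]
MONITORED_BUCKETS = {"example-sensitive-bucket"}


def rule(event):
    if event.get("bucket") not in MONITORED_BUCKETS:
        return False
    try:
        remote = ip_address(event.get("remoteip") or "")
    except ValueError:
        return False
    # Alert on any request originating outside the allowed networks
    return not any(remote in net for net in ALLOWED_NETWORKS)
```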
This can be used when expecting bucket objects to only be reachable from a CDN or from a specific subnet within a VPC.
Insecure Access
If bucket data was accessed without SSL, there’s potential that an attacker could intercept the traffic in plaintext to get to your sensitive data. The rule below will detect if any such connections occurred within your buckets:
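A sketch of this check, under the assumption that S3 only records a cipher suite and TLS version for HTTPS requests, so their absence implies a plaintext connection (field names approximate Panther’s AWS.S3ServerAccess schema):

```python
def rule(event):
    # Both fields missing suggests the request arrived over plain HTTP
    return event.get("ciphersuite") is None and event.get("tlsVersion") is None
```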
Error Monitoring
Monitoring S3 request errors can help identify enumeration by attackers attempting to get to your data. This rule will fire an alert if any of the monitored HTTP status codes are seen when attempting to GET, PUT, or DELETE objects in your buckets:
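A sketch of this rule; the monitored status codes (403 Forbidden, 405 Method Not Allowed) are illustrative and worth tuning to your environment:

```python
# Status codes that commonly indicate denied or malformed access attempts
HTTP_STATUS_CODES_TO_MONITOR = {403, 405}
OBJECT_OPERATIONS = {"REST.GET.OBJECT", "REST.PUT.OBJECT", "REST.DELETE.OBJECT"}


def rule(event):
    return (
        event.get("operation") in OBJECT_OPERATIONS
        and event.get("httpstatus") in HTTP_STATUS_CODES_TO_MONITOR
    )
```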
Searching Collected Logs
To better inform our real-time rules, Panther has a mechanism to search through collected data using SQL. This is available from within the UI directly and offers performance improvements such as Parquet compaction.
The data is organized into three main databases:
panther_logs: All parsed and incoming logs
panther_rule_matches: All events associated with generated alerts (copies)
panther_views: A set of metadata that allows for quick lookups on standard fields (e.g. IP address)
As an example, the equivalent SQL search to the error monitoring rule above in the Panther Data Lake would be:
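A sketch of that query; the table and column names follow Panther’s conventions for S3 Server Access Logs, but may need adjusting for your deployment:

```sql
SELECT bucket, key, requester, remoteip, httpstatus,
       count(*) AS attempts
FROM panther_logs.aws_s3serveraccess
WHERE httpstatus IN (403, 405)
  AND operation IN ('REST.GET.OBJECT', 'REST.PUT.OBJECT', 'REST.DELETE.OBJECT')
  AND p_event_time > current_timestamp - interval '1' day
GROUP BY bucket, key, requester, remoteip, httpstatus
ORDER BY attempts DESC
```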

The output of this search can help you tune the Python rule above, ensuring it does not fire too often on certain patterns.
S3 Bucket Hardening
Let’s wrap up by ensuring your S3 buckets have secure configurations. This includes data encryption, public access blocks, bucket policies, and more. You may also want to define additional policies based on your internal business logic.
Panther can natively scan your accounts to discover any low-hanging fruit in your environment that attackers could easily exploit. Cloud Security scans also provide helpful context during an incident, because vulnerable configurations can reveal the root cause of attacker behavior.
Follow the instructions here to configure Cloud Security scans for a helpful baseline.
By default, Panther is pre-installed with the following S3 policies:
Bucket Encryption: Secure the data at rest with AWS SSE or KMS
Bucket Logging: Monitor all traffic in and out of the bucket
MFA Delete: Require multi-factor authentication prior to issuing delete calls
Public Access Blocking: Prevent buckets from becoming publicly accessible
Public Read or Write ACLs: Detect buckets with publicly-accessible ACLs
Bucket Versioning: Retain multiple variants of bucket objects to aid recovery
Secure Access: Enforce encrypted connections to buckets
These policies can be found in the open-source panther-analysis repository. You may also disable the checks that are not applicable to your use cases.
Writing Custom Checks
Panther uses Python to express detections and policies as code. Cloud resources are represented as JSON objects, such as:
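For example, an S3 bucket resource might look like the following (abbreviated; field names approximate Panther’s AWS.S3.Bucket resource schema, and the values are placeholders):

```json
{
  "Name": "example-bucket",
  "EncryptionRules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/example"
      }
    }
  ],
  "PublicAccessBlockConfiguration": {
    "BlockPublicAcls": true,
    "BlockPublicPolicy": true
  }
}
```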
As an example – here’s a policy to ensure all S3 buckets enforce KMS encryption:
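A sketch of such a policy, using the resource shape shown above; Panther policies return True when the resource is compliant:

```python
def policy(resource):
    # Compliant only if a default-encryption rule specifies KMS
    for rule in resource.get("EncryptionRules") or []:
        sse = rule.get("ApplyServerSideEncryptionByDefault") or {}
        if sse.get("SSEAlgorithm") == "aws:kms":
            return True
    return False
```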
When a bucket fails this check, Panther will generate an alert so your team can remediate the resource.
Conclusion
In this tutorial, we reviewed:
the various methods of collecting S3 access logs
creating a handful of detections
searching through collected data
improving the cloud security posture of your buckets
The result is a more secure environment with added visibility.
References
https://docs.aws.amazon.com/AmazonS3/latest/dev/logging-with-S3.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/security-best-practices.html