Get Started: AWS and Panther

Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform, and many companies continue to trust AWS with their critical infrastructure and business-specific data. With this shift to the cloud, an organization's attack surface has become increasingly unpredictable and more difficult to protect. 

Traditionally, security teams have leveraged legacy SIEM tools to provide coverage for business-specific infrastructure and services. However, legacy SIEMs can't match the scale and customizability that AWS provides, leaving security professionals with much to be desired. 

Enter Panther, a modern SIEM that makes security operations painless. Panther efficiently ingests, normalizes, and structures AWS logs so that security teams can implement the right detections for their specific AWS environment. Panther also enables a Detection-as-Code practice, empowering security teams to scale their security engineering operations alongside their AWS environment as it changes and grows. 

In this blog, we'll walk through onboarding different AWS log sources into Panther and show how its ingestion and enrichment process empowers security teams to deliver the best coverage for their cloud environment. 

What is Ingestion?

Most SIEM journeys begin at the same point: ingestion. Ingestion refers to the process of formatting and uploading log data from external sources like hosts, applications, and cloud-based services. On the surface, this process seems simple, but it comes with a myriad of hidden roadblocks. 

Because most SIEM tools require heavy operational work to scale log ingest, security teams struggle to get the most up-to-date data. Ingestion in a legacy SIEM requires teams to plan operations, scale infrastructure, and only then ingest the data. Even after that, formatting the data once it's inside the SIEM can be extremely cumbersome: legacy SIEMs require a parser for every log source, and those parsers must be updated whenever log formats change, demanding constant care and attention. 

With a modern SIEM like Panther, security teams can conquer these ingestion roadblocks with ease. First, because Panther is built on a serverless architecture, ingesting logs has never been easier: when logs arrive, Panther automatically scales to the volume at hand, so no capacity planning or preparation is required from the security team. 

Second, formatting logs doesn't require building a parser for each log source. Instead, a common parsing engine already exists, and security teams only need to specify a schema. All pre-built log sources ship with schemas built in and can ingest logs immediately without any configuration. When a security team needs to onboard a log source Panther doesn't yet support, building a custom schema is quick and efficient: users can land data in an S3 bucket, and Panther generates a schema from the live data. 
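Conceptually, schema inference boils down to walking sample events and mapping each field to a type. Here's a rough illustration in Python (this is our own sketch, not Panther's actual inference engine, and the type names are only loosely based on common schema conventions):

```python
import json

# Map a value from a sample log event to an illustrative schema type name.
# NOTE: a simplified sketch, not Panther's real inference logic.
def infer_type(value):
    if isinstance(value, bool):   # check bool before int: bool is a subclass of int
        return "boolean"
    if isinstance(value, int):
        return "bigint"
    if isinstance(value, float):
        return "float"
    if isinstance(value, list):
        return "array"
    if isinstance(value, dict):
        return "object"
    return "string"

def infer_schema(raw_event):
    """Build a field -> type mapping from one raw JSON log line."""
    event = json.loads(raw_event)
    return {field: infer_type(value) for field, value in event.items()}

sample = '{"bucket": "students-west", "httpstatus": 404, "bytessent": 349}'
print(infer_schema(sample))
# → {'bucket': 'string', 'httpstatus': 'bigint', 'bytessent': 'bigint'}
```

A real engine would also merge inferences across many events and detect timestamps and IP addresses, but the core idea is the same: the schema is derived from the data, not hand-written per source.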

With clear advantages to getting started with pre-built log sources and custom log sources, ingestion becomes a seamless process with Panther. Let’s look at an example in more detail. 

Ingesting S3 Server Access Logs

Server access logging provides detailed information for requests that are made to an AWS S3 bucket. Once these logs are set up in AWS to log and store in a respective S3 bucket, they can easily be onboarded into Panther. 

  1. Log in to the Panther Console and go to Configure > Log Sources > Create New
  2. Search for S3 Server Access Logs
  3. Select S3 Bucket as the transport method
Panther console showing a search for s3 in log sources with AWS S3 bucket selected as a transport method
  4. Provide your AWS Account ID, the name of the S3 bucket, and KMS encryption details (if used)
  5. You'll notice that the schema for S3 Server Access logs is pre-built and already applied
Screenshot of the schemas field prepopulated with AWS.S3ServerAccess schema
  6. On the next screen, use the CloudFormation template (through the AWS console or CLI) to create an IAM role that grants permission for logs to be sent to Panther
  7. You're done!

After this point, logs are streaming into Panther and are already formatted with the existing schema. No security operations planning was required, and no additional parser or setup work was needed; the whole process takes as little as 10 minutes. Of course, ingesting logs is only half the battle: security teams also need to analyze the incoming data properly. This is where detections come in, analyzing and alerting on potentially malicious log events. 

Real-Time Detections with Streaming Log Events

Detections (or rules) can be applied to ingested log events to analyze data and alert on potentially malicious activity. Many legacy SIEMs require security teams to write detections in proprietary languages made specifically for their tools. This makes detection development a specialized assignment that only certain engineers can execute, so detections take weeks or even months to develop, or come at an extremely high cost through a managed service. 

Panther provides three benefits for getting started with detections: 

  1. Detection-as-Code principles to manage detection development workflows
  2. Python-Based detections that allow for simple code reuse
  3. Pre-Built Detection packs to get you started in minutes
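The second benefit is worth a quick illustration: because detections are plain Python, shared logic lives in ordinary functions that multiple rules can import. A hypothetical sketch (the helper and rule names are ours, not from a Panther pack):

```python
# Shared helper: is the requester an administrator?
# Defined once, reused across any number of rules.
ADMIN_ROLES = {"GlobalAdministrator", "AccountAdministrator"}

def is_admin(event):
    return event.get("role") in ADMIN_ROLES

# Two hypothetical rules reusing the same helper instead of duplicating logic.
def admin_get_rule(event):
    return is_admin(event) and "GET" in event.get("operation", "")

def admin_delete_rule(event):
    return is_admin(event) and "DELETE" in event.get("operation", "")
```

If the definition of "administrator" changes, only `is_admin` needs updating; every rule that uses it picks up the change, which is much harder to achieve in a proprietary query language.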

Let's see how these three benefits apply directly to the S3 Server Access logs we onboarded in the previous example. 

Detection-as-Code Workflows & Custom Python Rules

Detection-as-Code describes applying software development workflows to the creation, testing, and deployment of detections. With Panther, security teams can easily integrate developer tools such as CI/CD pipelines, custom Python libraries, and version control with pull requests to manage changes to their detections. With these methods, we can easily create an accurate detection for S3 Server Access logs. 

  1. Let's look at a sample S3 Server Access log event: 
{
	"additionalFields": [
		"-"
	],
	"authenticationtype": "AuthHeader",
	"bucket": "students-west",
	"bucketowner": "85beabd9dfb7d514da2bb69b1cdaf330623dc055f3c5c4928d0c7f479faecdf4",
	"bytessent": 349,
	"ciphersuite": "ECDHE-RSA-AES128-GCM-SHA256",
	"errorcode": "NoSuchWebsiteConfiguration",
	"hostheader": "west.s3.us-west-2.amazonaws.com",
	"hostid": "jaifoRMADSFasdfdfadsi;ofhas;df.dk",
	"httpstatus": 404,
	"operation": "REST.GET.WEBSITE",
	"p_any_aws_arns": [
		"arn:aws:sts::1111111111:assumed-role/GlobalAdministrator/panther_user"
	],
	"p_any_ip_addresses": [
		"10.20.20.200"
	],
	"p_event_time": "2022-08-16 14:15:55",
	"p_log_type": "AWS.S3ServerAccess",
	"p_parse_time": "2022-08-16 15:12:14.142",
	"p_row_id": "ce0558e8cec89b9f86f6c2ea120c",
	"p_source_id": "2922847a-37c4-4ff3-8a81-a1411716fcfa",
	"p_source_label": "cloudtrail-critical-dev",
	"remoteip": "10.20.20.200",
	"requester": "arn:aws:sts::11111111111:assumed-role/GlobalAdministrator/panther_user",
	"requestid": "E05MAHGS2YB1VJXD",
	"requesturi": "GET /?website HTTP/1.1",
	"role": "GlobalAdministrator",
	"signatureversion": "SigV4",
	"time": "2022-08-16 14:15:55",
	"tlsVersion": "TLS",
	"totaltime": 15,
	"useragent": "OpenJDK_64-Bit_Server_VM/25.302-b08 java/1.8.0_302 vendor/Oracle_Corporation cfg/retry-mode/standard"
}
  2. Let's create a detection that alerts when a user with the GlobalAdministrator role makes a GET request. 
  3. Once we write our detection, it should look something like this:
# Detection for S3 Server Access logs

# Rule function: return True to trigger an alert
def rule(event):
	if event.get("role") == "GlobalAdministrator" and "GET" in event.get("operation", ""):
		return True
	return False
  4. Now we can use a unit test, within the product or locally, to check whether this detection runs properly. Tested against the sample event above, it fires as expected. 
  5. To gain more confidence, we can run the detection against 30 days' worth of live data using Data Replay and review the results. 
  6. Once these tests pass, if integrated with CI/CD, we can submit a pull request and have it checked by an approver
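The local unit test in step 4 can be as simple as plain Python assertions against trimmed sample events (a minimal sketch; Panther's console test runner works against full sample events in the same spirit):

```python
# The detection under test, as written above.
def rule(event):
    if event.get("role") == "GlobalAdministrator" and "GET" in event.get("operation", ""):
        return True
    return False

# Trimmed sample events: one that should alert, one that should not.
alerting_event = {"role": "GlobalAdministrator", "operation": "REST.GET.WEBSITE"}
benign_event = {"role": "ReadOnly", "operation": "REST.GET.WEBSITE"}

assert rule(alerting_event) is True
assert rule(benign_event) is False
print("rule tests passed")
```

Because the rule is an ordinary function, these assertions can run in pytest inside a CI pipeline, which is exactly what makes step 6's pull-request review workflow practical.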

Within the span of a few hours, we can create a brand-new detection, run specific tests to ensure its accuracy, and have a reviewer check it for visibility. Detection-as-code gives security teams full visibility into who modified a detection, when it changed, and whether it still works the way it's supposed to. 
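Panther rules can also return a dynamic alert title alongside the verdict, so analysts see context at a glance. A short sketch building on the rule above (the title wording is our own):

```python
# The rule from the walkthrough above.
def rule(event):
    return event.get("role") == "GlobalAdministrator" and "GET" in event.get("operation", "")

# Optional title function: customize the alert title per event.
def title(event):
    return f"Admin GET request on bucket {event.get('bucket', '<unknown>')}"

print(title({"bucket": "students-west"}))
# → Admin GET request on bucket students-west
```

Because the title is computed from the event, every alert carries the affected bucket name without the analyst having to open the raw log.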

Pre-Built Detection Packs

As an alternative to writing your own detections, Panther provides out-of-the-box detections that can be modified to fit your organization's needs. These detections are delivered in what we call packs and can be accessed within the Panther Console UI. Packs are organized by log source and consist of generalized detections that can be applied instantly to newly ingested log sources. To enable the packs specific to AWS: 

  1. Log in to the Panther Console and select Build > Packs 
  2. Search for AWS and enable the Panther Core AWS Pack
Screenshot of Panther Core AWS Pack being enabled with 9 rules, 34 policies, 4 helpers, and 4 data models

The detections within this pack will be applied to incoming logs within a few minutes. This is the fastest way to get started with detections in Panther. 

Get Started! Try Panther

With the ability to ingest numerous AWS log sources and apply both out-of-the-box and custom detections in under an hour, security teams gain immediate coverage to protect their AWS environment. 

Learn more about how to write custom detections at one of our detection-as-code workshops. 
