WEBINAR

John Hammond + Panther: How agentic workflows are redefining the SOC. Save your seat →



What Are Security Logs? Types, Examples, and Analysis

Every investigation starts the same way: you pull up a log. Maybe it's a CloudTrail event showing an AssumeRole call from an IP you don't recognize, or a burst of Event ID 4625s against a service account at 2 AM. That log entry is your starting point, and what you understand about its structure determines whether you catch the threat or scroll past it.

The problem is that most security teams collect far more log data than they can actually use. Formats differ across every source. Fields that mean "success" in CloudTrail look nothing like "success" in Windows Event Logs.

And when you're stitching together a timeline from three different log types at 2 AM, those structural differences aren't academic; they're the gap between catching lateral movement in progress and writing a post-incident report about it three weeks later.

This guide walks through security log types field by field, shows what real entries look like across formats, covers how to analyze them effectively, and addresses the cost and complexity challenges that make log management harder than it should be.

Key Takeaways:

  • Security logs are event records generated by systems, applications, and networks that capture the identity, action, timing, and outcome of every event, forming the foundation of threat detection, forensic investigation, and compliance evidence.

  • Four primary log categories cover the detection surface: network and firewall logs, authentication and access logs, endpoint and system logs, and application and cloud logs, each mapped to specific adversary techniques.

  • Effective log analysis follows a dependency chain: centralize your logs first, establish behavioral baselines, correlate events across sources, then automate detection with code-driven rules and AI-assisted triage.

  • Log management challenges are primarily economic: balancing volume against cost, reconciling fragmented formats, and staffing SIEM operations where human costs often exceed licensing costs.

What Is a Security Log?

A security log is a record of events occurring within your organization's systems and networks. NIST SP 800-92 formalizes this: "A log is a record of the events occurring within an organization's systems and networks."

SP 800-92 Rev. 1 expands this definition to explicitly include "physical and virtual platforms," reflecting the reality that your logs now come from AWS accounts, SaaS applications, and container orchestration platforms, not just on-prem servers. In practice, cloud-native teams also deal with cloud control-plane logs, a category that didn't exist when the original standard was published.

Why Security Logs Matter

Security logs matter because they support three core jobs at once: detecting threats, reconstructing incidents, and producing audit evidence. Each job pulls different fields from the same log data, which is why understanding log structure matters before you start building detection rules.

1. Threat Detection and Response

Security logs matter for detection because each collected source creates another chance to catch attacker behavior. MITRE ATT&CK maps 38+ named data sources directly to specific adversary techniques, and each mapping is a detection opportunity, provided you actually collect that log source.

The stakes are measurable. Organizations that detect threats internally have an 11-day median dwell time, versus 26 days when an external entity notifies them. That gap likely reflects a mix of detection tooling, attacker behavior, and incident type rather than detection speed alone.

2. Forensic Investigation and Root Cause Analysis

Security logs are the primary evidence source for reconstructing attack timelines. Every forensic investigation depends on answering: What happened? When? Who was involved? What systems were affected? Combining host and network log analytics enables post-compromise detection that neither source achieves alone.

An endpoint log shows a suspicious process; a network log shows that process calling out to a known C2 domain. Without both, you're seeing fragments instead of attack chains. And without centralized log retention, forensic capability is limited to whatever each system keeps locally.

3. Compliance and Audit Readiness

Many compliance frameworks explicitly require log monitoring and review. PCI DSS 4.0 includes automated review mechanisms in Requirement 10.4.1.1 for audit log reviews. HIPAA requires mechanisms to record and examine activity in systems containing ePHI.

Organizations pursuing SOC 2 Type II also need monitoring evidence that covers the audit period.

Types of Security Logs

Security logs fall into four broad categories because each one captures a different slice of attacker behavior and system activity. Each category has clear strengths and blind spots, which is why most detection programs need all four.

1. Network and Firewall Logs

Network and firewall logs capture IP traffic metadata and enforcement decisions, making them your primary source for detecting C2 beaconing, lateral movement, data exfiltration, and port scanning.

AWS VPC Flow Logs record IP traffic to and from network interfaces with fields like srcaddr, dstaddr, action (ACCEPT/REJECT), and bytes; flow-direction is available as an additional field rather than part of the default field set. One critical detail: when traffic passes through a NAT gateway, dstaddr shows the NAT gateway's private IP, while the pkt-dstaddr field shows the actual internet destination.

Write exfiltration rules against pkt-dstaddr, not dstaddr.
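As a sketch of that advice, the check below pivots on pkt-dstaddr when classifying egress. The field names match AWS's documented custom-format fields; the internal CIDR list and the 50 MB per-flow threshold are illustrative assumptions, not recommendations.

```python
import ipaddress

INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("172.16.0.0/12"),
                 ipaddress.ip_network("192.168.0.0/16")]
EGRESS_BYTES_THRESHOLD = 50_000_000  # illustrative 50 MB per flow

def is_internal(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL_NETS)

def large_egress(record: dict) -> bool:
    """Flag accepted flows sending large volumes to an external destination.
    Behind a NAT gateway, dstaddr is the gateway's private IP, so the rule
    pivots on pkt-dstaddr, which carries the true internet destination."""
    if record.get("action") != "ACCEPT":
        return False
    true_dst = record.get("pkt-dstaddr", record.get("dstaddr"))
    if true_dst is None or is_internal(true_dst):
        return False
    return int(record.get("bytes", 0)) >= EGRESS_BYTES_THRESHOLD

record = {"srcaddr": "10.0.1.5", "dstaddr": "10.0.0.2",  # NAT gateway IP
          "pkt-dstaddr": "203.0.113.9", "action": "ACCEPT",
          "bytes": "75000000"}
print(large_egress(record))  # True: external destination over threshold
```

A rule keyed on dstaddr alone would see only the NAT gateway's private IP here and silently miss the exfiltration.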

2. Authentication and Access Logs

Authentication logs are where you detect brute force attacks, credential stuffing, privilege escalation, and impossible travel.

Windows Security Event IDs are the lingua franca: 4624 (successful logon), 4625 (failed logon), 4648 (explicit credential use), and 4728/4732 (group membership changes) are core events to monitor. The LogonType field in Events 4624/4625 indicates the type of logon requested or performed, which can help infer the likely logon vector.

Type 3 (Network) often accompanies SMB or WMI lateral movement, and Type 10 (RemoteInteractive) indicates RDP.
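A minimal sketch of using LogonType as a pivot: counting 4625 failures per account and likely vector. The LogonType-to-vector mapping follows Microsoft's documented values; the event shape here is a simplified dict, not raw Event Log XML.

```python
from collections import Counter

# Windows LogonType values mapped to likely logon vectors (documented
# Microsoft values; vector labels are interpretive shorthand).
LOGON_TYPES = {2: "Interactive", 3: "Network (SMB/WMI)", 4: "Batch",
               5: "Service", 7: "Unlock", 8: "NetworkCleartext",
               9: "NewCredentials", 10: "RemoteInteractive (RDP)",
               11: "CachedInteractive"}

def failed_logon_profile(events: list[dict]) -> Counter:
    """Count 4625 failures per (TargetUserName, logon vector)."""
    profile = Counter()
    for e in events:
        if e.get("EventID") != 4625:
            continue
        vector = LOGON_TYPES.get(e.get("LogonType"), "Unknown")
        profile[(e.get("TargetUserName"), vector)] += 1
    return profile

# 40 network-logon failures against one service account: a brute-force
# or lateral-movement signature worth alerting on.
events = [{"EventID": 4625, "TargetUserName": "svc-backup",
           "LogonType": 3}] * 40
print(failed_logon_profile(events))
```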

AWS CloudTrail captures authentication events across the control plane. The StopLogging event, where someone disables CloudTrail itself, maps directly to MITRE ATT&CK defense evasion (T1562.008) and should always generate a high-priority alert.
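A detection for that event can be compact. The sketch below uses Panther-style rule() and title() functions over a plain dict; the field names follow CloudTrail's JSON schema, and extending coverage to DeleteTrail and UpdateTrail is an illustrative choice, since all three tamper with the audit trail.

```python
# Minimal Panther-style detection sketch for CloudTrail tampering
# (MITRE ATT&CK T1562.008).

def rule(event: dict) -> bool:
    # Fire on any API call that stops, deletes, or reconfigures a trail.
    return (event.get("eventSource") == "cloudtrail.amazonaws.com"
            and event.get("eventName") in ("StopLogging", "DeleteTrail",
                                           "UpdateTrail"))

def title(event: dict) -> str:
    actor = event.get("userIdentity", {}).get("arn", "unknown actor")
    return f"CloudTrail tampering: {event.get('eventName')} by {actor}"

evt = {"eventSource": "cloudtrail.amazonaws.com",
       "eventName": "StopLogging",
       "userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"}}
print(title(evt))
```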

3. Endpoint and System Logs

Endpoint logs capture what happens on the host after initial access: malware execution, persistence mechanisms, process injection, and credential dumping.

Sysmon on Windows provides high-fidelity host telemetry: Event ID 1 (Process Create) with CommandLine and ParentImage, Event ID 3 (Network Connection) for C2 callbacks, Event ID 10 (ProcessAccess) for LSASS credential dumping, and Event ID 22 (DNS Query) with QueryName and the process making the query.

On Linux, auditd captures execve, connect, and setuid/setgid events.
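For reference, rules capturing those syscalls might look like the fragment below in an audit rules file. Exact rule sets vary by distribution, architecture, and performance budget, so treat this as a starting sketch rather than a hardened policy.

```
# process execution
-a always,exit -F arch=b64 -S execve -k proc_exec
# outbound network connections
-a always,exit -F arch=b64 -S connect -k net_connect
# privilege changes
-a always,exit -F arch=b64 -S setuid,setgid -k priv_change
```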

4. Application and Cloud Logs

Application and cloud logs cover API abuse, misconfiguration exploitation, and privilege escalation via cloud control planes, the fastest-growing attack surface for cloud-native teams.

  • AWS CloudTrail records AWS API activity, with eventName as a primary detection pivot, readOnly: false flagging write or mutating actions, and userIdentity.type of Root generally warranting close scrutiny.

  • GCP Cloud Audit Logs include Admin Activity, Data Access, System Event, and Access Transparency logs; Data Access logs often need to be explicitly enabled. Most teams miss this configuration step and have zero visibility into data read/write operations.

What a Security Log Entry Actually Looks Like

Every security log entry answers the same investigative questions: When did it happen? Who did it? From where? What action was taken? Did it succeed?

As Jeff Bollinger, Director of Incident Response and Detection Engineering at LinkedIn, puts it, "We expect everybody's log will have at least the who and what where and... we figure out the how and why." The challenge is that every format encodes these answers differently.

Take an AWS CloudTrail event for a failed API call.

The userIdentity object nests the actor's type, arn, accountId, and accessKeyId. The sourceIPAddress shows the caller's IP. And here's where CloudTrail differs from Windows: success is encoded as the absence of an errorCode field, not the presence of a success indicator. Your parser has to treat a missing field as a positive signal.
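That absence-as-success convention is easy to get wrong in a parser, so it is worth making explicit. The CloudTrail behavior below is documented; the normalized "success"/"failure" vocabulary is an illustrative choice for a common schema.

```python
def cloudtrail_outcome(event: dict) -> str:
    """Normalize a CloudTrail event's outcome.
    CloudTrail encodes success as the ABSENCE of errorCode, so a
    missing field is the positive signal."""
    return "failure" if "errorCode" in event else "success"

print(cloudtrail_outcome({"eventName": "AssumeRole"}))
# success
print(cloudtrail_outcome({"eventName": "AssumeRole",
                          "errorCode": "AccessDenied"}))
# failure
```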

Contrast that with a Windows Event 4625 (failed logon), where the Keywords bitmask 0x8010000000000000 explicitly marks an Audit Failure, the Status field carries an NTSTATUS code (e.g., 0xc0000234 = account locked out), and the identity is split between SubjectUserName (who's reporting the event) and TargetUserName (whose credentials were attempted).

Conflating those two fields is one of the most common detection rule errors.
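Putting the 4625 pieces together: the Audit Failure keyword bit, the NTSTATUS sub-code, and the Target-versus-Subject split. The keyword bit and the 0xC0000234 mapping are documented Windows values; the NTSTATUS table here is an illustrative subset, and the flattened event dict stands in for parsed Event Log XML.

```python
AUDIT_FAILURE = 0x0010000000000000  # Audit Failure bit in the Keywords bitmask

NTSTATUS = {  # illustrative subset of documented codes
    "0xc000006a": "wrong password",
    "0xc0000234": "account locked out",
    "0xc0000072": "account disabled",
}

def describe_4625(event: dict) -> str:
    failed = int(event["Keywords"], 16) & AUDIT_FAILURE != 0
    reason = NTSTATUS.get(event.get("Status", "").lower(), "unknown status")
    # Pivot on TargetUserName (whose credentials were attempted),
    # never SubjectUserName (the account reporting the event).
    who = event["TargetUserName"]
    return f"{'Audit Failure' if failed else 'Audit Success'}: {who} ({reason})"

evt = {"Keywords": "0x8010000000000000", "Status": "0xC0000234",
       "SubjectUserName": "DC01$", "TargetUserName": "svc-backup"}
print(describe_4625(evt))  # Audit Failure: svc-backup (account locked out)
```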

Or consider a Linux auth.log entry: <34>Oct 18 02:31:00 server1 sshd: Accepted password for user from 192.168.1.10 port 22 ssh2. The timestamp has no year and no timezone. The action, username, and source IP are all embedded in unstructured message text, so you need regex to extract anything.

Understanding these structural differences determines whether your detection rules actually work when you deploy them.

How to Analyze Security Logs

Effective log analysis follows a dependency chain. Each step builds on the one before it, and skipping a step means downstream steps produce unreliable results.

1. Centralize Before You Analyze

Centralization comes first because you can't analyze logs you can't query. Before writing a single detection rule, you need a clear picture of which log sources you're collecting, which ones you're missing, and whether data is actually flowing.

That includes monitoring telemetry health continuously: a detection rule that never fires because its log source stopped forwarding is a silent gap. This played out at Zapier, where only 20% of security events were actually being logged before they centralized. They achieved a 3.5X increase in security log monitoring after onboarding six critical data sources on day two.

2. Establish Baselines for Normal Behavior

Baselines are what make thresholds meaningful. The word "high" in an ATT&CK detection analytic, "high volume of failed logon attempts," only translates to a useful alert if you know what normal looks like for that account or system. As Jack Naglieri, Founder & CTO of Panther, notes, "You have to understand what you're monitoring and what normal means in certain systems."

Cloud-native baselining targets include authentication failure rates per account, API call volumes per service account (non-human identities follow programmatic patterns, making deviations more reliable signals), and data egress volumes by destination.
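One simple way to turn "high" into a number is to compare current activity against an account's own history. The 3-sigma threshold and the tiny sample window below are illustrative assumptions; production baselines would use longer windows and account for seasonality.

```python
import statistics

def is_anomalous(history: list[int], current: int,
                 sigmas: float = 3.0) -> bool:
    """Flag a count that deviates sharply from this account's own baseline."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard flat histories
    return current > mean + sigmas * stdev

# Hourly auth-failure counts for a service account: normally near zero.
svc_history = [2, 1, 3, 2, 2, 1, 3, 2]
print(is_anomalous(svc_history, 40))  # True: clear deviation
print(is_anomalous(svc_history, 3))   # False: within normal range
```

This is why per-account baselines beat global thresholds: 40 failures an hour is an emergency for this service account but background noise for an SSO gateway.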

3. Correlate Events Across Sources

Cross-source correlation is how you catch behaviors that single log sources miss. MITRE ATT&CK technique pages are essentially correlation templates, and each lists the required data sources.

For example, detecting Cloud Admin Command requires correlating cloud control-plane activity logs with host-level execution to validate if commands materialized inside the guest OS, two entirely separate data source categories.
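The join itself can be sketched simply: pair a cloud control-plane command with host-level process creation on the same instance inside a short window. The field names, event shapes, and the 5-minute window are illustrative assumptions; a real pipeline would do this join in the query layer over normalized logs.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)  # illustrative correlation window

def correlate(cloud_events: list[dict], host_events: list[dict]) -> list[tuple]:
    """Pair cloud commands with host executions on the same instance
    occurring within WINDOW of each other."""
    pairs = []
    for c in cloud_events:
        for h in host_events:
            same_host = c["instance_id"] == h["instance_id"]
            close = abs(c["ts"] - h["ts"]) <= WINDOW
            if same_host and close:
                pairs.append((c["eventName"], h["process"]))
    return pairs

cloud = [{"eventName": "SendCommand", "instance_id": "i-0abc",
          "ts": datetime(2024, 10, 18, 2, 31)}]
host = [{"process": "/bin/bash", "instance_id": "i-0abc",
         "ts": datetime(2024, 10, 18, 2, 33)}]
print(correlate(cloud, host))  # [('SendCommand', '/bin/bash')]
```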

4. Automate Detection with Code-Driven Rules

Detection-as-code treats detection logic the way your engineering team treats application code: written in real programming languages, stored in version control, tested before deployment, and shipped through CI/CD pipelines. That discipline makes detection logic testable, reviewable, and deployable like any other software.

That aligns with how the host of the Detection at Scale podcast frames the discipline in an interview with Justin Anderson, Security Engineering Manager, Detection & Response at Meta: "Detection engineering kind of relies on code, it's in the name."

In practice, this means detection rules stored in a Git repository, with a CI/CD pipeline that validates syntax, runs unit tests against known log samples, and estimates false positive volume before deployment.

In Panther, you write rules in Python, SQL, or YAML, with unit tests included, and deploy them through the same CI/CD workflows your engineering team already uses.
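As a sketch of that pattern, a rule and its unit tests can live side by side and run in CI before deployment. The rule() shape mirrors Panther's Python convention; the root-login detection and the sample events are illustrative.

```python
def rule(event: dict) -> bool:
    # Alert on console logins by the AWS root user.
    return (event.get("eventName") == "ConsoleLogin"
            and event.get("userIdentity", {}).get("type") == "Root")

def test_rule_fires_on_root_login():
    sample = {"eventName": "ConsoleLogin", "userIdentity": {"type": "Root"}}
    assert rule(sample) is True

def test_rule_ignores_iam_user():
    sample = {"eventName": "ConsoleLogin",
              "userIdentity": {"type": "IAMUser"}}
    assert rule(sample) is False

test_rule_fires_on_root_login()
test_rule_ignores_iam_user()
print("all tests passed")
```

Because the tests run against known log samples, a schema change or a logic bug fails the pipeline instead of silently breaking detection in production.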

The modern evolution also includes AI-assisted triage. An AI SOC analyst surfaces the enrichments, detection logic, pivot queries, and evidence behind every conclusion. The more context the AI SOC analyst has, the more accurate its triage becomes, and the more confidently your team can verify and act on the result.

Docker applied this combined approach, detection-as-code with Python-based rules and correlation across cloud log sources, and achieved an 85% reduction in false positives while tripling ingestion.

Common Security Log Management Challenges

Two constraints shape most log programs in practice: cost limits how much you can keep and analyze, and format fragmentation limits how quickly you can use what you collect. Both pressures directly affect detection coverage and investigation speed.

Volume, Cost, and Retention Trade-offs

Log management is usually constrained more by economics than by collection capability. Log volumes grow relentlessly. A typical enterprise scales from 1 TB/day to 2 TB/day over three years. The cost pressure can create real coverage gaps when organizations limit how much security data they ingest into their SIEM to stay within budget.

Perhaps the most overlooked factor: SIEM operations can require one dedicated analyst at $190,000/year just to run the platform, exceeding the $170,000/year licensing cost. For a three-to-ten-person security team, dedicating one FTE to SIEM operations alone may consume the majority of your team's capacity.

Format Fragmentation Across Sources

Format fragmentation slows analysis because every tool logs differently. Large enterprises use an average of 45 cybersecurity tools, each emitting logs in a different format (CEF, syslog, Windows Event Log XML, CloudTrail JSON, proprietary SaaS API responses), requiring custom parsing before any cross-source correlation is possible.

Almost half of SOCs dump all incoming data into a SIEM without a retrieval or management plan, and most still rely on manual or mostly manual processes to report metrics.

From Raw Logs to Actionable Intelligence

Security logs are only as valuable as your ability to act on them. The path from raw logs to security outcomes follows the dependency chain covered here: centralize, establish baselines, correlate across sources, and automate with detection-as-code and AI-assisted triage.

For lean security teams, three to ten people covering a cloud-native environment, the economics matter alongside the technology. Choosing a platform that scales ingestion without proportional cost increases, that uses Python instead of proprietary query languages, and that treats detection rules as testable code rather than UI configuration is what separates teams that drown in logs from teams that turn logs into intelligence.


Bolt-on AI closes alerts. Panther closes the loop.

See how Panther compounds intelligence across the SOC.


Get product updates, webinars, and news

By submitting this form, you acknowledge and agree that Panther will process your personal information in accordance with the Privacy Policy.
