BLOG

Splunk to Panther: A Migration That Transforms Your Security Operations

Mike Olsen

Sep 19, 2025

The decision to migrate SIEM tooling is not made lightly. For many organizations, the journey from a traditional, on-premises or managed Splunk instance to a modern, cloud-native platform like Panther is a strategic move to address fundamental challenges: cost, scalability, and the agility to combat modern threats. This isn't just a technical lift-and-shift; it's an opportunity to modernize your entire security operations center—moving from a rigid, proprietary system to a flexible, developer-friendly Security-as-Code platform.

This guide will walk you through the key technical migration areas, while highlighting the powerful gains you'll unlock with Panther.

1. Migrating Historical Data: A Strategy for Cost-Effective Forensics

When migrating from Splunk to Panther, transferring historical data is often the biggest logistical hurdle. Having that data available in Panther for historical analysis and threat hunting, however, is well worth the effort. When comparing security analytics platforms, pay close attention to how each one handles data storage and pricing: a platform's architecture drives long-term expenses, especially as data volumes grow. Traditional SIEMs often rely on licensing models that make it costly to store and retain the large volumes of security data required for compliance and forensics.

The "How": You will export your data from Splunk in a structured format (like JSON or Parquet) and ingest it into your cloud data lake. This allows you to retain years of data for compliance and historical investigations at a fraction of the cost of Splunk's hot storage.
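
A minimal sketch of that export, assuming a reachable Splunk search head, a token with search permissions, and the standard /services/search/jobs/export REST endpoint (the URL, token, index, and time range are placeholders to adjust for your environment):

import requests

SPLUNK_URL = "https://splunk.example.com:8089"  # placeholder search head
TOKEN = "REDACTED"                              # placeholder Splunk auth token

# Stream historical events out of Splunk as newline-delimited JSON,
# ready to stage in S3 (or another cloud store) for Panther's data lake.
resp = requests.post(
    f"{SPLUNK_URL}/services/search/jobs/export",
    headers={"Authorization": f"Bearer {TOKEN}"},
    data={
        "search": "search index=main earliest=-365d latest=now",
        "output_mode": "json",
    },
    stream=True,
)
resp.raise_for_status()

with open("splunk_export.ndjson", "wb") as out:
    for chunk in resp.iter_content(chunk_size=1 << 20):
        out.write(chunk)

From there, the file can be compressed and uploaded to the storage location your Panther data lake ingests from.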

The Panther Way: This migration transforms your approach to data storage. Instead of paying a premium for data you rarely touch, you get cost-effective, long-term retention that is still fully searchable and accessible. You can conduct deep forensic analysis on older data without the financial pressure of ingest-based licensing. The beauty of Panther's architecture is that it separates storage from compute and leverages a security data lake that provides cost-efficient data retention and analysis at scale.

2. The Power of DaC: From Endless Click Ops to Programmable Workflows

This is one of the most impactful parts of the migration for your security engineering team. Splunk's Search Processing Language (SPL) is powerful, yet can be clunky for expressing unique business logic. Panther's Detections-as-Code (DaC) methodology uses a combination of SQL and Python, which provides the expressiveness and flexibility to encode unique security monitoring logic.

The "How": This is a manual, intellectual migration. For each Splunk search, you will re-create its logic as a Python function in Panther. This isn't a one-to-one translation; it's an opportunity to optimize. A complex stats or eval command in SPL might be replaced by a more readable, maintainable Python rule or a Scheduled Rule. While Splunk's stats command provides a powerful, on-the-fly way to perform aggregations, the logic within a Panther Python rule is reusable, testable, and version-controlled. This lets teams build a library of helpers and a more formalized detection-as-code workflow, a significant advantage for long-term maintenance and collaboration. This shift from search-time aggregation to a codified detection is the heart of the migration. As an example of this core shift, consider detecting a console login from a new country.
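
An illustrative SPL version of that scheduled search might look roughly like the following (the index, field names, and known_countries lookup are placeholders for whatever exists in your environment):

index=cloudtrail eventName=ConsoleLogin
| iplocation sourceIPAddress
| lookup known_countries user OUTPUT known_country
| where Country != known_country
| stats count by user, Country, sourceIPAddress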

In Panther, the same logic is a single, reusable Python function that evaluates every event as it happens.

import panther_helpers  # shared helper module defined elsewhere in your detections repo (illustrative)

def rule(event):
    # This rule is evaluated on every event as it streams into Panther

    # Check if the event is a successful login
    if event.get("eventName") == "ConsoleLogin" and event.get("signInDetails", {}).get("login_successful"):
        # The core logic checks against a pre-populated lookup or state
        if not panther_helpers.is_known_location(event):
            return True

    return False

This quick example highlights the core shift: moving from a declarative, batch-based search to a codified, real-time function. The logic is now part of your codebase, which allows for version control, automated testing, and near real-time alerts.
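
In practice, Panther's tooling lets you declare tests alongside each rule (for example with the panther_analysis_tool CLI). As a minimal, framework-agnostic sketch, a pytest-style check of the rule above might look like this, assuming the rule lives in a hypothetical module named new_country_login.py:

# test_new_country_login.py - illustrative unit tests for the rule above
from unittest.mock import patch

import new_country_login  # hypothetical module containing rule()

def test_alerts_on_login_from_unknown_location():
    event = {
        "eventName": "ConsoleLogin",
        "signInDetails": {"login_successful": True},
        "sourceIPAddress": "203.0.113.10",
    }
    # Stub the shared helper so the test is deterministic
    with patch.object(new_country_login.panther_helpers, "is_known_location", return_value=False):
        assert new_country_login.rule(event) is True

def test_ignores_unrelated_events():
    event = {"eventName": "DescribeInstances"}
    assert new_country_login.rule(event) is False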

Additional context on the different Panther rule types:

  • Rules (Real-time): These are like Splunk's real-time alerts. They process one event at a time and are used for low-latency detection of high-signal events.

  • Scheduled Rules: These are the equivalent of Splunk's scheduled searches. They run queries against your data lake at regular intervals (e.g., hourly, daily) and are ideal for more complex, aggregate analysis, like what a stats command would do. This is where you would rebuild a stats-style aggregation over a larger dataset (a sketch follows this list).

  • Correlation Rules: These are designed to detect a sequence or group of events, often in different log types, which is similar to some advanced correlation searches in Splunk Enterprise Security.

  • Policies: These are focused on scanning and evaluating cloud infrastructure for misconfigurations.
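
To make the Scheduled Rule pattern concrete, here is a hedged sketch: a scheduled query aggregates over the data lake, and each result row is handed to a Python rule. The table, column names, and threshold are assumptions, not a drop-in detection:

# Illustrative Scheduled Rule: alert when a user racks up an unusually high
# number of failed console logins in the past hour.
#
# Assumed scheduled query (SQL against the security data lake), run hourly:
#   SELECT userIdentity:arn::string AS actor, COUNT(*) AS failed_logins
#   FROM panther_logs.public.aws_cloudtrail
#   WHERE eventName = 'ConsoleLogin'
#     AND responseElements:ConsoleLogin::string = 'Failure'
#     AND p_event_time > DATEADD(hour, -1, CURRENT_TIMESTAMP())
#   GROUP BY 1
#
# Each result row of that query arrives here as an "event".

FAILED_LOGIN_THRESHOLD = 10  # illustrative threshold

def rule(event):
    return event.get("failed_logins", 0) > FAILED_LOGIN_THRESHOLD

def title(event):
    return f"{event.get('actor')} had {event.get('failed_logins')} failed console logins in the last hour"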


The Panther Way: By embracing Python-based DaC, you gain:

  • Version Control: Your detection rules are now code, managed in a Git repository. This enables peer review, change tracking, and rollbacks—a best practice for any modern engineering team.

  • Automated Testing: You can write unit tests for your detections. Before a new rule goes live, you can test it against known log samples to ensure it behaves as expected and doesn't produce false positives. This drastically reduces alert fatigue.

  • Flexibility and Extensibility: Python's vast ecosystem of libraries allows for sophisticated, real-time enrichment—you can connect detections to external data sources and perform complex calculations. This provides an opportunity to build intelligent, context-aware detections that go beyond the capabilities of a single query.

While Splunk's lookups are incredibly versatile and are often used for a wide range of tasks (including configuration and caching), Panther's enrichment model is specifically designed to append high-value, dynamic threat intelligence and contextual data to logs at the point of ingestion. This ensures the data in the security data lake is rich and ready for analysis, which can make long-term forensics and threat hunting more efficient. This shift from search-time enrichment to ingest-time enrichment provides a strong foundation for building more effective security operations.

3. Python & Streaming Analysis: From Cron Jobs to Real-Time Detection

In many traditional SIEMs, some detection logic runs as scheduled searches (essentially cron jobs), which introduces a time lag between a security event occurring and an alert being generated. While acceptable for some use cases, that lag can slow the response to time-sensitive threats where every second counts. Panther's cloud-native architecture is designed for real-time, streaming analysis.

The "How": You will map your scheduled Splunk searches to Panther's real-time detection model. Instead of waiting for a cron job to run, your Python-based detections are applied to data as it is ingested. This means the moment a log event hits Panther, it is immediately evaluated against all your active detection rules. This model shifts your focus from scheduling tasks to building durable, real-time logic.

The Panther Way: This architectural shift provides profound security and operational benefits:

  • Near-Real-Time Alerts: By analyzing logs upon ingestion, Panther dramatically reduces the time between a security event and its detection, enabling a faster and more effective response to critical incidents.

  • Zero Operational Overhead: Unlike managing a complex system of Splunk search head clusters and cron jobs, Panther's serverless architecture automatically scales to process massive volumes of streaming data without any manual intervention. This frees your team from infrastructure management and allows them to focus on threat hunting and detection engineering.

  • Unified Detection: Panther's model seamlessly combines both real-time streaming analysis and scheduled detections (for use cases like scheduled compliance checks or asset inventory) within a single, unified platform, all written in a single language: Python. 

4. Modern Ingestion: A Serverless Architecture for Scale

Splunk relies on forwarders and a complex configuration of distributed files like props.conf and transforms.conf to get data into its indexers. Managing this distributed, on-premises-style architecture can be complex and difficult to scale. Panther's architecture is built on a serverless, cloud-native model that simplifies the entire data onboarding process.

Instead of managing a web of configuration files, Panther centralizes source onboarding around custom log schema definitions. Data is processed and normalized by serverless functions as it's ingested, which decouples the management of data sources from the underlying infrastructure and makes the process faster and more scalable.

The "How": Instead of managing forwarders and complex configuration files, you'll configure log sources to send data directly to Panther's ingestion APIs or a cloud storage location (e.g., S3). You then define a Panther log schema and a Python-based Data Pipeline to parse and normalize the data.
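
A minimal sketch of the storage-based path, assuming you've onboarded an S3 bucket as a Panther log source and defined a matching custom schema (the bucket name, prefix, and event fields are placeholders):

import gzip
import json

import boto3

BUCKET = "my-security-logs"  # placeholder: S3 bucket onboarded as a Panther source
PREFIX = "custom/myapp/"     # placeholder: prefix mapped to your custom schema

events = [
    {"timestamp": "2025-09-19T12:00:00Z", "user": "alice", "action": "login"},
]

# Newline-delimited JSON objects, gzipped to keep transfer and storage cheap
body = gzip.compress("\n".join(json.dumps(e) for e in events).encode("utf-8"))

boto3.client("s3").put_object(
    Bucket=BUCKET,
    Key=f"{PREFIX}events-2025-09-19.json.gz",
    Body=body,
)

Once the object lands in the bucket, Panther picks it up, parses it against your schema, and makes it available to detections and search.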

The Panther Way: The cloud-native, serverless architecture means no more managing indexers, storage, or compute resources. Panther handles all the operational overhead, freeing your team to focus on threat detection, not infrastructure. This architectural separation of storage and compute also means you can achieve comprehensive security visibility across your entire environment at a fraction of the cost, eliminating the difficult choices between log volume and budget.

  • Scalability on Demand: Whether you're processing gigabytes or petabytes of data, Panther's architecture scales automatically and cost-effectively to meet your needs, ensuring you never miss a critical log event due to resource limitations.

  • True Data Ownership: The most significant advantage of this modern architecture is that Panther allows customers to ingest their data directly into their own data warehouse, such as Snowflake. This means your organization truly "owns" its security data, without having to maintain a separate copy. The data is readily available in your own Snowflake instance where your team can leverage all of its powerful features for custom analytics, business intelligence, and long-term forensics—a level of flexibility and data control that is often prohibitively difficult and costly with a traditional SIEM like Splunk.
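
For example, with a customer-owned Snowflake instance, analysts can query the same tables Panther writes to directly from Snowflake. A rough sketch using the snowflake-connector-python package (the connection parameters are placeholders; table names follow Panther's panther_logs.public.<log_type> convention):

import snowflake.connector

# Placeholder connection details for your own Snowflake account
conn = snowflake.connector.connect(
    account="my_account",
    user="analyst",
    authenticator="externalbrowser",
    warehouse="SECURITY_WH",
)

cur = conn.cursor()
cur.execute(
    """
    SELECT p_event_time, eventName, sourceIPAddress
    FROM panther_logs.public.aws_cloudtrail
    WHERE eventName = 'ConsoleLogin'
      AND p_event_time > DATEADD(day, -30, CURRENT_TIMESTAMP())
    ORDER BY p_event_time DESC
    LIMIT 100
    """
)
for row in cur.fetchall():
    print(row)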

5. Search and Dashboard Migrations: From SPL to a Piped Query Language

Splunk dashboards are powered by SPL queries. Panther offers a modern, intuitive approach to searching and visualizing data. While Panther's backend leverages SQL on the security data lake, its front-end search and dashboarding is powered by PantherFlow, a powerful and intuitive piped query language. For more on PantherFlow, see https://docs.panther.com/pantherflow.

The "How": You will need to re-create your dashboards in Panther. This involves translating your Splunk SPL queries into PantherFlow statements. PantherFlow's pipelined syntax, where the output of one operator is passed to the next, makes complex queries easier to write and read.

The Panther Way: PantherFlow provides a significant usability advantage. It simplifies the process of building complex queries, making advanced threat hunting accessible to security practitioners of all skill levels. The piped syntax is more intuitive than the often-nested structure of SQL, and it is purpose-built for security investigations, with built-in operators for filtering, transforming, and visualizing data in a more natural "flow."

6. Integrating with the Modern Security Ecosystem

Splunk's add-on ecosystem is vast, but it often operates within a monolithic environment. Panther is designed to be an API-first, composable platform.

The "How": You'll need to identify the Splunk add-ons you use and find their modern equivalents or integration paths in Panther. Panther's robust API and webhook capabilities allow it to integrate seamlessly with your existing security stack; refer to Panther's API documentation for details.

The Panther Way: This move allows you to build a "best-of-breed" security ecosystem. You can connect Panther with your chosen SOAR platform (like Splunk SOAR, Torq, or Tines), your ticketing system (Jira), and your communication tools (Slack) to build a unified and highly automated workflow. Panther becomes the high-fidelity alert engine that feeds into your broader security platform.
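
As a small, hedged illustration of that pattern, here is a sketch of a service that receives alerts from a Panther custom webhook destination and forwards a summary to Slack; the payload keys shown are placeholders rather than Panther's exact alert schema, so check the destination documentation before relying on specific fields:

# Illustrative receiver for a Panther custom webhook alert destination
from flask import Flask, request
import requests

app = Flask(__name__)
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

@app.route("/panther-alerts", methods=["POST"])
def forward_alert():
    alert = request.get_json(force=True)
    # Forward a short summary to a Slack channel (or open a Jira ticket, etc.)
    summary = f"[{alert.get('severity', 'UNKNOWN')}] {alert.get('title', 'Panther alert')}"
    requests.post(SLACK_WEBHOOK_URL, json={"text": summary}, timeout=5)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)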

7. Migrating Enrichments: From Disparate Lookups to a Unified Platform

In Splunk, enrichments often rely on a combination of different mechanisms: lookup tables, scripted lookups, and third-party integrations via Splunk apps. Managing these can be a fragmented process. Panther provides a streamlined, centralized approach to data enrichment.

The "How": You'll need to migrate your Splunk lookup tables and enrichment logic to Panther.

  • Lookup Tables: For static data (e.g., a list of internal IP addresses or user roles), you can create Custom Lookup Tables in Panther. You simply upload your CSV or JSON data to Panther, and it becomes a searchable and joinable dataset for your detections (a sketch of using that data inside a rule follows this list).

  • Threat Intelligence: For dynamic threat intelligence feeds, you can leverage Panther's native enrichment providers (e.g., IPinfo, Tor) or configure integrations with your existing threat intelligence platform via API. You can also ingest these feeds as a dedicated log source and use them as a "lookup" in your detection rules.

  • Scripted Lookups: Any custom enrichment logic you built in Splunk using Python or other scripts can be directly re-created in Panther.
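
Inside a detection, matched lookup data arrives on the event itself rather than through a search-time join. A rough sketch, where the internal_asset_inventory lookup, its hostname selector, and its criticality field are all hypothetical names (the exact nesting under p_enrichment depends on how the lookup's selectors are configured):

def rule(event):
    # Matched lookup rows ride along on the event under p_enrichment
    asset = (
        event.get("p_enrichment", {})
        .get("internal_asset_inventory", {})
        .get("hostname", {})
    )

    # Only alert on risky process launches on assets tagged as critical
    return event.get("eventType") == "process_start" and asset.get("criticality") == "critical"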

The Panther Way: Panther’s approach standardizes your enrichment process. Instead of having to add and manage lookups at search time, the enriched data is available to the detection engine at the ingest stage. This provides a number of benefits:

  • Speed and Accessibility: Enriched data is already a part of the event as it’s written to your data lake. This means that a detection rule can immediately access rich context—like a user's role, a hostname's criticality, or threat intelligence—without having to perform a search-time join. This speeds up both real-time analysis and historical threat hunting.

  • Consistency and Reliability: By enriching data at the ingestion stage, you ensure every log event is enriched in the same way, every time, regardless of the person running the query. The enriched data is a persistent part of the log, providing a single source of truth for all your analytics.

  • Simplified Analysis: Your detection and query logic become simpler and faster because they don't have to include complex lookup commands. The enriched information is instantly available for streaming analysis and high-speed queries on your security data lake.

Conclusion

Migrating from Splunk to Panther is more than just a software replacement; it is a strategic investment in a more agile and efficient security program. By moving from a proprietary, on-premises mindset to a cloud-native, DaC philosophy, your team will gain:

  • Significantly lower costs through decoupled storage and serverless architecture.

  • Reduced operational overhead so your engineers can focus on security, not infrastructure.

  • Enhanced visibility across the entire organization, from endpoints to cloud infrastructure, providing a single source of truth for all security data.

  • Increased security team velocity with a Git-based workflow and automated testing.

  • Smarter, more accurate detections with the flexibility and power of Python.

  • Improved business resiliency by proactively strengthening your overall security posture and reducing the risk of a major breach.


The path from SPL to Python and from traditional search to PantherFlow is a journey of transformation. While it requires a dedicated effort, the destination is a security program that is more resilient, scalable, and ready to meet the demands of the modern threat landscape.
