BLOG

Detecting and Hunting for Cloud Ransomware Part 2: GCP GCS

Alessandra Rizzo

Jan 9, 2026

Introduction

This is the second in a series of reports covering ransomware detection across major cloud infrastructures. Following our examination of AWS S3 in Part 1, this report focuses on Google Cloud Storage (GCS) and Cloud KMS-based attack vectors.

GCS ransomware is less documented than AWS S3; therefore, this report examines the primary attack vectors by extrapolating known AWS S3 techniques and replicating them in GCP GCS, based on research conducted by the Panther Threat Research Team. We will also explore detection rules using Panther's detection engine to help identify these threats where possible.

Log Source Requirements

Before deploying these detections, ensure proper Cloud Audit Logs configuration for GCS and Cloud KMS events. GCP's Cloud Audit Logs capture two primary categories:

  • Admin Activity logs cover control plane operations such as CreateBucket, SetIamPolicy, CreateCryptoKey, and CreateKeyRing. These are enabled by default.

  • Data Access logs cover data plane operations including storage.objects.create, storage.objects.get, storage.objects.delete, and KMS Encrypt/Decrypt operations. These must be explicitly enabled as they are disabled by default.
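Data Access logs are turned on through the auditConfigs block of the project's IAM policy. A minimal fragment covering both services might look like this (project-level scope assumed; applied with gcloud projects set-iam-policy):

```json
{
  "auditConfigs": [
    {
      "service": "storage.googleapis.com",
      "auditLogConfigs": [
        { "logType": "DATA_READ" },
        { "logType": "DATA_WRITE" }
      ]
    },
    {
      "service": "cloudkms.googleapis.com",
      "auditLogConfigs": [
        { "logType": "DATA_READ" },
        { "logType": "DATA_WRITE" }
      ]
    }
  ]
}
```

Be aware that enabling DATA_READ for high-traffic buckets can generate significant log volume and cost, so scope it deliberately.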

Log Visibility Limitations

GCP Cloud Audit Logs provide limited visibility into ransomware-specific attack techniques, constraining detection capabilities.

  • CMEK operations generate KMS logs showing that encryption occurred via GCS service accounts, but these logs do not identify which objects or buckets were affected, preventing scope assessment.

  • CSEK attacks produce no distinguishable log signatures, as customer-supplied encryption keys are never logged and storage.objects.create events contain no encryption method indicators.

  • Configuration changes suffer similar limitations, as storage.buckets.update events do not specify which security controls were modified (versioning, lifecycle policies, retention).
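To illustrate the CSEK blind spot: customer-supplied keys travel only in per-request headers (the documented x-goog-encryption-* set, values elided here), and none of these headers are recorded in Cloud Audit Logs:

```
x-goog-encryption-algorithm: AES256
x-goog-encryption-key: <base64-encoded AES-256 key>
x-goog-encryption-key-sha256: <base64-encoded SHA-256 hash of the key>
```

From the audit log's perspective, a CSEK-encrypted write is indistinguishable from an ordinary storage.objects.create.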

GCS Ransomware Prevention

Given the logging visibility gaps, prevention is as critical as detection for GCS ransomware. Organizations should implement the following controls:

Object Versioning and Retention Policies

  • Enable versioning on all buckets containing sensitive data to allow recovery from object overwrites or deletions.

  • Use object holds for sensitive data to prevent data destruction.

  • Configure retention policies to prevent object modification or deletion for a specified period, and apply Bucket Lock to make these policies immutable.

KMS Key Restrictions

  • Restrict which Cloud KMS keys can be used for CMEK via Organization Policies, limiting encryption to organization-owned keys.

VPC Service Controls

  • Implement service perimeters to prevent data exfiltration to external projects. This blocks the exfiltration-based attack path by restricting which projects can access your storage resources.

Least-Privilege IAM

  • Apply minimal permissions for both Cloud Storage and Cloud KMS operations.

  • Separate storage administration from day-to-day access, and avoid granting storage.objects.delete or KMS key management permissions broadly.

  • Use IAM conditions to further restrict sensitive operations by time, resource, or request attributes.

It is recommended that these controls always be enabled on GCS buckets containing sensitive data to prevent ransomware attacks from achieving impactful data destruction.

GCS Ransomware Prerequisites

Similar to AWS S3, several preconditions are necessary for an attacker to achieve complete data loss in a victim environment. An attacker must either enumerate buckets to find ideal candidates with critical security settings disabled or disable these controls themselves. There are several relevant security controls that would prevent a total data loss if enabled:

  • Object Versioning: Retains previous versions of objects in the same bucket, protecting against accidental deletion or overwrites.

  • Retention Policies: Can be configured to prevent object deletion or modification for a specified period. Once locked, a retention policy cannot be removed or reduced.

  • Object Holds: Event-based or temporary holds prevent object deletion regardless of retention policy.

  • Bucket Lock: Once enabled, prevents the retention policy from being removed or reduced, providing WORM-like compliance.

On the detection side, given that Data Access Logs do not offer visibility into the specific configuration that was changed, we can use the following Panther detection rule to alert on any configuration update or deletion on a specific GCS bucket:

BUCKET_OPERATIONS = ["storage.buckets.delete", "storage.buckets.update"]


def rule(event):
    return all(
        [
            event.deep_get("protoPayload", "serviceName", default="") == "storage.googleapis.com",
            event.deep_get("protoPayload", "methodName", default="") in BUCKET_OPERATIONS,
            event.get("severity") != "ERROR",
        ]
    )
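Outside Panther, this rule can be exercised locally with a minimal stand-in for the event object's deep_get accessor. The wrapper below is a sketch for local testing only; Panther's real event object has additional behavior:

```python
BUCKET_OPERATIONS = ["storage.buckets.delete", "storage.buckets.update"]


class FakeEvent(dict):
    """Dict wrapper approximating Panther's event accessors for local tests."""

    def deep_get(self, *keys, default=None):
        value = self
        for key in keys:
            if not isinstance(value, dict) or key not in value:
                return default
            value = value[key]
        return value


def rule(event):
    return all(
        [
            event.deep_get("protoPayload", "serviceName", default="") == "storage.googleapis.com",
            event.deep_get("protoPayload", "methodName", default="") in BUCKET_OPERATIONS,
            event.get("severity") != "ERROR",
        ]
    )


update_event = FakeEvent(
    {
        "severity": "NOTICE",
        "protoPayload": {
            "serviceName": "storage.googleapis.com",
            "methodName": "storage.buckets.update",
        },
    }
)
# rule(update_event) -> True
```

Flipping severity to ERROR, or methodName to an unrelated operation, makes rule() return False.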

Other prerequisites for a successful ransomware attack include:

  • Write access to the target GCS bucket: The attacker could achieve this through stolen or compromised service account credentials, misconfigured policies granting excessive storage permissions, or compromised user accounts with excessive permissions.

  • Cloud KMS permissions (for CMEK encryption-based attacks): The attacker needs cloudkms.cryptoKeys.setIamPolicy to grant GCS service accounts access.

GCS Ransomware Scenarios

The following section examines ransomware techniques applicable to GCP, drawing from documented methods for AWS S3 we researched at Panther to extrapolate their applicability to GCP environments.

GCS ransomware attacks typically follow one of three paths: a CMEK-based approach where attackers create Cloud KMS keys and grant the victim's GCS service account encryption permissions before re-encrypting data; a CSEK-based approach where attackers generate encryption keys locally and re-encrypt objects directly without any KMS infrastructure; or an exfiltration-based approach where attackers copy data to external buckets before deleting the originals. All paths ultimately result in data loss and potential ransom demands.

Notably, the CSEK path produces fewer detectable log events, as no key creation or IAM policy changes occur. For each scenario, we provide Cloud Audit Log analysis and corresponding Panther detection logic.

Cloud KMS Key Encryption

Similar to the AWS KMS encryption attack documented by RhinoSecurityLabs, an attacker can leverage Google Cloud KMS to encrypt GCS objects with an attacker-controlled key.

On the victim side, key creation and import logs would not be available unless the attacker had sufficient permission to create a keyring and a KMS key in the victim’s project, which is unlikely. What is detectable are steps 2, 3, and 4 of the attack chain: granting the GCS service account access to the attacker's key, re-encrypting the objects, and disabling or destroying the key.

For step 2 of the attack chain, setting an IAM policy on a KMS key to grant encryption access generates the following log:

"protoPayload": {
    "serviceName": "cloudkms.googleapis.com",
    "methodName": "SetIamPolicy",
    "resourceName": "projects/attacker-project/locations/us/keyRings/malicious-keyring/cryptoKeys/malicious-key",
    "request": {
        "@type": "type.googleapis.com/google.iam.v1.SetIamPolicyRequest",
        "policy": {
            "bindings": [{
                "members": ["serviceAccount:service-123456789@gs-project-accounts.iam.gserviceaccount.com"],
                "role": "roles/cloudkms.cryptoKeyEncrypterDecrypter"
            }]
        }
    }
}

This log shows a GCS service account being granted encryption/decryption permissions on a KMS key, a necessary prerequisite for this ransomware attack. To cover the case where an attacker compromises the environment deeply enough to generate the key within the victim project, we can create a detection rule that alerts on any KMS IAM policy modification granting the encrypter/decrypter role to storage service accounts.

def rule(event):
    if (
        event.deep_get("protoPayload", "methodName") != "SetIamPolicy"
        or event.deep_get("protoPayload", "serviceName") != "cloudkms.googleapis.com"
        or event.deep_get("protoPayload", "status", "code")  # Operation failed
    ):
        return False

    # Extract the policy bindings from the request
    bindings = event.deep_get("protoPayload", "request", "policy", "bindings", default=[])

    for binding in bindings:
        role = binding.get("role", "")
        members = binding.get("members", [])

        # Check if granting KMS encryption/decryption permissions
        role_lower = role.lower()
        if "cryptokey" in role_lower and ("encrypt" in role_lower or "decrypt" in role_lower):
            for member in members:
                # Alert if granting to GCS service account
                if "gs-project-accounts.iam.gserviceaccount.com" in member:
                    return True

    return False

Bulk Object Re-Encryption

After the key is granted encryption permissions, the attacker's next step is to re-encrypt the victim’s objects with the key they control. The fastest way to re-encrypt an existing object is a gsutil rewrite command with the -k flag, which specifies the new encryption key. This generates a cascade of storage.objects.create events where:

  • The rewrite command is logged in the callerSuppliedUserAgent field as command/rewrite-k

  • The authorizationInfo in the protoPayload logs that access was granted to a specific resource projects/_/buckets/victim-bucket/objects/victim-file.txt

  • The resourceName field matches the granted file access

"protoPayload": {
        "at_sign_type": "type.googleapis.com/google.cloud.audit.AuditLog",
        "authenticationInfo": {
          "oauthInfo": {
            "oauthClientId": "111111111111-example1a2b3c4d5e6f7g8h9i0j.apps.googleusercontent.com"
          },
          "principalEmail": "frodo@lotr.com"
        },
        "authorizationInfo": [
          {
            "granted": true,
            "permission": "storage.objects.create",
            "resource": "projects/_/buckets/victim-bucket/objects/victim-file.txt",
            "resourceAttributes": {}
          },
          {
            "granted": true,
            "permission": "storage.objects.delete",
            "resource": "projects/_/buckets/victim-bucket/objects/victim-file.txt",
            "resourceAttributes": {}
          }
        ],
        "methodName": "storage.objects.create",
        "requestMetadata": {
          "callerIP": "1.2.3.4",
          "callerIp": "1.2.3.4",
          "callerSuppliedUserAgent": "apitools Python/3.13.7 gsutil/5.35 (linux) analytics/enabled interactive/True invocation-id/000000000098a6854caf369de05cd1c0 command/rewrite-k google-cloud-sdk/548.0.0,gzip(gfe)",
          "destinationAttributes": {},
          "requestAttributes": {
            "auth": {},
            "time": "2025-12-15T15:35:44.477602259Z"
          }
        },
        "resourceLocation": {
          "currentLocations": [
            "us"
          ]
        },
        "resourceName": "projects/_/buckets/victim-bucket/objects/victim-file.txt",

In this case, the generated logs omit an important detail: the specific encryption key used to re-encrypt objects in the victim bucket. We can nonetheless create a rule for gsutil rewrite -k operations that alerts on bulk rewrites of objects within the same bucket, which may indicate that the bucket’s objects are being re-encrypted.

import re

# User agent patterns indicating object rewrite
ENCRYPTION_REWRITE_PATTERNS = [
    re.compile(r"command/rewrite-k", re.IGNORECASE),  # gsutil rewrite -k
    re.compile(r"command/rewrite-k-s", re.IGNORECASE),  # gsutil rewrite -k -s
    re.compile(r"gsutil.*rewrite", re.IGNORECASE),  # gsutil rewrite variant
]


def rule(event):
    if event.deep_get("protoPayload", "serviceName") != "storage.googleapis.com":
        return False

    # Focus on the create operation (the actual re-encryption)
    method = event.deep_get("protoPayload", "methodName", default="")
    if method != "storage.objects.create":
        return False

    # This field contains commands executed on cli
    user_agent = event.deep_get(
        "protoPayload", "requestMetadata", "callerSuppliedUserAgent", default="<UNKNOWN_USER_AGENT>"
    )

    # Check for rewrite with encryption key change
    for pattern in ENCRYPTION_REWRITE_PATTERNS:
        if pattern.search(user_agent):
            return True

    return False

Cross-Account KMS Encrypt Operations from GCS Service Accounts

During the same re-encryption step of the attack chain, another relevant log can indicate that a ransomware attack is taking place. When objects are being encrypted with a KMS key, logs of type cloudkms_cryptokey are generated with a number of Encrypt operations. Although they do not contain the specific object being encrypted, it is still possible to analyze the KMS key in use and extract its project ID. We can then compare the logName project ID (belonging to the victim) with the key's project ID (belonging to the attacker) and alert on bulk Encrypt operations using a cross-account KMS key.

"protoPayload":
          {
            "at_sign_type": "type.googleapis.com/google.cloud.audit.AuditLog",
            "authenticationInfo":
              {
                "principalEmail": "service-111111111111-gs-project-accounts.iam.gserviceaccount.com",
              },
            "methodName": "Encrypt",
            "resourceName": "projects/attacker-project/locations/us/keyRings/test-keyring/cryptoKeys/test-key",
            "serviceName": "cloudkms.googleapis.com",
            "status": {},
          },
        "resource":
          {
            "labels":
              {
                "crypto_key_id": "test-key",
                "key_ring_id": "test-keyring",
                "location": "us",
                "project_id": "test-project",
              },
            "type": "cloudkms_cryptokey",

The detection rule will then be:

def rule(event):
    if (
        event.deep_get("protoPayload", "serviceName") != "cloudkms.googleapis.com"
        or event.deep_get("protoPayload", "methodName") != "Encrypt"
        or "gs-project-accounts.iam.gserviceaccount.com"
        not in event.deep_get("protoPayload", "authenticationInfo", "principalEmail", default="")
    ):
        return False
      
    # Get the source (victim) project from the log name
    # Format: projects/PROJECT/logs/cloudaudit.googleapis.com%2Fdata_access
    source_project = None
    log_name = event.get("logName", "")
    if log_name.startswith("projects/"):
        parts = log_name.split("/")
        if len(parts) >= 2:
            source_project = parts[1]

    # Get the project that owns the KMS key from the resource name
    kms_project = None
    resource_name = event.deep_get("protoPayload", "resourceName", default="")
    if resource_name.startswith("projects/"):
        parts = resource_name.split("/")
        if len(parts) >= 2:
            kms_project = parts[1]

    # Alert when the key lives in a different project than the log source
    return bool(source_project and kms_project and source_project != kms_project)
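The path-parsing logic in this rule can be factored into a small helper for unit testing (the helper name is illustrative, not part of the published rule):

```python
def extract_project(path):
    """Return the project ID from a 'projects/<id>/...' path, else None.

    Works for both logName values and KMS resource names.
    """
    parts = (path or "").split("/")
    if len(parts) >= 2 and parts[0] == "projects":
        return parts[1]
    return None


# The victim project comes from the logName, the attacker project from the key:
victim = extract_project("projects/victim-project/logs/cloudaudit.googleapis.com%2Fdata_access")
attacker = extract_project("projects/attacker-project/locations/us/keyRings/test-keyring/cryptoKeys/test-key")
# victim != attacker -> a cross-account Encrypt worth alerting on
```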

Bulk Encryption

For a volume-based detection, we can also monitor Encrypt operations taking place within a short timeframe to create a broader rule. This rule is configured with a threshold (e.g., 10 events within 15 minutes) to alert on bulk encryption activity rather than individual operations.

def rule(event):

    if (
        event.deep_get("protoPayload", "methodName") != "Encrypt"
        or event.deep_get("protoPayload", "serviceName") != "cloudkms.googleapis.com"
        or "gs-project-accounts.iam.gserviceaccount.com"
        not in event.deep_get(
            "protoPayload", "authenticationInfo", "principalEmail", default="<UNKNOWN_PRINCIPAL>"
        )
        or event.get("severity") == "ERROR"  # Operation failed
    ):
        return False

    return True
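The volume threshold itself lives in the rule's accompanying YAML metadata rather than in the Python body. A hypothetical spec (the RuleID is a placeholder) using the panther-analysis Threshold and DedupPeriodMinutes fields:

```yaml
AnalysisType: rule
RuleID: GCP.KMS.BulkEncryptionByStorageServiceAccount
Enabled: true
LogTypes:
  - GCP.AuditLog
Severity: High
Threshold: 10          # alert only after 10 matching events...
DedupPeriodMinutes: 15 # ...within a 15-minute window
```

Tune the threshold to your environment's baseline of legitimate CMEK activity to keep noise down.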

Key Destruction

As the last step of the attack chain, the attacker disables or destroys the KMS key version to deny the victim access to the encrypted data. Doing so results in the following UpdateCryptoKeyVersion log, where the key state is set to DISABLED; alternatively, a DestroyCryptoKeyVersion log is generated for direct key destruction.

"methodName": "UpdateCryptoKeyVersion",
"request":
  {
   "@type": "type.googleapis.com/google.cloud.kms.v1.UpdateCryptoKeyVersionRequest",
   "at_sign_type": "type.googleapis.com/google.cloud.kms.v1.UpdateCryptoKeyVersionRequest",
   "cryptoKeyVersion":
   {
     "name": "projects/test-project/locations/us/keyRings/test-keyring/cryptoKeys/test-key/cryptoKeyVersions/1",
     "state": "DISABLED",
},

We can then create a detection rule that monitors changes to key states:

def rule(event):
    if event.deep_get("protoPayload", "serviceName") != "cloudkms.googleapis.com":
        return False

    method = event.deep_get("protoPayload", "methodName", default="<UNKNOWN_METHOD>")

    # Direct key version destruction
    if method == "DestroyCryptoKeyVersion":
        return True

    # Key version state change, check for dangerous states
    if method == "UpdateCryptoKeyVersion":
        if event.deep_get("protoPayload", "request", "updateMask") != "state":
            return False

        crypto_key_state = event.deep_get(
            "protoPayload", "request", "cryptoKeyVersion", "state", default="<UNKNOWN_STATE>"
        )
        dangerous_states = ["DISABLED", "DESTROY_SCHEDULED", "DESTROYED"]
        return crypto_key_state in dangerous_states

    return False

Exfiltration Path

Similar to the AWS Bling Libra attack pattern, attackers may choose not to rely on encryption keys but to exfiltrate data to external buckets they control before deleting data in the victim environment. In the logs generated by copying objects from one bucket to another, it is possible to see storage.objects.get events where the destination field contains the attacker projectID and the attacker’s bucket.

"authorizationInfo":
    [
      {
        "granted": true,
         "permission": "storage.objects.get",
         "resource": "projects/_/buckets/source-bucket/objects/sensitive.txt",
       },
     ],
       "metadata":
        {
          "destination": "projects/attacker-project/buckets/exfil-bucket/objects/sensitive.txt",
           "requested_bytes": 12345,
        }
        "methodName": "storage.objects.get"

We can create a detection rule that extracts the destination project ID and bucket name and compares them to the source bucket. We can then take advantage of Panther’s dynamic severity to assign cross-account object transfers a higher severity than same-account transfers, which still warrant monitoring.

def _parse_destination(destination):
    dest_bucket = None
    dest_project = None

    if "buckets/" in destination and "objects/" in destination:
        try:
            dest_bucket = destination.split("buckets/")[1].split("/objects/")[0]
        except (IndexError, AttributeError):
            pass

    if "projects/" in destination and "buckets/" in destination:
        try:
            project = destination.split("projects/")[1].split("/buckets/")[0]
            dest_project = project if project != "_" else None
        except (IndexError, AttributeError):
            pass

    return dest_bucket, dest_project
    
def rule(event):

    if (
        event.deep_get("protoPayload", "methodName") != "storage.objects.get"
        or event.deep_get("protoPayload", "serviceName") != "storage.googleapis.com"
        or event.get("severity") == "ERROR"  # Operation failed
        or not event.deep_get("protoPayload", "metadata", "destination")
    ):
        return False

    # Extract source and destination buckets
    source_bucket = event.deep_get("resource", "labels", "bucket_name")
    destination = event.deep_get("protoPayload", "metadata", "destination", default="")
    dest_bucket, _ = _parse_destination(destination)

    # Alert if copying to a different bucket
    if source_bucket and dest_bucket and source_bucket != dest_bucket:
        return True

    return False
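Panther expresses dynamic severity through an optional severity() function defined alongside rule(). A sketch of the escalation logic as a standalone helper (names are illustrative; the published rule may structure this differently):

```python
def destination_project(destination):
    """Extract the project ID from a 'projects/<id>/buckets/...' destination, else None."""
    if "projects/" not in destination or "/buckets/" not in destination:
        return None
    project = destination.split("projects/")[1].split("/buckets/")[0]
    # GCS uses "_" as a wildcard project placeholder, which identifies nothing
    return None if project == "_" else project


def transfer_severity(source_project, destination):
    """HIGH for cross-account copies, MEDIUM for same-account bucket-to-bucket copies."""
    dest_project = destination_project(destination)
    if dest_project and dest_project != source_project:
        return "HIGH"
    return "MEDIUM"
```

In a Panther rule, severity(event) would call logic like this with the project parsed from logName and the metadata destination, returning the string Panther uses to override the rule's default severity.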

Bulk Deletion

In the same fashion as the bulk exfiltration use case, we can see that deleting objects in bulk from a bucket results in the following log, where a storage.objects.delete event is generated for each deleted file.

        "protoPayload":
          {
            "at_sign_type": "type.googleapis.com/google.cloud.audit.AuditLog",
            "authenticationInfo":
              {
                "principalEmail": "denethor@lotr.com",
              },
            "methodName": "storage.objects.delete",
            "resourceName": "projects/_/buckets/test-bucket/objects/test-file.txt",
            "serviceName": "storage.googleapis.com",
            "status": {},
          },

We can then create a detection rule that monitors bulk deletions.

def rule(event):

    if (
        event.deep_get("protoPayload", "methodName") != "storage.objects.delete"
        or event.deep_get("protoPayload", "serviceName") != "storage.googleapis.com"
        or event.get("severity") == "ERROR"  # Operation failed
    ):
        return False

    return True

Post-compromise Detection

Ransom Note Detection

As we did for AWS S3, we can detect uploads of files matching common ransomware note filename patterns by monitoring storage.objects.create events and extracting the filename from the resourceName field.

 methodName: storage.objects.create
 requestMetadata:
 callerIP: 1.2.3.4
 callerSuppliedUserAgent: google-cloud-sdk gcloud/548.0.0 command/gcloud.storage.cp invocation-id/abc123def456 environment/devshell environment-version/None client-os/LINUX client-os-ver/6.6.111 client-pltf-arch/x86_64 interactive/False from-script/False python/3.13.7,gzip(gfe)
 time: "2025-12-12T22:05:35.887374247Z"
 resourceLocation:
 currentLocations:
  - us
 resourceName: projects/_/buckets/data-bucket/objects/HOW_TO_DECRYPT_FILES.txt

We can then create a detection rule that searches for common ransomware filename patterns, as such:

import re

# Common ransomware note filename patterns
# One shown for demonstration
# Full rule in panther-analysis repo

RANSOM_NOTE_PATTERNS = [
    # HOW_TO_DECRYPT_FILES.txt
    r"(?i)how[_-]?to[_-]?(decrypt|restore|recover)[_-]?(your[_-]?)?files.*\.(txt|html?)$",
]

COMPILED_PATTERNS = [re.compile(pattern) for pattern in RANSOM_NOTE_PATTERNS]

def rule(event):
    if event.deep_get("protoPayload", "serviceName") != "storage.googleapis.com":
        return False

    # Focus on the create operation (the note upload)
    method = event.deep_get("protoPayload", "methodName", default="<UNKNOWN_METHOD>")
    if method != "storage.objects.create":
        return False

    # Check for filename
    resource = event.deep_get("protoPayload", "resourceName", default="")
    obj_name = resource.split("/objects/")[-1] if "/objects/" in resource else "<UNKNOWN_FILE>"
    return any(pattern.match(obj_name) for pattern in COMPILED_PATTERNS)
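A quick local sanity check of the pattern against plausible object names:

```python
import re

RANSOM_NOTE_PATTERN = re.compile(
    r"(?i)how[_-]?to[_-]?(decrypt|restore|recover)[_-]?(your[_-]?)?files.*\.(txt|html?)$"
)

# Common variants match regardless of case or separator
assert RANSOM_NOTE_PATTERN.match("HOW_TO_DECRYPT_FILES.txt")
assert RANSOM_NOTE_PATTERN.match("how-to-recover-your-files.html")

# Ordinary object names fall through
assert RANSOM_NOTE_PATTERN.match("quarterly-report.txt") is None
```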

Attack Matrix

| Attack Scenario | Required GCS Permissions | Additional Requirements | Recommended Preventive Controls | Panther Detection |
| --- | --- | --- | --- | --- |
| CMEK Encryption | storage.objects.get, storage.objects.create | Attacker needs cloudkms.cryptoKeys.setIamPolicy in the victim environment | Restrict CMEK usage via Policy | GCP KMS Key Granted to GCS Service Account; GCP KMS Bulk Encryption by GCS Service Account |
| Bulk Object Re-encryption | storage.objects.get, storage.objects.create, storage.objects.delete | Access to attacker-controlled KMS key or CSEK | Enable Object Versioning, Retention Policies, Bucket Lock, Object Hold | GCP GCS Bulk Object Rewrite Operation |
| Cross-Account KMS Encrypt Operations | storage.objects.get, storage.objects.create | KMS key in external project with encrypt permissions granted to victim's GCS service account | Restrict CMEK usage via Policy | GCP KMS Cross-Project Encryption |
| Bulk Exfiltration | storage.objects.get | None | Organization Policy for cross-project restrictions, enable service perimeters | GCP GCS Object Copied to Different Bucket |
| Bulk Deletion | storage.objects.delete | None | Enable Object Versioning, Retention Policies, Bucket Lock, Object Hold | GCP GCS Bulk Object Deletion |
| Bucket Configuration Changes | storage.buckets.update, storage.buckets.delete | None | Bucket Lock for retention policies, IAM conditions for sensitive operations | GCP Cloud Storage Buckets Modified Or Deleted |
| Ransom Note Upload | storage.objects.create | None | Preventive file upload scanning | GCP GCS Ransom Note Upload |

Conclusion

GCS ransomware attacks follow predictable patterns: gaining write access to buckets, leveraging Cloud KMS or customer-supplied keys for encryption, and disabling security controls that would allow recovery. However, detection capabilities are constrained by logging visibility limitations.

Given these visibility gaps, prevention is as critical as detection. Organizations should enforce object versioning and retention policies with Bucket Lock to prevent removal, restrict KMS key usage to organization-owned keys via Organization Policies, implement VPC Service Controls to prevent cross-project data exfiltration, and apply least-privilege IAM for both storage and KMS operations. The detection rules in this report provide coverage where logging allows, but the most effective defense against cloud ransomware remains making the attack prerequisites impossible to achieve in the first place.



