Securing Amazon S3: Lessons from the Latest Ransomware Attack

Cloud object storage such as Amazon S3 has become the backbone of modern organizations, powering critical workloads like data lakes, mobile applications, GenAI, and analytics. With 70% of all data in a typical cloud environment stored as objects, S3 plays a pivotal role in managing today’s data explosion. This reliance comes with a significant challenge, however: securing object storage requires more specialized, modern tooling than legacy server-based tools provide, and many organizations have yet to adopt it. As business-critical data continues to grow exponentially in S3 buckets, it is also becoming an increasingly valuable target for cybercriminals.

Understanding the Codefinger Ransomware Attack

Recent reports of ransomware attacks by the threat actor Codefinger highlight a significant threat to Amazon S3 environments. These attacks use compromised credentials and leverage S3’s native Server-Side Encryption with Customer-Provided Keys (SSE-C) to encrypt the victim’s data with an encryption key only the attacker holds. To exacerbate the situation, the attackers set what is effectively a time bomb: a short-term lifecycle policy that deletes the encrypted data within seven days if the ransom demands are not met.

This leaves affected users in a difficult position. The real news here, however, is that this has long been theorized as a possible attack vector by security firms like Rhino Security and by AWS itself. The Codefinger attack is simply the first time it has been seen in the wild.

Also alarming is that the attack itself is rather simple to carry out and can evade detection with relative ease.

Once account credentials are compromised, the attackers use common, native Amazon S3 commands to carry out the attack straight from the AWS CLI, eliminating the need to compromise a server or move the data to another S3 bucket. In fact, no data leaves the S3 bucket at all: the objects are effectively encrypted in place with the attacker-supplied key. And because encrypting objects in S3 is such a common task across all AWS customers, it is very hard for AWS to differentiate between normal and abnormal behavior with any of its own active defense tools. It looks like normal operation.
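
To see why, it helps to look at the operation at the API level. Below is a minimal sketch (using boto3 and hypothetical bucket and object names, meant only for a test bucket you own) of an in-place rewrite with an SSE-C key, which is the primitive this attack abuses:

```python
import os

import boto3

s3 = boto3.client("s3")

# Hypothetical names for a test bucket you own; illustration only.
bucket = "example-test-bucket"
key = "reports/q4-financials.csv"

# SSE-C: the caller supplies the AES-256 key. S3 encrypts with it and retains only
# an MD5 of the key, so the object cannot be decrypted later without that key.
sse_key = os.urandom(32)

# A single CopyObject onto the same key re-encrypts the object in place.
# boto3 base64-encodes the key and adds the required key-MD5 header automatically.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=sse_key,
)
```

Nothing in that request is exotic; it is the same call a legitimate re-encryption job would make, which is exactly why it blends in.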

Most importantly, a successful and very public incident like this sets a precedent that other threat actors are likely to imitate, quickly amplifying the risk to organizations globally.

The Importance of Shared Responsibility

To be clear, this exploit was not the result of a vulnerability in S3 or the underlying AWS infrastructure; it was carried out through mismanaged credentials. It does, however, highlight the importance of understanding the AWS Shared Responsibility Model, which clearly states that while AWS is responsible for the security of the underlying infrastructure its services are built on, each customer is responsible for securing their own environment. And while AWS provides all of its customers with the tools to mitigate these types of threats, many organizations still mistakenly believe AWS manages every aspect of their security, and only realize they were the responsible party after a cyber incident impacts them.

AWS's Best Practices for Protection

In response to this attack, AWS’s security team published a detailed guide outlining recommendations designed to protect against the unauthorized use of SSE-C encryption and similar attacks. The guide highlights the following four best practices:

  • Block the use of SSE-C encryption unless required by the application (Least Privilege)

  • Implement data recovery procedures

  • Monitor AWS resources for unexpected access patterns

  • Implement short-term credentials

Ensuring the use of least privilege access and implementing and enforcing short-term credentials (and making sure credentials don’t end up in plain text in GitHub!) are common mitigation practices that many organizations are already following.
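
The first recommendation is also the most directly actionable against this specific technique. As a minimal sketch (assuming a hypothetical bucket name and following the kind of resource policy condition AWS describes), a bucket policy can deny any object write that carries an SSE-C header:

```python
import json

import boto3

s3 = boto3.client("s3")

bucket = "example-critical-bucket"  # hypothetical name

# Deny any upload that supplies a customer-provided (SSE-C) encryption key.
# The "Null": "false" condition means "the SSE-C algorithm header IS present".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenySSECUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "Null": {"s3:x-amz-server-side-encryption-customer-algorithm": "false"}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```

Because a CopyObject is authorized as s3:PutObject on the destination, this also blocks the in-place re-encryption path; if SSE-C is never legitimately used, the same condition can be enforced more broadly at the organization level.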

Challenges in Detection and Response

However, monitoring AWS resources for unexpected access patterns and implementing clear data protection and recovery solutions are often easier said than done.

Security teams should be familiar with CloudTrail and S3 server access logs. Despite their usefulness, these tools are not bulletproof against this type of attack. A threat actor leveraging the compromised credentials of an authorized user will likely still evade detection, especially given the size and scope of the logs generated in any given environment. There is also a lag before suspicious activity appears in these logs: in an attack like this, several hundred gigabytes of data can be encrypted before the activity even shows up, and terabytes before any incident response plan can be kicked off.
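
One practical way to shrink that gap is to alert on SSE-C usage itself, since most environments never use customer-provided keys at all. The sketch below makes a few assumptions: CloudTrail data events are enabled for the buckets in question, logs land in a hypothetical logging bucket and prefix, and the field names match what typically appears in S3 data-event records (verify them against your own trail):

```python
import gzip
import json

import boto3

s3 = boto3.client("s3")

# Hypothetical trail delivery location; adjust to your own bucket and prefix.
LOG_BUCKET = "example-cloudtrail-logs"
LOG_PREFIX = "AWSLogs/111122223333/CloudTrail/"

WATCHED_EVENTS = {"PutObject", "CopyObject"}

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=LOG_BUCKET, Prefix=LOG_PREFIX):
    for obj in page.get("Contents", []):
        raw = s3.get_object(Bucket=LOG_BUCKET, Key=obj["Key"])["Body"].read()
        for record in json.loads(gzip.decompress(raw)).get("Records", []):
            if record.get("eventName") not in WATCHED_EVENTS:
                continue
            extra = record.get("additionalEventData", {}) or {}
            params = record.get("requestParameters", {}) or {}
            # Flag any object write that used a customer-provided (SSE-C) key.
            if (extra.get("SSEApplied") == "SSE_C"
                    or params.get("x-amz-server-side-encryption-customer-algorithm")):
                print(record.get("eventTime"),
                      record.get("userIdentity", {}).get("arn"),
                      params.get("bucketName"), params.get("key"))
```

In practice a check like this belongs in an automated pipeline (for example, Athena queries over CloudTrail or EventBridge-driven alerting) rather than an ad hoc script, but the signal being looked for is the same.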

The Role of Data Security Posture Management (DSPM)

An additional challenge is that not all data is created equal. Cloud workloads are exploding in volume by the minute, and knowing where the important data that needs protecting lives is an ongoing challenge. What was once a small experiment by a developer may turn into a high-value application overnight, often without other teams being fully aware of the criticality of the data and its protection needs. This is where Data Security Posture Management (DSPM) tools like those Rubrik offers really shine. They can quickly inventory an organization's entire S3 estate to help determine which buckets may hold sensitive or otherwise critical data. They also help identify overly permissive access granted to users and roles, which limits the blast radius in the event of an account compromise. Rubrik’s solution can also quickly assist in remediating any issues it finds, eliminating the guesswork in how an environment should be secured against malicious use. To explore how these tools can strengthen your data security posture and streamline protection efforts, check out this interactive demo.
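
For a sense of what such tooling automates at scale, here is a deliberately simplified sketch of the kind of posture check involved. Real DSPM products go much further and classify the data itself; this minimal pass, using only standard S3 APIs, just surfaces buckets that deserve a closer look:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Walk every bucket and flag obviously risky posture. A real inventory would also
# resolve each bucket's region, classify contents, and audit IAM policies.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    findings = []

    # Does the bucket policy make the bucket public?
    try:
        if s3.get_bucket_policy_status(Bucket=name)["PolicyStatus"]["IsPublic"]:
            findings.append("public bucket policy")
    except ClientError:
        pass  # no bucket policy attached

    # Is there no public access block configured at all?
    try:
        s3.get_public_access_block(Bucket=name)
    except ClientError:
        findings.append("no public access block")

    if findings:
        print(f"{name}: {', '.join(findings)}")
```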

Developing a Data Protection and Recovery Plan

Lastly, organizations should implement a data protection and recovery plan for their S3 estate. This is itself a major challenge for most organizations due to data and account sprawl, as well as the added cost and complexity of applying protection consistently across every bucket and account.

Enabling versioning on buckets is a common way to roll back from accidental deletions or encryptions, but in a compromised-credential scenario like the one enacted by the Codefinger group, the attacker likely has the ability to suspend versioning or permanently delete prior object versions, especially if MFA Delete is not enabled on the bucket. Additionally, many applications and data sets have a high rate of change in the underlying S3 objects, so enabling versioning leads to a large increase in overall storage costs.
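
A quick, read-only way to see where you stand is to report each bucket's versioning and MFA Delete configuration, as in the sketch below, and spot the buckets whose rollback story depends on settings a compromised credential could change:

```python
import boto3

s3 = boto3.client("s3")

# Report versioning posture per bucket. Buckets with versioning enabled but
# MFA Delete disabled can have versioning suspended, or prior versions purged,
# by anyone holding sufficiently privileged credentials.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    config = s3.get_bucket_versioning(Bucket=name)
    status = config.get("Status", "Disabled")
    mfa_delete = config.get("MFADelete", "Disabled")
    print(f"{name}: versioning={status}, MFA delete={mfa_delete}")
```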

Alternatively, organizations can implement S3 Replication to copy data from one bucket to another. This is a great way to keep S3 data highly available during an outage, but replication is still vulnerable to compromised-credential attacks: the attacker likely has a path to the replica bucket as well, where they can wreak the same havoc. S3 Replication also requires versioning, adding cost, and is more complex to manage at scale across hundreds or thousands of buckets.

Neither of these solutions is simple to recover from during an incident. Rolling back a handful of objects to a prior version is fairly straightforward. But what if only a portion of the objects in a bucket were compromised? There are ways to recover, but they are manual and time-consuming, extending the downtime of the affected applications to days or even weeks.
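
To make that concrete, here is a sketch of the manual path for a single object, assuming versioning was enabled before the incident and using hypothetical names. Multiply this by the affected subset of keys across many buckets, and the days-to-weeks estimate becomes easy to believe:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical names for illustration only.
bucket = "example-critical-bucket"
key = "reports/q4-financials.csv"

# Find the object's non-current versions (the current version is the bad overwrite).
response = s3.list_object_versions(Bucket=bucket, Prefix=key)
versions = [v for v in response.get("Versions", []) if v["Key"] == key]
previous = [v for v in versions if not v["IsLatest"]]

if previous:
    # Assume a single malicious overwrite; in reality you must verify this version
    # is actually clean. Promote it by copying it over the current version.
    last_good = max(previous, key=lambda v: v["LastModified"])
    s3.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key, "VersionId": last_good["VersionId"]},
    )
```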

The best strategy is to make sure each critical S3 bucket is backed up to a logically air-gapped environment with a solution that also provides rapid recovery.

Rubrik’s Immutable S3 Protection & Recovery

Rubrik is helping thousands of customers with this exact strategy. Rubrik’s data protection solution can automatically discover and protect all your S3 buckets and back them up to immutable storage in a separate access-controlled AWS account, all while cutting backup costs through data tiering to lower-cost storage. To simplify this further, Rubrik offers Rubrik Cloud Vault, a fully managed, isolated, off-site archive for backup data, ensuring customers always have an immutable copy of their backup data to recover from.  

To recover rapidly after a cyber incident, restore speed from backup is critical, but equally important is identifying the full scope of the incident. This is where Rubrik’s comprehensive cloud cyber resilience stands out. Through Rubrik Security Cloud, customers can continuously monitor AWS CloudTrail logs for suspicious activity and detect anomalies within Amazon S3 buckets, such as the large-scale encryption and deletion events seen in this attack. With the scope of the attack identified, Rubrik can restore data from its backup repositories up to 2x faster than native tools, and with object-level recovery only the impacted objects need to be restored, significantly lowering the overall recovery time.

Take Control of Your S3 Security Today

The recent S3 ransomware incident should be a wake-up call. Long gone are the days when S3 held only low-value data, so it is time for organizations to adopt proactive strategies and stay vigilant in safeguarding their critical data. A simple way to start is by assessing your current security posture:

  • Are your access policies and user credentials configured for least privilege to minimize risk?

  • Do you know which S3 buckets house your most sensitive or critical data?

  • When was the last time you tested your S3 recovery processes?

Threat actors are employing more advanced tactics by the day, and the explosion of data stored in cloud environments like S3 only amplifies what's at stake. By taking proactive action now, you can ensure your critical S3 data is secure, protected, and ready to face these evolving threats with confidence.