There’s a great speech given by Al Pacino’s character, Tony D’Amato, in the movie Any Given Sunday. He compares life to football, saying both are a game of inches because of the small margin of error.

I see data management as a game of inches. A singular datum is small, like an inch. Inches come together to create a mile. Many data come together to create a larger picture: an innovation, a company. 

It’s a game of inches to secure and protect that data because – as in life or football – the margin for error is small. If you don’t quite protect it, then it is completely vulnerable. Data backup is a game of inches, too. If your backup fails, if you fumble and don’t quite catch it, recovery may not be possible. 

But there’s one game in data management that you want to play. When it comes to data backup storage, you want to play in inches or “incrementals.” It’s the incremental backups to the right storage tier that can add up to significant savings. 

Playing to Win in the Big Data Game 

We’re well into the zettabyte era. At the end of 2020, the entire digital universe reached 59 zettabytes.[1] That’s over 40 times more bytes than there are stars in the observable universe.[2] Predictions are that, by 2024, the volume of data in the world will reach 149 zettabytes.[3] That’s a lot of inches.

With that data growth comes the need to store it – with the ability to rapidly scale. And whether that data is stored in the cloud or on-premises, costs can easily stack up, especially if you’re relying on legacy solutions. 

If you are leveraging storage inefficiently, then you may be paying premiums on data you aren’t using. Research shows that:

  • up to 73% of big data goes unused in organizations[4]

  • unstructured data typically accounts for 80% of corporate data[5]

  • unstructured data is increasing by upwards of 55% per year[6]

On top of that, many organizations are required to hold on to that cold data for years to meet compliance or auditing requirements. 

As data stores grow, so does the complexity of managing where to keep all that data. Many IT departments waste precious time and resources manually managing data – sending it from the “hot” storage class to the “cold” class. The backup and retrieval of that data consumes skilled staff time that could be better spent on innovative initiatives. It would be so much more efficient if you could just set it and forget it.

You can.

With a well-architected solution, you can automatically store data where you need it, for as long as you need it.
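To make the “set it and forget it” idea concrete, here is a minimal sketch of the generic AWS mechanism for automated hot-to-cold tiering: an S3 lifecycle rule that transitions objects to colder storage classes on a schedule. This illustrates the underlying AWS building block, not Rubrik’s SLA policy engine; the bucket prefix, day counts, and retention window are illustrative assumptions.

```python
# Hypothetical S3 lifecycle configuration that moves backup objects from
# the hot S3 Standard class down to cold archive tiers automatically.
# Prefix, transition days, and expiration are assumptions for illustration.
lifecycle_rule = {
    "Rules": [
        {
            "ID": "archive-cold-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [
                # After 30 days, move from S3 Standard to S3 Glacier.
                {"Days": 30, "StorageClass": "GLACIER"},
                # After 180 days, move to S3 Glacier Deep Archive.
                {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
            ],
            # Delete once the compliance/auditing window has passed.
            "Expiration": {"Days": 2555},  # roughly 7 years
        }
    ]
}

# Applying it would look like this (requires boto3 and AWS credentials):
#   import boto3
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-backup-bucket",  # hypothetical bucket name
#       LifecycleConfiguration=lifecycle_rule,
#   )
```

The point of a policy like this is that nobody has to remember to move the data: the schedule is declared once and enforced continuously.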

An Incremental-Forever Approach to Backup Drives Cost Savings

All IT professionals have heard it before: to ensure business continuity, it’s critical to have a strong disaster recovery strategy that can combat any data threat. (Check out our blog that talks about disaster recovery for your VMware applications.) One hardship many run into is that a full backup – the replication of entire machines – can be slow and extremely expensive, and it makes point-in-time recovery complex to execute. 

That’s why we at Rubrik are thrilled to announce new support for Amazon S3 Glacier and Amazon S3 Glacier Deep Archive as part of the recent Rubrik Andes 5.3 release. Our approach allows you to send incremental-forever backups[7] to inexpensive cold storage with the same SLA policy engine that customers use to automate and streamline their data lifecycle operations. Amazon S3 Glacier Deep Archive can reduce storage costs by up to a factor of 23 when compared to storing data for the same amount of time in Amazon S3 Standard class, translating into substantial savings.
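The back-of-envelope arithmetic behind the “factor of 23” is easy to check against published per-GB list prices. The figures below are us-east-1 list prices from around the time of the Andes 5.3 release (S3 Standard at $0.023/GB-month, S3 Glacier Deep Archive at $0.00099/GB-month); prices vary by region and change over time, so treat them as illustrative assumptions.

```python
# Illustrative cost comparison using assumed us-east-1 list prices.
S3_STANDARD_PER_GB_MONTH = 0.023      # USD per GB-month (assumption)
DEEP_ARCHIVE_PER_GB_MONTH = 0.00099   # USD per GB-month (assumption)

def monthly_cost(tb_stored: float, price_per_gb: float) -> float:
    """Flat storage cost in USD for tb_stored terabytes at a per-GB rate."""
    return tb_stored * 1024 * price_per_gb

hot = monthly_cost(100, S3_STANDARD_PER_GB_MONTH)    # 100 TB in S3 Standard
cold = monthly_cost(100, DEEP_ARCHIVE_PER_GB_MONTH)  # same data in Deep Archive

print(f"S3 Standard:  ${hot:,.2f}/month")
print(f"Deep Archive: ${cold:,.2f}/month")
print(f"Ratio: {hot / cold:.1f}x")  # roughly 23x
```

Note this counts storage alone; retrieval fees and minimum-storage-duration charges for the archive tiers would narrow the gap for frequently restored data, which is why cold tiers suit long-retention backups best.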

No-Fumble Data Search and Retrieval Functionality

With Rubrik, in the event that you need to search and initiate recovery, it’s easy to do so swiftly since we ensure your data is always available and recoverable. 

While the data itself goes to Amazon S3 Glacier or Amazon S3 Glacier Deep Archive, Rubrik stores your metadata on Amazon S3 Standard so that everything remains globally searchable. Rubrik preserves simplicity with two-click search and recovery in the same workflow. 

Rubrik also allows our customers to select which method they would like to use to retrieve their data. AWS offers three different tiers when retrieving data from Amazon S3 Glacier: standard (the default setting, which typically retrieves data within 3-5 hours), bulk (the cheapest option, used to retrieve large, even petabyte-scale, amounts of data; it typically completes within 5-12 hours), and expedited (used for urgent retrievals, it’s the fastest option at the highest cost).
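The three tiers above map directly onto the values the underlying S3 RestoreObject API accepts. The sketch below is a hypothetical tier-selection helper, not Rubrik’s logic; the size threshold is an assumption for the example, while the tier names ("Expedited", "Standard", "Bulk") are the actual values the AWS API takes.

```python
# Hypothetical helper illustrating the three S3 Glacier retrieval tiers.
# The 1 TB threshold is an assumption for this example, not a Rubrik default.
def choose_retrieval_tier(urgent: bool, size_gb: float) -> str:
    if urgent:
        return "Expedited"   # fastest option, highest cost
    if size_gb >= 1024:
        return "Bulk"        # cheapest; typically completes in 5-12 hours
    return "Standard"        # default; typically 3-5 hours

# Issuing the restore with boto3 would look like this (needs credentials):
#   import boto3
#   boto3.client("s3").restore_object(
#       Bucket="my-backup-bucket",       # hypothetical bucket/key
#       Key="backups/vm-042/snap.img",
#       RestoreRequest={
#           "Days": 7,  # how long the restored copy stays readable
#           "GlacierJobParameters": {"Tier": choose_retrieval_tier(False, 10)},
#       },
#   )
```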

If the retrieval is not a rush, the first 10GB is free to retrieve through AWS. However, if it is a rush, Rubrik offers the option to pay for expedited data retrieval. This setting can be easily configured on the archival settings page. 

Bring Home up to 23x Cost-Savings with Rubrik and Amazon S3 Glacier 

Huddle up for a review of the play. Rubrik now:

1) supports the Amazon S3 Glacier and Amazon S3 Glacier Deep Archive storage classes in our SLAs

2) allows our customers to take advantage of more cost-effective storage classes

3) helps you score up to 23x cost savings over Amazon S3 Standard class

With Rubrik Andes 5.3, we are expanding your choice of storage class in AWS with support for Amazon S3 Glacier and Amazon S3 Glacier Deep Archive. At Rubrik, we’ve turned data backup on AWS into a game of inches. Because just as in Any Given Sunday, when we add up all those incrementals, we know that’s going to make all the difference. 

To learn more about how Rubrik delivers simplified data management solutions for AWS, download the eBook “Accelerate Application Delivery on AWS with Powerful Cloud Data Management.”

[1] Statista, “Volume of data/information worldwide from 2010 to 2024.” https://www.statista.com/statistics/871513/worldwide-data-created/

[2] World Economic Forum, “How much data is generated each day?” https://www.weforum.org/agenda/2019/04/how-much-data-is-generated-each-day-cf4bddf29f/

[3] Statista, “Volume of data/information worldwide from 2010 to 2024.” https://www.statista.com/statistics/871513/worldwide-data-created/

[4] Tech Republic, “How to effectively manage cold storage big data.” https://www.techrepublic.com/article/how-to-effectively-manage-cold-storage-big-data/

[5] Tech Republic, “Unstructured data: A cheat sheet.” https://www.techrepublic.com/article/unstructured-data-the-smart-persons-guide/

[6] Datamation, “Structured vs. Unstructured Data.” https://www.datamation.com/big-data/structured-vs-unstructured-data.html#:~:text=Unstructured%20data%20makes%20up%2080,on%20the%20business%20intelligence%20table.

[7] The Rubrik Andes 5.3 release supports a maximum of 40 incrementals.