A while back, I was reading about the woes of one Marko Karppinen as he described the incredible ease of getting data into a public cloud and the equal and opposite horrors of getting that data back out. His post outlines his crafty plan to store around 60 GB of audio data in an archive for later retrieval and potential encoding. The challenge, then, is ensuring that data can later be pulled back down to an on-premises location without breaking the bank or the implied SLAs (Service Level Agreements). And this, folks, is the rub when using legacy architecture that bolts on public cloud storage (essentially object storage) without thinking through all of the financial and technological challenges.
I teased apart this idea when describing the Cloud Native property of Converged Data Management in an earlier post: “Getting data into the cloud is for amateurs. Getting data back out is for experts.” If using a public cloud for storage becomes hundreds of times more expensive than intended, while also demanding a significant time investment from the technical teams involved, then it’s not a solution for data protection. While it’s true that the blog post I’m referencing covers a single person with their own personal data set, the same truth doesn’t lie far away for a modern enterprise data center, either.
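To put a rough number on that risk, here’s a quick back-of-the-envelope calculation. The rates below are illustrative assumptions, not any provider’s actual pricing, but they show how a single retrieval can dwarf the monthly bill you thought you signed up for:

```python
# Back-of-the-envelope comparison of archive storage cost vs. retrieval cost.
# All rates below are illustrative assumptions, NOT actual provider pricing.

ARCHIVE_GB = 60                      # roughly the size of Marko's audio archive
STORAGE_RATE_PER_GB_MONTH = 0.01     # assumed cold-storage rate, $/GB-month
EGRESS_RATE_PER_GB = 0.09            # assumed internet egress rate, $/GB
EXPEDITED_RETRIEVAL_PER_GB = 0.50    # assumed penalty for pulling data back quickly, $/GB

monthly_storage = ARCHIVE_GB * STORAGE_RATE_PER_GB_MONTH
one_time_retrieval = ARCHIVE_GB * (EGRESS_RATE_PER_GB + EXPEDITED_RETRIEVAL_PER_GB)

print(f"Monthly storage bill: ${monthly_storage:.2f}")
print(f"One full retrieval:   ${one_time_retrieval:.2f}")
print(f"Retrieval vs. one month of storage: {one_time_retrieval / monthly_storage:.0f}x")
```

The exact multiplier depends entirely on the assumed rates, but the shape of the problem holds: the cost to store is tiny, and the cost to get it all back is not.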
With Rubrik’s r300 Series Appliance, a clear and concise abstraction layer sweeps away the layers of complexity that hinder using public (or private) cloud storage as an archive target. The workflow is elegant (a rough sketch in code follows the list):
- Point the distributed cluster of appliances to an archive target.
- Associate the archive target with one or more policies.
- Retrieve files, folders, or entire workloads with two clicks, regardless of the data location.
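Here’s a minimal sketch of those three steps expressed against a hypothetical REST API. The endpoints, field names, and identifiers are placeholders invented for illustration only; they are not Rubrik’s actual interface.

```python
# Sketch of the three-step workflow above against a HYPOTHETICAL REST API.
# Endpoint paths, field names, and the token are illustrative placeholders.
import requests

BASE = "https://cluster.example.com/api"           # hypothetical cluster address
HEADERS = {"Authorization": "Bearer <api-token>"}  # placeholder credential

# 1. Point the distributed cluster at an archive target (an object store bucket here).
archive = requests.post(f"{BASE}/archive_targets", headers=HEADERS, json={
    "name": "cloud-archive",
    "type": "object_store",
    "bucket": "my-archive-bucket",
}).json()

# 2. Associate the archive target with one or more protection policies.
requests.patch(f"{BASE}/policies/gold", headers=HEADERS, json={
    "archive_target_id": archive["id"],
    "archive_after_days": 30,
})

# 3. Retrieve a file, regardless of whether it currently lives locally or in the archive.
requests.post(f"{BASE}/restores", headers=HEADERS, json={
    "workload": "vm-finance-01",
    "path": "/data/reports/q3.xlsx",
})
```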
Any data retrieval task that needs to reach out to the archive merely sips data, because only deduplicated, compressed, unique blocks are transferred over the wire. The end result is restoring vastly more data than the bytes you pay to move. In essence: storing and retrieving gigabytes for the cost of kilobytes.
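As a rough illustration of that math (the reduction ratios and egress rate below are assumptions chosen for illustration, not measured figures):

```python
# Why archive retrievals "sip" data: only unique, compressed blocks cross the wire.
# The ratios and rate below are assumptions for illustration, not measured figures.

logical_restore_gb = 500      # amount of data the admin asks to restore
dedupe_ratio = 4.0            # assumed: only 1 in 4 blocks is unique
compression_ratio = 2.0       # assumed: unique blocks compress 2:1
egress_rate_per_gb = 0.09     # assumed egress price, $/GB

transferred_gb = logical_restore_gb / (dedupe_ratio * compression_ratio)
cost_naive = logical_restore_gb * egress_rate_per_gb   # shipping every logical byte
cost_reduced = transferred_gb * egress_rate_per_gb     # shipping only unique, compressed blocks

print(f"Logical data restored: {logical_restore_gb} GB")
print(f"Bytes over the wire:   {transferred_gb:.1f} GB")
print(f"Egress cost, naive:    ${cost_naive:.2f}")
print(f"Egress cost, reduced:  ${cost_reduced:.2f}")
```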
There’s no need to fuss over the archive’s backup pruning, data layout, file system, archive duration, and so forth; it’s all handled natively by Rubrik. That is the strength of a Converged Data Management platform: the complete data set is managed from the moment of protection, across any number of appliances, replication targets, and archive storage, while remaining visible and accessible.
Sound interesting? Download our Object Store Archival data sheet and stay tuned for Part 2 in this series when we take a look at how one of our customers leverages AWS for archival and disaster recovery.