Architecture
Converged Data Management Unwrapped - Instant Data Access

Now that the Rubrik team has returned from an energizing trip to VMworld Barcelona and TechTarget’s Backup 2.0 Road Tour, it’s time to peel the onion a bit further on Converged Data Management in the fourth installment of this deep dive series. In this post, I’ll put the magnifying glass up against Instant Data Access – which is the ability to see and interact with data in a global and real-time manner – to better understand why it’s a critical property in the architecture shown below:

[Diagram: the Converged Data Management architecture]

First, let’s take a step back. There’s a lot of hemming and hawing on the Internet about data lakes, big data, the Internet of Things (IoT), and so forth. The rub is that data is growing exponentially, is largely unstructured, and there’s no way to get our hands around it in any meaningful way without machine learning and a plethora of automation. ImageNet is a great example of this in a real-world, practical format.

This has led to some really nifty architectures being crafted to deal with large data sets, and a rise in the popularity of storage systems that are eventually consistent, such as object storage clusters, rather than strongly consistent, as typically seen with a traditional dual-controller storage array. The working data sets are simply too large to cram into legacy storage architectures once financial reality comes into play. For those dealing with archive requirements – often set to “infinite retention” policies for health care organizations and legal firms – the headache of determining where to store data likely rings true.

However, it’s not just a data size problem; the protection process is also a bit wonky. Historically, data protection workflows were crafted to capture a snapshot of the working data set on a weekly basis. These are referred to as full backups. Smaller jobs were set to execute on a nightly basis to capture the delta changes, either as differentials (everything that has changed since the last full) or incrementals (a dependency chain of changes). These backup files were written daily to magnetic tape, which was a cheap and simple medium for storing data, and shuffled out of the data center using a variation of the Grandfather-Father-Son (GFS) rotation.
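The restore-time difference between those two delta styles is worth making concrete. Here is a minimal sketch (my own illustration, not any vendor's implementation) that, given a week of backup jobs, computes which backup files a restore has to read:

```python
# Illustrative sketch of classic full/differential/incremental chain logic.
# A differential holds everything since the last full, so it supersedes
# earlier deltas; an incremental holds only changes since the previous
# backup, so the entire chain back to the full must be read.

def restore_chain(backups, target_day):
    """backups: list of (day, kind) tuples sorted by day,
    with kind in {'full', 'diff', 'incr'}.
    Returns the days whose backup files a restore of target_day needs."""
    # Start from the most recent full backup at or before the target day.
    base = max(d for d, k in backups if k == "full" and d <= target_day)
    chain = [base]
    for day, kind in backups:
        if base < day <= target_day:
            if kind == "diff":
                # Differential replaces any earlier deltas in the chain.
                chain = [base, day]
            elif kind == "incr":
                # Incremental extends the dependency chain.
                chain.append(day)
    return chain

incr_week = [(0, "full"), (1, "incr"), (2, "incr"), (3, "incr")]
print(restore_chain(incr_week, 3))  # [0, 1, 2, 3] - full plus every incremental

diff_week = [(0, "full"), (1, "diff"), (2, "diff"), (3, "diff")]
print(restore_chain(diff_week, 3))  # [0, 3] - full plus only the latest diff
```

Incrementals minimize nightly backup size at the cost of longer, more fragile restore chains; differentials trade extra nightly capacity for a two-file restore. Tape-era schedules were largely a negotiation between these two curves.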

The tapes themselves were cataloged and indexed by a master server, and often assigned a role – such as specific cartridges for daily, weekly, monthly, and annual data points in a first-in, first-out (FIFO) model. Tape readers were staged at any data center that might need to rehydrate the archived data, which meant keeping around the various hardware models needed to read each LTO generation, plus the software needed to ingest the catalog data for recovery purposes.

As someone who had to deal with this system for over a decade on the user side of things, I can attest that there are a number of challenges with this model and architectural points that I feel are broken. For example, any data that has been archived requires a non-trivial amount of operational effort to locate and retrieve. While it’s true that catalog systems can reveal where the data lives – such as “Tape 230 of Set 4 in Locker 6” – there’s no way to retrieve the data without hitting a tape mailbox, robotic library, or calling the archive retrieval firm. We put up with this behavior because the technical world did not have data space efficiency technologies like deduplication, and we didn’t have object storage to offer eleven 9’s of durability across meaningful geographical distances at financially responsible costs. There was no other sane choice. But much like the 8-track and cassette tapes used for music, that world has faded away.

[Photo: old school data protection – long-term archive in a slightly older package]

The use of on- or off-premises object storage offers too many advantages to ignore. Primarily, it allows data to span buckets that can live in one or multiple data centers while still remaining online. Additionally, the underlying hardware is largely irrelevant to the data protection system – making archaic technologies like RAID simply vanish. Coupled with RESTful API calls to an object store to GET or PUT a piece of data, the data protection and retrieval workflows remain snappy, efficient, and nimble, while resulting in instant data access.
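To see why the underlying hardware becomes irrelevant, consider what the PUT/GET model actually exposes. This toy in-memory model (my own sketch, not any vendor's API) shows that the data protection layer addresses objects purely by bucket and key – there is no disk, LUN, or RAID group anywhere in the interface:

```python
# Toy model of object-store PUT/GET semantics. The caller sees only
# (bucket, key) -> bytes; placement, replication, and hardware are
# entirely the store's problem, not the backup software's.

class ObjectStore:
    def __init__(self):
        self._objects = {}  # (bucket, key) -> bytes

    def put(self, bucket, key, body):
        self._objects[(bucket, key)] = body

    def get(self, bucket, key):
        return self._objects[(bucket, key)]

    def list_keys(self, bucket):
        return sorted(k for b, k in self._objects if b == bucket)

archive = ObjectStore()
# Backup software writes snapshots as flat, self-describing keys.
archive.put("backups", "vm42/2015-10-01/full", b"...snapshot bytes...")
archive.put("backups", "vm42/2015-10-02/incr", b"...delta bytes...")
print(archive.list_keys("backups"))
```

A real S3-compatible endpoint swaps the dictionary for HTTP verbs against a URL, but the contract is the same flat namespace, which is exactly why retrieval stays online and snappy regardless of where the buckets physically live.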

Interestingly enough, this model also drives down cost. Unlike the steady capital expense of tape, which is a unit of cost that remains locked up inside an archival warehouse and rarely sees the light of day, an object system with data space efficiencies can offer the affordability of a typical, fixed capital expenditure (on-premises) or the flexibility of an operational expense (off-premises, cloud). It also allows for dynamic control over retention policies without having to schlep around hardware and magnetic tapes. Having been in situations where I was forced to consume hundreds of tapes per month, and then never see them again, I saw firsthand how much sad panda can result from continuing to use an architecture that was designed for a different era of data creation. Plus, the amount of control over data sovereignty, compliance, and buy vs. lease decisions is entirely up to the technical architects constructing the system when using new archival methods.

[Screenshot: searching for a file across protected workloads in the Rubrik UI]

With Rubrik, Instant Data Access means having a Google-like search experience across all data and servers being protected by the enterprise, enabling real-time search and restores. The screenshot above shows a quick search for my Users directory in a Windows Server virtual machine. Data locality becomes trivial – the search has looked across all on-premises, archived, cloud, and replicated backups using a distributed, flash-backed content index to find the data I’m looking for. In the event that I’d need to restore from a public cloud archive, only the unique, deduplicated, and compressed portions of the file that are missing from the Rubrik r300 Appliance would be retrieved. This equates to sending only a small bit of data over the WAN, which results in a speedy transfer and a lower network cost from the public cloud provider.
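The "retrieve only what's missing" idea can be sketched in a few lines. This is an illustrative model of content-addressed deduplication, not Rubrik's actual implementation: a file is a manifest of chunk hashes, and only chunks absent from the local appliance's cache cross the WAN:

```python
# Sketch of deduplicated partial retrieval (illustrative only). A file
# manifest is an ordered list of content hashes; restore pulls a chunk
# from the cloud archive only when the local cache lacks it, so WAN
# traffic is proportional to what is missing, not to the file size.

import hashlib

def chunk_hashes(data, size=4):
    """Split data into fixed-size chunks and hash each one."""
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

def restore(manifest, local_cache, cloud):
    """Rebuild the file from the manifest, counting bytes sent over the WAN."""
    wan_bytes = 0
    out = b""
    for h in manifest:
        if h not in local_cache:
            local_cache[h] = cloud[h]  # the only WAN round trip
            wan_bytes += len(cloud[h])
        out += local_cache[h]
    return out, wan_bytes

original = b"AAAABBBBCCCCDDDD"
hashed = chunk_hashes(original)
cloud = dict(hashed)              # the archive holds every chunk
manifest = [h for h, _ in hashed]

# The local appliance already holds three of the four chunks.
cache = {h: c for h, c in hashed[:3]}
data, sent = restore(manifest, cache, cloud)
print(sent)  # only the single missing 4-byte chunk traversed the WAN
```

Production systems add compression, variable-size chunking, and batched fetches, but the economics follow from this shape: the more of a file that already exists locally in deduplicated form, the less you pay the cloud provider to egress.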

That’s it for my stroll down memory lane to talk about Instant Data Access. Make sure to catch up with the team and me as we tour around North America at various VMware User Group (VMUG), Backup 2.0 Road Tour, and other community- and user-focused events!


