Here's Why You Should Shelve Backup Jobs for Declarative Policies
Swapping out legacy, imperative data center models for more fluid declarative models really excites me, and I’ve written about the two ideas in an earlier post. While the concept isn’t exactly new for enterprise IT – many folks enjoy using declarative solutions from configuration management tools such as Puppet – the scope of deployment has largely been limited to compute models for running workloads. The data protection space, by contrast, has been left fallow, awaiting some serious innovation.
In fact, this is something I hear from Rubrik’s channel partners and customers quite frequently, because their backup and recovery world has forever been changed by the simplicity and power of converged data management. To quote Justin Warren from our Eigencast recording, backup should be boring and predictable rather than the kind of exciting and adventurous you get when a restoration fails and you’re suddenly responsible for missing data. That’s never fun.
Thinking deeper on this idea brings me to one of the more radical things a new platform enables: no more scheduling backup jobs. Creating jobs and telling them exactly when to run, including dependency chains, is the cornerstone of every legacy backup solution. As part of their imperative model, you have to describe each step to the software stack – including the time to run the job – because the job is largely ignorant of the data center architecture and requires you to hold its hand to avoid clobbering workloads or network capacity. It also means that annoying things like Daylight Saving Time can cause imperatively defined jobs to fail, because the system was not designed to handle the odd quirks that we humans put ourselves through. 🙂
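To see the Daylight Saving Time quirk concretely, consider a job pinned imperatively to 02:30 local wall-clock time. On the spring-forward day that time simply doesn’t exist, and the job’s schedule silently shifts. The scenario below is my own illustration using Python’s standard `zoneinfo` module, not any particular backup product’s scheduler:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")

# An imperative schedule says: "run at 02:30 local time."
# On 2021-03-14 the clock jumps from 02:00 directly to 03:00,
# so 02:30 never occurs on the wall clock that day.
scheduled = datetime(2021, 3, 14, 2, 30, tzinfo=tz)

# Round-tripping through UTC shows what actually happens:
# the nonexistent 02:30 resolves to 03:30 EDT instead.
actual = scheduled.astimezone(ZoneInfo("UTC")).astimezone(tz)
print(actual.strftime("%H:%M %Z"))  # 03:30 EDT, not the 02:30 you asked for
```

A job runner that compares “did I fire at 02:30?” against this drifted time, or that naively sleeps a fixed number of seconds between local-time targets, is exactly the kind of brittleness the post is describing.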
Instead, let’s tease apart how Rubrik’s declarative, converged data management system works. Rather than telling a backup job when to run, you declare the RPO value within a policy. The distributed task scheduler, which runs within the distributed cluster of Rubrik appliances, determines how best to protect the environment using a vast amount of telemetry data. It prioritizes the fact that workloads should not be harmed, and that SLA requirements should be met. Couple this with the efficient application handling, as written about here, and you have a system that turns desired state into production-ready protection using lightweight policy inputs. Plus, the cluster is smart enough to nimbly dodge around pitfalls like time changes.
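To make the contrast concrete, here’s a minimal sketch in Python of how declarative policy evaluation might work. The names (`SlaPolicy`, `snapshot_due`) are my own illustration, not Rubrik’s actual implementation: you declare the RPO, and the scheduler decides whether protection work needs to run right now.

```python
from datetime import datetime, timedelta


class SlaPolicy:
    """A declarative policy: you state the desired RPO, never a run time."""

    def __init__(self, name: str, rpo: timedelta):
        self.name = name
        self.rpo = rpo


def snapshot_due(policy: SlaPolicy, last_snapshot: datetime, now: datetime) -> bool:
    # The scheduler, not the admin, makes the call: if the time since
    # the last snapshot has reached the declared RPO, protection work
    # is queued; otherwise nothing runs and no job definition exists.
    return (now - last_snapshot) >= policy.rpo


gold = SlaPolicy("Gold", rpo=timedelta(hours=4))
now = datetime(2016, 6, 1, 12, 0)

print(snapshot_due(gold, now - timedelta(hours=5), now))  # True: RPO at risk
print(snapshot_due(gold, now - timedelta(hours=1), now))  # False: within RPO
```

In a real system this decision would also weigh the telemetry the post mentions (cluster load, network capacity, workload health), but the shape of the interface is the point: the input is desired state, not a job schedule.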
I prefer to think of it like juggling bowling pins. If you’re a fairly good juggler managing only a few pins, it’s easy – much like setting up a handful of backup jobs for a small environment. As more and more pins are added to the mix, you become hard-pressed to keep them all in the air, and you’re likely missing information that adversely affects the health of your environment.
If this post struck a chord with you, learn how you can use one policy engine to define backup and replication schedules. Additionally, watch my webinar for more information on Rubrik’s converged data management platform.