Architecture

Automation Rules the Kingdom: Why Quality is Important For You

Automated End-to-End Testing: Ensuring Quality

We take quality seriously. For both our customers’ and our developers’ satisfaction, it is essential to deliver consistent product performance and speed of development with confidence that existing use cases are not broken. To support agile development, here’s why quality is essential to your organization and how our strategy makes automated end-to-end testing fast, reliable, and responsive.

Importance of Quality

For Customers: Every company claims to deliver high quality to its customers, but this is especially critical for Rubrik. Our product is responsible for managing highly valuable data that powers our customers’ businesses. In the backup and recovery industry, our solution needs to be on active duty at the exact moment our customers experience trouble within the systems we protect. Because these problems are complex, an extremely simple user experience eases troubleshooting. Of course, the user experience can only stay simple as long as all the underlying pieces perform reliably.

For Engineers: Engineers want to innovate without breaking existing functionality that customers depend on. If the fundamentals fail, customers cannot upgrade without losing data. It’s often difficult to innovate without affecting the interoperating pieces. In Rubrik’s case, we integrate at all levels…

Here’s Why You Should Shelve Backup Jobs for Declarative Policies

Changing out legacy, imperative data center models for the more fluid declarative models really gets me excited, and I’ve written about the two ideas in an earlier post. While the concept isn’t exactly new for enterprise IT – many folks enjoy using declarative solutions from configuration management tools such as Puppet – the scope of deployment has largely been limited to compute models for running workloads. The data protection space has largely been left fallow, awaiting some serious innovation. In fact, this is something I hear from Rubrik’s channel partners and customers quite frequently, because their backup and recovery world has forever been changed by the simplicity and power of converged data management. To quote Justin Warren in our Eigencast recording, backup should be boring and predictable, not exciting and adventurous because the restoration process failed and you’re now responsible for missing data. That’s never fun. Thinking deeper on this idea brings me to one of the more radical ideas that a new platform brings: eliminating the need to schedule backup jobs. Creating jobs and telling them exactly when to run, including dependency chains, is the cornerstone of all legacy backup solutions. As part of their…

How Cloud Native Archive Destroys Legacy Cost Models

A while back, I was reading about the woes of one Marko Karppinen as he described the incredible ease of getting data into a public cloud, and the equal and opposite horrors of getting that data back out. His post, which can be found here, outlines his crafty plan to store around 60 GB of audio data in an archive for later retrieval and potential encoding. The challenge, then, is ensuring that the data can later be pulled down to an on-premises location without breaking the bank or the implied SLAs (Service Level Agreements). And this, folks, is the rub when using legacy architecture that bolts on public cloud storage (essentially object storage) without fleshing out all of the financial and technological challenges. I teased apart this idea when describing the Cloud Native property of Converged Data Management in an earlier post. “Getting data into the cloud is for amateurs. Getting data back out is for experts.” If using a public cloud for storage becomes hundreds of times more expensive than intended, while also requiring a significant time investment from the technical teams involved, then it’s not a solution for data protection. While it’s true that the blog post I’m referencing is a single…
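
To make that cost asymmetry concrete, here is a rough back-of-the-envelope sketch in Python. The per-GB prices are hypothetical placeholders, not any provider’s actual rates; real numbers vary by provider, storage tier, and retrieval speed.

```python
# Back-of-the-envelope comparison of storing vs. retrieving an archive.
# All prices below are hypothetical placeholders, not real provider rates.

ARCHIVE_SIZE_GB = 60              # roughly the audio archive described above

STORAGE_PER_GB_MONTH = 0.004      # assumed cold-archive storage price
RETRIEVAL_PER_GB = 0.03           # assumed per-GB retrieval fee
EGRESS_PER_GB = 0.09              # assumed per-GB network egress fee
REQUEST_FEE = 0.05                # assumed flat request overhead

def monthly_storage_cost(size_gb: float) -> float:
    """Cost of simply keeping the data parked in the archive tier."""
    return size_gb * STORAGE_PER_GB_MONTH

def full_restore_cost(size_gb: float) -> float:
    """Cost of pulling the entire archive back on-premises once."""
    return size_gb * (RETRIEVAL_PER_GB + EGRESS_PER_GB) + REQUEST_FEE

if __name__ == "__main__":
    keep = monthly_storage_cost(ARCHIVE_SIZE_GB)
    pull = full_restore_cost(ARCHIVE_SIZE_GB)
    print(f"Storing {ARCHIVE_SIZE_GB} GB: ~${keep:.2f}/month")
    print(f"Restoring {ARCHIVE_SIZE_GB} GB once: ~${pull:.2f}")
    print(f"One restore costs ~{pull / keep:.0f}x a month of storage")
```

Even with these toy numbers, a single full restore dwarfs the monthly storage bill, which is exactly the trap a bolt-on cloud tier sets for a data protection strategy.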

Why We Built Our Own VSS Provider

In last week’s post, Kenny explained how we designed Rubrik to eliminate the effects of VMware application stun. We couple flash with a distributed architecture to deliver faster ingest that scales linearly with cluster growth. We reduce the number of data hops by collapsing discrete backup hardware/software into a single software fabric. We tightly manage the number of operations hitting the ESXi hosts to speed up consolidation. Our own VSS Provider also contributes to this effort. In this week’s post, Part 2 of our App Consistency series, I’ll explain why we built our own VSS agent and how we take app-consistent snapshots. Maintaining application and data consistency is standard industry practice for any backup solution worth its salt. To back up transactional applications installed on a Windows server (SQL, Exchange, Oracle), we utilize Microsoft’s native Volume Shadow Copy Service (VSS). Taking an application-consistent snapshot not only captures all of the VM’s data at the same time, but also waits for the VM to flush in-process I/O operations and transactions.

We Hate Bad Days and Sleepless Nights

Failed backup jobs are a leading cause of bad days and sleepless nights, which is why we took extra care to mitigate risk factors when protecting…
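
For readers who want the sequence spelled out, here is a minimal, hypothetical Python sketch of the generic freeze-flush-snapshot-thaw flow that an app-consistent snapshot follows. The class and function names are illustrative stand-ins, not Rubrik’s agent or Microsoft’s actual VSS interfaces.

```python
# Illustrative sketch of the app-consistent snapshot sequence described above:
# quiesce the application, flush in-flight writes, snapshot, then resume.
# The classes below are hypothetical stand-ins, not the actual VSS interfaces.

import time
from dataclasses import dataclass, field

@dataclass
class TransactionalApp:
    """Stand-in for a VSS writer such as SQL Server or Exchange."""
    pending_writes: list[str] = field(default_factory=list)
    frozen: bool = False

    def freeze(self) -> None:
        # The writer pauses new transactions at a consistent boundary.
        self.frozen = True

    def flush(self) -> None:
        # In-flight I/O and open transactions are committed to disk.
        self.pending_writes.clear()

    def thaw(self) -> None:
        self.frozen = False

def app_consistent_snapshot(app: TransactionalApp) -> dict:
    """Crash-consistent snapshots skip freeze/flush; app-consistent ones don't."""
    app.freeze()
    try:
        app.flush()
        # With nothing in flight, the point-in-time image is transactionally clean.
        return {"taken_at": time.time(), "pending_writes": len(app.pending_writes)}
    finally:
        # Keep the freeze window short so the application isn't stunned for long.
        app.thaw()

if __name__ == "__main__":
    db = TransactionalApp(pending_writes=["INSERT ...", "UPDATE ..."])
    snap = app_consistent_snapshot(db)
    print(snap)  # pending_writes == 0: no half-applied transactions captured
```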

Eliminating the Effects of Application Stun

Application stunning during the snapshot process is a topic that often bubbles up in customer conversations on data protection for VMware environments. To level set, application stun goes hand-in-hand with any snapshot operation. VMware stuns (quiesces) the virtual machine (VM) when the snapshot is created and deleted. Cormac Hogan has a great post on this here. Producing a snapshot of a VM disk file requires the VM to be stunned, a snapshot of the VM disk file to be ingested, and deltas to be consolidated into the base disk. If you’re snapping a highly transactional application, like a database, nasty side effects appear in the form of lengthy backup windows and application time-outs when the “stun-ingest-consolidate” workflow is not efficiently managed. When a snapshot of the base VMDK is created, VMware creates a delta VMDK. Write operations are redirected to the delta VMDK, which expands over time for an active VM. Once the backup completes, the delta VMDK needs to be consolidated with the base VMDK. Longer backup windows lead to bigger delta files, resulting in a longer consolidation process. If the rate of I/O operations exceeds the rate of consolidation, you’ll end up with application time-outs. Rubrik was designed…
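
To see why longer backup windows compound the problem, here is a toy Python model of the delta-VMDK growth and consolidation dynamic described above. The write and consolidation rates are made-up numbers chosen purely to illustrate the behavior, not measurements of any real environment.

```python
# Toy model of delta-VMDK growth during backup and the consolidation that follows.
# Rates are illustrative placeholders; real values depend on the workload and host.

def simulate(write_rate_mb_s: float,
             consolidate_rate_mb_s: float,
             backup_window_s: int,
             max_extra_s: int = 3600) -> float:
    """Return seconds spent consolidating after the backup window, or -1 if the
    delta never converges (writes outpace consolidation -> application time-outs)."""
    # While the snapshot exists, every write lands in the delta VMDK.
    delta_mb = write_rate_mb_s * backup_window_s

    # After the backup, VMware consolidates the delta back into the base disk,
    # while the still-running VM keeps writing new data.
    for elapsed in range(1, max_extra_s + 1):
        delta_mb += write_rate_mb_s - consolidate_rate_mb_s
        if delta_mb <= 0:
            return float(elapsed)
    return -1.0

if __name__ == "__main__":
    # Short backup window: consolidation finishes quickly.
    print(simulate(write_rate_mb_s=20, consolidate_rate_mb_s=80, backup_window_s=300))
    # Long backup window: the same workload takes far longer to consolidate.
    print(simulate(write_rate_mb_s=20, consolidate_rate_mb_s=80, backup_window_s=3600))
    # Writes outpace consolidation: the delta never drains -> time-outs.
    print(simulate(write_rate_mb_s=90, consolidate_rate_mb_s=80, backup_window_s=300))
```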

Meet Cerebro, the Brains Behind Rubrik’s Time Machine

Fabiano Botelho, father of two and star soccer player, explains how Cerebro was designed. Previously, Fabiano was the tech lead of Data Domain’s Garbage Collection team. Rubrik is a scale-out data management platform that enables users to protect their primary infrastructure. Cerebro is the “brains” of the system, coordinating the movement of customer data from initial ingest onward and propagating that data to other locations, such as cloud storage and remote clusters (for replication). It is also where the data compaction engine (deduplication, compression) sits. In this post, we’ll discuss how Cerebro efficiently stores data with global deduplication and compression while making Instant Recovery & Mount possible. Cerebro ties our API integration layer, which has adapters to extract data from various data sources (e.g., VMware, Microsoft, Oracle), to our different storage layers (Atlas and cloud providers like Amazon and Google). It achieves this by leveraging a distributed task framework and a distributed metadata system. See AJ’s post on the key components of our system. Cerebro solves many challenges while managing the data lifecycle, such as efficiently ingesting data at the cluster level, storing data compactly while keeping it readily accessible for instant recovery, and ensuring data integrity at all times. This is what…
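
As a rough illustration of the data compaction idea (not Cerebro’s actual design), here is a small Python sketch of content-addressed chunking with deduplication and compression. The fixed chunk size, SHA-256 addressing, and zlib compression are arbitrary choices for the example.

```python
# Minimal illustration of chunk-level deduplication plus compression.
# This is a conceptual sketch, not Cerebro's implementation: fixed-size chunks,
# a SHA-256 content address per chunk, and zlib for compression.

import hashlib
import zlib

CHUNK_SIZE = 4096  # arbitrary chunk size for the example

class ChunkStore:
    def __init__(self) -> None:
        self.chunks: dict[str, bytes] = {}   # content hash -> compressed chunk
        self.logical_bytes = 0               # bytes as seen by the application
        self.stored_bytes = 0                # bytes kept after dedup + compression

    def write(self, data: bytes) -> list[str]:
        """Split data into chunks and store only chunks not seen before."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.logical_bytes += len(chunk)
            if digest not in self.chunks:    # global dedup: identical chunks stored once
                compressed = zlib.compress(chunk)
                self.chunks[digest] = compressed
                self.stored_bytes += len(compressed)
            recipe.append(digest)
        return recipe                        # the ordered hashes reconstruct the object

    def read(self, recipe: list[str]) -> bytes:
        return b"".join(zlib.decompress(self.chunks[h]) for h in recipe)

if __name__ == "__main__":
    store = ChunkStore()
    snapshot_1 = b"A" * 8192 + b"B" * 8192
    snapshot_2 = b"A" * 8192 + b"C" * 8192   # shares half its chunks with snapshot_1
    r1, r2 = store.write(snapshot_1), store.write(snapshot_2)
    assert store.read(r1) == snapshot_1 and store.read(r2) == snapshot_2
    print(f"logical: {store.logical_bytes} B, stored: {store.stored_bytes} B")
```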

Contrasting a Declarative Policy Engine to Imperative Job Scheduling

One of the topics du jour for next-generation architecture is abstraction. Or, more specifically, the use of policies to allow technical professionals to manage ever-growing sets of infrastructure with a vastly simpler model. While it’s true I’ve talked about using policy in the past (read my first and second posts of the SLA Domain Series), I wanted to go a bit deeper into how a declarative policy engine is vastly different from an imperative job scheduler, and why this matters for the technical community at large. This post is fundamentally about declarative versus imperative operations. In other words:

Declarative – describing the desired end state for some object
Imperative – describing every step needed to achieve the desired end state for some object

Traditional architecture has long been ruled by the imperative operational model. We take some piece of infrastructure and then tell that same piece of infrastructure exactly what it must do to meet our desired end state. With data protection, this has resulted in backup tasks / jobs. Each job requires a non-trivial amount of hand-holding to function, including configuration items such as the following (a brief sketch contrasting the two models follows below):

Which specific workloads / virtual machines must be protected
Where to send data and how to store that…
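
To make the contrast concrete, here is a short Python sketch that pits an imperative job definition against a declarative policy reconciled by a control loop. The field names and the SlaPolicy structure are entirely made up for illustration; they are not Rubrik’s SLA Domain schema.

```python
# Contrasting the two models with made-up structures (not Rubrik's actual schema).

from dataclasses import dataclass

# Imperative: spell out every step and exactly when to run it.
imperative_job = {
    "name": "nightly-sql-backup",
    "schedule": "0 2 * * *",          # run at 02:00, hand-picked to dodge other jobs
    "steps": [
        "quiesce sql-prod-01",
        "snapshot sql-prod-01",
        "copy snapshot to media server",
        "expire copies older than 30 days",
    ],
}

# Declarative: state the desired end state; the engine decides when and how.
@dataclass
class SlaPolicy:
    take_snapshot_every_hours: int
    keep_days: int
    archive_to_cloud_after_days: int

@dataclass
class Workload:
    name: str
    hours_since_last_snapshot: float

def reconcile(workload: Workload, policy: SlaPolicy) -> list[str]:
    """A toy reconciliation loop: compare actual state to desired state and
    emit only the actions needed to close the gap."""
    actions = []
    if workload.hours_since_last_snapshot >= policy.take_snapshot_every_hours:
        actions.append(f"snapshot {workload.name}")
    return actions

if __name__ == "__main__":
    print(imperative_job["schedule"], imperative_job["steps"])
    gold = SlaPolicy(take_snapshot_every_hours=4, keep_days=30, archive_to_cloud_after_days=7)
    print(reconcile(Workload("sql-prod-01", hours_since_last_snapshot=5.5), gold))
    print(reconcile(Workload("sql-prod-01", hours_since_last_snapshot=1.0), gold))
```

The imperative job must be told when to run and what to do at every step; the declarative policy only states the outcome, and the engine continually works out whether any action is needed at all.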

Converged Data Management Unwrapped – Cloud Native

Intelligently placing data into a variety of formats and across geographical locations is non-trivial. With data protection, however, this isn’t just a nice-to-have; it’s often a functional design requirement. Doing so provides layers of safeguards against data-specific failures as well as local or regional catastrophes. In this fifth and final deep dive series post, I’m going to pick apart how Converged Data Management offers a truly Cloud Native experience for data protection workflows, and how that differs from traditional approaches. There’s a fundamental difference between adapting a platform to take advantage of cloud data services, such as public object storage with Amazon S3, versus making those services a native part of the platform. This difference can be distilled into one metric – simplicity. The complexity that surrounds a platform drives greater inefficiencies and increased chances for error. Additionally, the base foundation of a platform ultimately dictates what features and properties are available for a long-term strategy. Without re-writing the platform from scratch – which is something largely avoided in the enterprise market – the choices are limited. After all, the spin-out and spin-in model used by large corporations to innovate doesn’t exist without reason. In the…

Converged Data Management Unwrapped – Instant Data Access

Now that the Rubrik team has returned from an energizing trip to VMworld Barcelona and TechTarget’s Backup 2.0 Road Tour, it’s time to peel the onion a bit further on Converged Data Management in the fourth installment of this deep dive series. In this post, I’ll put the magnifying glass up against Instant Data Access – the ability to see and interact with data in a global and real-time manner – to better understand why it’s a critical property in the architecture shown below. First, let’s take a step back. There’s a lot of hemming and hawing on the Internet about data lakes, big data, the Internet of Things (IoT), and so forth. The rub is that data is growing (exponentially), is largely unstructured, and there’s no way to get our arms around it in any meaningful way without machine learning and a plethora of automation. ImageNet is a great example of this in a real-world, practical format. This has led to some really nifty architectures being crafted to deal with large data sets, and a rise in the popularity of storage systems that are eventually consistent, such as clustered object storage, rather than strongly consistent as typically…
