Over the past 30 years, we’ve grown used to data protection solutions that require backup agents to be deployed within an OS. Historically, no other options were available, and the available solutions were designed mostly to protect physical machines, at least in the x86 world.

Most people define a backup agent (sometimes known as a backup client) as a piece of software, provided by the data protection solution, that performs the actual backup job from within the workload. This job usually consists of several tasks (sketched in code after the list), including:

  • Preparing the workload for online backup, which involves quiescing the filesystem and any running applications to put them into a consistent state.
  • Identifying new and changed data, such as blocks or files, since the previous backup operation.
  • Processing new data, which might include actions such as deduplication, compression, and encryption.
  • Transporting the backed-up data to a backup storage target.
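
In pseudocode, the whole loop looks something like the sketch below. This is a deliberately minimal, runnable Python illustration, not any vendor’s actual implementation; the file-as-dict “workload” and the dict-based storage target are stand-ins for real snapshot APIs and backup repositories.

    # Minimal, runnable sketch of an agent's backup pass. Illustrative only:
    # real agents use OS/application snapshot APIs for quiescing, block-level
    # change tracking, and real encryption (omitted here).
    import hashlib
    import zlib

    def fingerprint(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def backup(files: dict[str, bytes], previous: dict[str, str],
               target: dict[str, bytes]) -> dict[str, str]:
        """One pass: detect changes, deduplicate, compress, transport."""
        catalog = {}
        for path, data in files.items():          # walk the (quiesced) data
            fp = fingerprint(data)
            catalog[path] = fp
            if previous.get(path) == fp:          # unchanged since last pass
                continue
            if fp not in target:                  # deduplicate by content hash
                target[fp] = zlib.compress(data)  # compress, then "transport"
        return catalog                            # metadata for the master server

    # Two passes: only the changed file is re-processed the second time.
    target: dict[str, bytes] = {}
    cat1 = backup({"/etc/app.conf": b"v1", "/var/data.db": b"rows"}, {}, target)
    cat2 = backup({"/etc/app.conf": b"v2", "/var/data.db": b"rows"}, cat1, target)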

Backup agents are deployed, controlled, and maintained by a master server that tells each agent what to do. The master server also collects and stores information about the backed-up data, called metadata, and logs everything that happens. This is basically a client-server relationship that requires network communication between the backup infrastructure and each protected workload; a toy sketch of this model follows the list below. Here’s where things start to get complicated. Backup agents, despite their proven capabilities, bring complexity at several levels:

  • Deployment: Agents need to be deployed to each protected workload.
  • Multiple agents: Depending on the type of data being backed up and the desired recovery capabilities, some workloads may require deploying and configuring several specialized agents. For instance, one agent might perform a system state backup to allow for OS recovery while another performs file and folder backup.   
  • Backup plan: Agents can require specific types of backup jobs that need to be configured, maintained, and scheduled separately, adding to the overall complexity.
  • Platform support: Backup admins must deploy the right version of the agents according to the version of the OS, applications, and filesystems.
  • Maintenance: Agents can interact deeply with the OS, with components reaching down to the kernel level, which often requires the OS to be rebooted when an agent is installed or upgraded.
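
To make that client-server model concrete, here is a toy sketch of the master-server side. The class and method names are invented for this illustration and do not correspond to any real product’s API.

    # Toy sketch of a legacy master server coordinating its agents.
    # All names here are hypothetical, not a real vendor's API.
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        host: str
        os_version: str       # the master must match agent builds to platforms
        agent_version: str    # ...and keep every deployed agent up to date

    @dataclass
    class MasterServer:
        agents: list[Agent] = field(default_factory=list)
        metadata: dict[str, list[str]] = field(default_factory=dict)
        log: list[str] = field(default_factory=list)

        def dispatch_backups(self) -> None:
            for agent in self.agents:
                # The master tells each agent what to do over the network,
                # then stores what was backed up (metadata) and logs the run.
                backed_up = [f"{agent.host}:system_state", f"{agent.host}:files"]
                self.metadata[agent.host] = backed_up
                self.log.append(f"backup completed on {agent.host}")

    master = MasterServer([Agent("db01", "RHEL 8", "9.2"),
                           Agent("web01", "Windows Server 2019", "9.2")])
    master.dispatch_backups()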

Long Live Virtualization!

Luckily, the 2000s saw an IT revolution with the rise of x86 virtualization. The IT infrastructure landscape started to change dramatically, and we discovered new ways to protect data. This was particularly true when VMware released the vSphere Storage APIs – Data Protection (better known as VADP) in 2009, allowing third-party vendors to provide agentless solutions for protecting VMs. No need to deal with agents anymore, and all of the above pain points suddenly disappeared! Sounds like the ideal solution, doesn’t it?

Despite being a game-changer, agentless backup is not the solution to everything. There are still use cases where agents can help solve specific challenges.

An agentless backup, also known as an image-level backup, backs up the entire VM object: the content of the virtual disks (OS, files and folders, applications) as well as the container that describes the VM (name, unique ID, virtual hardware configuration). Therefore, a single-pass backup job is able to back up everything.
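
As a rough mental model, the sketch below shows what a single-pass, image-level backup captures. The field names loosely mirror a VM descriptor; they are illustrative and do not come from any specific hypervisor API.

    # Illustrative model of an image-level (agentless) backup: the VM
    # "container" (its descriptor) plus the full content of its disks.
    from dataclasses import dataclass

    @dataclass
    class VirtualHardware:
        cpus: int
        memory_mb: int
        disk_paths: list[str]

    @dataclass
    class VMImageBackup:
        name: str                      # descriptor: VM name
        unique_id: str                 # descriptor: unique ID
        hardware: VirtualHardware      # descriptor: virtual hardware config
        disk_images: dict[str, bytes]  # contents: OS, files, applications

    def single_pass_backup(descriptor: dict, read_disk) -> VMImageBackup:
        """One job captures the descriptor and every virtual disk."""
        hw = VirtualHardware(**descriptor["hardware"])
        return VMImageBackup(
            name=descriptor["name"],
            unique_id=descriptor["id"],
            hardware=hw,
            disk_images={p: read_disk(p) for p in hw.disk_paths},
        )

    image = single_pass_backup(
        {"name": "app01", "id": "vm-42", "hardware":
         {"cpus": 2, "memory_mb": 4096, "disk_paths": ["disk0.vmdk"]}},
        read_disk=lambda path: b"raw disk bytes",  # stand-in for real disk reads
    )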

However, recovery granularity greatly depends on the data protection solution and its capabilities. In addition, everything in a given VM gets the same Recovery Point Objective (RPO), because all data within the VM is backed up at the same time by the same backup job and therefore inherits the RPO from the job’s schedule. But for many companies, RPO requirements are not the same for the OS system state, the files and folders within the VM, and the databases hosted in that VM.
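
As a toy illustration: if the image-level job runs once every 24 hours, the OS system state, the files, and any hosted databases all inherit the same worst-case RPO of 24 hours, no matter how critical each one is.

    # Toy example: one image-level job means one RPO for everything inside.
    JOB_INTERVAL_HOURS = 24  # the VM backup job runs once a day

    for obj in ("OS system state", "files and folders", "databases"):
        print(f"{obj}: worst-case RPO = {JOB_INTERVAL_HOURS} hours")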

To solve these challenges, organizations must either accept a single RPO for all the data in a given VM or keep using agents for some workloads. Or they can choose a modern approach to backup and recovery.

Meet Rubrik Cloud Data Management

Let’s face it: the real problem is not backup agents themselves; it’s the complexity that legacy agents bring. We also don’t live in a world that’s 100% virtualized, so agents remain a requirement for many IT organizations.

But what if we could use smart backup agents that remove all this complexity and let us assign an RPO to each protected data source, even when multiple types of data are hosted within the same machine, whether virtual or physical, Windows, Linux, AIX, or Solaris?

At Rubrik, we leverage smart agents called Connectors, also known as the Rubrik Backup Service (RBS). A Connector is a lightweight service that can be deployed and updated automatically without rebooting the target OS. It can also interact with operating systems, filesystems, and applications to provide consistency and granular backup and recovery, whether the workload lives on-premises or in the cloud. A good use case is a SQL Server instance hosting multiple databases with different levels of criticality. Some databases may require a 15-minute RPO with transaction log backups, whereas others may only need to be backed up once a day. In such a situation, users simply apply the corresponding SLA Domains to individual databases, giving each one the desired RPO. This is just one example of how RBS can take the pain out of data protection for the modern enterprise.
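
Conceptually, the per-database assignment looks like the sketch below. This is a hypothetical Python illustration of mapping objects to SLA Domains with different RPOs, not Rubrik’s actual API.

    # Hypothetical illustration of per-object SLA assignment; not
    # Rubrik's actual API. Databases on the same SQL Server instance
    # each get their own RPO through their SLA Domain.
    from dataclasses import dataclass

    @dataclass
    class SLADomain:
        name: str
        rpo_minutes: int
        log_backup: bool  # transaction log backups enable low RPOs

    GOLD = SLADomain("Gold", rpo_minutes=15, log_backup=True)
    BRONZE = SLADomain("Bronze", rpo_minutes=24 * 60, log_backup=False)

    assignments = {
        "orders_db": GOLD,       # critical: 15-minute RPO with log backups
        "reporting_db": BRONZE,  # once a day is enough
    }

    for db, sla in assignments.items():
        print(f"{db}: RPO {sla.rpo_minutes} min (SLA Domain '{sla.name}')")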

Agentless or not, backup should remain simple, yet flexible. What about you? Is your data protection strategy 100% agentless?

To learn more, read our blog on Rubrik’s adaptive data consistency.